php/dos: How do you parse a regedit export file?

My objective is to look for the Company key/value in the registry hive and then pull the corresponding GUID and the other keys and values following it. So I figured I would run the regedit export command and then parse the resulting file with PHP for the keys I need.
So after running the DOS batch command
>regedit /E "output.txt" "HKLM\System....\Company1"
the output text file seems to be in some kind of Unicode format which isn't regex friendly. I'm using PHP to parse the file and pull the keys.
Here is the PHP code I'm using to parse the file:
<?php
$regfile = "output.txt";
$handle = fopen("c:\\\\" . $regfile, "r");
//echo "handle: " . $file . "<br>";
$row = 1;
while (($data = fgets($handle, 1024)) !== FALSE) {
    $num = count($data); // note: $data is a string, so count() always reports 1 "field"
    echo "$num fields in line $row: \n";
    $reg_section = $data;
    //$reg_section = "[HKEY_LOCAL_MACHINE\SOFTWARE\TECHNOLOGIES\MEDIUS\CONFIG MANAGER\SYSTEM\COMPANIES\RECORD11]";
    $pattern = "/^(\[HKEY_LOCAL_MACHINE\\\SOFTWARE\\\TECHNOLOGIES\\\MEDIUS\\\CONFIG MANAGER\\\SYSTEM\\\COMPANIES\\\RECORD(\d+)\])$/";
    if (preg_match($pattern, $reg_section)) {
        echo "<font color=red>Found</font><br>";
    } else {
        echo "not found<br>";
        echo $data . "<br>";
    }
    $row++;
} //end while
fclose($handle);
?>
and the output looks like this:
1 fields in line 1: not found
ÿþW�i�n�d�o�w�s� �R�e�g�i�s�t�r�y�
�E�d�i�t�o�r� �V�e�r�s�i�o�n�
�5�.�0�0� � 1 fields in line 2: not
found
1 fields in line 3: not found
[�H�K�E�Y��L�O�C�A�L��M�A�C�H�I�N�E�\�S�O�F�T�W�A�R�E�\�I�N�T�E�R�S�T�A�R�
�T�E�C�H�N�O�L�O�G�I�E�S�\�X�M�E�D�I�U�S�\�C�O�N�F�I�G�
�M�A�N�A�G�E�R�\�S�Y�S�T�E�M�\�C�O�M�P�A�N�I�E�S�]�
� 1 fields in line 4: not found
"�N�e�x�t� �R�e�c�o�r�d�
�I�D�"�=�"�4�1�"� � 1 fields in line
5: not found
Any ideas how to approach this?
Thanks in advance.

Try adding /A to the REGEDIT command, like this, to produce ANSI (single-byte) output that plays nicely with regex:
REGEDIT /E /A "output.txt" "HKEY_LOCAL_MACHINE\System....\Company1"
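A minimal sketch of how the /A export could feed the original loop (the export path and registry key are assumptions based on the question, not tested values):
<?php
// Assumed path and key, per the question; /A makes regedit write
// ANSI (single-byte) text instead of UTF-16.
$export = 'c:\\output.txt';
$key = 'HKEY_LOCAL_MACHINE\\SOFTWARE\\TECHNOLOGIES\\MEDIUS\\CONFIG MANAGER\\SYSTEM\\COMPANIES';
exec('regedit /E /A "' . $export . '" "' . $key . '"');

$handle = fopen($export, 'r');
while (($data = fgets($handle, 1024)) !== false) {
    // \\\\ in the single-quoted pattern matches one literal backslash.
    if (preg_match('/^\[.*\\\\COMPANIES\\\\RECORD(\d+)\]/', $data, $m)) {
        echo "Found record {$m[1]}<br>";
    }
}
fclose($handle);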

I know there is a Perl library for this:
Parse::Win32Registry
Making a PHP class from it shouldn't be too difficult, though. There's also a PECL extension for PHP that will parse Perl code:
http://devzone.zend.com/node/view/id/1712

Regular expressions work fine with Unicode. Are you getting a specific error message?

From Windows XP onward, the Regedit export is Unicode (UTF-16) and therefore two bytes per character. You'll see this if you open the export in Notepad. I'm not sure older versions of PHP are able to handle Unicode files.
Is there no way you can read the specific key you need directly, through another tool etc.? That would be a much more straightforward approach.
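If neither is an option, a sketch of converting the UTF-16LE export up front (assuming the mbstring extension is available; the ÿþ at the start of the output above is the UTF-16LE byte order mark):
<?php
// Convert the whole export to UTF-8 once, then parse as usual.
$raw = file_get_contents('c:\\output.txt');
$txt = mb_convert_encoding($raw, 'UTF-8', 'UTF-16LE');
$txt = preg_replace('/^\x{FEFF}/u', '', $txt); // drop the converted BOM

foreach (preg_split('/\r?\n/', $txt) as $line) {
    // ...run the same preg_match() checks against $line here...
}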

Related

Trying to read a CSV file with Thai characters in it using PHP, but after reading it the characters are changed to some unidentified characters

I have a CSV file that has data like this:
Sub District District
A Hi อาฮี Tha Li District ท่าลี่
A Phon อาโพน Buachet District บัวเชด
When I tried to read it using PHP code following this SO question:
<?php
//set internal encoding to utf8
mb_internal_encoding('utf8');
$fileContent = file_get_contents('thai_unicode.csv');
//convert content from unicode to utf
$fileContentUtf = mb_convert_encoding($fileContent, 'utf8', 'unicode');
echo "parse utf8 string:\n";
var_dump(str_getcsv($fileContentUtf, ';'));
But it didn't work at all. Someone please let me know what I am doing wrong here.
Thanks in advance.
There are 2 issues with your code:
Your code applies str_getcsv to the whole file contents (instead of individual lines).
Your code example uses the delimiter ";", but there is no such symbol in your input file.
Your data is either in fixed-field-length format (which is actually not a CSV file) or in tab-delimited CSV format.
If it is tab-delimited, you can read your file in either of 2 ways:
$lines = file('thai_unicode.csv');
foreach ($lines as $line) {
    $data = str_getcsv($line, "\t");
    echo "sub_district: " . $data[0] . ", district: " . $data[1] . "\n";
}
or
$f = fopen('thai_unicode.csv', "r");
while ($data = fgetcsv($f, 0, "\t")) {
    echo "sub_district: " . $data[0] . ", district: " . $data[1] . "\n";
}
fclose($f);
And in case you have fixed-length-field data, you need to split each line yourself, because the CSV-related PHP functions are not suitable for this purpose.
So you will end up with something like this:
$f = fopen('thai_unicode.csv', "r");
while ($line = fgets($f)) {
    $sub_district = mb_substr($line, 0, 20);
    $district = mb_substr($line, 20);
    echo "sub_district: $sub_district, district: $district\n";
}
fclose($f);

Search a string in a CSV file, output all the lines which contain it as a table - PHP

I am new to the world of coding and am learning PHP these days. After almost a week of research on this issue, I have nearly given up. I hope to get some good insight on it from the experts.
Problem: I have a CSV file which has information about servers. For example:
ClientId,ProductName,Server,ServerRole,Webserver,DatabaseName
001,abc,Server1,Web,Webserver1,,
001,abc,Server2,Dabatase,,Database1
001,abc,Server3,Application,,,
002,abc,Server4,Web,Webserver2,,
002,abc,Server5,Database,,Database2,
I created an HTML page which has a simple HTML form that takes a server name as input and invokes the commands written in a page called "search.php". I am able to save the user input from the index form to a variable fine. But here is the real problem: I want to search that variable against this CSV file, find the client ID (column 1) related to that server (which should be matched against column 3), and then print all the lines for that client. For example, if I input "Server3", I should get the first three lines as output in table form.
I have used fgetcsv(), fgets(), etc., but I don't seem to be able to crack this. So far, the closest I have got is printing all the lines which contain the input text (and not in table form at that). Any help to resolve my problem would be much appreciated.
Here is my code so far:
<?php
$name = $_POST["search"];
echo "You have searched for the server <b>$name</b>";
$output = "";
$fp = fopen("D:\VMware\DSRM\Servers\Servers.csv", "r");

// Read file
$txt = fgets($fp);
while (!feof($fp)) {
    // Search for keyword
    if (stripos($txt, $name) !== false) {
        $output .= $txt . '<br />';
    }
    $txt = fgets($fp);
}
echo $output;
?>
What about regex?
$input_lines = file_get_contents("theCSV");
$server = "Server3";
preg_match_all("/(\d+).*(" . $server . ")(.*)/", $input_lines, $clientid);
preg_match_all("/(" . $clientid[1][0] . ".*)/", $input_lines, $output_array);
var_dump($output_array[1]); // note: $clientid[1] is an array of captures, hence the [0]
In theory this should work :-)
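If the regex feels fragile, here is a minimal two-pass fgetcsv sketch of the lookup described in the question (the file path, POST key, and column positions are taken from the question; treat them as assumptions):
<?php
// Two passes over the CSV: first find the ClientId (column 0) whose
// Server column (column 2) matches the search, then print every row
// for that client as an HTML table.
$name = $_POST["search"];
$fp = fopen("D:\VMware\DSRM\Servers\Servers.csv", "r");

$clientId = null;
fgetcsv($fp); // skip the header row
while (($row = fgetcsv($fp)) !== false) {
    if (isset($row[2]) && strcasecmp($row[2], $name) === 0) {
        $clientId = $row[0];
        break;
    }
}

if ($clientId !== null) {
    rewind($fp);
    echo "<table>";
    while (($row = fgetcsv($fp)) !== false) {
        if ($row[0] === $clientId) {
            echo "<tr><td>" . implode("</td><td>", array_map('htmlspecialchars', $row)) . "</td></tr>";
        }
    }
    echo "</table>";
}
fclose($fp);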

PHP fgetc can't read apostrophe from my file

I've been searching for this question, but I think it hasn't been asked yet.
I have a problem with reading from my file via fgetc(). When I need to read an apostrophe, the program replaces it with ???, so I am not able to add the apostrophe to my array. Here's the code (I cut it down, so there's no array adding):
$file = fopen("file.txt", "r");
$read_c;
while(!feof($file)) {
while(ctype_space($read_c = fgetc($file)));
echo $read_c . " ";
}
fclose($file);
Now, when there's an apostrophe in the text in the file
’a’
I get in the terminal:
? ? ? a ? ? ?
The strange thing is, when I put
echo $read_c;
in the code instead of
echo $read_c . " ";
the output is given correctly:
’a’
Thank you all for your help.
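One plausible cause, offered here as an assumption rather than a confirmed diagnosis: ’ (U+2019) is a three-byte UTF-8 character, and fgetc() returns single bytes, so echoing a space after each byte splits the character into three unprintable fragments; without the spaces, the bytes recombine in the terminal. A sketch that walks whole characters instead of bytes (assuming the file is UTF-8 and mbstring is available):
<?php
// Read lines, then step through them one *character* (not byte) at a
// time with the mbstring functions, so multibyte characters stay intact.
$file = fopen("file.txt", "r");
while (($line = fgets($file)) !== false) {
    $len = mb_strlen($line, 'UTF-8');
    for ($i = 0; $i < $len; $i++) {
        $char = mb_substr($line, $i, 1, 'UTF-8');
        if (!ctype_space($char)) {
            echo $char . " ";
        }
    }
}
fclose($file);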

How do I get fgetcsv() in PHP to work with Japanese characters?

I have the following data being generated from a google spreadsheet rss feed.
いきます,go,5
きます,come,5
かえります,"go home, return",5
がっこう,school,5
スーパー,supermarket,5
えき,station,5
ひこうき,airplane,5
Using PHP I can do the following:
$url = 'http://google.com.....etc/etc';
$data = file_get_contents($url);
echo $data; // This prints all Japanese symbols
But if I use:
$url = 'http://google.com.....etc/etc';
$handle = fopen($url, 'r');
while($row = fgetcsv($handle)) {
print_r($row); // Outputs [0]=>,[1]=>'go',[2]=>'5', etc, i.e. the Japanese characters are skipped
}
So it appears the Japanese characters are skipped when using fopen/fgetcsv.
My file is saved as UTF-8, it has the PHP header to set it as UTF-8, and there is a meta tag in the HTML head to mark it as UTF-8. I don't think it's the document itself, because it can display the characters through the file_get_contents method.
Thanks
I can't add a comment to the answer from Darien, so:
I reproduced the problem; after changing the locale, the problem was solved.
You must install the Japanese locale on the server before trying this.
Ubuntu
Add a new row to the file /var/lib/locales/supported.d/local
ja_JP.UTF-8 UTF-8
And run the command
sudo dpkg-reconfigure locales
Or
sudo locale-gen
Debian
Just execute "dpkg-reconfigure locales" and select the necessary locales (ja_JP.UTF-8).
I don't know how to do it for other systems; try searching with the keywords "locale-gen locale" for your server OS.
In the PHP file, before opening the CSV file, add this line:
setlocale(LC_ALL, 'ja_JP.UTF-8');
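Putting the two pieces together, a minimal sketch (the URL is the placeholder from the question):
<?php
// Set a Japanese UTF-8 locale *before* any fgetcsv() call, so the
// parser handles the multibyte characters correctly.
setlocale(LC_ALL, 'ja_JP.UTF-8');

$handle = fopen('http://google.com.....etc/etc', 'r');
while (($row = fgetcsv($handle)) !== false) {
    print_r($row); // Japanese fields should now come through intact
}
fclose($handle);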
This looks like it might be the same as PHP Bug 48507.
Have you tried changing your PHP locale setting prior to running the code and resetting it afterwards?
You might want to consider this library. I remember using it some time back, and it is much nicer than the built-in PHP functions for handling CSV files. がんばって! (Good luck!)
Maybe iconv character encoding can help you:
http://php.net/manual/en/function.iconv.php
You can do that by hand, not using fgetcsv and friends:
<?php
$file = file('http://google.com.....etc/etc');
foreach ($file as $row) {
    $row = preg_split('/,(?!(?:[^",]|[^"],[^"])+")/', trim($row));
    foreach ($row as $n => $cell) {
        $cell = str_replace('\\"', '"', trim($cell, '"'));
        echo "$n > $cell\n";
    }
}
Alternatively, you can opt for a fancier closures-savvy way:
<?php
$file = file('http://google.com.....etc/etc');
array_walk($file, function (&$row) {
    $row = preg_split('/,(?!(?:[^",]|[^"],[^"])+")/', trim($row));
    array_walk($row, function (&$cell) {
        $cell = str_replace('\\"', '"', trim($cell, '"'));
    });
});
foreach ($file as $row) {
    foreach ($row as $n => $cell) {
        echo "$n > $cell\n";
    }
}

Best practice: Import MySQL file in PHP; split queries

I have a situation where I have to update a web site on a shared hosting provider. The site has a CMS. Uploading the CMS's files is pretty straightforward using FTP.
I also have to import a big (relative to the confines of a PHP script) database file (around 2-3 MB uncompressed). MySQL is closed to outside access, so I have to upload the file using FTP and start a PHP script to import it. Sadly, I do not have access to the mysql command line, so I have to parse and query it using native PHP. I also can't use LOAD DATA INFILE, nor any kind of interactive front-end like phpMyAdmin; it needs to run in an automated fashion. I also can't use mysqli_multi_query().
Does anybody know of, or have, an already-coded, simple solution that reliably splits such a file into single queries (there could be multi-line statements) and runs them? I would like to avoid fiddling with it myself due to the many gotchas I'm likely to come across (how to detect whether a field delimiter is part of the data; how to deal with line breaks in memo fields; and so on). There must be a ready-made solution for this.
Here is a memory-friendly function that should be able to split a big file into individual queries without needing to load the whole file at once:
function SplitSQL($file, $delimiter = ';')
{
    set_time_limit(0);

    if (is_file($file) === true)
    {
        $file = fopen($file, 'r');

        if (is_resource($file) === true)
        {
            $query = array();

            while (feof($file) === false)
            {
                $query[] = fgets($file);

                if (preg_match('~' . preg_quote($delimiter, '~') . '\s*$~iS', end($query)) === 1)
                {
                    $query = trim(implode('', $query));

                    if (mysql_query($query) === false)
                    {
                        echo '<h3>ERROR: ' . $query . '</h3>' . "\n";
                    }
                    else
                    {
                        echo '<h3>SUCCESS: ' . $query . '</h3>' . "\n";
                    }

                    while (ob_get_level() > 0)
                    {
                        ob_end_flush();
                    }
                    flush();
                }

                if (is_string($query) === true)
                {
                    $query = array();
                }
            }

            return fclose($file);
        }
    }

    return false;
}
I tested it on a big phpMyAdmin SQL dump and it worked just fine.
Some test data:
CREATE TABLE IF NOT EXISTS "test" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"name" TEXT,
"description" TEXT
);
BEGIN;
INSERT INTO "test" ("name", "description")
VALUES (";;;", "something for you mind; body; soul");
COMMIT;
UPDATE "test"
SET "name" = "; "
WHERE "id" = 1;
And the respective output:
SUCCESS: CREATE TABLE IF NOT EXISTS "test" ( "id" INTEGER PRIMARY KEY AUTOINCREMENT, "name" TEXT, "description" TEXT );
SUCCESS: BEGIN;
SUCCESS: INSERT INTO "test" ("name", "description") VALUES (";;;", "something for you mind; body; soul");
SUCCESS: COMMIT;
SUCCESS: UPDATE "test" SET "name" = "; " WHERE "id" = 1;
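For reference, a hypothetical call (connection details and file name are placeholders; SplitSQL assumes a mysql connection is already open):
<?php
// Placeholders throughout; SplitSQL() echoes SUCCESS/ERROR per statement.
mysql_connect('localhost', 'user', 'password');
mysql_select_db('mydb');
SplitSQL('dump.sql');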
Single-page phpMyAdmin: Adminer, just one PHP script file.
Check: http://www.adminer.org/en/
When StackOverflow released their monthly data dump in XML format, I wrote PHP scripts to load it into a MySQL database. I imported about 2.2 gigabytes of XML in a few minutes.
My technique is to prepare() an INSERT statement with parameter placeholders for the column values, then use XMLReader to loop over the XML elements and execute() my prepared query, plugging in values for the parameters. I chose XMLReader because it's a streaming XML reader; it reads the XML input incrementally instead of loading the whole file into memory.
You could also read a CSV file one line at a time with fgetcsv().
If you're importing into InnoDB tables, I recommend starting and committing transactions explicitly to reduce the overhead of autocommit. I commit every 1000 rows, but this is arbitrary.
I'm not going to post the code here (because of StackOverflow's licensing policy), but in pseudocode:
connect to database
open data file
PREPARE parameterized INSERT statement
begin first transaction
loop, reading lines from data file: {
parse line into individual fields
EXECUTE prepared query, passing data fields as parameters
if ++counter % 1000 == 0,
commit transaction and begin new transaction
}
commit final transaction
Writing this code in PHP is not rocket science, and it runs pretty quickly when one uses prepared statements and explicit transactions. Those features are not available in the outdated mysql PHP extension, but you can use them if you use mysqli or PDO_MySQL.
I also added convenient stuff like error checking, progress reporting, and support for default values when the data file doesn't include one of the fields.
I wrote my code in an abstract PHP class that I subclass for each table I need to load. Each subclass declares the columns it wants to load, and maps them to fields in the XML data file by name (or by position if the data file is CSV).
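For illustration, here is a minimal PDO sketch of the pseudocode above; the table, columns, and file name are hypothetical, since the original author deliberately did not post code:
<?php
// Prepared statement + explicit transactions, committing every 1000 rows.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('INSERT INTO posts (id, title, body) VALUES (?, ?, ?)');

$fp = fopen('data.csv', 'r');
$pdo->beginTransaction();
$counter = 0;
while (($fields = fgetcsv($fp)) !== false) {
    $stmt->execute($fields);
    if (++$counter % 1000 == 0) {
        $pdo->commit();           // commit transaction and
        $pdo->beginTransaction(); // begin a new one
    }
}
$pdo->commit(); // final transaction
fclose($fp);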
Can't you install phpMyAdmin, gzip the file (which should make it much smaller), and import it using phpMyAdmin?
EDIT: Well, if you can't use phpMyAdmin, you can use the code from phpMyAdmin. I'm not sure about this particular part, but it's generally nicely structured.
Export
The first step is getting the input in a sane format for parsing when you export it. From your question it appears that you have control over the exporting of this data, but not the importing.
~: mysqldump test --opt --skip-extended-insert | grep -v '^--' | grep . > test.sql
This dumps the test database, excluding all comment lines and blank lines, into test.sql. It also disables extended inserts, meaning there is one INSERT statement per line. This will help limit the memory usage during the import, but at the cost of import speed.
Import
The import script is as simple as this:
<?php
$mysqli = new mysqli('localhost', 'hobodave', 'p4ssw3rd', 'test');
$handle = fopen('test.sql', 'rb');
if ($handle) {
    while (!feof($handle)) {
        // This assumes you don't have a row that is > 1MB (1000000),
        // which is unlikely given the size of your DB.
        // Note that it has a DIRECT effect on your script's memory usage.
        $buffer = stream_get_line($handle, 1000000, ";\n");
        $mysqli->query($buffer);
    }
}
echo "Peak MB: ", memory_get_peak_usage(true)/1024/1024;
This will utilize an absurdly low amount of memory as shown below:
daves-macbookpro:~ hobodave$ du -hs test.sql
15M test.sql
daves-macbookpro:~ hobodave$ time php import.php
Peak MB: 1.75
real 2m55.619s
user 0m4.998s
sys 0m4.588s
That says you processed a 15 MB mysqldump with a peak RAM usage of 1.75 MB in just under 3 minutes.
Alternate Export
If you have a high enough memory_limit and this is too slow, you can try this using the following export:
~: mysqldump test --opt | grep -v '^--' | grep . > test.sql
This will allow extended inserts, which insert multiple rows in a single query. Here are the statistics for the same database:
daves-macbookpro:~ hobodave$ du -hs test.sql
11M test.sql
daves-macbookpro:~ hobodave$ time php import.php
Peak MB: 3.75
real 0m23.878s
user 0m0.110s
sys 0m0.101s
Notice that it uses over 2x the RAM at 3.75 MB, but takes about 1/6th as long. I suggest trying both methods and seeing which suits your needs.
Edit:
I was unable to get a newline to appear literally in any mysqldump output using any of the CHAR, VARCHAR, BINARY, VARBINARY, and BLOB field types. If you do have BLOB/BINARY fields, though, then use the following just in case:
~: mysqldump5 test --hex-blob --opt | grep -v '^--' | grep . > test.sql
Can you use LOAD DATA INFILE?
If you format your DB dump file using SELECT INTO OUTFILE, this should be exactly what you need. No reason to have PHP parse anything.
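For what it's worth, a sketch of that pairing (table and file names are hypothetical, and the asker did say LOAD DATA INFILE is unavailable on their host, so this only applies where it is permitted):
<?php
// Hypothetical names throughout; requires the FILE privilege / host support.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

// On the source server, dump with:
//   SELECT * INTO OUTFILE '/tmp/mytable.txt' FROM mytable;

// On the target server, load the tab-delimited file directly,
// with no PHP-side parsing at all:
$pdo->exec("LOAD DATA INFILE '/tmp/mytable.txt' INTO TABLE mytable");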
I ran into the same problem. I solved it using a regular expression:
function splitQueryText($query) {
    // the regex needs a trailing semicolon
    $query = trim($query);

    if (substr($query, -1) != ";")
        $query .= ";";

    // i spent 3 days figuring out this line
    preg_match_all("/(?>[^;']|(''|(?>'([^']|\\')*[^\\\]')))+;/ixU", $query, $matches, PREG_SET_ORDER);

    $querySplit = array(); // was "": this must be an array so [] can append to it
    foreach ($matches as $match) {
        // get rid of the trailing semicolon
        $querySplit[] = substr($match[0], 0, -1);
    }

    return $querySplit;
}
$queryList = splitQueryText($inputText);

foreach ($queryList as $query) {
    $result = mysql_query($query);
}
Already answered: Loading .sql files from within PHP
Also:
http://webxadmin.free.fr/article/import-huge-mysql-dumps-using-php-only-342.php
http://www.phpbuilder.com/board/showthread.php?t=10323180
http://forums.tizag.com/archive/index.php?t-3581.html
Splitting a query cannot be reliably done without parsing. Here is valid SQL that would be impossible to split correctly with a regular expression.
SELECT ";"; SELECT ";\"; a;";
SELECT ";
abc";
I wrote a small SqlFormatter class in PHP that includes a query tokenizer. I added a splitQuery method to it that splits all queries (including the above example) reliably.
https://github.com/jdorn/sql-formatter/blob/master/SqlFormatter.php
You can remove the format and highlight methods if you don't need them.
One downside is that it requires the whole sql string to be in memory, which could be a problem if you're working with huge sql files. I'm sure with a little bit of tinkering, you could make the getNextToken method work on a file pointer instead.
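For reference, a usage sketch; treat the static call as an assumption and verify it against the linked SqlFormatter.php, which may have changed since this answer:
<?php
// Hedged sketch: split a dump into statements, then run each one.
// Note the whole dump must fit in memory, as the answer says.
require 'SqlFormatter.php';

$sql = file_get_contents('dump.sql');
$queries = SqlFormatter::splitQuery($sql);

foreach ($queries as $query) {
    mysql_query($query);
}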
First of all, thanks for this topic. It saved a lot of time for me :)
And let me make a little fix to your code.
Sometimes, if TRIGGERS or PROCEDURES are in the dump file, it is not enough to examine the ; delimiters.
In this case there may be a DELIMITER [something] directive in the SQL code, saying that the statement will not end with ; but with [something]. For example, a section in xxx.sql:
DELIMITER //
CREATE TRIGGER `mytrigger` BEFORE INSERT ON `mytable`
FOR EACH ROW BEGIN
SET NEW.`create_time` = NOW();
END
//
DELIMITER ;
So we first need a flag to detect that the query does not end with ;, and we need to delete the unwanted query chunks, because mysql_query does not need a delimiter
(the delimiter is the end of the string).
So mysql_query needs something like this:
CREATE TRIGGER `mytrigger` BEFORE INSERT ON `mytable`
FOR EACH ROW BEGIN
SET NEW.`create_time` = NOW();
END;
So, with a little work, here is the fixed code:
function SplitSQL($file, $delimiter = ';')
{
    set_time_limit(0);
    $matches = array();
    $otherDelimiter = false;

    if (is_file($file) === true) {
        $file = fopen($file, 'r');

        if (is_resource($file) === true) {
            $query = array();

            while (feof($file) === false) {
                $query[] = fgets($file);

                if (preg_match('~' . preg_quote('delimiter', '~') . '\s*([^\s]+)$~iS', end($query), $matches) === 1) {
                    // DELIMITER directive detected
                    array_pop($query); // we don't need this line in the SQL query

                    if ($otherDelimiter = ($matches[1] != $delimiter)) {
                        // a non-default delimiter is now active; keep buffering
                    } else {
                        // this is the default delimiter again: delete the line before
                        // the last (that should be the non-default delimiter) and
                        // close the statement
                        array_pop($query);
                        $query[] = $delimiter;
                    }
                }

                if (!$otherDelimiter && preg_match('~' . preg_quote($delimiter, '~') . '\s*$~iS', end($query)) === 1) {
                    $query = trim(implode('', $query));

                    if (mysql_query($query) === false) {
                        echo '<h3>ERROR: ' . $query . '</h3>' . "\n";
                    } else {
                        echo '<h3>SUCCESS: ' . $query . '</h3>' . "\n";
                    }

                    while (ob_get_level() > 0) {
                        ob_end_flush();
                    }
                    flush();
                }

                if (is_string($query) === true) {
                    $query = array();
                }
            }

            return fclose($file);
        }
    }

    return false;
}
I hope I could help somebody too.
Have a nice day!
http://www.ozerov.de/bigdump/ was very useful for me in importing a 200+ MB SQL file.
Note:
The SQL file should already be present on the server so that the process can complete without any issue.
You can use phpMyAdmin for importing the file. Even if it is huge, just use the UploadDir configuration directive, upload the file there, and choose it from the phpMyAdmin import page. Once file processing gets close to the PHP limits, phpMyAdmin interrupts the import and shows you the import page again with predefined values indicating where to continue the import.
What do you think about:
system("cat xxx.sql | mysql -u username database");
