php fgetcsv multiple lines not only one or all

I want to read big CSV files and insert them into a database. That already works:
if (($handleF = fopen($path."\\".$file, 'r')) !== false) {
    $i = 1;
    // Loop through the file line by line
    while (($dataRow = fgetcsv($handleF, 0, ";")) !== false) {
        // Only start at $startRow, otherwise skip the row
        if ($i >= $startRow) {
            // Check whether to use headers
            if ($lookAtHeaders == 1 && $i == $startRow) {
                $this->createUberschriften(array_map(array($this, "convert"), $dataRow));
            } else {
                $dataRow = array_map(array($this, "convert"), $dataRow);
                $data = $this->changeMapping($dataRow, $startCol);
                $this->executeInsert($data, $tableFields);
            }
            unset($dataRow);
        }
        $i++;
    }
    fclose($handleF);
}
My problem with this solution is that it's very slow. But the files are too big to load entirely into memory... So I want to ask: is there a possibility to read, for example, 10 lines at a time into the $dataRow array, rather than just one or all?
I want to get a better balance between memory and performance.
Do you understand what I mean? Thanks for the help.
Greetz
V
EDIT:
OK, I had to find a solution with the MSSQL database myself. My solution was to stack the data and then run a multi-row MSSQL insert:
while (($dataRow = fgetcsv($handleF, 0, ";")) !== false) {
    // Only start at $startRow, otherwise skip the row
    if ($i >= $startRow) {
        // Check whether to use headers
        if ($lookAtHeaders == 1 && $i == $startRow) {
            $this->createUberschriften(array_map(array($this, "convert"), $dataRow));
        } else {
            $dataRow = array_map(array($this, "convert"), $dataRow);
            $data = $this->changeMapping($dataRow, $startCol);
            $this->setCurrentRow($i);
            array_push($dataStack, $data);
            // Flush the stack once it reaches the batch size
            if (count($dataStack) >= 210) {
                #echo '<pre>', print_r($dataStack), '</pre>';
                $this->executeInsert($dataStack, $tableFields, true);
                // Reset the stack
                $dataStack = array();
            }
            unset($data);
        }
    }
    $i++;
    unset($dataRow);
}
Finally I loop over the stack and build a multi-row insert in the method "executeInsert", to create a query like this:
INSERT INTO [myTable] (field1, field2) VALUES ('data1', 'data2'),('data3', 'data4')...
That works much better. I still have to check for the best balance, but for that I only need to change the value '210' in the code above. I hope this helps everybody with a similar problem.
Attention: don't forget to execute the method "executeInsert" once more after reading the complete file, because there may still be some data left in the stack, and the method is otherwise only executed when the stack reaches the size of 210.
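For anyone wondering, here is a minimal sketch of how such a batched "executeInsert" could look; the escaping, table name, and sqlsrv call are placeholders, not the actual method:
function executeInsert($dataStack, $tableFields, $multiple = false) {
    // Hypothetical sketch; real code must handle types and escaping properly.
    $rows = array();
    foreach ($dataStack as $row) {
        // Double up single quotes for MSSQL string literals
        $escaped = array_map(function ($v) { return str_replace("'", "''", $v); }, $row);
        $rows[] = "('" . implode("','", $escaped) . "')";
    }
    $sql = "INSERT INTO [myTable] (" . implode(", ", $tableFields) . ") VALUES " . implode(",", $rows);
    // e.g. sqlsrv_query($conn, $sql); -- assumes an open sqlsrv connection $conn
}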
Greetz
V

I think your bottleneck is not reading the file, which is just a text file; your bottleneck is the INSERT into the SQL table.
Try something: just comment out the line that actually does the insert and you will see the difference.
I had this same issue in the past, where I did exactly what you are doing: reading a 5+ million line CSV and inserting it into a MySQL table. The execution time was 60 hours, which is unrealistic.
My solution was to switch to another database technology. I selected MongoDB, and the execution time was reduced to 5 minutes. MongoDB performs really fast in these scenarios, and also has a tool called mongoimport that will allow you to import a CSV file directly from the command line.
Give it a try if the db technology is not a limitation on your side.
Another solution would be splitting the huge CSV file into chunks and then running the same PHP script multiple times in parallel, with each instance taking care of the chunks with a specific prefix or suffix in the filename.
I don't know which specific OS you are using, but in Unix/Linux there is a command-line tool called split that will do that for you, and will also add any prefix or suffix you want to the filenames of the chunks, as shown below.
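For example, something like this would split a huge CSV into 100000-line chunks named part_aa, part_ab, and so on (the filename and chunk size are just examples):
split -l 100000 huge.csv part_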

Related

php - for loop repeating itself / going out of sequence

I'm very new to PHP, making errors and learning as I go. Please be gentle! :)
I want to access some data from Blizzard.com's API. For this particular data set, it's not one block of JSON; rather, each object has its own URL to access. I estimate that there are approx 150000 objects, however I don't know the start or end points of the number range, so I'm having to assume 1 and work past the highest number I know (269065).
To get the data, I need to access each object's data via a JSON file, which I read, get the contents of, and drop into a text file (this could be written as an insert into a SQL db too, as I'm able to do this if it's the text file that's the issue). But to be honest, I would love to get to the bottom of why this is happening as much as anything!
I wasn't going to try and run ~250000 iterations in a for loop, so I thought I'd try something I considered small: 2000.
The for loop starts with $a as 1 and uses $a as part of the URL. It loads and decodes the JSON, then checks whether the first field (ID) in the object is set; if it is, it writes a few fields to data.txt, and if it isn't, it just writes $a to data.txt (so I know it's a null, for other purposes not outlined here).
Simple! Or so I thought. After approx 183 iterations, the data written to the text file goes awry, as seen in the quote below. It is out of sequence, starts at 1 again, then jumps back to 184, ad nauseam. The loop then seems to be locked in some kind of infinite loop, outputting in a random order until I close the page 10-20 minutes later.
I have obviously made a big mistake! But I have no idea what I have done wrong to cause this. During my attempts I have rewritten the code with new variable names, so a new run does not conflict with code that could still be running in memory.
I've tried resetting variables to blank at the end of the loop, in case something was being reused that was causing a problem.
If anyone could point out any errors in my code, or suggest something for me to look into to handle bigger loops, that would be brilliant. I am assuming my issue may be a timeout or memory problem. But I don't know where to start, and I was hoping I'd find some suggestions here.
If it's relevant, I am using 000webhostapp.com as my hosting provider for now, until I get some paid-for hosting.
1 ... 182 183 1 184 2 3 185 4 186 5 187 6 188 7 189 190 8 191
for ($a = 1; $a <= 2000; $a++) {
    $json = "https://eu.api.battle.net/wow/recipe/".$a."?locale=en_GB&<MYPRIVATEAPIKEY>";
    $contents = file_get_contents($json);
    $data = json_decode($contents, true);
    if (isset($data['id'])) {
        $file = fopen("data.txt", "a");
        fwrite($file, $data['id'].",'".$data['name']."'\n");
        fclose($file);
    } else {
        $file = fopen("data.txt", "a");
        fwrite($file, $a."\n");
        fclose($file);
    }
}
The content of the file I'm trying to access is
{"id":33994,"name":"Precise Strikes","profession":"Enchanting","icon":"spell_holy_greaterheal"}
I scrapped the original plan and wrote this instead. Thank you again to everyone who took the time out of their day to help and offer suggestions!
$b = $mysqli->query("SELECT id FROM `static_recipes` ORDER BY id DESC LIMIT 1;")->fetch_object()->id;
if (empty($b)) { $b = 1; }
$count = $b + 101;
$write = [];
for ($a = $b + 1; $a < $count; $a++) {
    $json = "https://eu.api.battle.net/wow/recipe/".$a."?locale=en_GB&apikey=";
    $contents = @file_get_contents($json);
    $data = json_decode($contents, true);
    if (isset($data['id'])) {
        $write[] = "(".$data['id'].",'".addslashes($data['name'])."','".addslashes($data['profession'])."','".addslashes($data['icon'])."')";
    } else {
        $write[] = "(".$a.",'a','a','a')";
    }
}
$SQL = 'INSERT INTO `static_recipes` (id, name, profession, icon) VALUES '.implode(',', $write);
$mysqli->query($SQL);
$mysqli->close();
$write = [];
for ($a = 1; $a <= 2000; $a++) {
    $json = "https://eu.api.battle.net/wow/".$a."?locale=en_GB&<MYPRIVATEAPIKEY>";
    $contents = file_get_contents($json);
    $data = json_decode($contents, true);
    if (isset($data['id'])) {
        $write[] = $data['id'].",'".$data['name']."'\n";
    } else {
        $write[] = $a."\n";
    }
}
$file = fopen("data.txt", "a");
fwrite($file, implode('', $write));
fclose($file);
Also, why do you think that some IDs aren't duplicated across the several "https://eu.api.battle.net/wow/[N]" URLs' data?
Also, if you are going to run ~250000 requests, think about curl_multi_init(): http://php.net/manual/en/function.curl-multi-init.php
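For illustration, a minimal curl_multi sketch that fetches a batch of those URLs in parallel (the batch size and URL handling are assumptions):
// Hypothetical sketch: fetch a batch of recipe URLs concurrently.
$urls = array();
for ($a = 1; $a <= 100; $a++) {
    $urls[$a] = "https://eu.api.battle.net/wow/recipe/".$a."?locale=en_GB&apikey=...";
}
$mh = curl_multi_init();
$handles = array();
foreach ($urls as $a => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$a] = $ch;
}
// Run all handles until every transfer has finished
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($running && $status == CURLM_OK);
foreach ($handles as $a => $ch) {
    $data = json_decode(curl_multi_getcontent($ch), true);
    // ... handle $data exactly as in the single-request version ...
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);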
I can't really see anything obviously wrong with your code; I can't run it, though, as I don't have the JSON.
It could be possible that there is some kind of race condition since you're opening and closing the same file hundreds of times very quickly.
File operations might seem atomic but not necessarily so - here's an interesting SO thread:
Does PHP wait for filesystem operations (like file_put_contents) to complete before moving on?
Like some others suggested, maybe just open the file before you enter the loop, then close the file when the loop finishes.
I'd try it first and see if it helps.
There's nothing in your original code that would cause that sort of behaviour. PHP will not arbitrarily change the value of a variable. You are opening this file in append mode; are you certain that you're not looking at old data? Maybe output some debug messages as you process the data. It's likely you'd run up against some rate limiting on the API server, so putting a pause in there somewhere may improve reliability.
The only substantive change I'd suggest to your code is opening the file once and closing it when you're done.
$file = fopen("data_1_2000.txt", "w");
for ($a = 1; $a <= 2000; $a++) {
$json = "https://eu.api.battle.net/wow/recipe/$a?locale=en_GB&<MYPRIVATEAPIKEY>";
$contents = file_get_contents($json);
$data = json_decode($contents, true);
if (!empty($data['id'])) {
$data["name"] = str_replace("'", "\\'", $data["name"]);
$record = "$data[id],'$data[name]'";
} else {
$record = $a;
}
fwrite($file, "$record\n");
sleep(1);
echo "$a "; if ($a % 50 === 0) echo "\n";
}
fclose($file);

how to use fopen, fgets, fclose properly? It works fine but sometimes the count just goes down to a random number

It works fine, but sometimes the count just goes down to a random number. My guess is that my code cannot process multiple visits at a time.
The code below is where the increment happens and where it displays the count.
<?php
$args_loveteam = array('child_of' => 474);
$loveteam_children = get_categories($args_loveteam);
if (in_category('loveteams', $post->ID)) {
    foreach ($loveteam_children as $loveteam_child) {
        $post_slug = $loveteam_child->slug;
        echo "<script>console.log('".$post_slug."');</script>";
        if (in_category($loveteam_child->name)) {
            /* counter */
            // Open file to read the saved hit count
            if ($loveteam_child->slug == "loveteam-mayward") {
                $datei = fopen($_SERVER['DOCUMENT_ROOT']."/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-".$post_slug."-2.txt", "r");
            } else {
                $datei = fopen($_SERVER['DOCUMENT_ROOT']."/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-".$post_slug.".txt", "r");
            }
            $count = fgets($datei, 1000);
            fclose($datei);
            $count = $count + 1;
            // Open file to write the new hit count
            if ($loveteam_child->slug == "loveteam-mayward") {
                $datei = fopen($_SERVER['DOCUMENT_ROOT']."/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-".$post_slug."-2.txt", "w");
            } else {
                $datei = fopen($_SERVER['DOCUMENT_ROOT']."/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-".$post_slug.".txt", "w");
            }
            fwrite($datei, $count);
            fclose($datei);
        }
    }
}
?>
I would at least change your code to this
foreach ($loveteam_children as $loveteam_child) {
    $post_slug = $loveteam_child->slug;
    echo "<script>console.log('".$post_slug."');</script>";
    if ($loveteam_child->slug == "loveteam-mayward") {
        $filename = "{$_SERVER['DOCUMENT_ROOT']}/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-{$post_slug}-2.txt";
    } else {
        $filename = "{$_SERVER['DOCUMENT_ROOT']}/wp-content/themes/inside-showbiz-Vfeb13.ph-updated/countlog-{$post_slug}.txt";
    }
    $count = file_get_contents($filename);
    file_put_contents($filename, ++$count, LOCK_EX);
}
You could also try flock on the file to get a lock before modifying it. That way if another process comes along it has to wait on the first one. But file_put_contents works great for things like logging where you may have many processes competing for the same file.
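For example, a minimal sketch of a locked read-increment-write (the mode and error handling are simplified):
// Hypothetical sketch: hold an exclusive lock across the read and the write.
$fp = fopen($filename, 'c+'); // 'c+' creates the file if missing, without truncating
if ($fp !== false && flock($fp, LOCK_EX)) {
    $count = (int) stream_get_contents($fp);
    ftruncate($fp, 0);
    rewind($fp);
    fwrite($fp, (string) ($count + 1));
    fflush($fp);
    flock($fp, LOCK_UN);
}
if ($fp !== false) {
    fclose($fp);
}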
Database should be ok, but even that may not be fast enough. It shouldn't mess up your data though.
Anyway hope it helps. This is kind of an odd question, concurrency can be a real pain if you have a high chance of process collisions and race conditions etc etc.
However, as I mentioned (in the comments), using the filesystem is probably not going to provide the consistency you need. Probably the best fit for this is some kind of in-memory storage such as Redis. But that is hard to say without fully knowing what you use it for, for example whether it should persist across a server reboot.
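As a sketch, with the phpredis extension an atomic counter is a one-liner (the host and key naming here are assumptions):
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
// INCR is atomic, so concurrent visitors cannot lose increments
$count = $redis->incr('countlog-' . $post_slug);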
Hope it helps, good luck.

Handle Large File with PHP

I have a file with a size of around 10 GB or more. The file contains only numbers from 1 to 10, one per line, and nothing else. Now the task is to read the numbers from the file, sort them in ascending or descending order, and create a new file with the sorted numbers.
Can any of you please help me with the answer?
I'm assuming this is some kind of homework, and the goal is to sort more data than you can hold in your RAM?
Since you only have the numbers 1-10, this is not that complicated a task. Just open your input file and count how many occurrences of every specific number you have. After that you can construct a simple loop and write the values into another file. The following example is pretty self-explanatory.
$inFile = '/path/to/input/file';
$outFile = '/path/to/output/file';
$input = fopen($inFile, 'r');
if ($input === false) {
    throw new Exception('Unable to open: ' . $inFile);
}
// $map will be an array of size 10, filled with 0s
$map = array_fill(1, 10, 0);
// Read the file line by line and count how many of each specific number you have
while (($line = fgets($input)) !== false) {
    $int = (int) $line;
    if ($int >= 1 && $int <= 10) {
        $map[$int]++;
    }
}
fclose($input);
$output = fopen($outFile, 'w');
if ($output === false) {
    throw new Exception('Unable to open: ' . $outFile);
}
/*
 * Reverse the array (preserving keys) if you need to change direction
 * between ascending and descending order
 */
//$map = array_reverse($map, true);
// Write the values into your output file
foreach ($map as $number => $count) {
    $string = ((string) $number) . PHP_EOL;
    for ($i = 0; $i < $count; $i++) {
        fwrite($output, $string);
    }
}
fclose($output);
Taking into account the fact that you are dealing with huge files, you should also check the script execution time limit for your PHP environment. The example above will take VERY long for 10GB+ files, but since I didn't see any limitations concerning execution time and performance in your question, I'm assuming it is OK.
I had a similar issue before. Trying to manipulate such a large file ended up being a huge drain on resources, and it couldn't cope. The easiest solution I ended up with was to import it into a MySQL database using a fast data-dump function called LOAD DATA INFILE:
http://dev.mysql.com/doc/refman/5.1/en/load-data.html
Once it's in you should be able to manipulate the data.
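For this particular file, the import could be sketched like this (the table and file names are assumptions):
LOAD DATA INFILE '/tmp/numbers.txt'
INTO TABLE numbers
LINES TERMINATED BY '\n'
(n);
After that, SELECT n FROM numbers ORDER BY n gives you the sorted output to write back out.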
Alternatively, you could just read the file line by line while outputting the result into another file line by line with the sorted numbers. Not too sure how well this would work though.
Have you had any previous attempts at it or are you just after a possible method of doing it?
If that's all, you don't need PHP (if you have a Linux machine at hand):
sort -n file > file_sorted-asc
sort -nr file > file_sorted-desc
Edit: OK, here's your solution in PHP (if you have a Linux machine at hand):
<?php
// Sort ascending
`sort -n file > file_sorted-asc`;
// Sort descending
`sort -nr file > file_sorted-desc`;
?>
:)

Crunch lots of files to generate stats file

I have a bunch of files I need to crunch and I'm worrying about scalability and speed.
The filename and file data (only the first line) are stored in an array in RAM, to create some statistics files later in the script.
The files must remain files and can't be put into a database.
The filename are formatted in the following fashion :
Y-M-D-title.ext (where Y is Year, M for Month and D for Day)
I'm currently using glob to list all the files and create my array.
Here is a sample of the code creating the array "for year" or "month" (it's used in a function with only one parameter -> $period):
[...]
function create_data_info($period = NULL) {
    $data = array();
    $files = glob(ROOT_DIR.'/'.'*.ext');
    $size = sizeof($files);
    $existing_title = array(); // Used so we can handle having the same title twice at different dates
    if (isset($period)) {
        if ("year" === $period) {
            for ($i = 0; $i < $size; $i++) {
                $info = extract_info($files[$i], $existing_title);
                // Create the data array with all the data ordered by year/month/day
                $data[(int)$info[5]][] = $info;
                unset($info);
            }
        } elseif ("month" === $period) {
            for ($i = 0; $i < $size; $i++) {
                $info = extract_info($files[$i], $existing_title);
                $key = $info[5].$info[6];
                // Create the data array with all the data ordered by year/month/day
                $data[(int)$key][] = $info;
                unset($info);
            }
        }
    }
    [...]
}
function extract_info($file, &$existing) {
    $full_path_file = $file;
    $file = basename($file);
    $info_file = explode("-", $file, 4);
    $filetitle = explode(".", $info_file[3]);
    $info[0] = $filetitle[0];
    if (!isset($existing[$info[0]]))
        $existing[$info[0]] = -1;
    $existing[$info[0]] += 1;
    if ($existing[$info[0]] > 0) {
        // We have already found a post with this title.
        // The creation of the cache is based on info[4] data for the filename,
        // so we need to tune it.
        $info[0] = $info[0]."-".$existing[$info[0]];
    }
    $info[1] = $info_file[3];
    $info[2] = $full_path_file;
    $post_content = file(ROOT_DIR.'/'.$file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $info[3] = $post_content[0]; // first line of the file
    unset($post_content);
    $info[4] = filemtime(ROOT_DIR.'/'.$file);
    $info[5] = $info_file[0]; // year
    $info[6] = $info_file[1]; // month
    $info[7] = $info_file[2]; // day
    return $info;
}
So in my script I only call create_data_info(PERIOD) (PERIOD being "year", "month", etc.).
It returns an array filled with the info I need, and then I can loop through it to create my statistics files.
This process is done every time the PHP script is launched.
My question is: is this code optimal (certainly not), and what can I do to squeeze some juice from my code?
I don't know how I can cache this (even if it's possible), as there is a lot of I/O involved.
I can change the tree structure if it would change things compared to a flat structure, but from what I found out with my tests, flat seems to be the best.
I already thought about creating a little "booster" in C doing only the crunching, but since it's I/O bound, I don't think it would make a huge difference, and the application would be a lot less compatible for shared-hosting users.
Thank you very much for your input, I hope I was clear enough here. Let me know if you need clarification (and forgive my English mistakes).
To begin with, you should use DirectoryIterator instead of the glob function. When it comes to scandir vs opendir vs glob, glob is as slow as it gets.
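For example, a quick sketch of listing the same *.ext files with DirectoryIterator (assuming the ROOT_DIR constant from your code):
$files = array();
foreach (new DirectoryIterator(ROOT_DIR) as $fileInfo) {
    // getExtension() needs PHP 5.3.6+; use pathinfo() on older versions
    if ($fileInfo->isFile() && $fileInfo->getExtension() === 'ext') {
        $files[] = $fileInfo->getPathname();
    }
}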
Also, when you are dealing with a large number of files, you should try to do all your processing inside one loop; PHP function calls are rather slow.
I see you are using unset($info); yet in every loop iteration $info gets a new value. PHP does its own garbage collection, if that's your concern. unset is a language construct, not a function, and should be pretty fast, but when it's not needed it still makes the whole thing a bit slower.
You are passing $existing as a reference. Is there a practical reason for this? In my experience, references make things slower.
And lastly, your script seems to do a lot of string processing. You might want to consider some kind of "serialize data and base64 encode/decode" solution, but you should benchmark that specifically; it might be faster or slower depending on your whole code. (My thinking is that serialize/unserialize MIGHT run faster, as these are native PHP functions, while custom functions with string processing are slower.)
My answer was not very I/O related, but I hope it was helpful.

Best practice: Import mySQL file in PHP; split queries

I have a situation where I have to update a web site on a shared hosting provider. The site has a CMS. Uploading the CMS's files is pretty straightforward using FTP.
I also have to import a big (relative to the confines of a PHP script) database file (Around 2-3 MB uncompressed). Mysql is closed for access from the outside, so I have to upload a file using FTP, and start a PHP script to import it. Sadly, I do not have access to the mysql command line function so I have to parse and query it using native PHP. I also can't use LOAD DATA INFILE. I also can't use any kind of interactive front-end like phpMyAdmin, it needs to run in an automated fashion. I also can't use mysqli_multi_query().
Does anybody know of, or have, an already-coded, simple solution that reliably splits such a file into single queries (there could be multi-line statements) and runs them? I would like to avoid fiddling with it myself due to the many gotchas I'm likely to come across (how to detect whether a field delimiter is part of the data; how to deal with line breaks in memo fields; and so on). There must be a ready-made solution for this.
Here is a memory-friendly function that should be able to split a big file into individual queries without needing to open the whole file at once:
function SplitSQL($file, $delimiter = ';')
{
    set_time_limit(0);
    if (is_file($file) === true) {
        $file = fopen($file, 'r');
        if (is_resource($file) === true) {
            $query = array();
            while (feof($file) === false) {
                $query[] = fgets($file);
                if (preg_match('~' . preg_quote($delimiter, '~') . '\s*$~iS', end($query)) === 1) {
                    $query = trim(implode('', $query));
                    if (mysql_query($query) === false) {
                        echo '<h3>ERROR: ' . $query . '</h3>' . "\n";
                    } else {
                        echo '<h3>SUCCESS: ' . $query . '</h3>' . "\n";
                    }
                    while (ob_get_level() > 0) {
                        ob_end_flush();
                    }
                    flush();
                }
                if (is_string($query) === true) {
                    $query = array();
                }
            }
            return fclose($file);
        }
    }
    return false;
}
I tested it on a big phpMyAdmin SQL dump and it worked just fine.
Some test data:
CREATE TABLE IF NOT EXISTS "test" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"name" TEXT,
"description" TEXT
);
BEGIN;
INSERT INTO "test" ("name", "description")
VALUES (";;;", "something for you mind; body; soul");
COMMIT;
UPDATE "test"
SET "name" = "; "
WHERE "id" = 1;
And the respective output:
SUCCESS: CREATE TABLE IF NOT EXISTS "test" ( "id" INTEGER PRIMARY KEY AUTOINCREMENT, "name" TEXT, "description" TEXT );
SUCCESS: BEGIN;
SUCCESS: INSERT INTO "test" ("name", "description") VALUES (";;;", "something for you mind; body; soul");
SUCCESS: COMMIT;
SUCCESS: UPDATE "test" SET "name" = "; " WHERE "id" = 1;
Single-page phpMyAdmin alternative: Adminer - just one PHP script file.
Check: http://www.adminer.org/en/
When StackOverflow released their monthly data dump in XML format, I wrote PHP scripts to load it into a MySQL database. I imported about 2.2 gigabytes of XML in a few minutes.
My technique is to prepare() an INSERT statement with parameter placeholders for the column values, then use XMLReader to loop over the XML elements and execute() my prepared query, plugging in values for the parameters. I chose XMLReader because it's a streaming XML reader; it reads the XML input incrementally instead of requiring the whole file to be loaded into memory.
You could also read a CSV file one line at a time with fgetcsv().
If you're importing into InnoDB tables, I recommend starting and committing transactions explicitly, to reduce the overhead of autocommit. I commit every 1000 rows, but this is arbitrary.
I'm not going to post the code here (because of StackOverflow's licensing policy), but in pseudocode:
connect to database
open data file
PREPARE parameterized INSERT statement
begin first transaction
loop, reading lines from data file: {
parse line into individual fields
EXECUTE prepared query, passing data fields as parameters
if ++counter % 1000 == 0,
commit transaction and begin new transaction
}
commit final transaction
Writing this code in PHP is not rocket science, and it runs pretty quickly when one uses prepared statements and explicit transactions. Those features are not available in the outdated mysql PHP extension, but you can use them if you use mysqli or PDO_MySQL.
I also added convenient stuff like error checking, progress reporting, and support for default values when the data file doesn't include one of the fields.
I wrote my code in an abstract PHP class that I subclass for each table I need to load. Each subclass declares the columns it wants to load, and maps them to fields in the XML data file by name (or by position if the data file is CSV).
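For illustration only, here is a minimal PDO sketch along those lines for a CSV input (the connection details, table, and columns are placeholders, not the actual code):
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$stmt = $pdo->prepare('INSERT INTO mytable (col1, col2) VALUES (?, ?)');
$handle = fopen('data.csv', 'r');
$counter = 0;
$pdo->beginTransaction();
while (($row = fgetcsv($handle)) !== false) {
    $stmt->execute(array($row[0], $row[1]));
    // Commit every 1000 rows to reduce autocommit overhead
    if (++$counter % 1000 == 0) {
        $pdo->commit();
        $pdo->beginTransaction();
    }
}
$pdo->commit(); // final transaction
fclose($handle);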
Can't you install phpMyAdmin, gzip the file (which should make it much smaller) and import it using phpMyAdmin?
EDIT: Well, if you can't use phpMyAdmin, you can use the code from phpMyAdmin. I'm not sure about this particular part, but it's generally nicely structured.
Export
The first step is getting the input into a sane format for parsing when you export it. From your question, it appears that you have control over the exporting of this data, but not the importing.
~: mysqldump test --opt --skip-extended-insert | grep -v '^--' | grep . > test.sql
This dumps the test database, excluding all comment lines and blank lines, into test.sql. It also disables extended inserts, meaning there is one INSERT statement per line. This will help limit the memory usage during the import, but at the cost of import speed.
Import
The import script is as simple as this:
<?php
$mysqli = new mysqli('localhost', 'hobodave', 'p4ssw3rd', 'test');
$handle = fopen('test.sql', 'rb');
if ($handle) {
    while (!feof($handle)) {
        // This assumes you don't have a row that is > 1MB (1000000),
        // which is unlikely given the size of your DB.
        // Note that it has a DIRECT effect on your script's memory usage.
        $buffer = stream_get_line($handle, 1000000, ";\n");
        $mysqli->query($buffer);
    }
}
echo "Peak MB: ", memory_get_peak_usage(true)/1024/1024;
This will utilize an absurdly low amount of memory as shown below:
daves-macbookpro:~ hobodave$ du -hs test.sql
15M test.sql
daves-macbookpro:~ hobodave$ time php import.php
Peak MB: 1.75
real 2m55.619s
user 0m4.998s
sys 0m4.588s
What that says is you processed a 15MB mysqldump with a peak RAM usage of 1.75 MB in just under 3 minutes.
Alternate Export
If you have a high enough memory_limit and this is too slow, you can try the following export instead:
~: mysqldump test --opt | grep -v '^--' | grep . > test.sql
This will allow extended inserts, which insert multiple rows in a single query. Here are the statistics for the same database:
daves-macbookpro:~ hobodave$ du -hs test.sql
11M test.sql
daves-macbookpro:~ hobodave$ time php import.php
Peak MB: 3.75
real 0m23.878s
user 0m0.110s
sys 0m0.101s
Notice that it uses over 2x the RAM at 3.75 MB, but takes about 1/6th as long. I suggest trying both methods and seeing which suits your needs.
Edit:
I was unable to get a newline to appear literally in any mysqldump output using any of CHAR, VARCHAR, BINARY, VARBINARY, and BLOB field types. If you do have BLOB/BINARY fields though then please use the following just in case:
~: mysqldump5 test --hex-blob --opt | grep -v '^--' | grep . > test.sql
Can you use LOAD DATA INFILE?
If you format your db dump file using SELECT INTO OUTFILE, this should be exactly what you need. No reason to have PHP parse anything.
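For example, a dump formatted this way could be produced and re-imported like so (the names are placeholders):
SELECT * FROM mytable
INTO OUTFILE '/tmp/mytable.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/mytable.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';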
I ran into the same problem. I solved it using a regular expression:
function splitQueryText($query) {
    // the regex needs a trailing semicolon
    $query = trim($query);
    if (substr($query, -1) != ";")
        $query .= ";";
    // i spent 3 days figuring out this line
    preg_match_all("/(?>[^;']|(''|(?>'([^']|\\')*[^\\\]')))+;/ixU", $query, $matches, PREG_SET_ORDER);
    $querySplit = array();
    foreach ($matches as $match) {
        // get rid of the trailing semicolon
        $querySplit[] = substr($match[0], 0, -1);
    }
    return $querySplit;
}

$queryList = splitQueryText($inputText);
foreach ($queryList as $query) {
    $result = mysql_query($query);
}
Already answered: Loading .sql files from within PHP
Also:
http://webxadmin.free.fr/article/import-huge-mysql-dumps-using-php-only-342.php
http://www.phpbuilder.com/board/showthread.php?t=10323180
http://forums.tizag.com/archive/index.php?t-3581.html
Splitting a query cannot be reliably done without parsing. Here is valid SQL that would be impossible to split correctly with a regular expression.
SELECT ";"; SELECT ";\"; a;";
SELECT ";
abc";
I wrote a small SqlFormatter class in PHP that includes a query tokenizer. I added a splitQuery method to it that splits all queries (including the above example) reliably.
https://github.com/jdorn/sql-formatter/blob/master/SqlFormatter.php
You can remove the format and highlight methods if you don't need them.
One downside is that it requires the whole sql string to be in memory, which could be a problem if you're working with huge sql files. I'm sure with a little bit of tinkering, you could make the getNextToken method work on a file pointer instead.
First of all, thanks for this topic. It saved a lot of time for me :)
And let me make a little fix to your code.
Sometimes, if TRIGGERS or PROCEDURES are in the dump file, it is not enough to examine the ; delimiters.
In this case the SQL code may contain DELIMITER [something], to say that the statement will not end with ; but with [something]. For example, a section in xxx.sql:
DELIMITER //
CREATE TRIGGER `mytrigger` BEFORE INSERT ON `mytable`
FOR EACH ROW BEGIN
SET NEW.`create_time` = NOW();
END
//
DELIMITER ;
So first we need a flag to detect that the query does not end with ;, and we must delete the unwanted DELIMITER chunks, because mysql_query does not need a delimiter (the delimiter is simply the end of the string). So mysql_query needs something like this:
CREATE TRIGGER `mytrigger` BEFORE INSERT ON `mytable`
FOR EACH ROW BEGIN
SET NEW.`create_time` = NOW();
END;
So, after a little work, here is the fixed code:
function SplitSQL($file, $delimiter = ';')
{
    set_time_limit(0);
    $matches = array();
    $otherDelimiter = false;
    if (is_file($file) === true) {
        $file = fopen($file, 'r');
        if (is_resource($file) === true) {
            $query = array();
            while (feof($file) === false) {
                $query[] = fgets($file);
                if (preg_match('~' . preg_quote('delimiter', '~') . '\s*([^\s]+)$~iS', end($query), $matches) === 1) {
                    // DELIMITER DIRECTIVE DETECTED
                    array_pop($query); // WE DON'T NEED THIS LINE IN THE SQL QUERY
                    if ($otherDelimiter = ($matches[1] != $delimiter)) {
                        // A NON-DEFAULT DELIMITER IS NOW ACTIVE
                    } else {
                        // THIS IS THE DEFAULT DELIMITER: DELETE THE LINE BEFORE THE LAST
                        // (THAT SHOULD BE THE NON-DEFAULT DELIMITER) AND CLOSE THE STATEMENT
                        array_pop($query);
                        $query[] = $delimiter;
                    }
                }
                if (!$otherDelimiter && preg_match('~' . preg_quote($delimiter, '~') . '\s*$~iS', end($query)) === 1) {
                    $query = trim(implode('', $query));
                    if (mysql_query($query) === false) {
                        echo '<h3>ERROR: ' . $query . '</h3>' . "\n";
                    } else {
                        echo '<h3>SUCCESS: ' . $query . '</h3>' . "\n";
                    }
                    while (ob_get_level() > 0) {
                        ob_end_flush();
                    }
                    flush();
                }
                if (is_string($query) === true) {
                    $query = array();
                }
            }
            return fclose($file);
        }
    }
    return false;
}
I hope I could help somebody too.
Have a nice day!
http://www.ozerov.de/bigdump/ was very useful for me in importing a 200+ MB SQL file.
Note: the SQL file should already be present on the server, so that the process can complete without any issue.
You can use phpMyAdmin for importing the file. Even if it is huge, just use the UploadDir configuration directive, upload the file there, and choose it from the phpMyAdmin import page. Once file processing gets close to the PHP limits, phpMyAdmin interrupts the import and shows you the import page again, with predefined values indicating where to continue the import.
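For reference, that directive lives in config.inc.php, e.g. (the directory name is up to you):
$cfg['UploadDir'] = 'upload';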
What do you think about:
system("cat xxx.sql | mysql -u username database");
