PHP incrementing all IDs in a CSV

Thanks for reading!
I have an app that allows people to add, edit and delete items in a CSV. I've encountered a bug: if there are non-unique IDs and you try to edit or delete one of them, the system edits or deletes all of them, because it parses through the spreadsheet to find the ID. The ID also corresponds to the object's order when the gallery is used, so the user must be able to change it.
The solution I've come up with is quite simple: should the user edit an object and change its ID to one that already exists, the system will take all of the objects with an ID greater than or equal to the new ID and increment them all by one.
The following code is the if statement that checks whether the ID already exists:
if($exists == "true") //does the $newImageID already exist in the gallery?
{
    $table = fopen($fullURL,'r'); //$fullURL is the location of the CSV tested and works
    $temp_table_two = fopen($tempURL,'w');
    while (!feof($temp_table_two)) {
        $getid = fgetcsv($temp_table_two, 1024);
        if($getid[0] >= $newImageID)
        {
            // $getid[0]++; //increase id in temp_table_two by 1 if it is > $newImageID
            echo $getid[0];
        }
    }
    fclose($table);
    fclose($temp_table);
    rename($tempURL,$fullURL);
}
This code takes place after fopen and before fclose. In context, $exists is either "true" or "false" (I will change it to a boolean later on). The while loop parses through my temp table (opened with fopen), and if the first column of a row (the ID) is greater than or equal to the new ID, it is incremented. This means the new object gets "slotted in", so to speak, and pushes the rest down.
Strangely, my request times out after a long spinner when I execute this code, and I have no idea what the problem is.
Thanks for all your help in advance.
EDIT: I have found that the source of the problem is the while loop itself. Even if I comment everything out like this:
while (!feof($temp_table_two)) {
    $getid = fgetcsv($temp_table_two, 1024);
    // if($getid[0] >= $newImageID)
    // {
    //     // $getid[0]++; //increase id in temp_table_two by 1 if it is > $newImageID
    //     echo $getid[0];
    // }
}
the code still doesn't work, even though the only thing left to run is a loop that doesn't do anything.
EDIT 2:
Following an answer, I did away with the temp table and now work on the table itself. This if statement is executed BEFORE adding the new data with its ID:
if($exists == "true") //does the $newImageID already exist in the gallery?
{
    $table = fopen($fullURL,'r+');
    while (!feof($table)) {
        $getid = fgetcsv($table, 1024);
        if($getid[0] >= $newImageID)
        {
            echo $getid[0];
            $getid[0]++; //increase the id by 1 if it is >= $newImageID
        }
    }
    fclose($table);
}
The code no longer times out, but the values in $getid[0] are not incremented. I have echoed them, and it does echo all of the IDs greater than or equal to my $newImageID, but $getid[0]++; doesn't seem to affect the CSV at all.

You are testing whether you have reached the end of the temp file, and that's wrong. You need to check the origin file and also read from it!
while (!feof($table)) {
    $getid = fgetcsv($table, 1024);

Try this:
if ($csv = fopen($temp_table_two, 'r+')) do {
    $getid = fgetcsv($csv, 1024);
    if($getid[0] >= $newImageID)
    {
        echo $getid[0]; // $getid[0]++;
    }
} while (!feof($csv));
That will prevent your while loop from timing out by getting stuck in an infinite loop when there is a problem opening the file. feof() only returns true once it reaches EOF; it returns false otherwise, so if the handle never yields data the loop can never break out.
For actually writing your data back to the CSV file, your current code won't work, because fgetcsv() just gives you an array representation of a CSV line from the file. Writing to that array only changes the array, not the file.
For that, see this similar answer: Append data to middle line/row of a csv instead of the last line or row
or
http://php.net/manual/en/function.fputcsv.php
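For illustration, a minimal sketch of that write-back approach (not the asker's code; it assumes the CSV fits in memory and that the ID is always the first column, and reuses $fullURL and $newImageID from the question):
// Sketch: read every row, bump IDs that are >= the new one, then rewrite the
// whole file with fputcsv(). Assumes the CSV is small enough to hold in memory.
$rows = [];
if (($in = fopen($fullURL, 'r')) !== false) {
    while (($row = fgetcsv($in, 1024)) !== false) {
        if (isset($row[0]) && (int)$row[0] >= (int)$newImageID) {
            $row[0] = (int)$row[0] + 1; // shift this ID down one slot
        }
        $rows[] = $row;
    }
    fclose($in);
}
if (($out = fopen($fullURL, 'w')) !== false) { // 'w' truncates, so every row is rewritten
    foreach ($rows as $row) {
        fputcsv($out, $row);
    }
    fclose($out);
}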

Related

PHP - Trying to show Next and Previous file from the same directory

As the title says, I'm trying to get the next and previous file from the same directory, so I did something like this. Is there any better way of doing it? (This is from the next auto index file.php code for related files; I have changed it for my needs.)
db screenshot if you want to look - ibb.co/wzkDxd3
$title = $file->name; //get the current file name
$in_dir = $file->indir; //current dir id
$r_file = $db->select("SELECT * FROM `". MAI_PREFIX ."files` WHERE `indir`='$in_dir'"); //all of the files from the current dir
$rcount = count($r_file);
$related = '';
if($rcount > 2){
    $i = 0; // temp variable
    foreach($r_file as $key => $r){ //foreach the array to get the key
        if($r->name == $title){ //trying to get the current file key number
            $next = $key+1; //getting next and prev file key number
            $prv  = $key-1;
            foreach($r_file as $keyy => $e){ //loop the file list again to get the prev file
                if($prv == $keyy){
                    $related .= $e->name;
                }
            }
            foreach($r_file as $keyy => $e){ //same for the next file
                if($next == $keyy){
                    $related .= $e->name;
                }
            }
        }
    }
}
Without knowing your DB background and use case, it should still be possible to use something like $r_file[$key], $r_file[$next] and $r_file[$prv] to directly access the specific elements. So at least two of your foreach loops could be avoided.
Please note that nesting loops is extremely inefficient. E.g., if your $r_file contains 100 elements, your original code would perform 10,000 iterations (100 times 100)!
Also, you should leave a loop as soon as possible once its task is done. You can use break to do this.
Example, based on the relevant part of your code and how I understand it is supposed to work:
foreach($r_file as $key => $r){ //foreach the array to get the key
    if($r->name == $title) { //trying to get the current file key number
        $next = $key+1; //getting next and prev file key number
        $prv  = $key-1;
        $related .= $r_file[$prv]->name;  //directly accessing the previous file
        $related .= $r_file[$next]->name; //directly accessing the next file
        break; //don't go on with the rest of the elements, if we're already done
    }
}
Possibly, looping through all the elements to compare $r->name == $title could also be avoided by using some numbering mechanisms, but without knowing your system better, I can't tell anything more about that.
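For example, if $r_file turns out to be a plain zero-indexed array of row objects (an assumption, depending on what $db->select() returns), the search loop itself could be replaced by an index lookup:
// Hypothetical sketch: find the current file's position without a manual loop.
// Assumes $r_file is a zero-indexed array of objects with a ->name property.
$names = array_column($r_file, 'name');      // works on arrays of objects in PHP 7+
$key   = array_search($title, $names, true); // position of the current file, or false

if ($key !== false) {
    $related = '';
    if (isset($r_file[$key - 1])) {
        $related .= $r_file[$key - 1]->name; // previous file
    }
    if (isset($r_file[$key + 1])) {
        $related .= $r_file[$key + 1]->name; // next file
    }
}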

php - for loop repeating itself / going out of sequence

I'm very new to PHP, making errors and learning as I go. Please be gentle! :)
I want to access some data from Blizzard.com's API. For this particular data set it's not one block of JSON; rather, each object has its own URL. I estimate there are approximately 150,000 objects, but I don't know the start or end points of the number range, so I'm assuming 1 and working past the highest number I know of (269065).
To get the data, I need to access each object's JSON file, read it, get the contents and drop them into a text file (this could also be written as an INSERT into a SQL db, which I'm able to do if the text file is the issue). But to be honest, I would love to get to the bottom of why this is happening as much as anything!
I wasn't going to try and run ~250,000 iterations in a for loop, so I thought I'd try something I considered small: 2,000.
The for loop starts with $a as 1, uses $a as part of the URL, loads and decodes the JSON, and checks whether the first field (id) in the object is set. If it is, it writes a few fields to data.txt; if it isn't, it just writes $a to data.txt (so I know it's a null, for other purposes not outlined here).
Simple! Or so I thought. After approximately 183 iterations, the data written to the text file goes awry, as seen in the quote below. It goes out of sequence, starts at 1 again, then back to 184, ad nauseam. The loop then seems to be locked in some kind of infinite loop, outputting in a random order until I close the page 10-20 minutes later.
I have obviously made a big mistake! But I have no idea what I have done wrong to cause this. During my attempts I have rewritten the code with new variable names, so a new test does not conflict with code that could still be running in memory.
I've tried resetting variables to blank at the end of the loop in case something was being reused and causing a problem.
If anyone could point out any errors in my code, or suggest something for me to look into to handle bigger loops, that would be brilliant. I am assuming my issue may be a timeout or memory problem, but I don't know where to start and was hoping I'd find some suggestions here.
If it's relevant, I am using 000webhostapp.com as my host provider for now, until I get some paid-for hosting.
1 ... 182 183 1 184 2 3 185 4 186 5 187 6 188 7 189 190 8 191
for ($a = 1; $a <= 2000; $a++) {
    $json = "https://eu.api.battle.net/wow/recipe/".$a."?locale=en_GB&<MYPRIVATEAPIKEY>";
    $contents = file_get_contents($json);
    $data = json_decode($contents,true);
    if (isset($data['id'])) {
        $file = fopen("data.txt","a");
        fwrite($file,$data['id'].",'".$data['name']."'\n");
        fclose($file);
    } else {
        $file = fopen("data.txt","a");
        fwrite($file,$a."\n");
        fclose($file);
    }
}
The content of the file I'm trying to access is
{"id":33994,"name":"Precise Strikes","profession":"Enchanting","icon":"spell_holy_greaterheal"}
I scrapped the original plan and wrote this instead. Thank you again to everyone who took the time out of their day to help and offer suggestions!
$b = $mysqli->query("SELECT id FROM `static_recipes` ORDER BY id DESC LIMIT 1;")->fetch_object()->id;
if (empty($b)) {$b = 1;};
$count = $b+101;
$write = [];
for ($a = $b+1; $a < $count; $a++) {
    $json = "https://eu.api.battle.net/wow/recipe/".$a."?locale=en_GB&apikey=";
    $contents = @file_get_contents($json);
    $data = json_decode($contents,true);
    if (isset($data['id'])) {
        $write[] = "(".$data['id'].",'".addslashes($data['name'])."','".addslashes($data['profession'])."','".addslashes($data['icon'])."')";
    } else {
        $write[] = "(".$a.",'a','a','a'".")";
    }
}
$SQL = ('INSERT INTO `static_recipes` (id, name, profession, icon) VALUES '.implode(',', $write));
$mysqli->query($SQL);
$mysqli->close();
$write = [];
for ($a = 1; $a <= 2000; $a++) {
    $json = "https://eu.api.battle.net/wow/".$a."?locale=en_GB&<MYPRIVATEAPIKEY>";
    $contents = file_get_contents($json);
    $data = json_decode($contents,true);
    if (isset($data['id'])) {
        $write[] = $data['id'].",'".$data['name']."'\n";
    } else {
        $write[] = $a."\n";
    }
}
$file = fopen("data.txt","a");
fwrite($file, implode('', $write));
fclose($file);
Also, why do you think that some IDs aren't duplicated across the different https://eu.api.battle.net/wow/[N] URLs?
Also, since you mention you weren't going to try and run ~250000 iterations, think about curl_multi_init(): http://php.net/manual/en/function.curl-multi-init.php
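As a rough illustration (not from the original answer), curl_multi can fetch a batch of those recipe URLs concurrently; the URL template, API key and batch size below are placeholders:
// Sketch: fetch a small batch of URLs in parallel with curl_multi.
$mh = curl_multi_init();
$handles = [];
for ($a = 1; $a <= 20; $a++) {
    $ch = curl_init("https://eu.api.battle.net/wow/recipe/$a?locale=en_GB&apikey=YOURKEY");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$a] = $ch;
}

// Run all transfers until they are finished.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($running && $status == CURLM_OK);

// Collect the responses and clean up.
foreach ($handles as $a => $ch) {
    $data = json_decode(curl_multi_getcontent($ch), true);
    echo $a, ': ', isset($data['id']) ? $data['name'] : 'no data', "\n";
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);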
I can't really see anything obviously wrong with your code; I can't run it, though, as I don't have the JSON.
It could be possible that there is some kind of race condition since you're opening and closing the same file hundreds of times very quickly.
File operations might seem atomic but not necessarily so - here's an interesting SO thread:
Does PHP wait for filesystem operations (like file_put_contents) to complete before moving on?
Like some others suggested - maybe just open the file before you enter the loop, then close it when the loop finishes.
I'd try it first and see if it helps.
There's nothing in your original code that would cause that sort of behaviour; PHP will not arbitrarily change the value of a variable. You are opening this file in append mode - are you certain that you're not looking at old data? Maybe output some debug messages as you process the data. It's likely you'd run up against some rate limiting on the API server, so putting a pause in there somewhere may improve reliability.
The only substantive change I'd suggest to your code is opening the file once and closing it when you're done.
$file = fopen("data_1_2000.txt", "w");
for ($a = 1; $a <= 2000; $a++) {
$json = "https://eu.api.battle.net/wow/recipe/$a?locale=en_GB&<MYPRIVATEAPIKEY>";
$contents = file_get_contents($json);
$data = json_decode($contents, true);
if (!empty($data['id'])) {
$data["name"] = str_replace("'", "\\'", $data["name"]);
$record = "$data[id],'$data[name]'";
} else {
$record = $a;
}
fwrite($file, "$record\n");
sleep(1);
echo "$a "; if ($a % 50 === 0) echo "\n";
}
fclose($file);

Wordpress query with 19000+ products

I have a problem with my WordPress query.
What I'm trying to do:
I have a CSV file with product data (name, price, stock, SKU etc.)
and I want to import this file. But when I try to get the product ID by SKU, the query load is too high for my server, because I'm doing something inefficient: inside a foreach I look up every product_id.
Is it possible to split my WP query without killing my server?
I've tried sleep() but it doesn't help...
My code is here:
public function new_import_stock_prices(){
    global $wpdb;
    global $post;
    if ( !function_exists( 'wc_get_product_id_by_sku' ) ) {
        require_once '/includes/wc-product-functions.php';
    }
    echo '<h1>Import stanów magazynowych i cen z pliku CSV </h1>';
    echo '<h4>Plik pobierany jest z netis/products.csv</h4>';
    $fn = 'https://e-xxxxx.pl/xxx/products.csv';
    $file_array = file($fn);
    echo '<table>';
    echo '<tr>';
    echo '<td>LP</td>';
    echo '<td>Nazwa</td>';
    echo '<td>SKU</td>';
    echo '<td>Stan magazynowy</td>';
    echo '<td>Cena</td>';
    echo '<td>Product ID</td>';
    $i = 1;
    if ( in_array( 'woocommerce/woocommerce.php', apply_filters( 'active_plugins', get_option( 'active_plugins' ) ) ) ) {
        foreach ($file_array as $line_number => &$line)
        {
            if ($line_number > 0 && $line_number % 10 == 0) {
                $row2 = explode('|', $line);
                $sku = $row2[1];
                // get the product ID from the SKU
                $product_id = $wpdb->get_var( $wpdb->prepare( "SELECT post_id FROM $wpdb->postmeta WHERE meta_key='_sku' AND meta_value='%s' LIMIT 1", $sku ) );
                // Get an instance of the WC_Product object
                $product = new WC_Product( $product_id );
                // Get product stock quantity and stock status
                $stock_quantity = $product->get_stock_quantity();
                $stock_status = $product->get_stock_status();
                echo '<tr>';
                echo '<td>'.$i.'</td>';
                echo '<td>'.$row2[0].'</td>';
                echo '<td>'.$row2[1].'</td>';
                echo '<td>'.$row2[5].'</td>';
                echo '<td>'.$row2[2].'</td>';
                echo '<td>'.$product_id.'</td>';
                echo '</tr>';
                $i = $i + 1;
                sleep(10);
            }
        }
    }
    echo '</table>';
}
BTW. my wp_postmeta table has ~900 000+ records :O
And I want to import this file
I don't see any code for importing; I see code for displaying. Assuming by "import" you mean display:
What's probably happening is one of a few things.
You're running out of memory (you should get an error for this).
Don't use file($fn); use file functions that open the file and read it line by line, such as fgetcsv().
You're running out of time.
Not much you can do about this, except send less data.
You're overwhelming the browser buffer by sending too much output.
Again, not much you can do about this but send less data.
The only real solution (assuming by "import" you mean display) is to page the data.
Now even with a file you can page the data, but I would suggest using SplFileObject instead of the procedural file functions. That said, you can page using the procedural style, but it's by byte offset, not page number.
While I can't code an entire paging system, I can give you some tips:
For example
//hard to tell how many lines are in the file
$fn = 'https://e-xxxxx.pl/xxx/products.csv';
$f = fopen($fn, 'r');
fseek($f, $_GET['offset']); //seek to a byte offset
$i = 0;
while(!feof($f) && ($row = fgetcsv($f)) && null !== $row[0]){
    if($i == 10)
        $offset = ftell($f); //get byte offset
    ++$i;
}
ftell and fseek let you get or move the file pointer (in bytes), so you can start reading from a predefined offset that you can pass around in the URL, etc.
You can do the same thing with SplFileObject, but a bit better.
try {
    $fn = 'https://e-xxxxx.pl/xxx/products.csv';
    $csv = new SplFileObject($fn, 'r');
} catch (RuntimeException $e) {
    printf("Error opening csv: %s\n", $e->getMessage());
}
$csv->seek($_GET['line']); //seek to a predefined line
while(!$csv->eof() && ($row = $csv->fgetcsv()) && null !== $row[0]) {
    if(($csv->key() - $_GET['line']) == 10)
        $line = $csv->key(); //get line offset
    ++$i;
}
The main advantage of SPL is you can use the row number, which is much easier to work with.
You can also get the total number of lines in a file like this
$csv->seek(PHP_INT_MAX);
$total = $csv->key();
$csv->rewind(); //or $csv->seek($_GET['line'])
Basically this seeks to the largest possible int PHP can handle; because the file has a finite length, that puts the pointer at the end of the file, and then using key() we can get the line number. Then we simply rewind to where we want to read from.
I mention the total number of rows because in paging it's nice to be able to show that.
Another option (to display)
Besides paging, you can output the page without buffering.
// Turn off output buffering
ini_set('output_buffering', 'off');
// Turn off PHP output compression
ini_set('zlib.output_compression', false);
//Flush (send) the output buffer and turn off output buffering
//ob_end_flush();
while (ob_get_level()) ob_end_flush();
// Implicitly flush the buffer(s)
ini_set('implicit_flush', true);
ob_implicit_flush(true);
Combine this with one of the methods I showed above to read the file 1 line at a time, and you may be able to eventually read all that data out.
Saving
For saving the data, you're probably going to need to break it into batches; the same paging approach (using a byte offset or line number) can be used here, so that you only import a couple of thousand rows at a time. I would also recommend not outputting the data, because you can give the browser more buffer than it can handle and lock it up. However, if you page the data, you can break it into small enough chunks that the browser can handle it.
You can even automate this using successive AJAX calls: you call the code on the backend to save a certain number of rows (x), the server responds, and then you make another call for the next (x) rows, save, and repeat.
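A rough sketch of what such a batch endpoint could look like (the query parameter, batch size and save_row() helper are hypothetical, not from the original code):
// Hypothetical backend for successive AJAX calls: each request processes one
// batch of CSV lines starting at ?line=N and returns where the next call should start.
$fn    = 'https://e-xxxxx.pl/xxx/products.csv';
$start = isset($_GET['line']) ? (int)$_GET['line'] : 0;
$batch = 500; // rows per request; tune to what the server copes with

$csv = new SplFileObject($fn, 'r');
$csv->seek($start);

$processed = 0;
while (!$csv->eof() && $processed < $batch) {
    $row = $csv->fgetcsv('|');  // the question's file is pipe-delimited
    if ($row !== false && $row[0] !== null) {
        save_row($row);         // hypothetical: update stock/price for this row
    }
    $processed++;
}

// Tell the JavaScript side where to continue, or that it is done.
header('Content-Type: application/json');
echo json_encode(['next' => $csv->eof() ? null : $start + $processed]);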
I want to display all the product IDs to check that they're correct. The next step is to change stock and price and save the products.
It would be easier to do this work in something like Excel, just from a data-entry standpoint; no one wants to edit thousands of rows on a web page and then have their session time out or something like that.
Hope that helps.

php - resuming a process from where it stopped

I am working on an e-commerce site.
I want to read data from a file (CSV or TXT). The first time, I read rows 1 to 1000 and then the process was accidentally stopped, so the second time the read should start from row 1001.
Can anyone please help me?
Option 1
Insert elements one by one into the table, like:
// take all data
$prods = json_decode(file_get_contents('products.php'));
while (!empty($prods)) {
    // remove first element from array; the original array is changed
    $product = array_shift($prods);
    // do your insert logic
    $this->insert($product);
    // for some reason you don't insert all products
    if ($this->break) {
        break;
    }
}
// write what is left, if anything, back to the file
file_put_contents('products.php', json_encode($prods));
Option 2
Sort the products by ID and skip through the loop until you reach your element:
foreach ($products as $product) {
    if ($product->id < $lastId) {
        continue;
    }
    $this->insert($product);
}
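Either way, the process has to remember how far it got. A minimal sketch of that idea (the progress file name and the insert method are assumptions for illustration):
// Sketch: resume a CSV import after a crash by tracking the last processed line.
$progressFile = 'import_progress.txt';
$lastLine = is_file($progressFile) ? (int)file_get_contents($progressFile) : 0;

$fh = fopen('products.csv', 'r');
$line = 0;
while (($row = fgetcsv($fh)) !== false) {
    $line++;
    if ($line <= $lastLine) {
        continue; // already imported on a previous run
    }
    $this->insert($row);                     // hypothetical insert logic
    file_put_contents($progressFile, $line); // remember progress after every row
}
fclose($fh);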

php fgetcsv multiple lines not only one or all

I want to read very big CSV files and insert them into a database. That already works:
if(($handleF = fopen($path."\\".$file, 'r')) !== false){
    $i = 1;
    // loop through the file line-by-line
    while(($dataRow = fgetcsv($handleF,0,";")) !== false) {
        // Only start at the startRow, otherwise skip the row.
        if($i >= $startRow){
            // Check if to use headers
            if($lookAtHeaders == 1 && $i == $startRow){
                $this->createUberschriften( array_map(array($this, "convert"), $dataRow ) );
            } else {
                $dataRow = array_map(array($this, "convert"), $dataRow );
                $data = $this->changeMapping($dataRow, $startCol);
                $this->executeInsert($data, $tableFields);
            }
            unset($dataRow);
        }
        $i++;
    }
    fclose($handleF);
}
My problem with this solution is that it's very slow, but the files are too big to load directly into memory... So I want to ask: is there a possibility to read, for example, 10 lines at a time into the $dataRow array, not only one or all?
I want a better balance between memory and performance.
Do you understand what I mean? Thanks for the help.
Greetz
V
EDIT:
OK, I still had to find a solution with the MSSQL database. My solution was to stack the data and then make a multi-row MSSQL INSERT:
while(($dataRow = fgetcsv($handleF,0,";")) !== false) {
    // Only start at the startRow, otherwise skip the row.
    if($i >= $startRow){
        // Check if to use headers
        if($lookAtHeaders == 1 && $i == $startRow){
            $this->createUberschriften( array_map(array($this, "convert"), $dataRow ) );
        } else {
            $dataRow = array_map(array($this, "convert"), $dataRow );
            $data = $this->changeMapping($dataRow, $startCol);
            $this->setCurrentRow($i);
            if(count($dataStack) > 210){
                array_push($dataStack, $data);
                #echo '<pre>', print_r($dataStack), '</pre>';
                $this->executeInsert($dataStack, $tableFields, true);
                // reset the stack
                unset($dataStack);
                $dataStack = array();
            } else {
                array_push($dataStack, $data);
            }
            unset($data);
        }
        $i++;
        unset($dataRow);
    }
}
Finally, I loop over the stack in the method "executeInsert" and build a multi-row INSERT, to create a query like this:
INSERT INTO [myTable] (field1, field2) VALUES ('data1', 'data2'), ('data2', 'data3')...
That works much better. I still have to find the best balance, but for that I only need to change the value '210' in the code above. I hope this helps everybody with a similar problem.
Attention: don't forget to execute the method "executeInsert" one more time after reading the complete file, because there may still be data left in the stack, and otherwise it is only executed when the stack reaches a size of 210.
Greetz
V
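As an aside (not part of the original post), a parameterized version of such a multi-row INSERT could look roughly like this sketch; the PDO connection and the hard-coded table name are assumptions, not the asker's actual executeInsert():
// Sketch: batched multi-row INSERT with bound parameters.
private function executeInsertBatch(array $dataStack, array $tableFields)
{
    if (empty($dataStack)) {
        return;
    }
    // One "(?, ?, ...)" group per stacked row.
    $rowPlaceholder = '(' . implode(',', array_fill(0, count($tableFields), '?')) . ')';
    $sql = sprintf(
        'INSERT INTO myTable (%s) VALUES %s',
        implode(',', $tableFields),
        implode(',', array_fill(0, count($dataStack), $rowPlaceholder))
    );

    // Flatten the stack into one flat parameter list, row by row.
    $params = [];
    foreach ($dataStack as $row) {
        foreach ($row as $value) {
            $params[] = $value;
        }
    }

    $stmt = $this->pdo->prepare($sql); // $this->pdo is a hypothetical PDO handle
    $stmt->execute($params);
}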
I think your bottleneck is not reading the file, which is a text file; your bottleneck is the INSERT into the SQL table.
To check, just comment out the line that actually does the insert and you will see the difference.
I had this same issue in the past, where I did exactly what you are doing: reading a 5+ million line CSV and inserting it into a MySQL table. The execution time was 60 hours, which is unrealistic.
My solution was to switch to another db technology. I selected MongoDB and the execution time was reduced to 5 minutes. MongoDB performs really fast in these scenarios and also has a tool called mongoimport that will let you import a CSV file directly from the command line.
Give it a try if the db technology is not a limitation on your side.
Another solution would be splitting the huge CSV file into chunks and then running the same PHP script multiple times in parallel, each instance taking care of the chunks with a specific prefix or suffix in the filename.
I don't know which specific OS you are using, but in Unix/Linux there is a command-line tool called split that will do that for you and will also add any prefix or suffix you want to the filenames of the chunks.
