<?php
require('Classes\PHPExcel.php');
$phpExcel = PHPExcel_IOFactory::load('123.xlsx');
$writer = PHPExcel_IOFactory::createWriter($phpExcel, "Excel2007");
$sheet = $phpExcel->getActiveSheet();
$x = 1;
while ($x <= 5) {
    if ($sheet->getCell('A'.$x)->getValue() == "1") {
        $sheet->SetCellValue('B'.$x, 'Something');
    }
}
$writer->save('1234.xlsx');
?>
If I remove the "while" and "if" lines, the code works perfectly and completes in about 1 second.
But with them, it cannot finish within 60 seconds and the script times out.
123.xlsx has just column A with the numbers 1 to 5. It is only a test file, but it still takes this long.
I still can't see where I'm making a mistake.
The real 123.xlsx will be around 800 rows and 20 columns, so it would take forever :)
Please help.
Where are you incrementing $x?
You need to increment $x; depending on your requirements, you can increment it either in the while loop or inside the if:
$x = 1;
while ($x <= 5) {
    if ($sheet->getCell('A'.$x)->getValue() == "1") {
        $sheet->SetCellValue('B'.$x, 'Something');
        $x++; // increment here?
    }
    $x++; // or increment here?
}
I'm very new to PHP, making errors and learning as I go. Please be gentle! :)
I want to access some data from Blizzard.com's API. For this particular data set, it's not a block of data in JSON; rather, each object has its own URL to access. I estimate that there are approximately 150000 objects, but I don't know the start or end points of the number range, so I'm having to assume 1 and work past the highest number I know (269065).
To get the data, I need to access each object's data via a JSON file, which I read, get the contents of, and drop into a text file (this could be written as an insert into a SQL db too, as I'm able to do this if it's the text file that's the issue). But to be honest, I would love to get to the bottom of why this is happening as much as anything!
I wasn't going to try and run ~250000 iterations in a for loop, so I thought I'd try something I considered small: 2000.
The for loop starts with $a as 1, uses $a as part of the URL, loads and decodes the JSON, and checks to see if the first field (ID) in the object is set; if it is, it writes a few fields to data.txt, and if the first field (ID) isn't set it just writes $a to data.txt (so I know it's a null for other purposes not outlined here).
Simple! Or so I thought. After approximately 183 iterations, the data written to the text file goes awry, as seen in the quote below. It is out of sequence and starts at 1 again, then back to 184, ad nauseam. The loop then seems to be locked in some kind of infinite loop of running, outputting in a random order until I close the page 10-20 minutes later.
I have obviously made a big mistake! But I have no idea what I have done wrong to have caused this. During my attempts I have rewritten the code with new variable names, so that a new test does not conflict with code that could be running in memory.
I've tried resetting variables to blank at the end of the loop in case something was being reused that was causing a problem.
If anyone could point out any errors in my code, or suggest something for me to look into to handle bigger loops, that would be brilliant. I am assuming my issue may be a timeout or memory problem, but I don't know where to start and was hoping I'd find some suggestions here.
If it's relevant, I am using 000webhostapp.com as my host provider for now, until I get some paid for hosting.
1 ... 182 183 1 184 2 3 185 4 186 5 187 6 188 7 189 190 8 191
for ($a = 1; $a <= 2000; $a++) {
    $json = "https://eu.api.battle.net/wow/recipe/".$a."?locale=en_GB&<MYPRIVATEAPIKEY>";
    $contents = file_get_contents($json);
    $data = json_decode($contents, true);
    if (isset($data['id'])) {
        $file = fopen("data.txt", "a");
        fwrite($file, $data['id'].",'".$data['name']."'\n");
        fclose($file);
    } else {
        $file = fopen("data.txt", "a");
        fwrite($file, $a."\n");
        fclose($file);
    }
}
The content of the file I'm trying to access is
{"id":33994,"name":"Precise Strikes","profession":"Enchanting","icon":"spell_holy_greaterheal"}
I scrapped the original plan and wrote this instead. Thank you again to everyone who took the time out of their day to help and offer suggestions!
$b = $mysqli->query("SELECT id FROM `static_recipes` ORDER BY id DESC LIMIT 1;")->fetch_object()->id;
if (empty($b)) { $b = 1; }
$count = $b + 101;
$write = [];
for ($a = $b + 1; $a < $count; $a++) {
    $json = "https://eu.api.battle.net/wow/recipe/".$a."?locale=en_GB&apikey=";
    $contents = @file_get_contents($json); // @ suppresses warnings for IDs that don't exist
    $data = json_decode($contents, true);
    if (isset($data['id'])) {
        $write[] = "(".$data['id'].",'".addslashes($data['name'])."','".addslashes($data['profession'])."','".addslashes($data['icon'])."')";
    } else {
        $write[] = "(".$a.",'a','a','a')";
    }
}
$SQL = 'INSERT INTO `static_recipes` (id, name, profession, icon) VALUES '.implode(',', $write);
$mysqli->query($SQL);
$mysqli->close();
for ($a = 1; $a <= 2000; $a++) {
$json = "https://eu.api.battle.net/wow/".$a."?locale=en_GB&<MYPRIVATEAPIKEY>";
$contents = file_get_contents($json);
$data = json_decode($contents,true);
if (isset($data['id'])) {
$write [] = $data['id'].",'".$data['name']."'\n";
} else {
$write [] = $a."\n";
}
}
$file = fopen("data.txt","a");
fwrite($file, implode('', $write));
fclose($file);
Also, what makes you think that some IDs aren't duplicated in the data at several "https://eu.api.battle.net/wow/[N]" URLs?
Also, if you are going to run ~250000 requests, think about curl_multi_init(): http://php.net/manual/en/function.curl-multi-init.php
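For illustration, here is a hedged sketch (not the poster's code) of what fetching a batch of recipe URLs in parallel with curl_multi could look like; the batch size of 20 and the $apikey placeholder are assumptions.
$apikey = 'YOUR_KEY';
$mh = curl_multi_init();
$handles = array();
for ($a = 1; $a <= 20; $a++) {
    $ch = curl_init("https://eu.api.battle.net/wow/recipe/$a?locale=en_GB&apikey=$apikey");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$a] = $ch;
}
// Run all transfers until they have finished.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);
// Collect the responses and clean up.
foreach ($handles as $a => $ch) {
    $data = json_decode(curl_multi_getcontent($ch), true);
    echo isset($data['id']) ? $data['id'].",'".$data['name']."'\n" : $a."\n";
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);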
I can't really see anything obviously wrong with your code, though I can't run it as I don't have the JSON.
It could be possible that there is some kind of race condition since you're opening and closing the same file hundreds of times very quickly.
File operations might seem atomic but not necessarily so - here's an interesting SO thread:
Does PHP wait for filesystem operations (like file_put_contents) to complete before moving on?
Like some others suggested, maybe just open the file before you enter the loop, then close the file when the loop finishes.
I'd try that first and see if it helps.
There's nothing in your original code that would cause that sort of behaviour. PHP will not arbitrarily change the value of a variable. You are opening this file in append mode, are you certain that you're not looking at old data? Maybe output some debug messages as you process the data. It's likely you'd run up against some rate limiting on the API server, so putting a pause in there somewhere may improve reliability.
The only substantive change I'd suggest to your code is opening the file once and closing it when you're done.
$file = fopen("data_1_2000.txt", "w");
for ($a = 1; $a <= 2000; $a++) {
    $json = "https://eu.api.battle.net/wow/recipe/$a?locale=en_GB&<MYPRIVATEAPIKEY>";
    $contents = file_get_contents($json);
    $data = json_decode($contents, true);
    if (!empty($data['id'])) {
        $data["name"] = str_replace("'", "\\'", $data["name"]);
        $record = "$data[id],'$data[name]'";
    } else {
        $record = $a;
    }
    fwrite($file, "$record\n");
    sleep(1);
    echo "$a ";
    if ($a % 50 === 0) echo "\n";
}
fclose($file);
I am using a PHP loop to detect file changes. When a file change occurs, I try to stop the loop, but in my case, instead of stopping, the loop continues.
Route::get('api.chat.buffer/{job_id}', function ($job_id) {
    $award = DB::table('job_awards')->where('job_id', $job_id)->first();
    $dirname = base_path().'/files/chat/';
    $filename = $award->client_id.'_'.$award->user_id.'.timelog.log';
    $i = 1;
    for ($z = 0; $z <= 20; $z++) {
        $current_file_time = filemtime($dirname.$filename);
        if ($_GET['timestamp'] < $current_file_time) {
            echo json_encode(array('status' => 'success', 'data' => $current_file_time, 'node' => 1));
            die;
            break;
        }
        sleep(1); // pause for 1 second on every iteration
    }
    echo json_encode(array('status' => 'success', 'data' => $current_file_time, 'node' => 0));
    die;
});
I am creating a chat script and tracking changes whenever the file changes.
Thanks
Can you use the sleep function inside a for loop? Have you tried yield? Try giving break the number of enclosing structures to break out of; in this case it would be break 1; see http://php.net/manual/en/control-structures.break.php
I am developing a PHP application where I need to fetch 5 random email addresses from a CSV file and send them to the user.
I have worked with CSV files many times, but I don't know how to fetch a limited number of rows at random.
NOTE: The CSV file has more than 200k emails.
If anyone has an idea or suggestion, please share it.
If the CSV is too big and won't be saved in a DB
You'll have to loop through all of the rows in the CSV once to count them.
You'll have to call a random-number generator function (rand, mt_rand, others...) and parametrize it to output numbers from 0 to $count, and call it 5 times (to get 5 numbers).
You'll have to loop through all of the rows in the CSV again and only copy the necessary information for the rows whose number matches the randomly generated values.
Nota bene: don't use file_get_contents with str_getcsv. Instead use fopen with fgetcsv. The first approach loads the entire file into memory, which we don't want to do; the second only reads the file line by line (a sketch of this follows below).
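A minimal sketch of that two-pass approach; the file name 'emails.csv' and the assumption of one email per row (first column, at least 5 rows) are mine, not from the question.
$fh = fopen('emails.csv', 'r');
// Pass 1: count the rows.
$count = 0;
while (fgetcsv($fh) !== false) {
    $count++;
}
// Pick 5 distinct random row numbers.
$wanted = array();
while (count($wanted) < 5) {
    $wanted[mt_rand(0, $count - 1)] = true;
}
// Pass 2: collect only the chosen rows.
rewind($fh);
$picked = array();
$rowNumber = 0;
while (($row = fgetcsv($fh)) !== false) {
    if (isset($wanted[$rowNumber])) {
        $picked[] = $row[0];
    }
    $rowNumber++;
}
fclose($fh);
print_r($picked);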
If the CSV is too big and will be saved in a DB
Loop through the CSV rows and insert each record into the DB.
Use a select query with LIMIT 5 and ORDER BY RAND() (a sketch follows below).
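A hedged sketch of this approach; the table name `emails`, the column `address`, the file name 'emails.csv', and the connection details are all assumptions.
$mysqli = new mysqli('localhost', 'user', 'pass', 'db');
// One-off import: loop through the CSV and insert each row.
$stmt = $mysqli->prepare("INSERT INTO emails (address) VALUES (?)");
$address = '';
$stmt->bind_param('s', $address);
$fh = fopen('emails.csv', 'r');
while (($row = fgetcsv($fh)) !== false) {
    $address = $row[0];
    $stmt->execute();
}
fclose($fh);
// Then pick 5 random addresses.
$result = $mysqli->query("SELECT address FROM emails ORDER BY RAND() LIMIT 5");
while ($row = $result->fetch_assoc()) {
    echo $row['address'], "\n";
}
Note that ORDER BY RAND() sorts the whole table, which is fine for an occasional job on 200k rows but not something to run on every request.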
If the CSV is small enough to fit into memory
Loop through the CSV rows and create an array holding all of them.
You'll have to call a random-number generator function (rand, mt_rand, others...) and parametrize it to output numbers from 0 to array count, and call it 5 times (to get 5 numbers).
Then retrieve the rows from the big array by their index number -- using the randomly generated numbers as indexes.
If the CSV file is not too big, you can load the whole file into an array to get something like
$e[0] = 'someone1@somewhere.com';
$e[1] = 'someone2@somewhere.com';
$e[2] = 'someone3@somewhere.com';
then you can pick a random email with $e[rand(0, count($e) - 1)];
and do this 5 times (with a check for duplicate items), as in the sketch below.
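A minimal sketch of that duplicate check, assuming $e already holds all the emails:
$picked = array();
while (count($picked) < 5) {
    $candidate = $e[rand(0, count($e) - 1)];
    if (!in_array($candidate, $picked, true)) { // skip anything we already picked
        $picked[] = $candidate;
    }
}
print_r($picked);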
Read all emails from the CSV, then select 5 at random from the array.
To select 5 random keys, use the array_rand function.
$email = array('test@test.com','test2@test.com','test3@test.com','test4@test.com','test5@test.com');
$keys = array_rand($email, 5); // 5 random keys
foreach ($keys as $key) {
    echo $email[$key], "\n"; // the 5 random emails
}
For a large number of emails, try something like
$max = count($email);
$email_rand = array();
for ($i = 0; $i < 5; $i++)
{
    $a = mt_rand(0, $max - 1); // upper bound is count - 1 to avoid an out-of-range index
    $email_rand[] = $email[$a];
}
print_r($email_rand);
<?php
$handle = fopen('test.csv', 'r');
// Note: fgetcsv() reads a single line, so this assumes all the emails sit on one CSV row.
$csv = fgetcsv($handle);
fclose($handle);

function randomMail($key)
{
    global $csv;
    return $csv[$key];
}

$randomKey = array_rand($csv, 5);
print_r(array_map('randomMail', $randomKey));
This is a small utility to achieve what you expect; change the declaration of the randomMail function as you see fit.
for ($i = 0; $i < 5; $i++)
{
    $cmd = "awk NR==$(($"."{RANDOM} % `wc -l < ~/Downloads/email.csv` + 1)) ~/Downloads/email.csv >> listemail.txt";
    $rs = exec($cmd);
}
Afterwards, read the mail list from listemail.txt.
I want to read big CSV files and insert them into a database. That already works:
if (($handleF = fopen($path."\\".$file, 'r')) !== false) {
    $i = 1;
    // loop through the file line-by-line
    while (($dataRow = fgetcsv($handleF, 0, ";")) !== false) {
        // Only start at the startRow, otherwise skip the row.
        if ($i >= $startRow) {
            // Check if to use headers
            if ($lookAtHeaders == 1 && $i == $startRow) {
                $this->createUberschriften(array_map(array($this, "convert"), $dataRow));
            } else {
                $dataRow = array_map(array($this, "convert"), $dataRow);
                $data = $this->changeMapping($dataRow, $startCol);
                $this->executeInsert($data, $tableFields);
            }
            unset($dataRow);
        }
        $i++;
    }
    fclose($handleF);
}
My problem with this solution is that it's very slow. But the files are too big to load directly into memory... So I want to ask: is there a possibility to read, for example, 10 lines at a time into the $dataRow array, not just one or all of them?
I want to get a better balance between memory and performance.
Do you understand what I mean? Thanks for the help.
Greetz
V
EDIT:
OK, I still had to find a solution for the MSSQL database. My solution was to stack the data and then do a multi-row MSSQL insert:
// $dataStack should be initialised to an empty array() before this loop
while (($dataRow = fgetcsv($handleF, 0, ";")) !== false) {
    // Only start at the startRow, otherwise skip the row.
    if ($i >= $startRow) {
        // Check if to use headers
        if ($lookAtHeaders == 1 && $i == $startRow) {
            $this->createUberschriften(array_map(array($this, "convert"), $dataRow));
        } else {
            $dataRow = array_map(array($this, "convert"), $dataRow);
            $data = $this->changeMapping($dataRow, $startCol);
            $this->setCurrentRow($i);
            if (count($dataStack) > 210) {
                array_push($dataStack, $data);
                #echo '<pre>', print_r($dataStack), '</pre>';
                $this->executeInsert($dataStack, $tableFields, true);
                // reset the stack
                unset($dataStack);
                $dataStack = array();
            } else {
                array_push($dataStack, $data);
            }
            unset($data);
        }
        $i++;
        unset($dataRow);
    }
}
Finally, I loop over the stack inside the "executeInsert" method and build a multi-row insert, to create a query like this:
INSERT INTO [myTable] (field1, field2) VALUES ('data1', 'data2'),('data2', 'data3')...
That works much better. I still have to find the best balance, but for that I only need to change the value '210' in the code above. I hope that helps everybody with a similar problem.
Attention: Don't forget to call "executeInsert" one more time after reading the complete file, because there may still be some data left in the stack, and inside the loop the method is only executed when the stack reaches a size of 210 (see the sketch below).
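A minimal sketch of that final flush; executeInsert() and $tableFields are the poster's own method and variable, reused here.
// ...the while loop above has finished reading the file...
if (!empty($dataStack)) {
    // Insert whatever is left that never reached the 210-row threshold.
    $this->executeInsert($dataStack, $tableFields, true);
    $dataStack = array();
}
fclose($handleF);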
Greetz
V
I think your bottleneck is not reading the file, which is just a text file; your bottleneck is the INSERT into the SQL table.
Try something: just comment out the line that actually does the insert and you will see the difference (a rough timing sketch follows below).
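A rough timing sketch; $handleF, $tableFields, and the convert() callback come from the poster's surrounding class, and executeInsert() stands in for the real insert call.
$start = microtime(true);
while (($dataRow = fgetcsv($handleF, 0, ";")) !== false) {
    $data = array_map(array($this, "convert"), $dataRow);
    // $this->executeInsert($data, $tableFields); // toggle this line to isolate the insert cost
}
printf("Elapsed: %.2f seconds\n", microtime(true) - $start);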
I had this same issue in the past, where I did exactly what you are doing: reading a 5+ million line CSV and inserting it into a MySQL table. The execution time was 60 hours, which is
unrealistic.
My solution was to switch to another DB technology. I selected MongoDB and the execution time
was reduced to 5 minutes. MongoDB performs really fast in these scenarios and also has a tool called mongoimport that lets you import a CSV file directly from the command line.
Give it a try if the db technology is not a limitation on your side.
Another solution would be to split the huge CSV file into chunks and then run the same PHP script multiple times in parallel, with each instance taking care of the chunks that carry a specific prefix or suffix in their filename.
I don't know which OS you are using, but on Unix/Linux there is a command-line tool
called split that will do that for you and will also add any prefix or suffix you want to the filenames of the chunks.
I'm using PHPExcel 1.7.8, PHP 5.4.14, Windows 7, and an Excel 2007 spreadsheet. The spreadsheet consists of 750 rows, columns A through BW, and is about 600KB in size. This is my code for opening the spreadsheet--pretty standard:
//Include PHPExcel_IOFactory
include 'PHPExcel/IOFactory.php';
include 'PHPExcel.php';

$inputFileName = 'C:\xls\lspimport\GetLSP1.xlsx';

// Read your Excel workbook
try {
    $inputFileType = PHPExcel_IOFactory::identify($inputFileName);
    $objReader = PHPExcel_IOFactory::createReader($inputFileType);
    $objReader->setReadDataOnly(true);
    $objPHPExcel = $objReader->load($inputFileName);
} catch (Exception $e) {
    die('Error loading file "'.pathinfo($inputFileName, PATHINFO_BASENAME).'": '.$e->getMessage());
}

// set active worksheet
$objWorksheet = $objPHPExcel->setActiveSheetIndexbyName('Sheet1');
$j = 0;
for ($i = 2; $i < 3; $i++)
{
    ...
}
In the end, I eventually want to loop through each row in the spreadsheet, but for the time being while I perfect the script, I'm only looping through one row. The problem is, it takes 30 minutes for this script to execute. I echo'd messages after each section of code so I could see what was being processed and when, and my script basically waits for 30 minutes at this part:
$objPHPExcel = $objReader->load($inputFileName);
Have I configured something incorrectly? I can't figure out why it takes 30 minutes to load the spreadsheet. I appreciate any and all help.
PHPExcel has a problem with identifying where the end of your excel file is. Or rather, Excel has a hard time knowing where the end of itself is. If you touch a cell at A:1000000 it thinks it needs to read that far.
I have done 2 things in the past to fix this:
1) Cut and paste the data you need into a new Excel file.
2) Specify the exact dimensions you want to read.
Edit: How to do option 2
public function readExcelDataToArray($excelFilePath, $maxRowNumber = -1, $maxColumnNumber = -1)
{
    $objPHPExcel = PHPExcel_IOFactory::load($excelFilePath);
    $objWorksheet = $objPHPExcel->getActiveSheet();
    // Get last row and column that have data
    if ($maxRowNumber == -1) {
        $lastRow = $objWorksheet->getHighestDataRow();
    } else {
        $lastRow = $maxRowNumber;
    }
    if ($maxColumnNumber == -1) {
        $lastCol = $objWorksheet->getHighestDataColumn();
        // Change column letter to column number
        $lastCol = PHPExcel_Cell::columnIndexFromString($lastCol);
    } else {
        $lastCol = $maxColumnNumber;
    }
    // Get data array
    $dataArray = array();
    for ($currentRow = 1; $currentRow <= $lastRow; $currentRow++) {
        for ($currentCol = 0; $currentCol <= $lastCol; $currentCol++) {
            $dataArray[$currentRow][$currentCol] = $objWorksheet->getCellByColumnAndRow($currentCol, $currentRow)->getValue();
        }
    }
    return $dataArray;
}
Unfortunately these solutions aren't very dynamic.
Note that a modern excel file is really just a zip with an xlsx extension. I have written extensions to PHPExcel that unzip them, and modify certain xml files to get the kinds of behaviors I want.
A third suggestion for you would be to monitor the contents of each row and stop when you get an empty one (see the sketch below).
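A rough sketch of that idea (not from the original answer), assuming the data rows are contiguous and the first completely empty row marks the end of the data:
$objWorksheet = $objPHPExcel->getActiveSheet();
$lastCol = PHPExcel_Cell::columnIndexFromString($objWorksheet->getHighestDataColumn());
$dataArray = array();
for ($row = 1; ; $row++) {
    $rowValues = array();
    for ($col = 0; $col < $lastCol; $col++) {
        $rowValues[] = $objWorksheet->getCellByColumnAndRow($col, $row)->getValue();
    }
    // Stop at the first row where every cell is empty.
    $nonEmpty = array_filter($rowValues, function ($v) { return $v !== null && $v !== ''; });
    if (count($nonEmpty) === 0) {
        break;
    }
    $dataArray[$row] = $rowValues;
}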
Resolved (for me) - see note at bottom of this post
I'm trying to use pretty much identical code on a dedicated quad core server with 16GB of RAM, also running similar versions - PHPExcel 1.7.9 and PHP 5.4.16
Just creating an empty reader takes 50 seconds!
// $inputFileType is 'Excel5';
$objReader = PHPExcel_IOFactory::createReader($inputFileType);
Loading the spreadsheet (1 sheet, 2000 rows, 25 columns) I want to process (readonly) then takes 1802 seconds.
$objReader->setReadDataOnly(true);
$objPHPExcel = $objReader->load($inputFileName);
Of the various types of reader I consistently get timings for instantiation as shown below
foreach (array(
    'Excel2007',    // 350 seconds
    'Excel5',       // 50 seconds
    'Excel2003XML', // 50 seconds
    'OOCalc',       // 50 seconds
    'SYLK',         // 50 seconds
    'Gnumeric',     // 50 seconds
    'HTML',         // 250 seconds
    'CSV'           // 50 seconds
) as $inputFileType) {
    $objReader = PHPExcel_IOFactory::createReader($inputFileType);
}
Peak memory usage was about 8MB... far less than the 250MB the script has available to it.
My suspicion WAS that PHPExcel_IOFactory::createReader($inputFileType) was calling something within a loop that's extremely slow under PHP 5.4.x ?
However, the excessive time was due to how PHPExcel names its classes and lays out the corresponding files. It has an autoloader that converts class names such as *PHPExcel_abc_def* into PHPExcel/abc/def.php for the require statement. Although we had PHPExcel's class directory defined in our include path, our own (already defined) autoloader couldn't handle the required manipulation from class name to file name (it was looking for *PHPExcel_abc_def.php*). When a class file cannot be included, our autoloader loops 5 times with a 10-second delay to see if the file is being updated and so might become available. So for every PHPExcel class that needed to be loaded we were introducing a delay of 50 seconds before hitting PHPExcel's own autoloader, which required the file just fine.
Now that I've got that resolved PHPExcel is proving to be truly awesome.
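For anyone hitting a similar autoloader clash, here is a hedged sketch (not the poster's actual code): the simplest fix is to make the custom autoloader bail out immediately for PHPExcel classes, leaving them to PHPExcel's own autoloader, which is registered when PHPExcel.php is included.
require_once 'PHPExcel.php';

spl_autoload_register(function ($class) {
    if (strpos($class, 'PHPExcel_') === 0) {
        return; // PHPExcel_Abc_Def classes are handled by PHPExcel's autoloader
    }
    // ...custom lookup, including any retry/delay logic, goes here...
});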
I'm using the latest version of PHPExcel (1.8.1) in a Symfony project, and I also ran into time delays when using the $objReader->load($file) method. The time delays were not due to an autoloader, but to the load method itself. This method actually reads every cell in every worksheet. And since my data worksheet was 30 columns wide by 5000 rows, it took about 90 seconds to read all this information on my ancient work computer.
I assumed that the real loading/reading of cell values would occur on the fly as I requested them, but it looks like short of a pretty major re-write of the PHPExcel code, there's no real way around this initial load time delay.
If you know your file is a pretty plain Excel file, you can do the reading manually. A .xlsx file is just a zip archive with the spreadsheet values and structure stored in XML files. This script took me from the 60 seconds spent in PHPExcel down to 0.18 seconds.
$zip = new ZipArchive();
$zip->open('path_to/file.xlsx');

// Sheet structure and the shared-strings table are plain XML inside the archive.
$sheet_xml = simplexml_load_string($zip->getFromName('xl/worksheets/sheet1.xml'));
$sheet_array = json_decode(json_encode($sheet_xml), true);
$values = simplexml_load_string($zip->getFromName('xl/sharedStrings.xml'));
$values_array = json_decode(json_encode($values), true);

$end_result = array();
if ($sheet_array['sheetData']) {
    foreach ($sheet_array['sheetData']['row'] as $r => $row) {
        $end_result[$r] = array();
        foreach ($row['c'] as $c => $cell) {
            if (isset($cell['@attributes']['t'])) {
                if ($cell['@attributes']['t'] == 's') {
                    // shared string: look up the actual text in sharedStrings.xml
                    $end_result[$r][] = $values_array['si'][$cell['v']]['t'];
                } else if ($cell['@attributes']['t'] == 'e') {
                    // error cell
                    $end_result[$r][] = '';
                }
            } else {
                $end_result[$r][] = $cell['v'];
            }
        }
    }
}
Result:
Array
(
[0] => Array
(
[0] => A1
[1] => B1
[2] => C1
)
[1] => Array
(
[0] => A2
[1] => B2
[2] => C2
)
)
This is error-prone and not optimized, but it works and illustrates the basic idea. If you know your file, then you can make reading very fast. If you allow users to upload the files, then you should maybe avoid it, or at least do the necessary checks.