I have this issue where cron runs a PHP script every 5 minutes to update a list.
However, the update fails about 5% of the time and the list ends up blank. I don't believe it's related to cron, because generating the list manually also failed maybe twice out of 100 tries.
What I believe it's related to is load: when the site has 50+ people on it, the list sometimes fails to generate, perhaps because the server is busy. I added a check to make sure it isn't MySQL returning no rows (which seems impossible), but the failure still happens, which leads me to believe fwrite is failing.
<?
$fileHandle = fopen("latest.html", 'w');
$links = array();

$query1 = $db_conn->query("SELECT * FROM `views` ORDER BY `date` DESC LIMIT 0,20");
while ($result1 = $db_conn->fetch_row($query1))
{
    $result2 = $db_conn->fetch_query("SELECT * FROM `title` WHERE `id` = '" . $result1['id'] . "'");
    array_push($links, "<a href='/title/" . $result2['title'] . "'>" . $result2['title'] . "</a>");
}

if (count($links) > 0)
    fwrite($fileHandle, implode(" • ", $links));
else
    echo "Didn't work!";

fclose($fileHandle);
?>
Could there be a slight chance the file is in use so it ends up not working and writing a blank list?
$fileHandle = "latest.html", 'w');
I'm going to assume you mean
$fileHandle = fopen("latest.html", 'w');
The 'w' mode opens the file, places the pointer at the start, and truncates the file to zero length.
If you check count($links) before opening the file, you won't truncate it when there is nothing to write.
<?php
$links = "QUERY HERE AND HANDLE THE RESULTS (REMOVED)";

if (count($links) > 0)
{
    $fileHandle = fopen("latest.html", 'w');
    fwrite($fileHandle, implode(" • ", $links));
    fclose($fileHandle);
}
else
{
    echo "Didn't work!";
}
?>
Could there be a slight chance the file is in use so it ends up not working and writing a blank list?
Well, yes. We don't know what other code you run that manipulates latest.html, so we can't really profile it.
Here are some suggestions:
Fix the syntax error in your file handler creation
You can acquire an fopen('w') handle to a file that another process already has open with fopen('r'), so be sure to use PHP's flock while writing to the file to ensure other processes don't corrupt your list (see the sketch after this list)
Check to see what your logs have to say
Write to a string, then fwrite the entire string, so you spend less time in your inner loop with the file handle open (especially in this case, where it doesn't seem the string would be that long -- a list of links)
Try outputting your links (timestamped) to a separate file besides latest.html; in the 5% of cases when it fails, look back at the timestamped links and see how they compare. You can also include your query in that file so you can isolate whether the issue is something to do with the DB or with writing to latest.html -- this will be especially useful in the case where your query (which isn't shown) possibly returns no results.
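For the flock suggestion above, here is a minimal sketch; it assumes $links is the array of link strings built by your query loop, and uses the 'c' mode so the file isn't truncated before the lock is actually held:
<?php
// Sketch only: take an exclusive lock before writing so a concurrent
// cron run or manual run can't leave a half-written or truncated file.
$fileHandle = fopen("latest.html", 'c'); // 'c' opens for writing without truncating yet

if ($fileHandle !== false) {
    if (flock($fileHandle, LOCK_EX)) {
        ftruncate($fileHandle, 0);       // only truncate once we hold the lock
        fwrite($fileHandle, implode(" • ", $links));
        fflush($fileHandle);
        flock($fileHandle, LOCK_UN);
    }
    fclose($fileHandle);
}
?>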
I think you are leaving yourself open to the possibility that the query is returning no data. The "removed" logic from your example may help shed light on what's going on. A good way of figuring this out is to write something to a log file, and check that log file after a few dozen iterations of your script. In the interest of having something in your latest.html file, I'd use file_put_contents over your current code.
<?php
$links = array();
$query = "SELECT links FROM tableA";
$result = mysql_query($query);

while ($row = mysql_fetch_row($result)) {
    $links[] = $row[0];
}

if (count($links) > 0) {
    file_put_contents('latest.html', implode(" * ", $links));
    file_put_contents('linkupdate.log', "got links: " . count($links) . "\n", FILE_APPEND);
} else {
    file_put_contents('linkupdate.log', "No links? [(" . mysql_errno() . ") " . mysql_error() . "]\n", FILE_APPEND);
}
?>
If we find no links, we won't overwrite the previous data file. If we encounter a MySQL error that might be causing the problem, it'll show up in the log output.
A read on the file shouldn't block a write, but switching to file_put_contents will help reduce the time the file is open and empty (there is some latency while you're performing the query and fetching the results).
Feel free to anonymize your query and post that as well - you definitely could have a problem with the result set since your code otherwise seems like it ought to work.
Some users have a bad WiFi connection when using our system, because of geography or their internet provider. When such a user saves some product info, I found that some fields are not fully saved into the DB. This is very difficult for me to reproduce because my own connection is good.
So my question is: how can I make sure the server has received all fields before the save/update queries execute?
case "save":
for ($i=0, $n=sizeof($languages); $i<$n; $i++)
{
$language_id = $languages[$i]['id'];
$name = preg_replace( '/\s+/', ' ', tep_db_prepare_input($_POST["products_name"][$language_id]));
$description = tep_db_prepare_input(str_replace(" ", " ", $_POST["products_description"][$language_id]));
$extra_info = tep_db_prepare_input(str_replace(" ", " ", $_POST["products_extra_info"][$language_id]));
tep_db_perform(TABLE_PRODUCTS_DESCRIPTION, $sql_data_array, 'update', 'products_id = "' . $pID . '" AND language_id="'.$language_id.'"');
// update other language that empty
$sql_data_array = array('products_name' => $name);
tep_db_perform(TABLE_PRODUCTS_DESCRIPTION, $sql_data_array, 'update', 'products_id = "'.$pID.'" and products_name = ""');
$sql_data_array = array('products_description' => $description);
tep_db_perform(TABLE_PRODUCTS_DESCRIPTION, $sql_data_array, 'update', 'products_id = "'.$pID.'" and products_description = ""');
$sql_data_array = array('products_extra_info' => $extra_info);
tep_db_perform(TABLE_PRODUCTS_DESCRIPTION, $sql_data_array, 'update', 'products_id = "'.$pID.'" and products_extra_info = ""');
}
... etc
Using a database transaction you can prevent these errors: when something goes wrong, no update is applied to the DB at all.
Note that you can't use transactions on MyISAM tables; you need to use the InnoDB engine.
You need to use a transaction to perform multiple DB queries as a unit. The advantage is that if one of the queries fails during execution, all of the previous ones that were executed get undone in the database. That way you don't have any corrupted or incomplete data in your DB.
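As a rough illustration of the transaction approach, here is a minimal sketch using mysqli directly rather than the osCommerce tep_db_* wrappers; the table and column names mirror the question, but the $mysqli connection variable and the error handling are assumptions:
<?php
// Minimal sketch, assuming an existing mysqli connection in $mysqli (hypothetical name).
// Either every UPDATE succeeds and is committed together, or they are all rolled back.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // make mysqli throw on errors

$mysqli->begin_transaction();

try {
    $stmt = $mysqli->prepare(
        'UPDATE products_description SET products_name = ? WHERE products_id = ? AND language_id = ?'
    );
    $stmt->bind_param('sii', $name, $pID, $language_id);
    $stmt->execute();

    // ... the remaining UPDATE statements from the loop go here ...

    $mysqli->commit();      // nothing becomes permanent until this point
} catch (Exception $e) {
    $mysqli->rollback();    // undo everything done since begin_transaction()
    error_log('Product save failed: ' . $e->getMessage());
}
?>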
Apart from the obvious as stated by @Mark Baker, you could use PHP's connection_aborted() function to detect when a user disconnects. Here's an excerpt from the comments section:
--
A trick for detecting whether a connection is closed, without having to send data that would otherwise corrupt the stream (like a binary file), is to use HTTP/1.1 chunked encoding and send a "0" (zero) as a leading chunk size with nothing else.
NOTE: it's not a good idea to check the stream more than once every few seconds. By doing so you are potentially increasing the data sent to the user with no gain to the user.
A good reason to do it this way is if you are generating a report that takes a long time to run and consumes a lot of server resources. It allows the server to detect whether the user cancelled the download and do any cleanup without corrupting the file being downloaded.
Here is an example:
<?php
ignore_user_abort(true);
header('Transfer-Encoding: chunked');
ob_flush();
flush();

$start = microtime(true);
$i = 0;

// Use this function to echo anything to the browser.
function vPrint($data){
    if (strlen($data))
        echo dechex(strlen($data)), "\r\n", $data, "\r\n";
    ob_flush();
    flush();
}

// You MUST execute this function after you are done streaming information to the browser.
function endPacket(){
    echo "0\r\n\r\n";
    ob_flush();
    flush();
}

do {
    echo "0";
    ob_flush();
    flush();

    if (connection_aborted()) {
        // This happens when the connection is closed
        file_put_contents('/tmp/test.tmp', sprintf("Conn Closed\nTime spent with connection open: %01.5f sec\nLoop iterations: %s\n\n", microtime(true) - $start, $i), FILE_APPEND);
        endPacket();
        exit;
    }

    usleep(50000);
    vPrint("I get echoed every iteration (every 0.05 seconds)<br />\n");
} while ($i++ < 200);

endPacket();
?>
Note: the line ignore_user_abort(true); allows the script to continue running in the background after the user disconnects; without it (i.e. by default) the PHP process is killed instantly. This is also why using transactions will solve your problem: a transaction that was started but never completed is rolled back.
What I am trying to do is query a database and write a file with the contents of the columns. The code below works, but I have some questions...
$Query = mysqli_query($connect, "SELECT * FROM table WHERE column = '". $variable ."' LIMIT 1");

while ($Var = $Query->fetch_assoc()) {
    $string = '<?php
$one = "'. $Var['Column1'] .'";
$two = "'. $Var['Column2'] .'";
$three = "'. $Var['Column3'] .'";
?>';

    $fp = fopen("validate.php", "w");
    echo fwrite($fp, $string);
    fclose($fp);
} // while loop for license query
1) Is there a way to find out if the fopen/fwrite succeeds or fails? I'm not sure if there is any error handling with this. I want to display a success or failure message on the page once it processes. I know I can do this based on the mysqli statement, but I wanted to find out if I could do it based on the fopen/fwrite.
2) When I visit the page this code is on, it outputs '80' on the page. This is the only code on the page. It gets the data from the table, writes the file, and then just displays "80". Why does it do this? I'm guessing it's related to the fopen/fwrite script, but I'm not exactly sure.
Any suggestions on improvements are greatly appreciated.
In PHP, fopen returns FALSE if the file cannot be opened.
You are outputting 80 because you are echoing the result of your fwrite, which returns the number of bytes written, or FALSE on error.
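Here is a small sketch of how you might check both calls instead of echoing the byte count; the variable names come from your snippet and the messages are just placeholders:
<?php
$fp = fopen("validate.php", "w");

if ($fp === false) {
    echo "Could not open validate.php for writing.";
} else {
    $bytes = fwrite($fp, $string);   // number of bytes written, or false on error
    fclose($fp);

    if ($bytes === false) {
        echo "Writing to validate.php failed.";
    } else {
        echo "File written successfully.";
    }
}
?>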
When I run my script I receive the following error before processing all rows of data.
maximum execution time of 30 seconds exceeded
After researching the problem, it seems I should be able to extend the max_execution_time setting, which should resolve the problem.
But being in my PHP programming infancy, I would like to know if there is a more optimal way of writing my script below, so I do not have to rely on "get out of jail" cards.
The script is:
1. Taking a CSV file
2. Cherry-picking some columns
3. Trying to insert 10k rows of CSV data into a MySQL table
In my head I think I should be able to insert in chunks, but that is so far beyond my skillset I do not even know how to write one line :\
Many thanks in advance
<?php
function processCSV()
{
    global $uploadFile;
    include 'dbConnection.inc.php';
    dbConnection("xx","xx","xx");

    $rowCounter = 0;
    $loadLocationCsvUrl = fopen($uploadFile, "r");

    if ($loadLocationCsvUrl <> false)
    {
        while ($locationFile = fgetcsv($loadLocationCsvUrl, ','))
        {
            $officeId = $locationFile[2];

            $country = $locationFile[9];
            $country = trim($country);
            $country = htmlspecialchars($country);

            $open = $locationFile[4];
            $open = trim($open);
            $open = htmlspecialchars($open);

            $insString = "insert into countrytable set officeId='$officeId', countryname='$country', status='$open'";

            switch ($country)
            {
                case $country <> 'Country':
                    if (!mysql_query($insString))
                    {
                        echo "<p>error " . mysql_error() . "</p>";
                    }
                    break;
            }
            $rowCounter++;
        }
        echo "$rowCounter inserted.";
    }
    fclose($loadLocationCsvUrl);
}

processCSV();
?>
First, in 2011 you do not use mysql_query. You use mysqli or PDO with prepared statements; then you do not need to figure out how to escape strings for SQL. (You used htmlspecialchars, which is totally wrong for this purpose.) Next, you could use a transaction to speed up many inserts. MySQL also supports multi-row inserts.
But the best bet would be to use the CSV storage engine: http://dev.mysql.com/doc/refman/5.0/en/csv-storage-engine.html. You can instantly load everything into SQL and then manipulate it there as you wish. The article also shows the LOAD DATA INFILE command.
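To make the prepared-statement and transaction advice concrete, here is a minimal sketch using PDO; the DSN, credentials, and the $uploadFile/countrytable names are taken or guessed from the question, so treat them as placeholders:
<?php
// Sketch only: one prepared statement reused for every row, wrapped in one transaction.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare(
    'INSERT INTO countrytable (officeId, countryname, status) VALUES (?, ?, ?)'
);

$handle = fopen($uploadFile, 'r');
$pdo->beginTransaction();            // committing once for all 10k rows is far faster

while (($row = fgetcsv($handle)) !== false) {
    if (count($row) < 10 || trim($row[9]) === 'Country') {
        continue;                    // skip short/blank lines and the header row, as the original switch did
    }
    $stmt->execute(array($row[2], trim($row[9]), trim($row[4])));
}

$pdo->commit();
fclose($handle);
?>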
Well, you could create a single query like this.
$query = "INSERT INTO countrytable (officeId, countryname, status) VALUES ";
$entries = array();
while ($locationFile = fgetcsv($loadLocationCsvUrl, ',')) {
// your code
$entries[] = "('$officeId', '$country', '$open')";
}
$query .= implode(', ', $enties);
mysql_query($query);
But this depends on how long your query will be and what the server limit is set to.
But as you can read in the other answers, there are better ways to meet your requirements. I thought I should share the approach you were already thinking about, though.
You can try calling the following function before inserting. It sets the time limit to unlimited instead of the default 30 seconds.
set_time_limit(0);
I'm converting a site from MySQL to Postgres and have a really weird bug. This code worked as-is before I switched the RDBMS. In the following loop:
foreach ($records as $record) {
    print "<li> <a href = 'article.php?doc={$record['docid']}'> {$record['title']} </a> by ";

    // Get list of authors and priorities
    $authors = queryDB($link, "SELECT userid FROM $authTable WHERE docid='{$record['docid']}' AND role='author' ORDER BY priority");

    // Print small version of author list
    printAuthors($authors, false);

    // Print (prettily) the status
    print ' (' . nameStatus($record['status']) . ") </li>\n";
}
the FIRST query is fine. Subsequent calls don't work (pg_query returns false in the helper function, so it dies). The code for queryDB is the following:
function queryDB($link, $query) {
    $result = pg_query($link, $query) or die("Could not query db! Statement $query failed: " . pg_last_error($link));

    // Push each result into an array
    while ($line = pg_fetch_assoc($result)) {
        $retarray[] = $line;
    }

    pg_free_result($result);
    return $retarray;
}
The really strange part: when I copy the query and run it with psql (as the same user that PHP's connecting with) everything runs fine. OR if I copy the meat of queryDB into my loop in place of the function call, I get the correct result. So how is this wrapper causing bugs?
Thanks!
I discovered that there was no error output because my php.ini was misconfigured; after turning errors back on I started getting output such as "18 is not a valid PostgreSQL link resource". Changing my connect code to use pg_pconnect() (the persistent version) fixed this. (Found this idea here.)
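For reference, the change was just the connect call, roughly along these lines (the connection string values are placeholders):
<?php
// Before: a regular connection whose resource apparently became invalid between calls.
// $link = pg_connect("host=localhost dbname=mydb user=myuser password=secret");

// After: a persistent connection, same (hypothetical) connection string.
$link = pg_pconnect("host=localhost dbname=mydb user=myuser password=secret");
?>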
Thanks to everyone who took a look and tried to help!
I want to run about 50,000 INSERT queries against my MySQL DB. I have 2 options for this:
1- Directly import the (.sql) file. The following error occurs:
"You probably tried to upload too large file. Please refer to documentation for ways to workaround this limit."
2- Use PHP code to insert these queries in chunks read from the (.sql) file.
Here is my code:
<?php
// Configure DB
include "config.php";

// Get file data
$file = file('country.txt');

// Set pointers & position variables
$position = 0;
$eof = 0;

while ($eof < sizeof($file))
{
    for ($i = $position; $i < ($position + 2); $i++)
    {
        if ($i < sizeof($file))
        {
            $flag = mysql_query($file[$i]);

            if (isset($flag))
            {
                echo "Insert Successfully<br />";
                $position++;
            }
            else
            {
                echo mysql_error() . "<br>\n";
            }
        }
        else
        {
            echo "<br />End of File";
            break;
        }
    }
    $eof++;
}
?>
But a memory size error still occurs even though I have extended the memory limit from 128M to 256M and even 512M.
I think that if I could load a limited number of rows from the (.sql) file, say 1000 at a time, and execute those queries, it might manage to import all the records from the file into the DB.
But I don't have any idea how to track the start and end positions in the file, and how to update them so that the previously fetched rows are not fetched again from the .sql file.
Here is the code you need, now prettified! =D
<?php
include('config.php');

$file = @fopen('country.txt', 'r');

if ($file)
{
    while (!feof($file))
    {
        $line = trim(fgets($file));

        if ($line === '')
        {
            continue; // skip blank lines so mysql_query() isn't called with an empty string
        }

        $flag = mysql_query($line);

        if ($flag !== false)
        {
            echo 'Insert Successfully<br />';
        }
        else
        {
            echo mysql_error() . '<br/>';
        }

        flush();
    }
    fclose($file);
}

echo '<br />End of File';
?>
Basically it's a less greedy version of your code: instead of loading the whole file into memory, it reads and executes small chunks (one-liners) of SQL statements.
Instead of loading the entire file into memory, which is what happens when using the file function, a possible solution would be to read it line by line, using a combination of fopen, fgets, and fclose -- the idea being to read only what you need, deal with the lines you have, and only then read the next couple of them.
Additionally, you might want to take a look at this answer: Best practice: Import mySQL file in PHP; split queries
There is no accepted answer yet, but some of the given answers might already help you...
Use the command-line client; it is far more efficient and should easily handle 50K inserts:
mysql -uUser -p <db_name> < dump.sql
I read recently about inserting lots of queries into a database too quickly. The article suggested using the sleep() (or usleep()) function to delay a few seconds between queries so as not to overload the MySQL server.
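A small sketch of that idea combined with the line-by-line reading above; the batch size and delay are arbitrary values, not recommendations:
<?php
// Sketch only: execute the statements in batches and pause between batches
// so a shared MySQL server isn't hammered by 50k back-to-back inserts.
$file = fopen('country.txt', 'r');
$batchSize = 1000;
$count = 0;

while (!feof($file)) {
    $line = trim(fgets($file));

    if ($line === '') {
        continue;
    }

    mysql_query($line);
    $count++;

    if ($count % $batchSize === 0) {
        usleep(250000);   // pause 0.25 s after every 1000 statements
    }
}

fclose($file);
?>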