I am struggling to understand why PHP is faulting without feedback or error (beyond the Windows error log entry "fault in php5, faulting module php") while executing the following block of code:
$projects = array();
$pqRes = $connection->getResult("SELECT big query");

// build an array keyed by project id
while ($record = sqlsrv_fetch_array($pqRes)) {
    if (!array_key_exists($record['ProjectID'], $projects)) {
        $projects[$record['ProjectID']] = array();
    }
    $projects[$record['ProjectID']][] = $record; // this line faults PHP after about 9100 records
}
The outcome is the same whether objects or arrays are fetched from the SQL resource, and the offending line is the array assignment.
The assignment causes a fault in PHP after around 9,100 records.
If the loop is counted out so that execution terminates in a controlled way, I can see that PHP has consumed about 25 MB of memory; by its configuration it is allowed 256 MB.
The faulting record is not always the same; it can vary by 3 or 4 indexes.
The code is in fact quite pointless, but in a roundabout way it groups records with the same ProjectID. I am very interested to know what it could be doing that causes PHP to die so suddenly.
Thanks for your time.
I am not sure what you mean by a fault, but if it means the script terminates, I suggest enabling the error log and checking what the final error message says. In any case, it may be a PHP bug, so you are always recommended to report it to http://bugs.php.net/, as that is the right place for PHP developers to look at potential bugs.
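A minimal sketch of what "enabling the error log" means here; the log path is an assumption, and since the process appears to be crashing, setting these in php.ini rather than at runtime is safer:
error_reporting(E_ALL);                          // report everything
ini_set('display_errors', '0');                  // keep errors off the page
ini_set('log_errors', '1');                      // write them to a file instead
ini_set('error_log', 'C:\\php\\php_errors.log'); // assumed path; adjust as needed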
Related
In a PHP script I run a MySQL query, where $SQL is a hardcoded SQL SELECT statement:
$Result = mysqli_query($server,$SQL);
echo '<br>'.$SQL.'*'.mysqli_num_rows($Result);
#die('test');
The whole script is quite complex, but running the code above prints the SQL statement and delivers 14 result rows.
As I wanted to check the database directly at this point, I enabled the die() after the request. But doing so, the SQL statement now delivers just 13 result rows.
I know that cutting the code off after the query may change the state of the system, as any database operations in the cut-off code won't take effect anymore. But no matter whether the die() was active in the previous run or not, I always get 14 results without a die() and always 13 results with a die().
So my problem is: every time I run the code without the die() and directly afterwards run it again with the die() activated, there is no obvious difference up to the die() statement in either the code or the state of the database, so the SELECT should always deliver the same number of rows... which it doesn't.
Can anyone think of a setting which makes this behaviour understandable? I hope my explanation is not too weird - otherwise I am happy to answer any questions. There is probably a simple explanation which only I seem to miss...
Edit:
The problem is probably a bug hidden somewhere in a large piece of code. This is surely hard to answer, especially without the full code. But maybe it helps if I reformulate my question as the following task:
Can you write a PHP program including the above snippet which shows the same behaviour - so that after each run it always delivers 14 results with the die() deactivated and 13 results with the die() activated? Of course, allowing the source code to analyze itself would be cheating...
Edit 2:
I found the reason for the error. It is the printing of PHP notices and warnings which accumulated in the code during development, and which in Firefox seem to lead to a problem if they reach a certain size before the <head> section. The die() case produces fewer of these because it breaks earlier and in fact never even reaches the <head>. If I mute the notices, both examples behave the same. What exactly led to the error I haven't examined further... Sorry that I did not mention the error reporting when describing my question, but I had no clue that it might be the reason - especially as it was active in both cases...
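For completeness, "muting the notices" was just a reporting change along these lines (a sketch, not the exact code used):
// either stop reporting notices and warnings altogether...
error_reporting(E_ALL & ~E_NOTICE & ~E_WARNING);
// ...or, better, keep reporting them but send them to the log
// instead of printing them into the page before the <head> section:
ini_set('display_errors', '0');
ini_set('log_errors', '1');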
Not an answer, just an example of what I mean by "simple, self-contained (,correct) example".
<?php
$server = setup();

$SQL = 'SELECT id,x,y,z FROM soFoo';
$Result = mysqli_query($server, $SQL);
echo '<br>'.$SQL.'*'.mysqli_num_rows($Result);
#die('test');

function setup() {
    $server = new mysqli('localhost', 'localonly', 'localonly', 'test');
    if ( $server->connect_errno ) {
        die('!'.$server->connect_error);
    }
    $server->query('CREATE TEMPORARY TABLE soFoo (
        id int auto_increment,
        x char(255),
        y char(255),
        z char(255),
        primary key(id)
    )') or die('!'.$server->error);

    $stmt = $server->prepare('INSERT INTO soFoo (x,y,z) VALUES (?,?,?)')
        or die('!'.$server->error);
    $stmt->bind_param('sss', $x, $y, $z) or die('!'.$stmt->error);
    foreach( range('a','n') as $x ) { // fill in 14 records
        $z = $y = $x;
        $stmt->execute() or die('!'.$stmt->error);
    }
    return $server;
}
You just have to copy & paste it, adjust the MySQL connection data (credentials, host, database) and then run it.
In my case it prints <br>SELECT id,x,y,z FROM soFoo*14 as expected (since the setup() function added 14 records to the temporary table), regardless of whether the die() was there or not. So this example doesn't behave as you describe, yet it contains the information (and the code snippet) you've provided. Something must be different on your side.
I've tested the script with php5.6.10 (win32) and MySQL 5.6.22-log (win32).
I have a script which analyses XML data and fills some arrays with information.
For some (huge) inputs, the script crashed.
There is a foreach loop which runs around 180 times without problems (memory_get_usage() in iteration 180 is around 20 MB; each iteration adds around 0.1 MB).
Then, with each new iteration, the memory usage just doubles.
With the help of lots of logging I was able to track the problem down to the following line inside the foreach:
$fu = $f['unit'];
$f has the following structure:
array (
    'name' => 'Test',
    'value' => '4',
    'unit' => 'min-1',
)
But in some (many) cases (also before the 180th iteration), the key unit did not exist in the array.
I was able to eliminate the problem by replacing the line with:
$fu = (isset($f['unit']) ? $f['unit'] : '');
With that change, the loop runs until finished (370 iterations in total).
Is there any explanation for this phenomenon?
PHP version: PHP 5.3.3-1ubuntu9.10 with Suhosin-Patch (old...)
Your problem might come from the PHP error handler and not from your actual loop.
As you said, not every "unit" key exists, and reading a missing key will therefore raise an error (or exception, depending on your error handlers). This might also include a stack trace and further debugging information, depending on which extensions (xdebug?) you have installed.
Both will consume memory.
It's always good practice to check for a variable's existence before using it. Always enable E_NOTICE errors in your development system to surface such problems.
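A minimal sketch of both recommendations, assuming PHP 5.3 as in the question (on PHP 7+ the ?? operator is the idiomatic replacement for the isset() ternary):
error_reporting(E_ALL);         // E_ALL includes E_NOTICE, so missing keys are reported
ini_set('display_errors', '1'); // show them during development

$f = array('name' => 'Test', 'value' => '4'); // note: no 'unit' key
$fu = $f['unit'];                              // E_NOTICE: Undefined index: unit
$fu = isset($f['unit']) ? $f['unit'] : '';     // safe lookup, no notice raised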
I'm wondering which errors are considered fatal vs. non-fatal in PHP (though I'm interested in other languages too). Is there a concise explanation and/or listing of the error types somewhere? Does using the expression "non-fatal" even make sense?
The reason I'm wondering is that sometimes when I make PHP errors my $_SESSION (I'm actually using CodeIgniter sessions) is destroyed, whereas in other cases it is not, and I can't quite put my finger on why.
Well, the naming is pretty self-explanatory:
Fatal errors are critical errors which mean that PHP cannot possibly continue executing the rest of your code. For example:
Your script has run out of memory (it hit the memory_limit set in php.ini).
The script contains an infinite loop (e.g. while(1) { echo "Hi friend!"; }) and runs longer than the max_execution_time set in your php.ini.
You are calling a function that does not exist.
Non-fatal errors are usually called warnings or notices. They are still pretty serious and should be fixed, but they do not stop execution; the script continues regardless of the error that occurred (the sketch after this list demonstrates the difference). For example:
You are reading unset variables (a notice).
You are requesting a key in an array that does not exist (a notice).
You are include()-ing a file that does not exist (a warning; require(), on the other hand, is fatal).
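A minimal PHP 5 sketch of the difference (the file and function names are hypothetical placeholders):
// Non-fatal: a notice/warning is raised, then execution continues.
echo $undefinedVariable;      // E_NOTICE: Undefined variable
include 'no_such_file.php';   // E_WARNING: include(): failed to open stream
echo "still running\n";       // this line is reached

// Fatal: execution stops right here (PHP 5; PHP 7 throws an Error instead).
no_such_function();           // Fatal error: Call to undefined function
echo "never reached\n";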
Hope this clears things up a bit for you.
Like most of our code base, our MySQL handling functions are custom built.
They work very well and include a number of logging forks.
A simplified version of our query execution function looks like this:
if (!$result = mysql_query($query)) {
    file_put_contents(QUERYLOG, 'Query '.$query.' failed execution');
}
This is overly simplified, but you get the basic idea: If queries fail, they will be logged to a separate query log.
This is a great way of keeping track of any queries that need to be looked at.
My question is as follows:
With the above, the tiny problem is that if a query fails, both our query log AND our PHP log will be stamped with the error, as a failing mysql_query (or mysql_connect, mysql_select_db, etc.) will produce a PHP error.
What we want to do is suppress the PHP error via:
... $result = @mysql_query($query) ...
So, as far as the question goes:
Does using the @ error suppression operator in PHP cause any performance impact if no error is produced? Or does it only affect performance when an error actually occurs?
I know, I know, micro-optimization, but as you can guess, our query execution function is used millions of times a day, so even a small performance hit is worth examining.
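For concreteness, the suppressed-but-still-logged variant we have in mind would look roughly like this (same legacy mysql_* API as above; the mysql_error() call and FILE_APPEND flag are additions for illustration):
if (!$result = @mysql_query($query)) {
    // our own log still records the failure; PHP's log stays quiet
    file_put_contents(QUERYLOG, 'Query '.$query.' failed: '.mysql_error(), FILE_APPEND);
}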
I did a little "research":
$s = microtime(true);
$a = array('1','2');
$b = $a[1];
echo microtime(true)-$s;
gives 1.1205673217773E-5,
and if I use $b = @$a[1]; I get a bit more: 1.5974044799805E-5.
So: yes, there is a difference, but no, you should not bother.
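A single statement sits at the edge of microtime()'s resolution, so a looped version (a sketch; the iteration count is arbitrary) gives a steadier comparison:
$a = array('1', '2');
$n = 1000000;

$s = microtime(true);
for ($i = 0; $i < $n; $i++) { $b = $a[1]; }   // plain access
$plain = microtime(true) - $s;

$s = microtime(true);
for ($i = 0; $i < $n; $i++) { $b = @$a[1]; }  // suppressed access
$suppressed = microtime(true) - $s;

printf("plain: %.4fs, with @: %.4fs\n", $plain, $suppressed);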
I have a web application that runs fine on our Linux servers, but when running on Mac OS with Zend Server Community Edition using PHP 5.3 we get the error:
usort(): Array was modified by the user comparison function
every time a page loads for the first time (it takes about 2 minutes for a page to load, whereas on the Linux servers the page loads in 1 second).
Has anyone else experienced this, or does anyone have an idea how I can fix the problem? I have tried playing around with PHP and Apache memory settings, with no luck.
There is a PHP bug that can cause this warning, even if you don't change the array.
Short version: if any PHP debug function examines the sort array, it will change the reference count and trick usort() into thinking you've changed the data.
So you will get that warning by doing any of the following in your sort function (or any code called from it):
calling var_dump or print_r on any of the sort data
calling debug_backtrace()
throwing an exception -- any exception -- or even just creating an exception
The bug affects all PHP 5 versions >= 5.2.11 but does not affect PHP >= 7. See the bug report for further details.
As far as I can see, the only workaround is either "don't do that" (which is kind of hard for exceptions), or use the error suppression operator, @usort(), to ignore all errors.
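A minimal sketch of the trigger described above, assuming an affected PHP 5 build (untested here; on PHP 7+ no warning appears):
$data = array(3, 1, 2);
usort($data, function ($a, $b) {
    // Inspecting the sort data from inside the callback changes its
    // reference count, which on affected versions raises:
    // "Warning: usort(): Array was modified by the user comparison function"
    var_dump($a);
    return $a - $b;
});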
To work around this issue, we can handle it as below (note that usort() requires a comparison callback, so a placeholder $cmp is added here):
1) use error_reporting():
$cmp = function ($x, $y) { return $x - $y; }; // your comparison callback
$a = array('id' => 2, 'val' => 3, 'ind' => 3);
$errorReporting = error_reporting(0);
usort($a, $cmp);
error_reporting($errorReporting);
2) use @usort():
$a = array('id' => 2, 'val' => 3, 'ind' => 3);
@usort($a, $cmp);
What version of PHP is on the Linux box?
Are the error_reporting levels the same on both boxes? Try setting them both to E_ALL.
The warning is almost certainly not lying. It's saying that the comparison function you're passing to usort() is changing the array that you're trying to sort - that could definitely make usort take a long time, possibly forever!
My first step would be to study the comparison function and figure out why that's happening. It's possible that if the Linux boxes are using a pre-5.3 version, there is some difference in the behavior of a language function used in the comparison function.
I found that on PHP 5.4, logging with error_log($message, $message_type, $destination, $extra_headers) inside the comparison function causes this error; when I removed the log calls, my problem was solved. Logging may be temporarily suspended by disabling it before the sort function and restoring it afterwards.
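A minimal sketch of that idea, with a hypothetical $logEnabled flag and log path; the point is simply that error_log() is never called while usort() is running:
$logEnabled = false; // suspend logging for the duration of the sort
usort($data, function ($a, $b) use (&$logEnabled) {
    if ($logEnabled) {
        error_log("comparing $a <=> $b", 3, '/tmp/sort.log'); // hypothetical destination
    }
    return $a - $b;
});
$logEnabled = true;  // restore logging afterwards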