I'm running a large WordPress multisite install that, for each site, runs a number of database queries to display information in the respective blog. The queries aren't particularly heavy, yet I often see this in my error log:
PHP Fatal error: Allowed memory size of 1572864000 bytes exhausted (tried to allocate 97 bytes) in /home/********/public_html/wp-includes/wp-db.php on line 1775
When this occurs, the page being called (the one that triggers the error) apparently stops loading and the user has to reload to access the information. I've been through every page being called, and all of them load on their own without any issue.
Looking at the relevant part of wp-db.php, this is the code that triggers the error:
preg_match( '/^\s*(create|alter|truncate|drop)\s/i', $query ) ) {
$return_val = $this->result;
i.e. while a database query is being executed. Something is clearly going wrong here, as I've already tried raising the PHP memory limit. Does anyone know how I would go about identifying what is causing this error so I can fix it?
Put the following line of code in your wp-config.php file.
define( 'WP_MEMORY_LIMIT', '2000M' ); // Value must be greater than current value
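If you also want to narrow down which requests are actually hitting the limit, one option is to log each request's peak memory usage on shutdown. This is only a rough sketch (the log destination and message format are my own choices, and I haven't tested it on a multisite install); you could drop it into wp-config.php as well:
// Rough sketch: log the peak memory usage and request URI for every request,
// so spikes can be traced back to a specific page or admin call.
register_shutdown_function( function () {
    $peak_mb = round( memory_get_peak_usage( true ) / 1048576, 1 );
    $uri     = isset( $_SERVER['REQUEST_URI'] ) ? $_SERVER['REQUEST_URI'] : 'cli';
    error_log( "Peak memory {$peak_mb} MB for {$uri}" );
} );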
Please let me know if you need further help.
Thanks!
I have a very simple query to get records from the database:
\DB::table("table")->get();
When I try to get more than ±145000 records from the database, I get:
500 server error.
Code like this:
\DB::table("table")->take(14500)->get();
does work, though. When I try to get more than 15k records, I get the error immediately, with no loading time or further information.
I cannot get any more info from the logs either. The odd thing is that when I run the same code in Tinker, I can get all the records (it behaves the same with Eloquent).
If you check your error log, you will most likely see something along the lines of:
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 54 bytes)
It would be better to chunk your results instead of loading them all into memory at once:
\DB::table("table")->chunk(500, function($results) {
foreach($results as $result) {
do your thing
}
});
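If you only need to iterate over the rows once, a lazy cursor is another option that keeps memory usage flat. This is just a sketch, assuming a Laravel version where the query builder exposes cursor():
// Streams rows one at a time via a generator instead of building the
// full result set in memory.
foreach (\DB::table("table")->cursor() as $result) {
    // do your thing
}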
I have written a script that searches for values in XML files. I retrieve these files online via the following code:
# Reads entire file into a string
$result = file_get_contents($xml_link);
# Parses the XML string into an object
$xml = simplexml_load_string($result);
But the XML files are sometimes so big that I get the following error: Fatal error: Maximum execution time of 30 seconds exceeded.
I have adapted php.ini to set max_execution_time to 360 seconds, but I still get the same error.
I have two options in mind.
If this error occurs, run the line again. But I couldn't find anything about this online (I am probably searching with the wrong search terms). Is there a way to re-run the line where the error occurs?
Save the XML files temporarily to local disk, search for the information there, and remove the files at the end of the process. Here I have no idea how to remove them after retrieving all the data. And would this actually solve the problem? My script still needs to search through the XML file, so won't it take the same amount of time?
When I used these two lines in my script, the problem was solved:
ini_set('max_execution_time', 300);
set_time_limit(0);
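If you also want to pursue your second idea (keeping a temporary local copy and deleting it afterwards), a rough sketch could look like the following; the temporary file name is just an example, and note that this mainly helps when the download, rather than the parsing, is the slow part:
// Rough sketch: save the feed to a local file, parse the local copy,
// then delete it once the data has been extracted.
$tmp_file = 'temp_feed.xml'; // example name, adjust to your setup
if (file_put_contents($tmp_file, file_get_contents($xml_link)) !== false) {
    $xml = simplexml_load_file($tmp_file);

    // ... search the XML here ...

    unlink($tmp_file); // remove the temporary copy afterwards
}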
I wrote an error handler for my website that looks like this:
function errorHandler($number, $string, $file, $line, $context, $type = '') {
// save stuff in DB
}
Which I register like this:
set_error_handler('errorHandler', E_ALL);
I save all of the passed variables in a DB, as well as a backtrace to help me debug the problem:
print_r(debug_backtrace(DEBUG_BACKTRACE_PROVIDE_OBJECT), true)
The problem is that I sometimes get this error:
Allowed memory size of 134217728 bytes exhausted (tried to allocate 30084081 bytes)
The reason the error handler was run when it gave the above error was that I tried to use an undefined variable after having created an Amazon S3 object (from their PHP AWS library). I'm assuming that because the AWS library objects are so large, the backtrace pulls in a ton of data, which causes the out-of-memory error (?).
I want to include a backtrace when possible to help with debugging, but how do I prevent calling the debug_backtrace() function from causing a fatal error (inside my error handler, which is kind of ironic..)?
I suspect you simply need to remove the DEBUG_BACKTRACE_PROVIDE_OBJECT.
It could be that your code has circular references in the objects, which means that when dumping them it loops until all memory is consumed.
An alternative way to do this is to throw and catch an Exception and then use it to get your backtrace:
try {
    throw new Exception();
} catch (Exception $e) {
    echo $e->getTraceAsString();
}
http://php.net/manual/en/exception.gettraceasstring.php
Or if you need more verbosity, try print_r($e->getTrace());
http://php.net/manual/en/exception.gettrace.php
I would recommend, if at all possible, replacing DEBUG_BACKTRACE_PROVIDE_OBJECT with DEBUG_BACKTRACE_IGNORE_ARGS.
http://php.net/manual/en/function.debug-backtrace.php
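For example, the call from the question would become something like:
// Same backtrace, but without the (potentially huge) argument and object values.
print_r(debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS), true);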
You could set a limit on your debug_backtrace:
print_r(debug_backtrace(DEBUG_BACKTRACE_PROVIDE_OBJECT, 50), true);
PHP debug_backtrace
First check whether it is really memory that is being exhausted.
I once had this error and traced it to an infinite loop.
Since your server does not allow ini_set(), I suggest you copy the code to a local system and increase the memory limit to 1 GB; if memory still gets exhausted, it is an infinite loop.
Probably not the best answer, but have you considered using var_dump instead of print_r? var_dump only goes down a couple of nesting levels, so the circular-reference issue that edmondscommerce mentioned would not exhaust your memory.
Try increasing the memory limit:
ini_set("memory_limit","256M");
print_r(debug_backtrace(DEBUG_BACKTRACE_PROVIDE_OBJECT), true);
In a previous question I posted, I mentioned that we randomly get 500 Internal Server Error alerts on OpenCart checkout. This seems to occur on AJAX calls to a backend .php/.tpl file. The temporary solution is to re-send the AJAX request when the error alert triggers. For now, that workaround is holding up, as we have yet to see another 500 Internal Server Error alert.
I am trying to look deeper into the problem. Hopefully, I can come up with a better fix.
The only lead I have is in the cPanel logs. In the resource monitor, the virtual memory seems to reach its limit when the error occurs. This is recorded in the error log as well:
'(12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp...'
My assumption is that there could be a memory issue somewhere in the pages being called during checkout.
I've created a log table in the database and written a PHP model that records the memory usage each time it's called:
class ModelToolMem extends Model {
    public function memory_usage_logs($page_route, $page_process) {
        $peak = memory_get_peak_usage();
        $peak_kb = $peak / 1024;
        $peak_mb = $peak_kb / 1024;
        $peak_mb = number_format($peak_mb, 2, '.', ',');
        $peak_details = "peak memory usage = $peak_mb MB";
        $sql = "INSERT INTO table_memory_log (page, process, details, date_time) VALUES ('$page_route', '$page_process', '$peak_details', NOW())";
        $this->db->query($sql);
    }
}
The model gets called via a script on each page involved in the checkout. A sample script looks like this:
$this->load->model('tool/res_mon');
$this->model_tool_mem->memory_usage_logs('quickcheckout/some_page', 'some_page.php > validate');
As I do a test checkout, the memory used by the running script is recorded in the database. The biggest value I observed in the log is 3.00 MB, which seems normal?
However, when I finished the test run and checked the cPanel logs, I saw that there were instances where the virtual memory reached its limit, and this appeared in the error logs:
'(12)Cannot allocate memory...'
But when I checked the logs in the database, I did not see any unusual increase in memory usage.
What would be a better way to monitor virtual memory usage while a PHP script is executing? Is there another PHP command aside from memory_get_peak_usage()?
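For what it's worth, one variant I have come across (I'm not sure whether it explains the gap) is passing true so that the function reports memory actually reserved from the system rather than PHP's internal figure:
// Real memory allocated from the system, in MB; this can be noticeably larger
// than the default emalloc-only number that the model above is logging.
$peak_real_mb = number_format(memory_get_peak_usage(true) / 1048576, 2, '.', ',');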
Thanks
I am using the following code fragment in a PHP script to safely update a shared resource:
$lock_id = sem_get(ftok('tmp/this.lock', 'r'));
sem_acquire($lock_id);
// do something
sem_release($lock_id);
When I stress-test this code with a large number of requests, I get an error:
Warning: semop() failed acquiring SYSVSEM_SETVAL for key 0x1e: No space left on device in blahblah.php on line 1293
The PHP sources show the following code around the "failed acquiring SYSVSEM_SETVAL" message:
while (semop(semid, sop, 3) == -1) {
    if (errno != EINTR) {
        php3_error(E_WARNING, "semop() failed acquiring SYSVSEM_SETVAL for key 0x%x: %s", key, strerror(errno));
        break;
    }
}
which means semop() fails with EINTR. The man page reveals that this means the semop() system call was interrupted by a signal.
My question is can I safely ignore this error and retry sem_acquire?
Edit: I have misunderstood this problem. Please see the clarification I have posted below.
raj
I wouldn't ignore the ENOSPC (you're getting something other than EINTR, as the code shows). You may end up in a busy loop waiting for a resource that you exhausted earlier. If you're out of some kind of space somewhere, you want to make sure you deal with that issue. ENOSPC generally means you are out of...something.
A couple of random ideas:
I am not an expert on the PHP implementation, but I'd try to avoid calling sem_get() each time you want the semaphore. Store the handle instead (see the sketch below). It may be that some resource is associated with each call to sem_get(), and that is where you're running out of space.
I'd also make sure to check the error returns on sem_get(). Yours is just a snippet, but if you failed to get the semaphore, you would get inconsistent results when trying to sem_op() it (perhaps that's where EINTR would make sense).
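A rough sketch of both ideas combined, with the handle fetched once and the return values checked (the error handling here is only an example):
// Fetch the semaphore handle once and reuse it; bail out if it cannot be created.
$lock_id = sem_get(ftok('tmp/this.lock', 'r'));
if ($lock_id === false) {
    die('sem_get() failed');
}

if (sem_acquire($lock_id)) {
    // ... update the shared resource ...
    sem_release($lock_id);
} else {
    // Acquisition failed; log it and decide whether to retry.
}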
After posting this question I noticed that I had misread the code as errno == EINTR and jumped to a conclusion. So, as bog has pointed out, the error is ENOSPC and not EINTR. After some digging I located the reason for the ENOSPC: the number of semaphore undo structures was being exhausted. I increased semmnu and now the code runs without issues. I used semmni*semmsl as the value for semmnu.