My Memcached entries expire after 300 seconds. If I set the expiration to something smaller than 300 seconds, it works properly. However, if I set it to anything over 300 seconds it doesn't matter: the entry always expires at 300 seconds.
I am using PHP and here is my code:
$memcache = new Memcached();
$memcache->addServer("localhost", 11211);

$cacheData = $memcache->get($key);
if ($cacheData === false) { // Memcached::get() returns false on a cache miss
    $cacheData = fetch_database_results(); // stand-in for the actual DB query
    $memcache->set($key, $cacheData, 600); // expire after 600 seconds
}
return $cacheData;
I am just getting started with this project, so only one API DB call is being cached. Is there some setting that might cause this, perhaps somewhere in the php.ini file?
So many of my questions end up being issues with the server configuration and not the code.
I contacted my server admin for the tenth time today and begged him to check the logs to figure it out.
Turns out they have a cPanel monitoring system in place, and it saw that the memcached server was being run by the user 'nobody', so it was shutting it down every 3 minutes. Then my script would start it up again, and the cycle would continue.
The admin had to run the memcached server under a named user, such as 'memcache'; the next time the monitoring system checked, it saw a user it recognized and left the memcached server alone.
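For anyone else who hits this, starting memcached under a dedicated user looks roughly like this (the memory size and listen address below are just example values):
# start memcached as the 'memcache' user instead of 'nobody'
memcached -d -u memcache -m 64 -p 11211 -l 127.0.0.1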
Was not sure whether to delete this question or answer it. Anyway, that's that!
I have an ownCloud installation on an Ubuntu server with 5.3 GB RAM.
Every day I get an out-of-memory condition that kills the MySQL process, and the ownCloud website fails. So I have to restart that server (dedicated to ownCloud) at least once a day (not a good practice...).
This is the output of ps aux --sort -pmem | more:
There are more than 50 processes like files:scan -all. They increase continuously until the OOM killer fires.
I read something about OCC but I can't figure out how to disable it.
I tried to edit mpm_prefork.conf and set:
I also read about editing overcommit_memory to disable overcommit, but I don't want to cause a kernel panic either.
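(For reference, what I read about is the vm.overcommit_memory sysctl; quoting it here only for context, not as something I've applied:)
# current policy: 0 = heuristic overcommit, 1 = always overcommit, 2 = never overcommit
sysctl vm.overcommit_memory
# the change I read about, and am wary of, would be:
# sysctl -w vm.overcommit_memory=2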
Every process uses about 1.1% of memory. Within a few hours usage reaches 100%, and the kernel kills MySQL and other processes.
Any idea/solution?
We've inherited a platform that has a cronjob that, every minute, curls a local PHP script three times with different parameters (curl -s -o --url https://localhost/myscript.php?option=XYZ -k). That script runs for about 1 minute, and it's possible for multiple instances with the same option to overlap for a short time. The script logs to a different file per option, and each log entry starts with the timestamp of when the script started, so that timestamp acts as an instance identifier.
The script has this skeleton:
<?php
$option = 'XYZ';
$scriptId = time(); // instance identifier: the start timestamp
$file = "log_$option.txt";
file_put_contents($file, "\n$scriptId: Start\n", FILE_APPEND);
session_start();
$expires = time() + 60;
file_put_contents($file, "\n$scriptId: Expires at $expires\n", FILE_APPEND);
while (time() < $expires) {
    file_put_contents($file, "\n$scriptId: Not expired at " . time() . "\n", FILE_APPEND);
    switch ($option) {
        case 'X':
            do_db_stuff();
            break;
        ...
    }
    file_put_contents($file, "\n$scriptId: Will sleep at " . time() . "\n", FILE_APPEND);
    sleep(13);
    file_put_contents($file, "\n$scriptId: Woke up at " . time() . "\n", FILE_APPEND);
}
file_put_contents($file, "\n$scriptId: Finished at " . time() . "\n", FILE_APPEND);
Normally this script runs fine (even if they overlap when instance A is sleeping for the last time and instance B starts) but on occasion we have two issues that we can confirm with the logs:
sometimes it sleeps for less than 13 seconds (a variable amount of time, always less than 13);
sometimes the script stops (no more logging after the "Will sleep" one, and we can verify that no db stuff is being done). [Update on this in Edit 2]
We've looked into the possible causes but couldn't find any:
php max_execution_time is set to 240 seconds and the script never takes more than one and a half minutes;
the sleep documentation says it is per session, but curl isn't using cookies, so each instance should get a different session (and if they did share one, it would always block, since we always execute three script instances, which it doesn't);
the hosting tech team says there are no errors in either the server error log or the PHP error log at the timestamps where these issues happen.
I can't reproduce the issues at will, but they happen at least once a day.
What I'd like to know is: what could be interfering with the sleep behaviour, and how can I detect or fix it?
Additional information:
linux system
mysql 5.5
apache
php 5.3
php max_execution_time set to 240
Edit 1: Just to clarify: we actually have 3 options, so it writes to 3 log files, one for each option. At any given time there can be up to two instances per option running (instances of the same option overlap for a small amount of time).
Edit 2: As per @Jan's suggestion, I added logging of the sleep function's return value. The script has already stopped once with that logging in place:
[2016-01-05, 13:11:01] Will sleep at 2016-01-05, 13:11:29
[2016-01-05, 13:11:01] Woke up at 2016-01-05, 13:11:37 with sleep return 5
[2016-01-05, 13:11:01] Not expired at 2016-01-05, 13:11:37
[2016-01-05, 13:11:01] Will sleep at 2016-01-05, 13:11:37
[2016-01-05, 13:11:01] Woke up at 2016-01-05, 13:11:38 with sleep return 13
... no more log from instance [2016-01-05, 13:11:01] ...
[2016-01-05, 13:12:01] Start
According to the sleep documentation:
If the call was interrupted by a signal, sleep() returns a non-zero value. On Windows, this value will always be 192 (the value of the WAIT_IO_COMPLETION constant within the Windows API). On other platforms, the return value will be the number of seconds left to sleep.
So according to the documentation and the log it seems that the sleep is being cut short due to an interrupt.
How can I find out which signal caused this (pcntl_signal?), where it came from, and whether there is any way to avoid it?
Edit 3: I've added code to handle signals with pcntl_signal (trying to register handlers for signals 1 through 255) and log them; the issue still happens, but the log remains empty.
You can define signal handlers with pcntl_signal.
With those handlers you can log when an interruption happens. But AFAIK you can't detect where it comes from.
You can also use pcntl_alarm for your delayed jobs; check the PHP Manual on pcntl_alarm.
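A minimal sketch of logging interrupting signals, for what it's worth (on PHP 5.x you need declare(ticks = 1) so handlers get dispatched; the signal list and log file name are just examples):
<?php
declare(ticks = 1); // required on PHP 5.x so pending signals are dispatched

// log every signal we manage to catch (SIGKILL and SIGSTOP cannot be caught)
function log_signal($signo)
{
    file_put_contents('log_signals.txt', date('c') . ": caught signal $signo\n", FILE_APPEND);
}

foreach (array(SIGHUP, SIGINT, SIGTERM, SIGALRM, SIGUSR1, SIGUSR2) as $signo) {
    pcntl_signal($signo, 'log_signal');
}

$left = sleep(13);
if ($left > 0) {
    file_put_contents('log_signals.txt', date('c') . ": sleep cut short, {$left}s left\n", FILE_APPEND);
}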
Given your stack, the interrupt signal could come from Apache. Depending on how Apache communicates with PHP on your stack, there are several timeout options in the Apache configuration that could interfere with script execution.
If your cron calls curl on localhost, maybe it could call the PHP file directly instead, and so avoid tying up Apache processes to keep the request alive. You'll have to set max_execution_time in your dedicated CLI php.ini file.
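For illustration, a crontab entry along these lines could replace the curl call, assuming the script were adapted to read its option from $argv instead of $_GET (the paths are placeholders):
# run the script via the PHP CLI instead of curl through Apache
* * * * * php -f /var/www/myscript.php XYZ >> /var/log/myscript_cron.log 2>&1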
I have been trying to get Log Analyzer to work for longer than I care to admit. I can't seem to get syslog messages to display in the Log Analyzer web-GUI, but this morning I got the following error:
"While reading the logstream, the php script timeout forced me to abort at this point. If you want to avoid this, please increase the LogAnalyzer script timeout in your config.php. If the user system is installed, you can do that in Admin Center."
I was not getting this error on Friday; only "No syslog records found." The timeout is set to 30 seconds in the config file, but I read that setting gets overwritten back to the default anyway. The database grew to over 4 GB over the weekend. Does the DB size have anything to do with this?
It's pretty clear I am new to php and Log Analyzer, so any help with both would be greatly appreciated. I can post config file settings if needed.
Try increasing the timeout; 30 seconds may not be enough to parse a large file. For example, set it to 600 seconds.
Another place to increase the timeout is the following line in the php.ini file:
; Maximum execution time of each script, in seconds
; http://php.net/max-execution-time
max_execution_time = 30
If there are too many records in the database, the script does not have enough time to process everything. I keep only the last 14 days in the database; that's enough:
USE Syslog;
DELETE FROM Syslog.SystemEvents WHERE ReceivedAt < DATE_SUB(NOW(), INTERVAL 14 DAY);
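If it helps, a hypothetical crontab entry to run that purge nightly (the credentials and schedule are placeholders):
# purge Syslog records older than 14 days, every night at 03:00
0 3 * * * mysql -u loguser -p'secret' -e "DELETE FROM Syslog.SystemEvents WHERE ReceivedAt < DATE_SUB(NOW(), INTERVAL 14 DAY);"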
I have an Apache 2.2 / PHP 5.3 / MySQL 5.5 application. A form on page1.php accepts user input. Values are passed to page2.php using GET. The PHP script on page2.php runs a MySQL query and shows the results. Depending on the user input parameters, the query may run from 3 to 900 seconds.
In my tests, results from any query that runs < 300 sec are shown OK. Longer-running queries complete OK on the server (I see CPU load go from 90% to 0% after, let's say, 500 sec), but the browser does not show the result and keeps showing "Transferring data from my.host.org ..." in the status bar.
While this is happening, if I try to open any page of my application in a new instance of the same browser (Firefox), it says "Connecting..." in the tab header and "Waiting for my.host.org ..." in the status bar. Opening any page of my application in another browser (IE) at the same time works OK.
Below are the settings I have changed/set so far, but they did not help. Any ideas would be appreciated. Thank you.
apache2.conf:
Timeout 300 -> 1800
php.ini:
user_ini.cache_ttl = 300 -> 1800
max_execution_time = 30 -> 1800
default_socket_timeout = 60 -> 1800
mysql.connect_timeout = 60 -> 1800
page2.php:
ignore_user_abort(1);
Considering this is a long-running query, do you really need to run it in the browser window?
You should consider using a cron job and redirecting the output to an e-mail address; PHP processes running in the CLI have, by default, unlimited execution time.
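For illustration, a hypothetical crontab along these lines would run the query off the browser and mail the output (the address, schedule, and path are placeholders):
MAILTO=you@example.com
# run the long query nightly via the CLI, where max_execution_time doesn't apply
0 2 * * * php -f /var/www/page2.php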
One last tip: if you're running PHP as (F)CGI, you may have to raise the CGI setting that controls how long the web server waits for output from the PHP process.
I am running a huge import into my database (about 200k records) and I'm having a serious issue with my import script timing out. I used my cell phone as a stopwatch and found that it times out at exactly 45 seconds every pass (internal server error)... it only does about 200 records at a time, sometimes fewer. I scanned my phpinfo() and nothing is set to 45 seconds, so I am clueless as to why this is happening.
My max_execution_time is set to 5 minutes and my max_input_time is set to 60 seconds. I also tried setting set_time_limit(0); ignore_user_abort(1); at the top of my page but it did not work.
It may also be helpful to note that my error file reads: "Premature end of script headers" as the execution error.
Any assistance is greatly appreciated.
I tried all the solutions on this page and, of course, running from the command line:
php -f filename.php
as Brent says, is the sensible way round it.
But if you really want to run a script from your browser that keeps timing out after 45 seconds with a 500 internal server error (as I found when rebuilding my phpBB search index) then there's a good chance it's caused by mod_fcgid.
I have a Plesk VPS and I fixed it by editing the file
/etc/httpd/conf.d/fcgid.conf
Specifically, I changed
FcgidIOTimeout 45
to
FcgidIOTimeout 3600
3600 seconds = 1 hour. That should be long enough for most cases, but adjust upwards if required. I saw one example quoting 7200 seconds in there.
Finally, restart Apache to make the new setting active.
apachectl graceful
HTH someone. It's been bugging me for 6 months now!
Cheers,
Rich
It's quite possible that you are hitting an enforced resource limit on your server, especially if the server isn't fully under your control.
Assuming it's some type of Linux server, you can see your resource limits with ulimit -a on the command line; ulimit -t will show just the limit on CPU time.
If your CPU time is limited, you might have to process your import in batches, e.g. with a driver loop like the one sketched below.
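A rough sketch of that idea, assuming a hypothetical import_batch.php that imports one 1000-row slice per invocation (the script name and row count are placeholders):
# drive the import in 1000-row slices so no single PHP process exceeds the CPU limit
for offset in $(seq 0 1000 200000); do
    php -f import_batch.php "$offset"
done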
First, you should be running the script from the command line if it's going to take a while. At the very least your browser would time out after 2 minutes if it receives no content.
php -f filename.php
But if you need to run it from the browser, try adding header("Content-type: text/html") before the import kicks off.
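A minimal sketch of that idea, where next_import_batch() and import_batch() are hypothetical stand-ins for your own import code:
<?php
// send the header early so the browser starts receiving a response
header("Content-type: text/html");

while ($batch = next_import_batch()) { // hypothetical: returns the next chunk, or false when done
    import_batch($batch);              // hypothetical: inserts that chunk into the DB
    echo "Imported another batch...<br>\n";
    flush();                           // push output so the connection stays alive
}
echo "Done.\n";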
If you are on a shared host, it's possible there are system restrictions whereby long-running queries and/or scripts are automatically killed after a certain length of time. These restrictions are generally looser for scripts not run through the web server, so running it from the command line would help.
The 45 seconds could be a coincidence; it could be how long it takes you to reach the memory limit. Increasing the memory limit would look like:
ini_set('memory_limit', '256M');
It could also be the actual DB connection that is timing out. What DB server are you using?
For me, mssql times out with an extremely unhelpful error, "Database context changed", after 60 seconds by default. To get around this, you do:
ini_set('mssql.timeout', 60 * 10); // 10 min
First of all, max_input_time and set_time_limit(0) will only work on a VPS or dedicated server. Instead, you can structure your implementation along these lines:
First, read the whole CSV file.
Then grab only 10 entries (rows) or fewer and make an AJAX call to import them into the DB.
Keep making AJAX calls, 10 entries at a time, echoing something to the browser after each one; this way your script will never time out (see the sketch below).
Follow the same method until the CSV rows are finished.
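A minimal sketch of such a chunked-import endpoint, assuming a CSV source (the file name, chunk size, and import_row() helper are all hypothetical):
<?php
// import_chunk.php -- called repeatedly via AJAX; each request imports one small chunk
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$chunk  = 10;

$lines = file('import.csv', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$slice = array_slice($lines, $offset, $chunk);
foreach ($slice as $line) {
    import_row(str_getcsv($line)); // hypothetical helper that inserts one row into the DB
}

// tell the client where to continue, or null when the file is exhausted
echo json_encode(array(
    'imported' => count($slice),
    'next'     => ($offset + $chunk < count($lines)) ? $offset + $chunk : null,
));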