I've got a PHP script that I call to run MySQL database backups to .sql files, TAR/GZip them and e-mail them to me. One of the databases is hosted by a different provider than the one providing the web server. Everything is hosted on Linux/Unix. When I run this command:
$results = exec("mysqldump -h $dbhost -u $dbuser -p$dbpass $dbname > $backupfile", $output, $retval);
(FYI, I've also tried this with system(), passthru() and shell_exec().)
My browser loads the page for 15-20 seconds and then stops without finishing. When I look at the server with an FTP client, I can see the resulting file show up a few seconds later, and then the file size grows until the database is backed up. So the backup file is created, but the script stops working before the file can be compressed and sent to me.
I've checked the max_execution_time variable in PHP and it's set to 30 seconds (longer than it takes for the page to stop working), and I have called set_time_limit() with values as high as 200 seconds.
Anyone have any idea what's going on here?
Are you on shared hosting, or are these your own servers? If the former, your hosting provider may have set the max execution time to 15-20 seconds and made it so it cannot be overridden (I have this problem with 1&1 and these types of scripts).
Re-check the execution-time-related parameters with a phpinfo() call... maybe it's all about what Paolo writes.
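If you'd rather check from inside the script itself, here is a minimal sketch; ini_get() reports the value the web SAPI actually enforces for this request, which is what a restrictive host would have locked down:
<?php
// Sketch: print the effective limit for this request, and whether
// set_time_limit() has been disabled by the host via disable_functions.
echo 'max_execution_time: ', ini_get('max_execution_time'), PHP_EOL;
echo 'set_time_limit available: ', function_exists('set_time_limit') ? 'yes' : 'no', PHP_EOL;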
Could also be a (reverse) proxy that is giving up after a certain period of inactivity. Granted, it's a long shot, but anyway... try
// test A
$start = time();
sleep(20);
$stop = time();
echo $start, ' ', $stop;
and
// test B
for ($i = 0; $i < 20; $i++) {
    sleep(1);
    echo time(), "\n";
    flush(); // push the output to the client so the proxy sees activity
}
If the first one times out and the second doesn't, I'd call that not proof, but evidence.
Maybe the provider has set another resource limit beyond the php.ini setting.
Try
<?php passthru('ulimit -a');
If the command is available it should print a list of resources and their limits, e.g.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 4095
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4095
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Maybe you'll find some settings more restrictive than these on your shared server.
Do a manual dump and diff it against the broken one. This may tell you at which point mysqldump stops/crashes.
Consider logging mysqldump output, as in mysqldump ... 2>/tmp/dump.log
Consider executing mysqldump detached so that control is returned to PHP before the dump is finished (see the sketch after this list).
On a side note, it is almost always a good idea to use mysqldump -Q.
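A minimal sketch of the detached approach, assuming the same $dbhost, $dbuser, $dbpass, $dbname and $backupfile variables from the original script; the & sends mysqldump to the background so exec() returns right away, and stderr goes to a log for later inspection:
<?php
// Sketch only: run mysqldump detached; exec() returns immediately with the PID.
$cmd = sprintf(
    'nohup mysqldump -Q -h %s -u %s -p%s %s > %s 2> /tmp/dump.log & echo $!',
    escapeshellarg($dbhost),
    escapeshellarg($dbuser),
    escapeshellarg($dbpass),
    escapeshellarg($dbname),
    escapeshellarg($backupfile)
);
$pid = exec($cmd);
// A later cron job (or a second request) can check the PID and log,
// then compress and mail the finished dump.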
PHP Warning: exec(): Unable to fork [rm some_file.txt] in some.php on line 111
There is a question that has been asked before about this subject: PHP Warning: exec() unable to fork. I have a similar problem, but it is not the same.
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 31364
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 31364
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My limits are shown above, and it looks like there is nothing on the server with a limit low enough to cause this error.
I tried unsetting variables after using them, both with unset() and by setting them to null, to free up memory, but it has no effect.
unset($var);
$var = null;
The "Unable to fork" error occurs because some resource is being exhausted, but I can't find the reason. Can you suggest which logs I should look at?
Any ideas or workaround for this problem?
Any ideas or workaround for this problem?
The problem is likely a flaw in your code, like it was in https://stackoverflow.com/a/20649541/2038383. So the workaround is fixing it.
Can you suggest which logs I should look at?
There are your PHP logs, then your system / kernel logs.
You already know where to get the PHP log and what is in it by default. Unfortunately, you're not going to get much more out of PHP. You could catch the error yourself with set_error_handler(), but that won't give you any more useful info (it'll give you PHP's "errno" but not UNIX's errno).
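For illustration, a minimal sketch of catching the warning yourself; it only captures PHP's own message, not the underlying UNIX errno:
<?php
// Sketch: intercept the exec() warning; you still only see PHP's message.
set_error_handler(function ($errno, $errstr, $errfile, $errline) {
    error_log("caught [$errno]: $errstr at $errfile:$errline");
    return true; // don't run PHP's internal handler as well
});
exec('rm some_file.txt', $output, $retval);
restore_error_handler();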
As for system logs: I said in the comments to check your syslog. There might be something in there, and it's always a good starting point. But actually you won't generally see ulimit violations in syslog. Some will get logged (for example, a stack size violation generates a segfault, which gets logged), but many won't. This post deals with how to get logs of ulimit violations: https://unix.stackexchange.com/questions/139011/how-do-i-configure-logging-for-ulimits. Surprisingly non-trivial.
The way system ulimit violations are supposed to be reported by a system call is by setting an errno. For example, if the max user processes limit is hit, fork() will fail with EAGAIN.
So... you need to get at that UNIX errno to know what is really going on. Unfortunately, I don't think there is a way in PHP (there is posix_errno(), but I'm pretty sure that is limited to PHP's posix_XXX function library). Also note, it's PHP generating the "Unable to fork" message; how that maps to the actual system call error is not completely transparent.
So you're best off looking at other ways to debug, of which there are plenty. System monitoring tools like ps, dstat and strace might be a good start.
Is there any fixed limit on how long a background task can run?
This is how I run the script (background task) manually:
php /var/www/html/app_v2/console.php massbulkinsert app.example.com 10 > /dev/null &
This script processes a huge data set; it takes about 1 hour to complete.
The first time it stopped at the 10100th record; the second time it stopped at the 9975th record. There is no pattern to where it terminates.
In top, the mysql process was at 98%, 100%, even 130% CPU most of the time, and there was about 200 MB of free memory. There is enough disk space.
It's a bit of a wild guess, but usually when you succeed with a smaller amount of data and then get crashes with larger amounts, it has to do with memory issues.
You should have a look at /etc/php5/cli. There is probably also a folder named cgi in there; depending on how your framework executes the background script, I would expect one of these two configurations is used.
Files with the .ini extension are PHP configuration files, and these are among the values you're interested in (values are the defaults on Debian 8):
; Maximum execution time of each script, in seconds
; http://php.net/max-execution-time
; Note: This directive is hardcoded to 0 for the CLI SAPI
max_execution_time = 30
; Maximum amount of memory a script may consume
; http://php.net/memory-limit
memory_limit = -1
Note that there is also a timeout for how long the script can spend reading data sent to it through, say, a pipe (max_input_time). But judging from your command, you're not piping values to it via stdin, but most likely reading a file that is already on disk.
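To confirm which configuration the background script actually loads, a quick check from inside the script itself might help (a sketch; it just prints the loaded ini file and the effective limits):
<?php
// Sketch: show which php.ini the CLI run uses and the limits it enforces.
echo 'loaded ini: ', php_ini_loaded_file(), PHP_EOL;
echo 'max_execution_time: ', ini_get('max_execution_time'), PHP_EOL;
echo 'memory_limit: ', ini_get('memory_limit'), PHP_EOL;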
Hope it helps
I need to run one PHP file 100000 times in parallel. For that I used an exec command in a PHP file (runmyfile.php) and called that file using PuTTY.
The runmyfile.php file has the following code:
for ($i = 0; $i < 100000; $i++) {
    exec('php -f /home/myserver/test/myfile.php > /dev/null &');
}
It executes the myfile.php file 100000 times in parallel.
This myfile.php fetches rows from a MySQL database table, performs some calculations and inserts the values into another table.
But when running it 100000 times, it hangs the server. I'm using CentOS on the server.
Sometimes I'm getting a "resource unavailable" error too.
If I run it 1000 times it works OK.
When I checked ulimit -a, I got the following:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 514889
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1000000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
and my MySQL max_connections is 200000.
Are there any settings that I need to change so that I can execute my PHP file 100000 times properly?
Maybe you need to redesign your application. If you need to process 2 billion records in a MySQL database on a daily basis, I would say that running 100000 scripts in parallel is not the best way.
This would mean that each script processes 20000 records, if I understand you correctly. Is it not possible to process more records in each script, and to cap how many run at once (see the sketch below)?
Have a look at Big Data.
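A minimal sketch of one possible redesign: cap how many workers run at once and hand each one a chunk of records. The chunk argument passed to myfile.php is hypothetical; the original script would need to accept it and work on that slice.
<?php
// Sketch: bounded parallelism instead of 100000 concurrent processes.
$totalChunks = 100000;   // one chunk per former process
$maxParallel = 50;       // tune to what the server can actually handle
$running = [];

for ($chunk = 0; $chunk < $totalChunks; $chunk++) {
    // hypothetical argument: myfile.php takes the chunk number and processes that slice
    $running[] = proc_open(
        'php -f /home/myserver/test/myfile.php ' . $chunk . ' > /dev/null 2>&1',
        [],
        $pipes
    );
    if (count($running) >= $maxParallel) {
        proc_close(array_shift($running)); // wait for the oldest worker to finish
    }
}
foreach ($running as $proc) {
    proc_close($proc);
}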
After installing APC and looking at the apc.php script, I see the uptime restart every one or two hours. Why?
How can I change that?
I set apc.gc_ttl = 0
APC caches live as long as their hosting process; it could be that your Apache workers reach their MaxConnectionsPerChild limit and get killed and respawned, clearing the cache with them. This is a safety mechanism against leaking processes.
mod_php: MaxConnectionsPerChild
mod_fcgid or other FastCGI: FcgidMaxRequestsPerProcess and PHP_FCGI_MAX_REQUESTS (environment variable; the example is for lighttpd, but it should be considered everywhere php -b is used)
php-fpm: pm.max_requests individually for every pool.
You could try setting the option you are using to its "doesn't matter" value (usually 0), then test the setup with a simple hello world PHP script and apachebench, ab2 -n 10000 -c 10 http://localhost/hello.php (tweak the values as needed), to see if the worker PIDs are changing or not.
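For example, a trivial probe script of this kind (a sketch; the file name hello.php just matches the ab2 command above):
<?php
// hello.php: if the reported PID changes between requests under load,
// the workers are being recycled, and the APC cache goes with them.
echo 'pid: ', getmypid();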
If you use a TTL of 0, APC will clear all cache slots when it runs out of memory. This is what happens every 2 hours.
TTL must never be set to 0.
Just read the manual to understand how TTL is used: http://www.php.net/manual/en/apc.configuration.php#ini.apc.ttl
Use apc.php from http://pecl.php.net/get/APC; copy it to your web server to check memory usage.
You must allow enough memory so APC has 20% free after some hours of running. Check this on a regular basis.
If you don't have enough memory available, use the filters option to prevent rarely accessed files from being cached.
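If you want to automate that regular check instead of eyeballing apc.php, a rough sketch using apc_sma_info() (the 20% figure is the target mentioned above):
<?php
// Sketch: report APC shared-memory headroom; aim for roughly 20% free.
$info  = apc_sma_info();
$total = $info['num_seg'] * $info['seg_size'];
$free  = $info['avail_mem'];
printf("APC free: %.1f%% (%d of %d bytes)\n", 100 * $free / $total, $free, $total);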
Check my answer there
What is causing "Unable to allocate memory for pool" in PHP?
I ran into the same issue today, found the solution here:
http://www.itofy.com/linux/cpanel/apc-cache-reset-every-2-hours/
You need to go to AccesWHM > Apache Configuration > Piped Log Configuration and Enable Piped Apache Logs.
I am trying to export a large database via phpMyAdmin. I keep getting an error that the script stopped because the maximum execution time of 600 seconds was reached (or something like that). I tried setting max_execution_time in php.ini to 0 and -1. The change takes effect, as I can see it in phpinfo(), but I am still getting the error. Another strange thing is that originally (before I changed it to 0) it wasn't 600 either; it was 180! Where is this 600 set?
See if it is manually set somewhere. Assuming you are on a UNIX type platform:
find /path/to/root/of/phpmyadmin -name "*.php" -print0 | xargs -0 grep "max_execution_time"
Your web server can have other timeout configurations that may also interrupt PHP execution. Apache has a Timeout directive and IIS has a CGI timeout function. See your web server documentation for specific details.
Don't use phpMyAdmin to import large files. Try using the mysql CLI to import a dump of your DB. Transfer the SQL file to the server and execute it there, either directly or from a PHP script via shell_exec() or system():
mysql --user=user --password=password database < database_dump.sql
Of course the database has to exist, and the user you provide should have the necessary privilege(s) to update the database.
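A minimal sketch of the PHP variant (the paths and credentials are placeholders):
<?php
// Sketch: run the import through the mysql CLI from PHP instead of phpMyAdmin.
$cmd = sprintf(
    'mysql --user=%s --password=%s %s < %s 2>&1',
    escapeshellarg('user'),
    escapeshellarg('password'),
    escapeshellarg('database'),
    escapeshellarg('/path/to/database_dump.sql')
);
echo shell_exec($cmd);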
PHP by default places resource limits on all PHP scripts using the following three directives:
=> max_execution_time: maximum execution time of each script, in seconds (default 30 seconds)
=> max_input_time: maximum amount of time each script may spend parsing request data (default 60 seconds)
=> memory_limit: maximum amount of memory a script may consume (default 8 MB)
Your PHP script may have timed out because of these resource limits. All you need to do is set new resource limits so that the script can finish executing.
If that doesn't work either, you can set it with the set_time_limit(N) function, which sets the time limit in seconds.
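For example, a one-off override at the top of the import script (a sketch; the values are just examples to adjust):
<?php
// Sketch: relax the limits for this script only.
ini_set('memory_limit', '512M');
set_time_limit(0); // 0 means no time limit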