I need to run one PHP file 100000 times at a time. For that I used an exec command in a PHP file (runmyfile.php) and called that file using PuTTY.
The runmyfile.php file has the following code:
for ($i = 0; $i < 100000; $i++) {
    exec('php -f /home/myserver/test/myfile.php > /dev/null &');
}
It executes myfile.php 100000 times in parallel.
myfile.php fetches rows from a MySQL database table, performs some calculations and inserts the resulting values into another table.
But when I run it 100000 times, the server hangs. I'm using CentOS on the server.
Sometimes I also get a "resource unavailable" error.
If I run it 1000 times it works fine.
When I check ulimit -a, I see the following:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 514889
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1000000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My MySQL max_connections is 200000.
Are there any settings I need to change so that I can execute my PHP file 100000 times properly?
Maybe you need to redesign your application. If you need to process 2 billion records in a MySQL database on a daily basis, I would say that running 100000 scripts in parallel is not the best way.
That would mean each script processes 20000 records, if I understand you correctly. Isn't it possible to process more records in each script?
Have a look at Big Data
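If the work really has to be parallelised, one option is to cap the number of workers instead of launching 100000 processes at once. Below is a minimal sketch of that idea; the chunk count, worker count and the chunk argument passed to myfile.php are placeholders, not something taken from your setup:

<?php
// Hypothetical sketch: keep at most $maxWorkers copies of myfile.php running,
// each handling one chunk of rows, instead of forking 100000 processes at once.
$totalChunks = 100;   // e.g. split the table into 100 chunks (placeholder)
$maxWorkers  = 8;     // keep this close to the number of CPU cores
$running     = [];

for ($chunk = 0; $chunk < $totalChunks; $chunk++) {
    // Pass the chunk number so each worker knows which rows to process.
    $cmd = sprintf('php -f /home/myserver/test/myfile.php %d > /dev/null 2>&1', $chunk);
    $running[] = proc_open($cmd, [], $pipes);

    // When the pool is full, wait for the oldest worker to finish.
    if (count($running) >= $maxWorkers) {
        proc_close(array_shift($running)); // blocks until that process exits
    }
}

foreach ($running as $proc) {
    proc_close($proc);  // wait for the remaining workers
}

This way the process table and the database never see more than a handful of workers at a time, which is usually what exhausts the "max user processes" limit and the MySQL connections.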
PHP Warning: exec(): Unable to fork [rm some_file.txt] in some.php on line 111
There is a question that has been asked before about this subject: PHP Warning: exec() unable to fork. I have a similar problem, but it is not the same.
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 31364
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 31364
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
My limits are shown above, and it looks like nothing on the server has a limit low enough to cause this error.
I tried unsetting variables after using them, both with unset() and by setting them to null, to free up memory, but it has no effect:
unset($var);
$var = null;
The "Unable to fork" error occurs because some resource is being exhausted, but I can't find which one. Can you suggest which logs I should look at?
Any ideas or workarounds for this problem?
Any ideas or workarounds for this problem?
The problem is likely a flaw in your code, like it was in https://stackoverflow.com/a/20649541/2038383. So the workaround is fixing it.
Can you suggest which logs I should look at?
There are your PHP logs, then your system/kernel logs.
You already know where to get the PHP log and what is in it by default. Unfortunately you're not going to get much more out of PHP. You could catch the error yourself with set_error_handler(), but that won't give you any more useful information (it will give you PHP's "errno" but not UNIX's errno).
As for system logs: I said in the comments to check your syslog. There might be something in there, and it's always a good starting point, but you generally won't see ulimit violations in syslog. Some get logged (for example, exceeding the stack size generates a segfault, which is logged), but many don't. This post deals with how to get logs of ulimit violations: https://unix.stackexchange.com/questions/139011/how-do-i-configure-logging-for-ulimits. Surprisingly non-trivial.
The way ulimit violations are supposed to be reported is through an errno set by the failing system call. For example, if max user processes is hit, fork() will fail with EAGAIN.
So you need to get at that UNIX errno to know what is really going on. Unfortunately I don't think there is a way to do that in PHP (there is posix_errno(), but I'm pretty sure that is limited to PHP's posix_XXX function library). Also note that it is PHP generating the "Unable to fork" message; how it maps to the actual system call error is not completely transparent.
So you're best off looking at other ways to debug, of which there are plenty. System monitoring tools such as ps, dstat and strace might be a good start.
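As a stop-gap on the PHP side you can at least detect when exec() starts failing and back off instead of hammering the system further. A rough sketch (the retry count and sleep interval are arbitrary; exec() is documented to return false on failure):

<?php
// Hypothetical helper: retry exec() with a back-off when it fails, and log
// the PHP warning text, since the underlying UNIX errno is not exposed.
function exec_with_retry($cmd, $maxTries = 5)
{
    for ($try = 1; $try <= $maxTries; $try++) {
        $out = @exec($cmd, $lines, $status);
        if ($out !== false && $status === 0) {
            return $out;
        }
        $err = error_get_last();
        error_log(sprintf('exec attempt %d failed: %s', $try,
                          isset($err['message']) ? $err['message'] : 'unknown'));
        sleep($try); // give the system time to reap children / free resources
    }
    return null;
}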
Is there a fixed limit on how long a background task can run?
This is how I run the script (background task) manually:
php /var/www/html/app_v2/console.php massbulkinsert app.example.com 10 > /dev/null &
This script processes a huge data set and takes about one hour to complete.
The first time it stopped at the 10100th record, the second time at the 9975th record; there is no pattern to where it terminates.
In top, the script and the mysqld process were at 98%, 100% and 130% CPU most of the time, and free memory was about 200 MB. There is enough disk space.
It's a bit of a wild guess, but usually when a run succeeds with a smaller amount of data and then crashes with a larger amount, it has to do with memory issues.
You should have a look at /etc/php5/cli. There is probably also a folder named cgi in there; depending on how your framework executes the background script, I would expect one of these two configurations to be used.
Files with the .ini extension are PHP configuration files, and these are among the values you're interested in (the values shown are the defaults on Debian 8):
; Maximum execution time of each script, in seconds
; http://php.net/max-execution-time
; Note: This directive is hardcoded to 0 for the CLI SAPI
max_execution_time = 30
; Maximum amount of memory a script may consume
; http://php.net/memory-limit
memory_limit = -1
Note that there is also a timeout on how long the script may spend reading data sent to it through, say, a pipe (max_input_time). But judging from your command you're not piping values to it via stdin; most likely you're reading a file that is already on disk.
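To see which values the CLI binary actually runs with (the CLI often reads a different php.ini than Apache or FPM), you can ask PHP directly; for example:

<?php
// Quick check of the limits the CLI SAPI is really using.
echo 'Loaded php.ini:      ', php_ini_loaded_file(), PHP_EOL;
echo 'memory_limit:        ', ini_get('memory_limit'), PHP_EOL;
echo 'max_execution_time:  ', ini_get('max_execution_time'), PHP_EOL;
echo 'max_input_time:      ', ini_get('max_input_time'), PHP_EOL;

Run it with the same binary you use for the background task, so you see the settings that actually apply to it.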
Hope it helps
This is my IM command:
/usr/bin/convert \
    'src.tif' \
    -limit memory 0 \
    -limit map 0 \
    -limit file 0 \
    -alpha transparent \
    -clip \
    -alpha opaque \
    -resize 800x600 \
    'end.png' \
    2>&1
So this removes the white background of my TIFF by clipping along the path stored in the file. The image is then resized and saved as a transparent PNG.
I get no errors from IM when running this.
But when I run this command from PHP on about 13000 files, I sometimes get these errors:
sh: line 1: 25065 Killed /usr/bin/convert \
'public_html/source_files/XXXX123/XXXX123/XXXX123.tif' \
-limit memory 0 -limit map 0 -limit file 0 -alpha transparent \
-clip -alpha opaque -resize 800x600 \
'public_html/converted/XXXX123/XXXX123/XXXX123_web.png' 2>&1
sh: line 1: 25702 Killed /usr/bin/convert \
'public_html/source_files/XXXX123/XXXX123/XXXX123.tif' \
-limit memory 0 -limit map 0 -limit file 0 -alpha transparent \
-clip -alpha opaque -resize 800x600 \
'public_html/converted/XXXX123/XXXX123/XXXX123_web.png' 2>&1
But the bigger problem is: some of the pictures are broken. Below is a "bad" image on the left and a "good" image on the right (on drag / against a dark background you can see the problem better):
Running the command manually on the same file gives a correct result; only running it from the PHP loop script produces broken results. (PHP loop script)
I run the script this way: php55 run.php. A simple loop with find as a shell script produces the same results.
So I have searched, asked on the ImageMagick Discourse forum, and run this procedure on two machines with different distributions (Debian Wheezy, Ubuntu Server 14.04).
Note/EDIT 1: Running the command in the terminal with the same file provides a perfect result.
EDIT 2: Added example TIFF file here
I'm not sure if this is an answer. For now it is pure speculation. So here goes...
By setting the limits to a value of 0, you are basically telling ImageMagick: "Your resources are not limited at all. You do not need to care about any limits."
What if you didn't set any limits? Remove all -limit ... 0 parts from your command. In that case ImageMagick would use its built-in defaults, or the otherwise defined settings (which may come from the policy.xml file of your IM installation, or from various environment variables). You can query the current limits on your system with the following command:
identify -list resource
On my system, I get these values:
File Area Memory Map Disk Thread Throttle Time
---------------------------------------------------------------------------
192 4.295GB 2GiB 4GiB unlimited 1 0 unlimited
What if you set these limits to reasonable values that match your system's actually available resources? Assume you have 8 GB of RAM, 50 GB of free disk space and plenty of free inodes on your disk volume. Then try setting them like this: -limit disk 10GB -limit memory 3GB -limit map 6GB.
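Translated back to the original command, a hedged example with explicit limits might look like this from PHP (the 3GB/6GB/10GB figures are only placeholders for the assumed 8 GB machine):

<?php
// Hypothetical sketch: the same convert call, but with finite resource limits
// instead of "0" (= unlimited). Values are placeholders, not measured numbers.
$cmd = "/usr/bin/convert 'src.tif' "
     . "-limit memory 3GB -limit map 6GB -limit disk 10GB "
     . "-alpha transparent -clip -alpha opaque -resize 800x600 "
     . "'end.png' 2>&1";
exec($cmd, $output, $status);
if ($status !== 0) {
    // A non-zero status (e.g. 137 when the kernel OOM killer sends SIGKILL)
    // tells you this particular conversion did not finish cleanly.
    error_log("convert failed ($status): " . implode("\n", $output));
}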
ImageMagick resource management
For all its processing and intermediate steps, ImageMagick needs access to an intermediate pixel cache in memory or on storage before it can deliver the final result.
This need for pixel cache storage can be satisfied by different resources:
heap memory,
anonymous memory map,
disk-based memory map,
direct disk.
ImageMagick makes use of all these resources progressively:
Once heap memory is exhausted, it stores pixels in an anonymous map.
Once the anonymous memory map is exhausted, it creates the pixel cache on disk and attempts to memory-map it.
Once memory-map memory is exhausted, it simply uses standard disk I/O.
Disk storage is cheap but also very slow: it is on the order of three orders of magnitude (a thousand times) slower than memory. Some speed improvement (up to 5x) can be obtained by memory-mapping the disk-based cache.
ImageMagick is aware of various ways to control the amount of these resources:
Built-in default values. These limits are: 768 files, 3GB of image area, 1.5GiB memory, 3GiB memory map, and 18.45EB of disk space.
policy.xml config file. Please look up what's in your own policy.xml file. Use convert -list policy to find the location of this file first. Then use cat /some/path/policy.xml to see its contents. (The file uses an XML syntax. Don't forget: anything enclosed in <!-- and --> is a comment!) It also contains comments explaining various details. The policy.xml can define much more things than just the available limit resources. Settings in policy.xml take precedence over the built-in default values if they are defined there.
Environment variables. Here is a list of environment variables which can limit IM resources: MAGICK_AREA_LIMIT (image area limit), MAGICK_DISK_LIMIT (disk space limit), MAGICK_FILE_LIMIT (maximum no. of open files), MAGICK_MEMORY_LIMIT (heap memory limit), MAGICK_MAP_LIMIT (memory map limit), MAGICK_THREAD_LIMIT (maximum no. of threads) and MAGICK_TIME_LIMIT (maximum elapsed time in seconds). These environment variables, if set, take precedence over the policy.xml config file. (A sketch of setting them from PHP follows after this list.)
-limit <name> <value> settings on command line. The following <names> are recognized:
width (maximum width of an image). When limit is exceeded, exception is thrown and processing stops.
height (maximum height of an image). When limit is exceeded, exception is thrown and processing stops.
area (maximum number of bytes for any single image to reside in pixel cache memory). When limit is exceeded, automagical caching to disk (possibly memory-mapped) sets in.
memory (maximum memory allocated for the pixel cache from anonymous mapped memory or heap).
map (maximum amount for memory map allocated for pixel cache).
disk (maximum amount of disk space permitted for use by pixel cache). When limit is exceeded, pixel cache is not created and a fatal exception is thrown.
files (maximum number of open pixel cache files). When limit is exceeded, all subsequent pixels cached to disk are closed and reopened on demand.
thread (maximum number of threads which can run in parallel).
time (maximum time in seconds a process is permitted to execute). When this limit is exceeded, an exception is thrown and processing stops.
The -limit setting on the command line takes precedence and overrides all other settings.
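As referenced above for the environment variables: if you want to apply limits from the PHP side without editing every command line, you can set the MAGICK_* variables before spawning convert. A sketch with placeholder values:

<?php
// Hypothetical sketch: ImageMagick resource limits via MAGICK_* environment
// variables. putenv() makes them visible to child processes started by exec().
putenv('MAGICK_MEMORY_LIMIT=3GB');  // heap memory for the pixel cache
putenv('MAGICK_MAP_LIMIT=6GB');     // memory-mapped pixel cache
putenv('MAGICK_DISK_LIMIT=10GB');   // disk-based pixel cache
putenv('MAGICK_THREAD_LIMIT=2');    // cap the threads each convert process uses

exec("/usr/bin/convert 'src.tif' -resize 800x600 'end.png' 2>&1", $out, $rc);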
I'm using a php script to create image thumbnails and this error is thrown while creating some thumbs:
Fatal error: Allowed memory size of 31457280 bytes exhausted (tried to allocate 227 bytes)
this is what top shows:
top - 07:43:49 up 44 days, 22:21, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 171 total, 1 running, 170 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.7%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 6097648k total, 3459060k used, 2638588k free, 566924k buffers
Swap: 4194296k total, 0k used, 4194296k free, 1991920k cached
I haven't looked at optimizing the phpThumb code. But is there any other way to free the memory that has already been used? Maybe a cron job could be used to free this memory at regular intervals?
Your image is probably larger than ~10-15 MB. PHP has a limit on the amount of memory a script may use (memory_limit in php.ini).
What happens is that you load the image into memory (and then resize it, creating a second image)...
Change the memory limit if you're allowed to, or don't load such a large image...
AFAIK there is no streaming image reader...
If you can't change the memory limit, a workaround might be calling the command-line ImageMagick or GraphicsMagick tools, if they're installed...
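If raising memory_limit is not possible, a rough pre-check before decoding the image at least makes the failure graceful. A sketch; the bytes-per-pixel factor is a rule-of-thumb assumption, not an exact figure:

<?php
// Hypothetical helper: estimate the memory a GD decode would need and skip
// (or hand off to the command-line tool) images that will not fit.
function fits_in_memory($file, $memoryLimitBytes, $bytesPerPixel = 5)
{
    $info = getimagesize($file);   // reads only the header, not the pixel data
    if ($info === false) {
        return false;
    }
    $needed = $info[0] * $info[1] * $bytesPerPixel;  // width * height * factor
    return memory_get_usage(true) + $needed < $memoryLimitBytes;
}

// Example: 30 MB, roughly the 31457280-byte limit from the error above.
if (!fits_in_memory('photo.jpg', 30 * 1024 * 1024)) {
    // fall back to ImageMagick/GraphicsMagick on the command line, or skip
}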
This is a typical php.ini problem. If you are running this script on a VPS or a dedicated server, edit the php.ini file and set memory_limit to 99 MB (or more); also look out for max_execution_time, as that can stop a script after a given number of seconds.
Don't forget to restart Apache after you have made the changes.
If you are running this on a shared server, you might have trouble solving this because you can't edit the settings file. You can try to set the values from inside the script itself, although this usually doesn't work.
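For completeness, attempting the override from inside the script and checking whether it took effect might look like this (a sketch; whether the host honours it varies):

<?php
// Try to raise the limits from within the script; many shared hosts
// silently ignore these calls, so verify the result afterwards.
ini_set('memory_limit', '128M');   // value is a placeholder
set_time_limit(300);

echo ini_get('memory_limit'), PHP_EOL;         // did the override stick?
echo ini_get('max_execution_time'), PHP_EOL;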
I've got a PHP script that I call to run MySQL database backups to .sql files, TAR/GZip them and e-mail them to me. One of the databases is hosted by a different provider than the one providing the web server. Everything is hosted on Linux/Unix. When I run this command:
$results = exec("mysqldump -h $dbhost -u $dbuser -p$dbpass $dbname > $backupfile", $output, $retval);
(FYI, I've also tried this with system(), passthru() and shell_exec().)
My browser loads the page for 15-20 seconds and then stops without finishing. When I look at the server with an FTP client, I can see the resulting file appear a few seconds later, and the file size keeps growing until the database is backed up. So the backup file is created, but the script stops working before the file can be compressed and sent to me.
I've checked the max_execution_time variable in PHP and it's set to 30 seconds (longer than it takes for the page to stop working), and I have set set_time_limit to as much as 200 seconds.
Anyone have any idea what's going on here?
Are you on shared hosting, or are these your own servers? If the former, your hosting provider may have set the maximum execution time to 15-20 seconds and made it so it cannot be overridden (I have this problem with 1&1 and this type of script).
Re-check the execution-time-related parameters with a phpinfo() call... maybe it's all about what Paolo writes.
It could also be a (reverse) proxy that is giving up after a certain period of inactivity. Granted, it's a long shot, but anyway... try:
// test A
$start = time();
sleep(20);
$stop = time();
echo $start, ' ', $stop;
and
// test B
for ($i = 0; $i < 20; $i++) {
    sleep(1);
    echo time(), "\n";
}
If the first one times out and the second doesn't I'd call that not proof but evidence.
Maybe the provider has set another resource limit beyond the php.ini settings.
Try
<?php passthru('ulimit -a');
If the command is available it should print a list of resources and their limits, e.g.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 4095
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4095
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Maybe you'll find some settings more restrictive than these on your shared server.
Do a manual dump and diff it against the broken one. This may tell you at which point mysqldump stops/crashes
Consider logging mysqldump output, as in mysqldump ... 2>/tmp/dump.log
Consider executing mysqldump detached so that control is returned to PHP before the dump is finished (see the sketch after this list)
On a side note, it is almost always a good idea to use mysqldump -Q
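A hedged sketch of the "detached" suggestion: send the dump to the background with its output and errors redirected to files, so PHP gets control back immediately (paths and the follow-up step are placeholders):

<?php
// Hypothetical sketch: start mysqldump detached so the web request returns
// right away; errors go to a log file that can be inspected afterwards.
$cmd = sprintf(
    'nohup mysqldump -Q -h %s -u %s -p%s %s > %s 2> /tmp/dump.log &',
    escapeshellarg($dbhost),
    escapeshellarg($dbuser),
    escapeshellarg($dbpass),
    escapeshellarg($dbname),
    escapeshellarg($backupfile)
);
exec($cmd);
// Compressing and e-mailing the file would then happen in a separate cron job
// or a second script that runs once the dump has finished.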