PHP script is killed without explanation

I'm starting my PHP script in the following way:
cd 'path'
php -f 'scriptname'.php
There is no output while the php script is running.
After some time, the script terminates and the shell prints:
Killed
My idea is that it reached the memory_limit: ini_set('memory_limit', '40960M');
Increasing the memory limit seemed to solve the problem, but it only raised the threshold at which the script was killed.
What exactly does that Killed phrase mean?

Your process was killed. There could be a multitude of reasons, but it's easy to rule out some of the more obvious ones.
PHP limits: if you run into a PHP limit, you'll get an error in the logfile, and probably on the command line as well. That normally does not print 'Killed'.
The session-has-ended issues: if you still have your session, then your session obviously has not ended, so disregard all the nohup and & advice.
If your server is starved for resources (no memory, no swap), the kernel might kill your process. This is probably what's happening.
In any case: your process is being sent a signal telling it to stop. Normally only a couple of 'things' can do this:
your account (e.g. you kill the process)
an admin user (e.g. root)
the kernel when it really needs your memory for itself
maybe some automated process, for instance if you live on a shared server and take up more than your share of resources.
References: Who "Killed" my process and why?
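Since SIGKILL itself can never be trapped, one way to narrow the search is to log the signals that can be caught. A minimal sketch, assuming the pcntl extension is available on the CLI (the log path and the work loop are made up):
<?php
// Assumes the pcntl extension; CLI only. SIGKILL (9) cannot be caught,
// so an OOM kill will not reach this handler, but a SIGTERM sent by an
// admin or an automated watchdog will.
declare(ticks = 1);

pcntl_signal(SIGTERM, function ($signo) {
    file_put_contents('/tmp/who-killed-me.log', "caught signal $signo\n", FILE_APPEND);
    exit(1);
});

while (true) {
    // placeholder for the script's real work
    sleep(1);
}
If the script dies with no log entry, the SIGKILL/OOM explanation becomes much more likely.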

You could be running out of memory in the PHP script. Here is how to reproduce that error:
I'm doing this example on Ubuntu 12.10 with PHP 5.3.10:
Create this PHP script called m.php and save it:
<?php
// Infinite recursion: every call adds a stack frame and eats memory
// until the process is killed.
function repeat() {
    repeat();
}
repeat();
?>
Run it:
el#apollo:~/foo$ php m.php
Killed
The program takes 100% CPU for about 15 seconds then stops. Look at dmesg | grep php and there are clues:
el#apollo:~/foo$ dmesg | grep php
[2387779.707894] Out of memory: Kill process 2114 (php) score 868 or sacrifice child
So in my case, the PHP program printed "Killed" and halted because it ran out of memory: the unbounded recursion above never terminates.
Solutions:
Increase the amount of RAM available.
Break down the problem set into smaller chunks that operate sequentially (see the sketch below).
Rewrite the program so it has much smaller memory requirements.
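To illustrate the second option, here is a minimal sketch of chunked, sequential processing: stream a large input line by line instead of loading it all at once ('bigfile.csv' and process_line() are hypothetical names):
<?php
// Hypothetical example: peak memory stays near one line's size
// instead of growing with the whole file.
function process_line($line) { /* placeholder: handle one record */ }

$fh = fopen('bigfile.csv', 'r');
while (($line = fgets($fh)) !== false) {
    process_line($line);  // one record at a time
}
fclose($fh);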

Killed is what bash prints when a process exits after a SIGKILL; it's not related to PuTTY.
Terminated is what bash prints when a process exits after a SIGTERM.
You are not running into PHP limits; you may be running into a different problem, see:
Return code when OOM killer kills a process
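One quick check along those lines: shells report 128 plus the signal number as the exit status, so a wrapper can distinguish a SIGKILL from an ordinary failure. A minimal sketch, reusing the question's invocation:
<?php
// 137 = 128 + 9 (SIGKILL, what the OOM killer sends);
// 143 = 128 + 15 (SIGTERM).
exec("php -f 'scriptname'.php", $output, $status);
if ($status === 137) {
    fwrite(STDERR, "worker was SIGKILLed - check dmesg for OOM messages\n");
} elseif ($status === 143) {
    fwrite(STDERR, "worker received SIGTERM\n");
}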

http://en.wikipedia.org/wiki/Nohup
Try using nohup before your command.
nohup catches the hangup signal while the ampersand doesn't (unless the shell is configured that way or doesn't send SIGHUP at all).
Normally, when running a command using & and exiting the shell afterwards, the shell will terminate the sub-command with the hangup signal (kill -SIGHUP <pid>). This can be prevented using nohup, as it catches the signal and ignores it so that it never reaches the actual application.
In case you're using bash, you can use the command shopt | grep hupon to find out whether your shell sends SIGHUP to its child processes or not. If it is off, processes won't be terminated, as seems to be the case for you.
There are cases where nohup does not work, for example when the process you start reinstalls its own handler for the hangup signal (SIGHUP).
nohup php -f 'yourscript'.php
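An alternative to wrapping the command in nohup is to have the script ignore the hangup signal itself. A minimal sketch, assuming the pcntl extension on the CLI:
<?php
// Assumes the pcntl extension; CLI only.
declare(ticks = 1);
pcntl_signal(SIGHUP, SIG_IGN);  // ignore hangup when the terminal closes

// ... long-running work continues even after the SSH session ends ...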

If you are already taking care of the php.ini settings related to script memory and timeout, then it may be the Linux SSH connection terminating your active session, or something similar.
You can use the Linux 'nohup' command to run a command immune to hangups:
shell> nohup php -f 'scriptname'.php
Edit: You can close your session by adding '&' at the end of the command:
shell> nohup php -f 'scriptname'.php &> /dev/null &
The '&' operator at the end of any command in Linux moves that command to the background.

Related

how to daemonize a php script to be run with upstart

I have a PHP script that has been running as a cron job. The script uses the DB to see if it has anything to do, and to make sure its brethren are not already running.
I'd like to run the PHP script as a daemon with upstart.
I've set up my /etc/init/super-mailer.conf file as this:
description "super mailer"
author "Rob Nugen"
start on startup
stop on shutdown
respawn
exec sudo -u www-data php -f /var/www/super-mailer/scripts/mailer.php
I execute sudo start super-mailer and it runs once.
It doesn't run again, though. Why not?
I've also tried replacing the exec sudo line with
script
sudo -u www-data php -f /var/www/clubberia-mailer/scripts/mailer.php
end script
Do I need to change my PHP script to loop? How do I tell upstart to keep starting the script?
A daemon is a type of program that does not stop until it is told to. Your script, however, terminates by itself, so yes, you need to add a loop to your script so that the work is re-run every time.
However, keep in mind that a loop executing your work back to back might consume many CPU cycles, so consider calling a function like usleep or sleep in every iteration to make the daemon a little less CPU-hungry. For example, let your script do its work every 2 seconds.
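As a rough illustration, the loop could look like this; processQueue() is a hypothetical stand-in for whatever mailer.php currently does once per cron run:
<?php
function processQueue() { /* placeholder: send pending mail */ }

while (true) {
    processQueue();
    sleep(2);  // idle between iterations so the daemon doesn't spin the CPU
}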

When does a PHP script stop executing when called from CLI?

I basically have a cron job calling one script every minute. The script immediately stops if the previous run is still active (it checks the previous run's activity time).
So I introduced a bug, and the script went into an infinite loop (I know it was called by cron at least one time). I created a fix and uploaded it to the server, but I'm still wondering:
How long will the bugged script run?
How can I know if it is still running?
What does terminate a script and why?
The script just echoes out the same text over and over again.
P.S. PHP's max execution time within the script is set to 0 (infinite) and I don't have direct access to the server, only FTP.
How can I know if it is still running?
Just set up a new cron job, but have the cron command be something that helps you debug;
a useful one would be:
ps -af | grep php > /some/path/to/mylogfile.txt
The ps command lists info on running processes. With those flags, part of the output will be the original Linux command that started the process, so we can grep the lines for php, because the original command was probably something like:
php myscript.php
The output is redirected to mylogfile.txt for you to read manually after the cron job runs.
The process id should be part of the output. You can then use the kill command on that process id, again by just entering the command as a fake cron job.
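On the question's overlap check ("script immediately stops if the previous script is still running"), a common alternative to activity timestamps is a PID file. A minimal sketch, assuming the posix extension; /tmp/myscript.pid is a made-up path:
<?php
$pidFile = '/tmp/myscript.pid';
if (file_exists($pidFile)) {
    $oldPid = (int) file_get_contents($pidFile);
    if ($oldPid > 0 && posix_kill($oldPid, 0)) {  // signal 0 only probes
        exit(0);  // previous run is still alive, bail out
    }
}
file_put_contents($pidFile, getmypid());
// ... real work here ...
unlink($pidFile);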
Until the script runs into a timeout (max_execution_time, defined in the php.ini file or set via the set_time_limit() function).
Have a look at the running processes.
Send a kill command to the script, or wait till a timeout occurs.
PS: you have two php.ini files - one for the command line and one for Apache - so be sure to change max_execution_time in the command-line ini file.

Multi threading in PHP

On an Apache server I want to run a PHP script as a cron job which starts a PHP file in the background and exits just after starting it, without waiting for it to complete, as that script will take around 60 minutes to finish. How can this be done?
You should know that there are no threads in PHP itself.
But you can execute programs and detach them easily if you're running on a Unix/Linux system.
$command = "/usr/bin/php '/path/to/your/php/to/execute.php'";
exec("{$command} > /dev/null 2>&1 & echo -n \$!");
May do the job. Let's explain a bit:
exec($command);
Executes /usr/bin/php '/path/to/your/php/to/execute.php': your script is launched, but Apache will wait for the end of the execution before running the next line of code.
> /dev/null
will redirect standard output (i.e. your echo, print, etc.) to a virtual file (all output written to it is lost).
2>&1
will redirect error output to standard output, writing into the same virtual, non-existent file. This avoids having logs end up in your apache2/error.log, for example.
&
is the most important thing in your case: it will detach the execution of $command, so exec() will immediately release your PHP code execution.
echo -n \$!
will give the PID of your detached execution as the response: it will be returned by exec() and lets you work with it (for instance, put this PID into a database and kill it after some time to avoid zombies).
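Putting those pieces together, here is a sketch of capturing that PID and using it later. The path is the answer's placeholder; posix_kill() assumes the posix extension (exec("kill $pid") would work as well):
<?php
$command = "/usr/bin/php '/path/to/your/php/to/execute.php'";
$pid = (int) exec("{$command} > /dev/null 2>&1 & echo -n \$!");

// Later, e.g. from a cleanup cron job: 15 is SIGTERM.
if ($pid > 0 && posix_kill($pid, 0)) {  // signal 0 probes: still alive?
    posix_kill($pid, 15);
}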
You need to use the "&" symbol to run the program as a background process.
$ php -f file.php &
That will run the command in the background.
Alternatively, you can write a small shell script:
#!/bin/bash
php -f file.php &
and run that script from crontab.
This may not be the best solution to your specific problem. But for the record, there are threads in PHP:
https://github.com/krakjoe/pthreads
I'm assuming you know how to use threads; this is very young code that I wrote myself, but if you have experience with threads, mutexes and the like, you should be able to solve your problem using this extension.
This is clearly a shameless plug of my own project, and if the user doesn't have the access required to install extensions then it won't help him, but many people find Stack Overflow, and it will no doubt solve other problems...
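For completeness, a rough sketch of what using the extension might look like; the pthreads API has changed over time, so treat this as illustrative only:
<?php
// Requires the pthreads extension linked above.
class LongJob extends Thread
{
    public function run()
    {
        sleep(3600);  // placeholder for the ~60 minute task
    }
}

$job = new LongJob();
$job->start();  // returns immediately; run() executes in its own thread
// the caller can continue here, or call $job->join() to wait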

trouble detaching terminal sessions with PHP shell_exec()

I maintain a game server, and unruly players frequently crash the application. My moderation team needs the ability to restart the server process, but allowing SSH access would be impractical/insecure, so I'm using shell_exec to pass the needed commands to restart the server process from a web-based interface. The problem is, the shell session doesn't detach properly, so PHP keeps its session open until it finally times out and closes the session/stops the server process.
Here's how I'm calling shell_exec:
$command='nohup java -jar foobar_server.jar';
shell_exec($command);
shell_exec will wait until the command you've executed returns (i.e. drops back to a shell prompt). If you want to run that as a background task, so shell_exec returns immediately, then do:
$command='nohup java -jar foobar_server.jar &';
^--- run in background
Of course, that assumes you're doing this on a Unix/Linux host. For Windows, it'd be somewhat different.
If you try this you'll see it won't work. To fully detach in PHP you must also redirect stdout, or shell_exec will hang even with '&'.
This is what you really want:
shell_exec('java -jar foobar_server.jar >/dev/null 2>&1 &');
But to take this one step further, I would get rid of the web interface and make this a one-minute interval cronjob which first checks if the process is running, and if it's not start a new instance:
#!/bin/bash
# pgrep -f matches the full command line; pidof would only see "java".
if ! pgrep -f foobar_server.jar; then
    java -jar foobar_server.jar >/tmp/foobar_server.log 2>&1 &
fi
And have that run every minute: if it finds a running process it does nothing, else it starts a new instance. Worst case scenario after a server crash is 59 seconds of downtime.
Cheers

PHP passthru() blocks with process replacement

I have a problem with PHP passthru() blocking when it is supposed to start a daemon.
I have a Node.js daemon with a bash script wrapper around it. That bash script uses a bit of process replacement because the Node.js server can't directly log to syslog. The bash script contains a command like this:
forever -l app.log app.js
But because I want it to log to syslog, I use:
forever -l >(logger) app.js
The logger process replacement creates a file descriptor like /dev/fd/63, whose path is passed to the forever command as the logfile to use.
This works great when I start the daemon using the bash script directly, but when the bash script is executed using PHP passthru() or exec() then these calls will block. If I use a regular logfile instead of the process replacement then both passthru() and exec() work just fine, starting the daemon in the background.
I have created a complete working example (using a simple PHP daemon instead of Node.js) on Github's Gist: https://gist.github.com/1977896 (needs PHP 5.3.6+)
Why does the passthru() call block on the process replacement? And is there anything I can do to work around it?
passthru() will block in PHP even if you start a daemon; it's unfortunate. I've heard some people have had luck rewriting it with nohup:
exec('/path/to/cmd');
then becomes:
exec('nohup /path/to/cmd &');
Personally, what I've had the most luck with is exec()'ing a wget call to another script (or the same script) that actually runs the blocking exec. This frees the calling process from being blocked by handing the work to another HTTP process not associated with the live user. With the appropriate flags, wget will return immediately, not waiting for a response:
exec('wget --quiet --tries=1 -O - --timeout=1 --no-cache http://localhost/path/to/cmd');
The HTTP handler will eventually time out, which is fine and should leave the daemon running. If you need output (hence the passthru() call you're making), just run the script redirecting output to a file, then poll that file for changes in your live process.
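A rough sketch of that last pattern (daemon.sh and /tmp/daemon.log are made-up names): start the daemon detached with its output redirected, then have the live process tail the file for new content:
<?php
exec('nohup /path/to/daemon.sh > /tmp/daemon.log 2>&1 &');

$offset = 0;
for ($i = 0; $i < 10; $i++) {  // poll briefly, then give up
    clearstatcache();
    $size = (int) @filesize('/tmp/daemon.log');
    if ($size > $offset) {
        $fh = fopen('/tmp/daemon.log', 'r');
        fseek($fh, $offset);
        echo stream_get_contents($fh);  // relay any new output
        fclose($fh);
        $offset = $size;
    }
    usleep(500000);  // half a second between polls
}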
