I have a script that runs on the command line, called by a crontab. In the last five attempts to run this script, it has died partway through an echo statement - the cron output only shows part of the intended echo output, and nothing after that is executed. This is a long-running script, being run through php-cli, which performs file management tasks.
Is there anything that might cause a script to die during an echo statement, without generating any other output, or a way to troubleshoot or catch potential errors during echo?
I am not sure what code I can post that will help, as this is a rather comprehensive script involving a few libraries. The echo statements are fairly simple - echo('Checking file...') might get put in the log as "Che" then no more output.
First, I would enable error logging. Second, since this is PHP, it may be that you are calling a function inside the echo statement (for example to build the string being echoed), and that function exits with a fatal error (common with PHP I/O operations), so the whole script just dies.
Turn on error-logging, see which line causes the headaches.
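For example, a minimal sketch of what that could look like at the top of the cron script, assuming you can't touch php.ini (the log path is just an illustration):
<?php
// At the very top of the cron script:
error_reporting(E_ALL);                       // report every error level
ini_set('display_errors', '1');               // print errors to the CLI output that cron captures
ini_set('log_errors', '1');                   // also write them to a log file
ini_set('error_log', '/tmp/cron-script.log'); // illustrative path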
Without seeing any code this is my best shot.
Related
I have a backup script that runs from the browser without a problem. It extracts data from the database and writes it to a ZIP file that's under 2 MB.
It mostly runs from the server (via cron) as well, but it fails (silently) when it hits a particular line:
require ('/absolute-path/filename'); // pseudo filespec
This is one of several such statements. These are library files that do nothing but 'put stuff in memory'. I have definitely eliminated any possibility that the path is the problem. I've tested the file with a conditional is_readable(), output the result, and sent myself emails.
$fs = '/absolute-path/filename'; // pseudo filespec
if (is_readable($fs)) {
    mail('myaddress', 'cron', 'before require'); // this works reliably
    require($fs);                                // can be an empty file ie. <?php ?>
    mail('myaddress', 'cron', 'after require');  // this never works.
}
When I comment out the require($fs), the script continues (mostly, see below).
I've checked the line endings (invisible chars). Not on every single included file, but certainly the one that is running has newline (LF) endings (Linux-style), as opposed to carriage return + newline (CR LF) (Windows-style).
I have tried requiring an empty file (just <?php ?>) to see if the script would get past that point. It doesn't.
I have tried calling mail(); from the included script. I get the mail. So again, I know the path is right. It is getting executed, but it never returns and I get no errors, at least not in the PHP log. The CRON job dies...
This is a new server. I just migrated the application from PHP 5.3.10 to PHP7. Everything else works.
I don't think I am running out of memory. I haven't even gotten the data out of the database at this point in the script, but it seems like some sort of cumulative error because, when I comment out the offending line, the error moves on to another equally puzzling silent failure further down the code.
Are there any other useful tests, logs, or environment conditions I should be looking at? Anything I could be asking the web host?
This usually means that there is some fatal error being triggered in the included file. If you don't have all errors turned on, PHP may fail silently when including files with certain fatal errors.
PHP 7 throws fatal errors on certain things that PHP 5.3 did not, such as Division by Zero.
If you have no access to the server config to turn all error reporting on, then something like calling an undefined function will fail silently. You can try debugging by putting
die('test');
__halt_compiler();
at the top of the included file, on the line right after the opening <?php tag, and see if the script gets that far. If it does, move it down one line at a time (taking care not to split a control structure), retesting after each move; when the script dies, you know the error is on the line above.
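If moving the die() around still shows nothing, a shutdown function can often capture the fatal error that kills a script silently. A rough sketch (the log path is only an example):
<?php
// Register this near the top, before the includes/requires that die silently.
register_shutdown_function(function () {
    $err = error_get_last();
    if ($err !== null && ($err['type'] & (E_ERROR | E_PARSE | E_CORE_ERROR | E_COMPILE_ERROR))) {
        file_put_contents(
            '/tmp/fatal.log', // illustrative path
            sprintf("%s in %s on line %d\n", $err['message'], $err['file'], $err['line']),
            FILE_APPEND
        );
    }
});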
I believe the problem may be a PHP 7 bug. The code only broke when it was called by CRON, and the 'fix' was to remove the closing PHP tag ?>. Though it is hard to believe this could be the issue, I did a lot of unit testing, removed prior code, etc. I am running PHP 7.0.33. None of the other dozen or so (backup) scripts broke when run by CRON.
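For reference, the described 'fix' simply means the library files end without a closing tag, roughly like this (the function name is made up):
<?php
// library file: definitions only
function do_backup_step()
{
    // ...
}
// no closing ?> tag, so stray characters after it can never be sent as output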
As nzn indicated, this is most likely caused by an error triggered from the included file. From the outside it is hard to diagnose. A likely cause is a relative include/require within that file. A way to verify that is to run the script from the console from a different working directory. A fix might be either to cd from cron before starting PHP, or to do a chdir(__DIR__) within the primary file before doing further includes.
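A minimal sketch of that chdir() approach, with illustrative paths:
<?php
// First statement in the primary script that cron runs:
chdir(__DIR__);               // relative paths now resolve from this file's directory

require 'lib/database.php';   // illustrative library files using relative paths
require 'lib/zip-helpers.php';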
I'm trying to execute a python script using php with an exec command like this:
exec("python /address/to/script.py");
I don't need the script to run to completion, so after it does what I need, I call sys.exit() from within it. Execution is passed back to the PHP script, which is great; however, the Python process is still running. I can see it in my server's process list. Is there more that's required to fully kill it?
Additional Info
The python script was written by a third party.
I know very little about python, just enough to add the sys.exit() call.
The script could still be executing some cleanup code, or you could be calling sys.exit() from a child thread or process, in which case it essentially behaves like thread.exit(), ending only that thread or child and leaving the parent process running.
Check that the sys.exit() call is in the main part of the script and that no error handling is interfering with the SystemExit exception, or alternatively you could try os._exit(). Also ensure that an ampersand (&) is not present within the command passed to exec() as this will cause the script to run as a background process.
Note that os._exit() is not favourable since it doesn't do any cleanup, and essentially ends the process immediately.
Edit: To end the script from within your try block, you could do something like this:
import os

try:
    # Existing code
    ...
except SystemExit:
    os._exit(0)  # end the process immediately (skips normal cleanup)
except Exception:
    # Existing error handling
    ...
Ideally the application logic should make use of message passing or something similar so that a child thread could notify the main thread that it should terminate.
This is really important, as I could not find anything about what I am looking for on Google.
How do I know when the application (or is it more appropriate to call it a task?) executed via the command line is done? How does PHP know that the task of copying several files is done if I do this:
exec("cp -R /test/ /var/test/test");
Does the PHP script continue to the next statement while the command is still copying in the background, or does the PHP script wait until the copy is finished? And how does a command-line application notify the script when it's done (if it does)? There must be some kind of interaction going on.
PHP's exec() returns a string, so yes: your page will block until the command is done.
For example this simple code
<?php
echo exec("sleep 5; echo HI;");
?>
When executed it will appear as the page is loading for 5 seconds, then it will display:
HI
How does PHP know that the task of copying several files is done if I do this?
PHP does not know; it simply runs the command, does not care whether it worked, and returns the string the command produced. That's why it's better to use PHP's copy() function, which returns TRUE on success and FALSE on failure (see the sketch after the example below). Alternatively, if you go the exec() route, create a bash/sh script that prints 1 on success or 0 on failure, then check that in PHP as such:
<?php
$answer = exec("yourScript folder folder2");
if ($answer == "1") {
    // Plan A worked
} else {
    // Plan A failed, try Plan B
}
?>
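For completeness, here is a sketch of the copy()-based alternative mentioned above; the paths are made up, and copy() only handles single files, so a directory tree would still need to be walked:
<?php
// copy() returns TRUE on success and FALSE on failure, so the result can be
// checked directly, unlike the string that exec("cp ...") hands back.
if (copy('/test/data.csv', '/var/test/test/data.csv')) {
    echo "Copy succeeded\n";
} else {
    echo "Copy failed\n"; // the warning with the details goes to the PHP error log
}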
It waits until the exec call returns, whatever it returns.
However, it might be that the exec call returns even though the command it has started has not yet finished. That can happen if you detach the command from the controlling terminal, for example by explicitly putting an "&" at the end of the command.
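As a rough illustration of that difference, reusing the cp example from the question:
<?php
// Blocking: exec() waits for the command and returns its last line of output.
$last = exec('cp -R /test/ /var/test/test', $output, $exitCode);

// Detached: with "&" and redirected output, the shell backgrounds the command,
// exec() returns immediately, and PHP no longer knows when it finishes.
exec('cp -R /test/ /var/test/test > /dev/null 2>&1 &');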
Is it possible, from within a PHP script, to start a trace log and activate debug logging?
I am not looking for eclipse + xdebug, but something like this use-case:
When the script starts, it checks whether $_GET["debugme"] is set. If so, it calls something like start_trace_log().
Anything that happens after that in the rest of the script should be logged, e.g.
scriptA.php :10 include("anotherscript.php")
anotherscript.php:1 foo()
...
At the moment, I have to do this manually for any script I am interested in logging, and the script has to check $_GET["debugme"] everywhere, instead of simply debugging everything within that script run. Very inconvenient for occasionally checking scripts.
Any better ideas or comfortable ways of tracing php scripts from a start point to the last line?
Add this line to the end of your script or footer script:
if (isset($_GET["debugme"])) debug_print_backtrace();
that will print details like #... function-name() called at script-path.php:linenumber.
Don't forget to restrict the debugme feature to the development system only!
phptrace may be a better choice because you needn't change your script and you can trace at any time you want.
Although I find it highly annoying when this is used in production, you can throw this bit of code:
if (isset($_GET['DEBUGME'])) {
    start_trace_log();
}
into a file and then add that file to your PHP auto_prepend_file setting so it gets run at the start of every PHP script.
Of course, this is assuming that you've already coded and included start_trace_log(). You should also only include this on a development server. Scripts with debug flags shouldn't make it to production.
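start_trace_log() is not a built-in, so purely as a sketch, here is one way it could be approximated in plain PHP with tick functions. Note that declare(ticks=1) generally only covers code compiled in the file where it appears, so in practice it may need repeating in each file you want traced, which is exactly why a dedicated tracer such as phptrace tends to be more comfortable:
<?php
declare(ticks=1); // fire a tick after (roughly) every statement in this file

// Hypothetical start_trace_log(): log file/line and enclosing function on each tick.
function start_trace_log(string $logFile = '/tmp/trace.log'): void
{
    register_tick_function(function () use ($logFile) {
        $bt    = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 2);
        $where = ($bt[0]['file'] ?? '?') . ':' . ($bt[0]['line'] ?? '?');
        $func  = $bt[1]['function'] ?? '{main}';
        file_put_contents($logFile, "$where {$func}()\n", FILE_APPEND);
    });
}

if (isset($_GET['debugme'])) {
    start_trace_log();
}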
I've got a script that calls two functions, A and B, from the same class. A creates an Amazon virtual server and B destroys one, both via shell_exec()'s of Amazon's command line tools. The script, doActions.php, pulls actions from a queue. If the action is "create" it creates an instance; when the action is "destroy" it kills one.
The script executes both A and B fine when I run it from the command line: php script.php.
When I put it on a cron, it runs, but it only successfully runs the B function. It destroys instances but won't create them.
The point of failure is clearly function A. It chokes at the first and most important shell_exec, returning and echoing nothing.
echo $string = shell_exec('/home/user/public_html/domain.com/private/ec2-api-tools/bin/ec2-run-instances ami-23b6534a -k gsg-keypair -z us-east-1a');
Unless you know something specific about the way Amazon's command line tools work, please suggest to me reasons why a shell_exec might work in one case and not the other.
Another shell_exec in the same place behaves as expected:
echo $string = shell_exec ('echo overflow');
My guess is that it has something to do with permissions. But when I have it run shell_exec('whoami') it returns "root", and when I su and run the command it works fine. I'm having a hard time thinking of creative ways to troubleshoot why my PHP script won't work in cron when it does from the command line. Can you suggest some?
When something runs from the command line but refuses to do so within cron, it's often an environment issue (path or some other environment variable that's needed by the code you're running).
For a start you should modify the script to output the current environment (shell_exec('env')?) at the very top and examine the output from the command line and cron.
Hopefully, there will be something obvious such as AMAZON_EC2_VITAL_VAR but, if not, you should move the cron environment towards your command line one, one variable at a time, until it starts working.
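For example, something like this at the very top of the cron'd PHP script makes the two environments easy to compare (the dump path is just an example):
<?php
// Dump the environment this PHP process actually sees, then diff the
// command-line run against the cron run.
file_put_contents('/tmp/script_env.txt', shell_exec('env'), FILE_APPEND);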
A quick test to ascertain this: from your command line, do:
env >/tmp/pax_env.sh
Then run your PHP script from a shell script which first executes:
. /tmp/pax_env.sh
so that the environments are identical.
And keep in mind that su on its own doesn't give you the same environment as you'd get from logging in directly as a specific user (su - does, I think). You may want to check the behaviour for when you log in as root directly.
Re your comment:
Yes, I do believe you've got it. I'm likely going to mark your answer as correct but need you to suffer through a few addendums about your clever solution. First of all, what's the best way to execute the pax_env.sh script? Does shell_exec() work?
Never let it be said I didn't work for my money :-) No. The shell_exec() will almost certainly run a sub-shell, so the variables would be set in that sub-shell but would not affect the PHP parent process.
My advice, if you wanted all those variables set, would be to create a shell-script consisting of all the commands in /tmp/pax_env.sh (probably prefixing each with export) followed by the command you currently have running in cron, something along the lines of:
export PATH=.:/usr/bin
export PS1=Urk:
export PS2=MoreUrk:
/home/user/pax/scriptB.php
Then run that script from cron rather than /home/user/pax/scriptB.php directly. That will ensure the environment is set up before your PHP code is called.
Astute readers will have noticed the phrase "if you wanted all those variables set" above. I don't personally think it's a good idea to dump all your command line variables into the shell script for the cron job. I'd prefer to actually find out which ones are needed and only include those. That lessens the pollution your cron job has to run under. For example, it's unlikely that the PS1/PS2 prompt variables will be required for your PHP script.
If it works, you can set all the environment variables - I just prefer the absolute minimum so I don't have to worry too much when things change.
A way of finding out what's needed is to comment out one export at a time until your script breaks again. Then you know that variable is needed. Once it works with the maximum amount of export statements commented out, you can just delete those commented export statements altogether and what remains, however improbable, must be okay (with apologies to Sir Arthur Conan Doyle).