PHP script dies when run via CRON job

Let me start by saying I am totally not a PHP programmer - this was dumped on me to fix.
I have a function that sits in a file by itself, that basically looks like this:
<?php
function UploadFile($source, $destination)
{
    $debugLogPath = '/biglongpath/debug.log';
    file_put_contents($debugLogPath, PHP_EOL . "Beginning UploadFile function", FILE_APPEND);
    set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib');
    require_once('Net/SFTP.php');
    // ...rest of the SFTP code here...
}
?>
It's using phpseclib. If I run the main PHP script (that calls this function...) via a web browser, everything works great. When I run that same script via a CRON job, it dies as soon as this function is called. I've verified this by writing out a debug log right before calling the function - the first statement before the function is written to the log, but the "Beginning UploadFile function" is never written.
I'm guessing that maybe it has something to do with the require_once statement - maybe either a path problem or permissions issue when it's executed via CRON?
I've tried wrapping the entire function contents in a try/catch and writing out the Exception, but it still just dies.
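One way to see what actually kills it (a sketch, assuming PHP 5, where a failed require_once raises a fatal error that try/catch cannot intercept): register a shutdown handler at the very top of the main script, before UploadFile() is called, that appends the last error to the same debug log.
register_shutdown_function(function () {
    // If a fatal error ended the run, append it to the existing debug log.
    $err = error_get_last();
    if ($err !== null) {
        file_put_contents(
            '/biglongpath/debug.log',
            PHP_EOL . sprintf('FATAL: %s in %s on line %d', $err['message'], $err['file'], $err['line']),
            FILE_APPEND
        );
    }
});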

I wonder why there are 3 helpful flags when the question states that the file is being written. However, that debug file is not the CLI error log and therefore won't automagically receive any errors. The second suggestion made appears more likely, but there are even more possibilities:
Make sure that these modules are being loaded for the PHP CLI:
libsodium, openssl, mcrypt, gmp (as the composer.json hints at); see the sketch after this list.
Running php --ini should show which INI files were loaded. Even if the corresponding INI files are there, make sure the directives inside them are not commented out with a ;.
Manually run the script from the CLI, as the user that runs the cron job, with error reporting enabled. If that doesn't help, single-step into it with Xdebug to see where exactly it bugs out (NetBeans, Eclipse, VS Code and a few other IDEs support PHP debugging). This requires some effort to set up, but then it provides a far better debugging methodology.
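To compare the two environments directly, a sketch (re-using the log path from the question): temporarily log what the CLI actually loads from inside the script, run it once via the browser and once via cron, then diff the two log entries.
file_put_contents('/biglongpath/debug.log', PHP_EOL . implode(' | ', array(
    'SAPI: ' . php_sapi_name(),
    'ini: ' . var_export(php_ini_loaded_file(), true),
    'openssl: ' . var_export(extension_loaded('openssl'), true),
    'mcrypt: ' . var_export(extension_loaded('mcrypt'), true),
    'gmp: ' . var_export(extension_loaded('gmp'), true),
    'include_path: ' . get_include_path(),
)), FILE_APPEND);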

Related

When running CRON, 'require' fails silently

I have a backup script that runs from the browser without a problem. It extracts data from the database and writes it to a ZIP file that's under 2 MB.
It mostly runs from the server (via cron) as well, but it fails (silently) when it hits a particular line:
require ('/absolute-path/filename'); // pseudo filespec
This is one of several such statements. These are library files that do nothing but 'put stuff in memory'. I have definitely eliminated any possibility that the path is the problem. I'm testing the file with a conditional is_readable(), outputting the result, and sending myself emails.
$fs = '/absolute-path/filename'; // pseudo filespec
if (is_readable($fs)) {
    mail('myaddress', 'cron', 'before require'); // this works reliably
    require ($fs); // can be an empty file ie. <?php ?>
    mail('myaddress', 'cron', 'after require'); // this never works.
}
When I comment out the require($fs), the script continues (mostly, see below).
I've checked the line endings (invisible chars). Not on every single included file, but the one that is running certainly has newline (LF) endings (Linux-style), as opposed to carriage return + newline (CR LF, Windows-style).
I have tried requiring an empty file (just <?php ?>) to see if the script would get past that point. It doesn't.
I have tried calling mail(); from the included script. I get the mail. So again, I know the path is right. It is getting executed, but it never returns and I get no errors, at least not in the PHP log. The CRON job dies...
This is a new server. I just migrated the application from PHP 5.3.10 to PHP7. Everything else works.
I don't think I am running out of memory. I haven't even gotten the data out of the database at this point in the script, but it seems like some sort of cumulative error because, when I comment out the offending line, the error moves on to another equally puzzling silent failure further down the code.
Are there any other useful tests, logs, or environment conditions I should be looking at? Anything I could be asking the web host?
This usually means that there is some fatal error being triggered in the included file. If you don't have all errors turned on, PHP may fail silently when including files with certain fatal errors.
PHP 7 throws fatal errors on certain things that PHP 5.3 did not, such as Division by Zero.
If you have no access to server config to turn all errors on, then calling an undefined function will fail silently. You can try debugging by putting
die('test');
__halt_compiler();
at the top of the included file, on the line right after the first <?php tag, and see if the script gets past it. If it does, slowly move it down line by line (but don't split a control structure!) and retest each time; when it dies, you know the error is on the line above.
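If you can change settings from code (no server config access needed), a complementary sketch is to force error logging to a file you control at the very top of the cron entry script; the log path here is just an example:
<?php
// Log everything, including fatal errors raised in required files, to a file the cron user can write.
error_reporting(E_ALL);
ini_set('log_errors', '1');
ini_set('error_log', '/tmp/cron-php-errors.log'); // example path
This won't catch a parse error in the entry script itself (that happens before the code runs), but errors raised in the files it requires will be logged.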
I believe the problem may be a PHP 7 bug. The code only broke when it was called by CRON and the 'fix' was to remove the closing PHP tag ?>. Though it is hard to believe this could be an issue, I did a lot of unit testing, removing prior code, etc. I am running PHP 7.0.33. None of the other dozen or so (backup) scripts broke while run by CRON.
As nzn indicated, this is most likely caused by an error triggered from the included file. From the outside it is hard to diagnose. A likely cause is a relative include/require within that file. A way to verify that is to run the script on the console from a different working directory. A fix might be to either call cd from cron before starting PHP, or to do a chdir(__DIR__) within the primary file before doing further includes.
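A minimal sketch of the chdir(__DIR__) option (the require line just re-uses the pseudo filespec from the question):
<?php
// Pin the working directory to this entry script's directory so that any relative
// include/require inside the required libraries resolves the same under cron as it
// does when run from a shell.
chdir(__DIR__);
require ('/absolute-path/filename'); // pseudo filespec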

PHP cli script does not output anything

So I have a php script which I execute using the following command:
php -f my_script.php myArguments
The script is under version control using svn. I just updated it, pasted the command to run it into a terminal, and executed it. However, there is no output: no failure message, no printed output, nothing. It looks like it never starts. Kind of like the following:
me:/srv/scripts# php -f my_script.php myArguments
me:/srv/scripts#
Other scripts will run just fine.
It is difficult for me to come up with an SSCCE, as I can't really share the code that is causing this, and I haven't been able to replicate this behavior intentionally. I have, however, seen this twice now. If I save my changes, revert the file, and paste them back in, there is a strong chance it will run just fine.
However, I am concerned by not knowing what is causing this odd behavior. Is there a whitespace character or something that tells PHP not to start, or output anything?
Here is what I've tried after seeing this behavior:
Modifying the script so it is a simple echo 'hello'
Putting nonsense at the beginning of the script, so it is unparseable.
Pasting in code from a working script
Banging my head on a wall in frustration
Trying it in another terminal/putty ssh connection.
Here's where it gets interesting: It actually works in a different terminal. It does everything as expected.
So does anyone have any ideas what might be causing this, or things I should try in order to determine the problem?
EDIT:
The "different terminal" is still the terminal application, just a new one.
I have sufficient permissions to execute the file, but even if I didn't, it should spit out a message saying I don't.
I intentionally introduced syntax errors in hopes that I would get PHP to spit out a parse error. There was still no output.
display_errors might be disabled before runtime. You can turn it on manually with the -d switch:
php -d display_errors=1 -f my_script.php myArguments
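If you'd rather bake that into the script while debugging, a rough equivalent is the following sketch (note it cannot surface a parse error in the same file, since it only runs once that file compiles):
<?php
// Turn on full error output for this run; roughly what -d display_errors=1 does.
error_reporting(E_ALL);
ini_set('display_errors', '1');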
I came across the same issue, and no amount of coercing PHP to display_errors or checking the syntax with -l helped.
I finally solved our problem, and perhaps you can find some help in this solution.
Test your script without using your php.ini:
php -n test_script.php
This will help you home in on the real cause: the PHP configuration, someone else's script, or your own script.
In my case, it was a problem with someone else's script, which was being added via the auto_prepend_file directive in the php.ini. (Or, more specifically, several files and functions later, as I drilled through all the code adding debug output as I went. On a side note, you may find fwrite(STDOUT, "debug text\n"); invaluable when trying to debug this type of issue.)
Someone had added a function that was getting run through the prepend file, but had used the @ error-suppression operator on a particular function call. (You might have a similar problem not specifically related to the php.ini if your test script includes any other code.)
The function was failing and caused the silent death of PHP; it had nothing to do with my test script.
You will find all sorts of warnings about how using the @ operator causes the exact problem I had, and perhaps you're having: http://php.net/manual/en/language.operators.errorcontrol.php.
Reproduction of similar symptoms:
Take a fully functional PHP environment, and break your CLI output by adding this at the top of your script:
@xxx_not_a_real_function_name_xxx();
So you may just have a problem with the php.ini, or you (or someone else) may have used @ without realising the serious (and frustrating, and time-consuming) consequences it has for debugging.
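If you suspect something similar, a quick sketch to see which configuration and prepend file the CLI is actually pulling in:
<?php
// Show which ini files are in play and whether anything is auto-prepended.
var_dump(php_ini_loaded_file());        // main php.ini in use, or false
var_dump(php_ini_scanned_files());      // additional parsed .ini files, comma-separated, or false
var_dump(ini_get('auto_prepend_file')); // the script (if any) run before yours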
I experienced PHP CLI failing silently on a good script because of a memory limit issue. Try with:
php -d memory_limit=512M script.php
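To confirm (or rule out) a memory_limit kill, a small sketch you could drop into the script (the log path is an assumption):
<?php
// At shutdown, record the configured limit and the peak memory actually used.
register_shutdown_function(function () {
    file_put_contents('/tmp/memcheck.log', sprintf(
        'limit=%s peak=%.1f MB%s',
        ini_get('memory_limit'),
        memory_get_peak_usage(true) / 1048576,
        PHP_EOL
    ), FILE_APPEND);
});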

How to catch the result of a background PHP script launched from inside PHP?

I've got some PHP code that I want to run as a background process. That code checks a database to see if it should do anything, and either does it or sleeps for awhile before checking again. When it does something, it prints some stuff to stdout, so, when I run the code from the command line, I typically redirect the output of the PHP process to a file in the obvious way: php code.php > code.log &.
The code itself works fine when it's run from the shell; I'm now trying to get it to run when launched from a web process -- I have a page that determines if the PHP process is running, and lets me start or stop it, depending. I can get the process started through something like:
$the_command = "/bin/php code.php > /tmp/code.out &";
$the_result = exec($the_command, $output, $retval);
but (and here's the problem!) the output file-- /tmp/code.out -- isn't getting created. I've tried all the variants of exec, shell_exec, and system, and none of them will create the file. (For now, I'm putting the file into /tmp to avoid ownership/permission problems, btw.) Am I missing something? Will redirection just not work in this case?
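For what it's worth, a variant worth trying (paths are placeholders): redirect stderr as well, so any PHP startup or fatal error from the background process ends up in the same file, and capture the PID so the status page can check on it later.
<?php
// Redirect both stdout and stderr, background the process, and echo its PID.
$logFile = '/tmp/code.out';
$cmd = sprintf('/bin/php %s >> %s 2>&1 & echo $!',
    escapeshellarg('code.php'), escapeshellarg($logFile));
$pid = (int) shell_exec($cmd);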
Seems like a permissions issue. One way to work around it would be to:
rename your echo($data) statements to a function like fecho($data)
create a function fecho() like so:
function fecho($data)
{
    $fp = fopen('/tmp/code.out', 'a+');
    fwrite($fp, $data);
    fclose($fp);
}
Blurgh. After a day's hacking, this issue is finally resolved:
The scheme I originally proposed (exec of a statement with redirection) works fine...
...EXCEPT it refuses to work in /tmp. I created another directory outside of the server's webspace, opened it up to apache, and everything works.
Why this is, I have no idea. But a few notes for future visitors:
I'm running a quite vanilla Fedora 17, Apache 2.2.23, and PHP 5.4.13.
There's nothing unusual about my /tmp configuration, as far as I know (translation: I've never modified whatever got set up with the basic OS installation).
My /tmp has a large number of directories of the form /tmp/systemd-private-Pf0qG9/, where the latter part is a different set of random characters. I found a few obsolete versions of my log files in a couple of those directories. I presume that this is some sort of Fedora-ism file system juju that I will confess to not understanding, and that these are orphaned files left over from some of my process hacking/killing.
exec(), shell_exec(), system(), and passthru() all seemed to work, once I got over the hump.
Bottom line: What should have worked does in fact work, as long as you do it in the right place. I will now excuse myself to take care of a large bottle of wine that has my name on it, and think about how my day might otherwise have been spent...

PHP: trace script process flow

Is it possible, from within a PHP script, to start a trace log and activate a debugging log?
I am not looking for eclipse + xdebug, but something like this use-case:
When the script starts, it checks if $_GET["debugme"] is set. If yes, it calls something like start_trace_log().
Anything that happens after that in the rest of the script should be logged, e.g.
scriptA.php :10 include("anotherscript.php")
anotherscript.php:1 foo()
...
At the moment, I have to do this manually for every script I'm interested in logging, and the script has to check $_GET["debugme"] everywhere instead of simply tracing everything within that run. Very inconvenient for occasionally checking scripts.
Any better ideas or comfortable ways of tracing php scripts from a start point to the last line?
Add this line to the end of the script, or to a footer script:
if(isset($_GET["debugme"]))debug_print_backtrace();
That will print details like #... function-name() called at script-path.php:linenumber.
Don't forget to restrict the debugme feature to the development system only!
phptrace may be a better choice, because you don't need to change your script and you can trace at any time you want.
Although I find it highly annoying when this is used in production, you can throw this bit of code:
if(isset($_GET['DEBUGME'])) {
    start_trace_log();
}
into a file, and then add that file to your PHP auto_prepend_file directive so it gets run at the start of every PHP script.
Of course, this is assuming that you've already coded and included start_trace_log(). You should also only include this on a development server. Scripts with debug flags shouldn't make it to production.
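For completeness, here is a minimal sketch of what such a start_trace_log() could look like, built on tick functions (the function name, log path and output format are assumptions, and the ticks directive generally only covers the file it is declared in, so treat this as a starting point rather than a drop-in solution):
<?php
declare(ticks=1);

function start_trace_log($logPath = '/tmp/trace.log')
{
    register_tick_function(function () use ($logPath) {
        $frames = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 2);
        $file = isset($frames[0]['file']) ? $frames[0]['file'] : '?';
        $line = isset($frames[0]['line']) ? $frames[0]['line'] : '?';
        $func = isset($frames[1]['function']) ? $frames[1]['function'] : '{main}';
        // Log "file:line function" for the statement that just executed.
        file_put_contents($logPath, "$file:$line $func()" . PHP_EOL, FILE_APPEND);
    });
}

if (isset($_GET['debugme'])) {
    start_trace_log();
}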

Is it possible to check PHP file syntax from PHP?

I dynamically load PHP class files with an autoloader.
Those files could be missing or corrupted for some reason.
The autoloader will report missing files, so the application logic can handle that. But if those files are corrupted, the whole process halts with a blank screen for the user and "PHP Parse error: syntax error" in the error log.
Is it possible to check syntax of PHP file from PHP code?
I've looked here: http://us.php.net/manual/en/function.php-check-syntax.php - it's deprecated.
And
exec("php -l $file");
seems to be the wrong way (http://bugs.php.net/bug.php?id=46339)
Thoughts?
You really shouldn't try to check for invalid PHP files at execution time: it'll kill the response time of your application!
A "better way" would be to use php -l from the command line when you're done modifying a PHP script; or include it in your build process if you're using one; or plug it in as an SVN pre-commit hook if you're using SVN and can define SVN hooks.
In my opinion, almost any given solution would be better than checking that yourself at execution time!
Considering that errors like the ones you want to avoid probably won't happen often, it is probably better to... just let them happen.
The only thing is: activate logs, and monitor them, to be able to detect quickly when there is a problem :-)
Of course, this doesn't prevent you from dealing with the case of missing files; but that's a different matter...
Another way: you can make one PHP file in your root directory called
checkSyntax.php
<?php
for ($i = 1; $i < count($argv); $i++) {
    $temp = 'php -l ' . escapeshellarg($argv[$i]);
    $output = exec($temp);
    echo "\n$output";
}
?>
Now, open your .bashrc file to make a shortcut to run this file.
Add the line below to run checkSyntax.php:
alias checkSyntaxErrors='php /root/checkSyntax.php'
Now go to your source directory and run svn st.
It shows you a list of files; you can then simply run the command:
checkSyntaxErrors file1.php file2.php .......
This will check all the files passed as arguments.
Enjoy :)
In short: I can't see a way to do this, but I have an idea that might be sufficient.
There are log-monitoring programs, or you can filter the logs with standard tools for files with parse errors. If a file shows up, you put the offending filename into a blacklist, and your autoloader checks against that list before loading.
With this method, the first time you'll serve a blank screen (assuming error reporting to the output is turned off on production servers), but the second time you'll have a page without the faulty component.
In the autoloader you should have a list or naming scheme so that mandatory classes are always attempted (otherwise your application might end up in an inconsistent state).
You could also do some unit testing, where you load the PHP you're dynamically executing and assert that exec("php -l $fileName") reports no syntax errors. If you did that, you'd be able to verify it once in your tests, generating it with the appropriate variables, and have a reasonable level of confidence your PHP was good.
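A hedged sketch of what such a test could look like (PHPUnit-style; the class name and file path are assumptions), relying on php -l exiting non-zero on a syntax error:
<?php
use PHPUnit\Framework\TestCase;

class GeneratedCodeSyntaxTest extends TestCase
{
    public function testGeneratedFileHasValidSyntax()
    {
        $file = '/path/to/generated/file.php'; // hypothetical path
        exec('php -l ' . escapeshellarg($file) . ' 2>&1', $output, $exitCode);
        $this->assertSame(0, $exitCode, implode(PHP_EOL, $output));
    }
}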
This is an old question, but it seems that in recent PHP versions (7 and later) we can do this:
try {
    include_once($file);
} catch (\ParseError $e) {
    // Parse error
} catch (\Throwable $e) {
    // Any other error
}
