PHP: trace script process flow

Is it possible within a PHP script to start a trace log and activate a debugging log?
I am not looking for eclipse + xdebug, but something like this use-case:
When the script starts, it checks if $_GET["debugme"] is set. If yes, it calls, say, start_trace_log().
Anything that happens after that in the rest of the script should be logged, e.g.
scriptA.php :10 include("anotherscript.php")
anotherscript.php:1 foo()
...
At the moment, I have to do this manually for any script I am interested in logging, and the script has to check $_GET["debugme"] everywhere instead of simply debugging everything within the script run. Very uncomfortable for occasionally checking scripts.
Any better ideas or comfortable ways of tracing php scripts from a start point to the last line?

Add this line to the end of the script, or to a footer script:
if(isset($_GET["debugme"]))debug_print_backtrace();
that will print details like #... function-name() called at script-path.php:linenumber.
Don't forget to restrict the debugme feature to run on development systems only!
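For example (a minimal sketch; the APP_ENV check is an assumption, substitute whatever convention marks your development machines):
// Hypothetical guard: only honor ?debugme on development systems.
if (getenv('APP_ENV') === 'development' && isset($_GET['debugme'])) {
    debug_print_backtrace();
}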

phptrace may be a better choice because you don't need to change your script and you can trace at any time you want.

Although I find it highly annoying when this is used in production, you can throw this bit of code:
if (isset($_GET['DEBUGME'])) {
    start_trace_log();
}
into a file, and then add that file to your PHP auto_prepend_file setting so it gets run at the start of every PHP script.
Of course, this is assuming that you've already coded and included start_trace_log(). You should also only include this on a development server. Scripts with debug flags shouldn't make it to production.
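For completeness, here is a minimal sketch of what such a start_trace_log() could look like, built on PHP's tick mechanism (the function name comes from the question; the implementation, the log path and the auto_prepend_file wiring are assumptions, and note that declare(ticks=1) does not propagate into included files on modern PHP, so coverage of includes is limited):
<?php
// prepend.php - wire it up in php.ini on the development box only:
//   auto_prepend_file = /path/to/prepend.php
declare(ticks=1); // fire a callback on every tickable statement

function start_trace_log($logFile = '/tmp/trace.log')
{
    register_tick_function(function () use ($logFile) {
        // Frame 0 points at the statement that triggered the tick.
        $frame = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 1)[0];
        file_put_contents(
            $logFile,
            ($frame['file'] ?? '?') . ':' . ($frame['line'] ?? 0) . PHP_EOL,
            FILE_APPEND
        );
    });
}

if (isset($_GET['debugme'])) {
    start_trace_log();
}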

Related

PHP script dies when run via CRON job

Let me start by saying I am totally not a PHP programmer - this was dumped on me to fix.
I have a function that sits in a file by itself, that basically looks like this:
<?php
function UploadFile($source, $destination) {
    $debugLogPath = '/biglongpath/debug.log';
    file_put_contents($debugLogPath, PHP_EOL . "Beginning UploadFile function", FILE_APPEND);
    set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib');
    require_once('Net/SFTP.php');
    // ...rest of the FTP code here...
}
?>
It's using phpseclib. If I run the main PHP script (that calls this function...) via a web browser, everything works great. When I run that same script via a CRON job, it dies as soon as this function is called. I've verified this by writing out a debug log right before calling the function - the first statement before the function is written to the log, but the "Beginning UploadFile function" is never written.
I'm guessing that maybe it has something to do with the require_once statement - maybe either a path problem or permissions issue when it's executed via CRON?
I've tried wrapping the entire function contents in a try/catch and writing out the Exception, but it still just dies.
I wonder why there are 3 helpful flags when the question states that the file is being written. However, that file is not the CLI error log, so errors will not automagically be logged there. The second suggestion made appears more likely, but there are even more possibilities:
Make sure that these modules are being loaded for the PHP CLI:
libsodium, openssl, mcrypt, gmp (as the composer.json hints at).
Running php --ini should show which INI files were loaded. Even if the corresponding INI files are there, make sure the instructions inside them are not commented out with a ;.
Manually run the script from the CLI, as the user which runs the cronjob, with error reporting enabled. If that doesn't help, single-step into it with xdebug to see where exactly it bugs out (NetBeans, Eclipse, VS Code and a few other IDEs support PHP debugging). This requires some effort to set up, but then it provides a far better debugging methodology.
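For example, to compare what the CLI actually loads (it often reads a different php.ini than Apache or FPM; the extension names follow the list above):
php --ini
php -m
php -r "var_dump(extension_loaded('openssl'), extension_loaded('gmp'));"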

When running CRON, 'require' fails silently

I have a backup script that runs from the browser without a problem. It extracts data from the database and writes it to a ZIP file that's under 2 MB.
It mostly runs from the server (under cron) as well, but it fails (silently) when it hits a particular line:
require ('/absolute-path/filename'); // pseudo filespec
This is one of several such statements. These are library files that do nothing but 'put stuff in memory'. I have definitely eliminated any possibility that the path is the problem: I'm testing the file with a conditional is_readable(), outputting the result, and sending myself emails.
$fs = '/absolute-path/filename'; // pseudo filespec
if (is_readable($fs)) {
    mail('myaddress', 'cron', 'before require'); // this works reliably
    require($fs); // can be an empty file, i.e. <?php ?>
    mail('myaddress', 'cron', 'after require'); // this never works
}
When I comment out the require($fs), the script continues (mostly, see below).
I've checked the line endings (invisible chars). Not on every single include-d file, but certainly the one in question has newline (NL) endings (Linux-style), as opposed to carriage return + newline (CR NL, Windows-style).
I have tried requiring an empty file (just <?php ?>) to see if the script would get past that point. It doesn't.
I have tried calling mail(); from the included script. I get the mail. So again, I know the path is right. It is getting executed, but it never returns and I get no errors, at least not in the PHP log. The CRON job dies...
This is a new server. I just migrated the application from PHP 5.3.10 to PHP7. Everything else works.
I don't think I am running out of memory. I haven't even gotten the data out of the database at this point in the script, but it seems like some sort of cumulative error because, when I comment out the offending line, the error moves on to another equally puzzling silent failure further down the code.
Are there any other useful tests, logs, or environment conditions I should be looking at? Anything I could be asking the web host?
This usually means that there is some fatal error being triggered in the included file. If you don't have all errors turned on, PHP may fail silently when including files with certain fatal errors.
PHP 7 throws fatal errors on certain things that PHP 5.3 did not, such as Division by Zero.
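A small illustration of the kind of change that bites here (modulo by zero became a DivisionByZeroError in PHP 7, where PHP 5 only emitted a warning):
<?php
// PHP 5: "Warning: Division by zero", result is false.
// PHP 7: uncaught DivisionByZeroError - fatal, and silent if display_errors is off.
var_dump(1 % 0);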
If you have no access to the server config to turn all errors on, then calling an undefined function will fail silently. You can try debugging by putting
die('test');
__halt_compiler();
at the beginning of the file, on the line after the first <?php tag, and see if it loads. If it does, slowly move the pair down line by line (though don't cut a control structure!) and retest each time; when it dies, you know the error is on the line above.
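Schematically (the lines below the marker are hypothetical stand-ins for the file's real contents):
<?php
die('test'); __halt_compiler(); // if 'test' prints, move this pair down one line and rerun
require 'helpers.php';          // hypothetical original code
do_backup();                    // hypothetical original code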
I believe the problem may be a PHP 7 bug. The code only broke when it was called by CRON and the 'fix' was to remove the closing PHP tag ?>. Though it is hard to believe this could be an issue, I did a lot of unit testing, removing prior code, etc. I am running PHP 7.0.33. None of the other dozen or so (backup) scripts broke while run by CRON.
As nzn indicated, this is most likely caused by an error triggered from the included file. From the outside it is hard to diagnose. A likely case is a relative include/require within that file. A way to verify that is by running the script on the console from a different location. A fix might be to either call cd from cron before starting PHP, or to do a chdir(__DIR__) within the primary file before doing further includes.
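A minimal sketch of the chdir(__DIR__) option (the include path and crontab line are hypothetical):
<?php
chdir(__DIR__); // pin the working directory so cron and a login shell behave alike
require 'lib/bootstrap.php'; // a relative include that now resolves correctly
Or fix it on the cron side instead:
cd /path/to/app && php /path/to/app/backup.php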

PHP CLI script does not output anything

So I have a php script which I execute using the following command:
php -f my_script.php myArguments
The script is under version control using svn. I just updated it, pasted the command to run it into a terminal, and executed it. However, there is no output: no failure message, no printed output, nothing. It looks like it never starts. Kind of like the following:
me:/srv/scripts# php -f my_script.php myArguments
me:/srv/scripts#
Other scripts will run just fine.
It is difficult for me to come up with an SSCCE, as I can't really share the code that is causing this, and I haven't been able to replicate this behavior intentionally. I have, however, seen this twice now. If I save my changes, revert the file, and paste them back in, there is a strong chance it will run just fine.
However, I am concerned by not knowing what is causing this odd behavior. Is there a whitespace character or something that tells PHP not to start, or output anything?
Here is what I've tried after seeing this behavior:
Modifying the script so it is a simple echo 'hello'
Putting nonsense at the beginning of the script, so it is unparseable.
Pasting in code from a working script
Banging my head on a wall in frustration
Trying it in another terminal/putty ssh connection.
Here's where it gets interesting: It actually works in a different terminal. It does everything as expected.
So does anyone have any ideas what might be causing this, or things I should try in order to determine the problem?
EDIT:
The "different terminal" is still the terminal application, just a new one.
I have sufficient permissions to execute the file, but even if I didn't, it should spit out a message saying I don't.
I intentionally introduced syntax errors in hopes that I would get PHP to spit out a parse error. There was still no output.
display_errors might be disabled before runtime. You can turn it on manually with the -d switch:
php -d display_errors=1 -f my_script.php myArguments
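You can go further and force the error level up as well (-1 means "report everything"):
php -d display_errors=1 -d error_reporting=-1 -f my_script.php myArguments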
I came across the same issue, and no amount of coercing PHP with display_errors or checking the syntax with -l helped.
I finally solved our problem, and perhaps this solution can help you too.
Test your script without using your php.ini:
php -n test_script.php
This will help you home in on the real cause: the PHP configuration, someone else's script, or your own script.
In my case, it was a problem with someone else's script which was being added via the auto_prepend_file directive in the php.ini. (Or, more specifically, several files and functions later, as I drilled through all the code adding debug output as I went. On a side note, you may find fwrite(STDOUT, "debug text\n"); invaluable when trying to debug this type of issue.)
Someone had added a function that was getting run through the prepend file, but had used the @ symbol to suppress errors on a particular function call. (You might have a similar problem not specifically related to the php.ini, if your test script has any includes bringing in other code.)
The function was failing, causing the silent death of PHP. It had nothing to do with my test script.
You will find all sorts of warnings about how using the @ operator causes the exact problem I had, and perhaps you're having: http://php.net/manual/en/language.operators.errorcontrol.php
Reproduction of similar symptoms:
Take a fully functional PHP environment, and break your CLI output by adding this to the top of your script:
@xxx_not_a_real_function_name_xxx();
So you may just have a problem with the php.ini, or you (or someone else) may have used @ without realising the serious (and frustrating and time-consuming) consequences it causes in debugging.
I experienced PHP CLI failing silently on a good script because of a memory limit issue. Try with:
php -d memory_limit=512M script.php

How can I avoid showing "#!/usr/bin/php" on PHP?

I want PHP scripts to run both on the command line and on a website (I use Apache and Nginx), so I put #!/usr/bin/php in the first line of my scripts, but that line appears on the website...
I solved the problem using output buffering.
My script now looks like this:
#!/usr/bin/php
<?php
@ob_end_clean();
...
Note: There is no ?> at the end of the file. This is actually a good practice when writing PHP scripts, as it prevents any garbage text from being accidentally printed.
Note: The PHP documentation for ob_end_clean() says that:
The output buffer must be started by ob_start() with
PHP_OUTPUT_HANDLER_CLEANABLE and PHP_OUTPUT_HANDLER_REMOVABLE flags.
Otherwise ob_end_clean() will not work.
It seems that this is done automatically when PHP runs from command line.
There is no need to have #!/usr/bin/php in your code, just run CLI script using php, for example php /path/to/file.php or /usr/bin/php /path/to/file.php.
@ViliamSimko's wicked trick is almost there but, unfortunately, flawed. (It did actually break my header-sending sequence, for instance, despite not polluting the output with the shebang.)
TL;DR, here's the fix*:
#!/usr/bin/php
<?php @ob_end_clean(); if (ini_get('output_buffering')) ob_start();
...
or, less obscene-looking, but still just a euphemism for the same offense ;) :
#!/usr/bin/php
<?php if (ob_get_level()) { ob_end_clean(); ob_start(); }
...
(And see also the "UPDATE" part below for perhaps the maximum we can do about this...)
Explanation:
@Floris had a very good point in the comments there:
Do you need an ob_start(); as well? Probably worth mentioning.
You sure do. But where? And when?
Cases to consider:
As Viliam said (and I just confirmed it with PHP 7.2), the shebang is fortunately eaten by the php command itself when you run your script with php yourscript.php, so the entire trick is redundant there.
In web mode, it's actually config-dependent: if output_buffering is on in the config (and sure enough, it's usually on, but it's not even the default), an ob_start has already been done implicitly at the start of the script (you can check it with ob_get_level()). So, we can't just abruptly cancel it with an ob_end_clean and call it a day: we need to start another one, to keep the levels balanced!
If output_buffering is off in the config then, sad face, we are out of luck: ob_end_clean() does nothing, and the shebang will end up at the top of the page.
Note: there is no fix for this one, other than turning it on.
In command-line mode, the manual says about output_buffering:
This directive is always Off in PHP-CLI.
But, instead of failing the same hopeless way as in 3., the implicit shebang cleanup (see 1.) saves the day.
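To see which case applies on a given setup, a quick probe helps (run it both through the web server and with php on the command line):
<?php
// A non-zero ob_get_level() here means an implicit buffer was started
// by the output_buffering INI setting (never the case under CLI).
var_dump(PHP_SAPI, ini_get('output_buffering'), ob_get_level());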
* "Fix" in the sense that this audacious hack will work in a lot more cases. If you are in full control of your PHP env., it can be just fine (as is in my case). Otherwise, it can still break in lots of subtle ways unexpectedly (consider auto-prepended code, custom extensions, or other possible ways to control output buffering etc.). Also, for example, when included from other scripts in CLI mode (where there's no buffering), you are still out of luck: the shebang will show up in the output, no matter what (unless, of course, filtered out manually by the caller). Not only that, but it would also break up your own buffering, if you happened to have any, while including such a naughtified script.
UPDATE: Just for the fun of it, here's an "almost correct" version, which plays along nicely with an ongoing buffering, be it implicit or user-level:
#!/usr/bin/php
<?php
if (ob_get_level()) {
    $buf = ob_get_clean();
    ob_start();
    // Refill the buffer, but without the shebang line:
    echo substr($buf, 0, strpos($buf, file(__FILE__)[0]));
} // else { out of luck... }
Still only "almost" correct, as nothing can fix output_buffering = 0 in web mode, and the "being included with no buffering" case can only be solved if the calling script adds an explicit ob_start - ob_end_... wrapping. Also, most of the caveats above still apply: various subtleties can still break it (e.g. the current output buffer must have the (fortunately default) PHP_OUTPUT_HANDLER_CLEANABLE flag, etc.).
I generally find it a good idea to separate logic from presentation. When I do something like this, I put as much as possible in a library, and then write separate cli and web interfaces for it.
That said, calling it with the php command is probably an easier fix.
Call the script using the php command
The output buffering solution above is a hack. Don't do that.
First thing: you are actually better off using the env command to determine which php is being used:
#!/usr/bin/env php
Then give it permission to be executed by itself:
chmod +x myfile
So instead of calling 'php myfile', you now just run:
./myfile
From that folder. Hope this helps!

How can I troubleshoot why my PHP script won't work in cron when it does from the command line?

I've got a script that calls two functions, A and B, from the same class. A creates an Amazon virtual server and B destroys one, both via shell_exec() calls to Amazon's command line tools. The script, doActions.php, pulls actions from a queue. If the action is "create" it creates an instance; when the action is "destroy" it kills one.
The script works fine to execute both A and B when I execute it from the command line: php script.php.
When I put it on a cron, it runs but only successfully runs the B function. It destroys instances but won't create them.
The point of failure is clearly the create function, A. It chokes at the first and most important shell_exec, returning and echoing nothing.
echo $string = shell_exec('/home/user/public_html/domain.com/private/ec2-api-tools/bin/ec2-run-instances ami-23b6534a -k gsg-keypair -z us-east-1a');
Unless you know something specific about the way Amazon's command line tools work, please suggest to me reasons why a shell_exec might work in one case and not the other.
Another shell_exec in the same place behaves as expected:
echo $string = shell_exec ('echo overflow');
My guess is that it has something to do with permissions. But when I have it run shell_exec('whoami') it returns "root", and when I su and run the command it works fine. I'm having a hard time thinking of creative ways to troubleshoot why my PHP script won't work in cron when it does from the command line. Can you suggest some?
When something runs from the command line but refuses to do so within cron, it's often an environment issue (path or some other environment variable that's needed by the code you're running).
For a start you should modify the script to output the current environment (shell_exec('env')?) at the very top and examine the output from the command line and cron.
Hopefully, there will be something obvious such as AMAZON_EC2_VITAL_VAR but, if not, you should move the cron environment towards your command line one, one variable at a time, until it starts working.
A quick test to ascertain this. From your command line, do:
env >/tmp/pax_env.sh
Then run your PHP script from a shell script which first executes:
. /tmp/pax_env.sh
so that the environments are identical.
And keep in mind that su on its own doesn't give you the same environment as you'd get from logging in directly as a specific user (su - does, I think). You may want to check the behaviour for when you log in as root directly.
Re your comment:
Yes, I do believe you've got it. I'm likely going to mark your answer as correct but need you to suffer through a few addendums about your clever solution. First of all, what's the best way to execute the pax_env.sh script? Does shell_exec() work?
Never let it be said I didn't work for my money :-) No. The shell_exec will almost certainly be running a sub-shell so the variables would be set in that sub-shell but would not affect the PHP parent process.
My advice, if you wanted all those variables set, would be to create a shell-script consisting of all the commands in /tmp/pax_env.sh (probably prefixing each with export) followed by the command you currently have running in cron, something along the lines of:
export PATH=.:/usr/bin
export PS1=Urk:
export PS2=MoreUrk:
/home/user/pax/scriptB.php
Then run that script from cron rather than /home/user/pax/scriptB.php directly. That will ensure the environment is set up before your PHP code is called.
Astute readers will have noticed the phrase "if you wanted all those variables set" above. I don't personally think it's a good idea to dump all your command line variables into the shell script for the cron job. I'd prefer to actually find out which ones are needed and only include those. That lessens the pollution your cron job has to run under. For example, it's unlikely that the PS1/PS2 prompt variables will be required for your PHP script.
If it works, you can set all the environment variables - I just prefer the absolute minimum so I don't have to worry too much when things change.
A way of finding out what's needed is to comment out one export at a time until your script breaks again. Then you know that variable is needed. Once it works with the maximum number of export statements commented out, you can just delete those commented export statements altogether, and what remains, however improbable, must be okay (with apologies to Sir Arthur Conan Doyle).
