I have a PHP script running on OS X Snow Leopard. When I run it from the command line it throws 'Segmentation fault'. If I put an exit() at the end of the file, it doesn't throw the error. Why is the exit needed?
I've had segfaults like this when extensions don't play nicely together. In one case it was curl and pgsql, and swapping the order they were loaded made the problem go away. (Swap the extension=*.so lines in php.ini, or rename the files (z-curl.ini, for example) if you have a conf.d setup.)
Without seeing the code it's tough to answer. Try a divide-and-conquer method of debugging: disable all the code with a comment and see if the problem occurs. If not, enable the first half of the code. If the problem reappears, disable all but the first quarter. Continue dividing until you find the part of the script that's causing the problem.
My guess would be you're not closing some open handle. But again I'm just guessing as I can't see the code.
Related
I have a backup script that runs from the browser without a problem. It extracts data from the database and writes it to a ZIP file that's under 2 MB.
It mostly runs from the server (via CRON) as well, but it fails (silently) when it hits a particular line:
require ('/absolute-path/filename'); // pseudo filespec
This is one of several such statements. These are library files that do nothing but 'put stuff in memory'. I have definitely eliminated any possibility that the path is the problem: I've tested the file with a conditional is_readable(), output the result, and sent myself emails.
$fs = '/absolute-path/filename'; // pseudo filespec
if (is_readable ($fs) ) {
mail('myaddress','cron','before require'); // this works reliably
require ($fs); // can be an empty file ie. <?php ?>
mail('myaddress','cron','after require'); // this never works.
}
When I comment out the require($fs), the script continues (mostly, see below).
I've checked the line endings (invisible chars). Not on every single include-ed file, but certainly the one that is being run has newline (LF) endings (Unix-style), as opposed to carriage return + newline (CR LF) (Windows-style).
I have tried requiring an empty file (just <?php ?>) to see if the script would get past that point. It doesn't.
I have tried calling mail(); from the included script. I get the mail. So again, I know the path is right. It is getting executed, but it never returns and I get no errors, at least not in the PHP log. The CRON job dies...
This is a new server. I just migrated the application from PHP 5.3.10 to PHP 7. Everything else works.
I don't think I am running out of memory. I haven't even gotten the data out of the database at this point in the script, but it seems like some sort of cumulative error because, when I comment out the offending line, the error moves on to another equally puzzling silent failure further down the code.
Are there any other useful tests, logs, or environment conditions I should be looking at? Anything I could be asking the web host?
This usually means that there is some fatal error being triggered in the included file. If you don't have all errors turned on, PHP may fail silently when including files with certain fatal errors.
PHP 7 turns some things that PHP 5.3 merely warned about into fatal errors; for example, modulo by zero (and intdiv() by zero) now throws a DivisionByZeroError.
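For instance, this snippet is fatal on PHP 7 unless the error is caught (a minimal illustration, not code from the original script):

```php
<?php
// PHP 5.3: modulo by zero raised a warning and returned false.
// PHP 7: it throws a DivisionByZeroError, which is fatal if uncaught.
try {
    $r = 10 % 0;
} catch (DivisionByZeroError $e) {
    echo $e->getMessage();   // "Modulo by zero"
}
```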
If you have no access to server config to turn all errors on, then calling an undefined function will fail silently. You can try debugging by putting
die('test');
__halt_compiler();
at the very top of the included file, on the line right after the opening <?php tag, and see if it loads. If it does, slowly move the marker down line by line (but don't split a control structure!) and retest each time; when it dies silently again, you know the error is on the line above.
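As a sketch of that displacement technique (the statements below are placeholders for whatever is in the included file):

```php
<?php
// bisecting an included file for a silent fatal error: keep moving the
// die() marker down one line at a time and re-run; when the marker text
// stops appearing, the line you just stepped past is the culprit
die('reached checkpoint');

$x = 1 + 1;    // suspect line 1
$y = $x * 2;   // suspect line 2
echo $y;       // never reached while the die() sits above
```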
I believe the problem may be a PHP 7 bug. The code only broke when it was called by CRON and the 'fix' was to remove the closing PHP tag ?>. Though it is hard to believe this could be an issue, I did a lot of unit testing, removing prior code, etc. I am running PHP 7.0.33. None of the other dozen or so (backup) scripts broke while run by CRON.
As nzn indicated, this is most likely caused by an error triggered from the included file. From the outside it is hard to diagnose. A likely case is a relative include/require within that file. A way to verify that is by running the script on the console from a different location. A fix might be to either cd from cron before starting PHP or to call chdir(__DIR__) within the primary file before doing further includes.
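A minimal sketch of that fix at the top of the primary file (the library path in the comment is made up for illustration):

```php
<?php
// cron starts PHP from an arbitrary working directory, so a relative
// include/require inside a library file can silently fail; pin the cwd
// to this script's own directory before doing further includes
chdir(__DIR__);

// from here on, relative requires resolve against this file's directory,
// e.g. require 'lib/helpers.php';
echo getcwd();
```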
So I have a php script which I execute using the following command:
php -f my_script.php myArguments
The script is under version control using svn. I just updated it, pasted the command to run it into a terminal, and executed it. However, there is no output: no failure message, no printing, nothing. It looks like it never starts. Kind of like the following:
me:/srv/scripts# php -f my_script.php myArguments
me:/srv/scripts#
Other scripts will run just fine.
It is difficult for me to come up with an SSCCE, as I can't really share the code that is causing this, and I haven't been able to replicate this behavior intentionally. I have, however, seen this twice now. If I save my changes, revert the file, and paste them back in, there is a strong chance it will run just fine.
However, I am concerned by not knowing what is causing this odd behavior. Is there a whitespace character or something that tells PHP not to start, or output anything?
Here is what I've tried after seeing this behavior:
Modifying the script so it is a simple echo 'hello'
Putting nonsense at the beginning of the script, so it is unparseable.
Pasting in code from a working script
Banging my head on a wall in frustration
Trying it in another terminal/putty ssh connection.
Here's where it gets interesting: It actually works in a different terminal. It does everything as expected.
So does anyone have any ideas what might be causing this, or things I should try in order to determine the problem?
EDIT:
The "different terminal" is still the terminal application, just a new one.
I have sufficient permissions to execute the file, but even if I didn't, it should spit out a message saying I don't.
I intentionally introduced syntax errors in hopes that I would get PHP to spit out a parse error. There was still no output.
display_errors might be disabled before runtime. You can turn it on manually with the -d switch (and raise error_reporting while you're at it):
php -d display_errors=1 -d error_reporting=E_ALL -f my_script.php myArguments
I came across the same issue, and no amount of coercing PHP to display errors or checking the syntax with -l helped.
I finally solved our problem, and perhaps you can find some help with this solution.
Test your script without using your php.ini:
php -n test_script.php
This will help you home in on the real cause: the PHP configuration, someone else's script, or your script.
In my case, it was a problem with someone else's script which was being added via the auto_prepend_file directive in the php.ini. (Or more specifically, several files and functions later, as I drilled through all the code adding debug output as I went. On a side note, you may find fwrite(STDOUT, "debug text\n"); invaluable when trying to debug this type of issue.)
Someone had added a function that was getting run through the prepend file, but had used the @ error-control operator to suppress errors on a particular function call. (You might have a similar problem not specifically related to the php.ini if your test script has any includes bringing in other code.)
The function was failing, and the @ caused the silent death of PHP; it had nothing to do with my test script.
You will find all sorts of warnings about how using the @ operator causes exactly the problem I had, and perhaps you're having: http://php.net/manual/en/language.operators.errorcontrol.php.
Reproduction of similar symptoms:
Take a fully functional PHP environment and break your CLI output by adding this to the top of your script:
@xxx_not_a_real_function_name_xxx();
So you may just have a problem with the php.ini, or you (or someone else) may have used @ without realising the serious (and frustrating and time-consuming) debugging consequences it causes.
I experienced PHP CLI failing silently on a good script because of a memory limit issue. Try with:
php -d memory_limit=512M script.php
I've got some PHP code that I want to run as a background process. That code checks a database to see if it should do anything, and either does it or sleeps for a while before checking again. When it does something, it prints some stuff to stdout, so when I run the code from the command line I typically redirect the output of the PHP process to a file in the obvious way: php code.php > code.log &.
The code itself works fine when it's run from the shell; I'm now trying to get it to run when launched from a web process -- I have a page that determines if the PHP process is running, and lets me start or stop it, depending. I can get the process started through something like:
$the_command = "/bin/php code.php > /tmp/code.out &";
$the_result = exec($the_command, $output, $retval);
but (and here's the problem!) the output file-- /tmp/code.out -- isn't getting created. I've tried all the variants of exec, shell_exec, and system, and none of them will create the file. (For now, I'm putting the file into /tmp to avoid ownership/permission problems, btw.) Am I missing something? Will redirection just not work in this case?
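One thing worth trying (a sketch under the same assumptions as the question; paths are illustrative): redirect stderr as well, so a launch failure isn't swallowed, and check the exit status that exec() reports:

```php
<?php
// same launch, but 2>&1 folds stderr into the log file, and $retval
// tells us whether the shell managed to start the command at all
$the_command = '/bin/php code.php > /tmp/code.out 2>&1 &';
$the_result  = exec($the_command, $output, $retval);

if ($retval !== 0) {
    // the shell itself reported a failure (e.g. command not found)
    error_log("launch failed with exit status $retval");
}
```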
Seems like a permission issue. One way to work around this would be to:
rename your echo($data) statements to a function like fecho($data)
create a function fecho() like so:
function fecho($data)
{
    // append to a file we open ourselves instead of relying on shell redirection
    $fp = fopen('/tmp/code.out', 'a+');
    fwrite($fp, $data);
    fclose($fp);
}
Blurgh. After a day's hacking, this issue is finally resolved:
The scheme I originally proposed (exec of a statement with redirection) works fine... EXCEPT it refuses to work in /tmp. I created another directory outside of the server's webspace, opened it up to apache, and everything works.
Why this is, I have no idea. But a few notes for future visitors:
I'm running a quite vanilla Fedora 17, Apache 2.2.23, and PHP 5.4.13.
There's nothing unusual about my /tmp configuration, as far as I know (translation: I've never modified whatever got set up with the basic OS installation).
My /tmp has a large number of directories of the form /tmp/systemd-private-Pf0qG9/, where the latter part is a different random string each time, and I found a few obsolete versions of my log files in a couple of those directories. This appears to be systemd's PrivateTmp feature: Fedora runs Apache with a private /tmp, so anything the service writes to /tmp actually lands in one of these per-service directories rather than in the shared /tmp. That would explain both the "missing" output files and these orphaned copies.
exec(), shell_exec(), system(), and passthru() all seemed to work, once I got over the hump.
Bottom line: What should have worked does in fact work, as long as you do it in the right place. I will now excuse myself to take care of a large bottle of wine that has my name on it, and think about how my day might otherwise have been spent...
I've stumbled upon a phenomenon that I can't explain myself.
I'm using popen to execute php.exe and run a PHP script that way, and pclose to close it.
So far so fine. I ran into quite severe trouble: the script where I used this didn't execute, and after trying it three times in a row the Zend Server hung (no page would open any more). I found out that the reason was that I had used a wrong directory for php.exe. Example:
if (pclose(popen('C:\wrongDir\php\php.exe C:\Zend\Apache2\htdocs\myApp\public\mytest.php 57 > C:\Logs\1\0\jobOut.log 2> C:\Logs\1\0\jobErr.log', 'r')) > -1)
{
.....
}
Aside from the "wrongDir", all the other dirs were correct; the popen even created the jobOut and jobErr files (which were empty). (Remark: php.exe is not in the search path, which is why it wasn't found without the correct path.)
Even though I have now solved the problem, I still wonder whether this is normal behaviour or whether I did something wrong (maybe even in the server settings). From what I read in the manual about both functions, it sounded like I should have got a return value of either -1 or 0, not the hang of the process and then the server.
Thanks.
It appears that pclose() doesn't return the plain exit code of the process; it returns the raw termination status (or -1 if it fails to obtain the process status).
To get the actual process exit code, feed that status to pcntl_wifexited() and pcntl_wexitstatus():
http://php.net/manual/en/function.pclose.php
http://php.net/manual/en/function.pcntl-wexitstatus.php
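A short sketch of that (assumes the pcntl extension is available, which is common on Linux CLI builds but not guaranteed):

```php
<?php
// run a command with a known exit code; note popen() needs a mode argument
$status = pclose(popen('exit 3', 'r'));

// $status is a raw wait()-style status, not the exit code itself
if (pcntl_wifexited($status)) {
    echo pcntl_wexitstatus($status);   // prints 3
}
```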
I am using this php code:
exec("unrar e file.rar",$ret,$code);
and getting error code 127, i.e. "illegal command". But when I run the same command over SSH it works, because unrar is installed on the server. So can anyone guess why exec is not doing the right thing?
Try using the full path of the executable (/usr/bin/unrar or whatever); it sounds like PHP can't find the application.
If you have chrooted apache and php, you will also want to put /bin/sh into the chrooted environment. Otherwise, the exec() or passthru() will not function properly, and will produce error code 127, file not found.
Since this comes up as a top answer in google, I wanted to share my fix:
The simple fix I had was to disable safe_mode in the php.ini file
; Safe Mode
; http://www.php.net/manual/en/ini.sect.safe-mode.php#ini.safe-mode
safe_mode = Off
thanx all for your response!!
I tried this
//somedir is inside the directory where php file is
chdir("somedir");
exec("/home/username/bin/unrar e /home/path/to/dir/file.rar");
and now it returned no exit code... other commands are working fine (I tried mkdir etc.) :s
Just in case somebody else still gets this problem, take a look at the post here:
http://gallery.menalto.com/node/2639#comment-8638
Quote:
I found the problem. The problem was my security-paranoid OpenBSD. When upgrading from 3.1 to 3.2 they added:
Apache runs chroot'd by default. To disable this, see the new -u option.
The chroot prevented Apache from accessing anything outside of a directory, so I moved everything into the apache directory including netpbm. Everything was accessible and executable, but I guess it was still in some sort of "safe mode" because the exec() always returned 127.
Anyway, running httpd with the -u option went back to the less secure non chroot'd apache startup, which allowed the exec() to work again.
Okay guys, thanks... and yes, there might be some issue with $PATH... but with the full path given it's working :)
exec("/home/user/bin/unrar e /home/user/xxx/yyy/file.rar");
I did not find a solution for my problem of the same type, so I'm sharing what caused it in my Linux setup.
For whatever reason I needed an Apache module loaded before the other modules, so Apache was started with LD_PRELOAD. Since exec inherits the environment of the parent process, LD_PRELOAD was also applied when starting the exec-ed program (through sh). The preloaded module uses bindings to Apache functions, which of course are not present in sh, and the result was that every php exec gave an exit status of 127. The solution was to call putenv("LD_PRELOAD") in my PHP file before any exec calls.
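A sketch of that workaround (the unrar path is illustrative; the key line is the putenv):

```php
<?php
// clear LD_PRELOAD inherited from the Apache parent so the sh spawned
// by exec() doesn't try to preload a module whose symbols only exist
// inside Apache; without this, every exec-ed program died with status 127
putenv('LD_PRELOAD');   // no "=value" part removes the variable

exec('/usr/bin/unrar e file.rar', $output, $code);
```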