I am getting numerous errors exactly like this one:
Zend_Session_Exception: Session must be started before any output has been sent to the browser; output started in /usr/local/zend/share/pear/PHPUnit/Util/Printer.php/173
These occur when running my application's test suite. This is with PHPUnit 3.5.10 and PHP 5.3.5.
There is no mysterious, unexpected whitespace output causing this. I've determined that the "output being sent to the browser" is the actual output from the PHPUnit tests being executed. If I open up PHPUnit/Util/Printer.php and wrap the print $buffer line with if (strpos($buffer, 'PHPUnit 3.5.10 by Sebastian Bergmann') === false) (effectively suppressing PHPUnit's first line of output), then my first test succeeds, but only until the test case prints a dot to indicate success; the next test then fails because that dot counts as output.
Another developer on my team is able to run the full test suite successfully, so I know it's not a problem with the application code. It must be some configuration setting or problem with my local environment.
I've already checked php.ini to verify that output_buffering is turned on and implicit_flush is turned off, and they are.
I've also tried adding Zend_Session::$_unitTestEnabled = true; to my test bootstrap, but that didn't help (and shouldn't be necessary anyway because it works on another developer's machine and on our CI server without it).
Any suggestions besides ignoring the errors? I've never seen anything like this and am truly at a loss.
Thanks!
UPDATE:
To attempt to further isolate the problem, I took ZF and my application out of the equation by executing the following test script:
<?php
class SessionTest extends PHPUnit_Framework_TestCase
{
    public function testSession()
    {
        session_start();
        $this->assertTrue(true);
    }
}
The test fails:
1) SessionTest::testSession
session_start(): Cannot send session cookie - headers already sent by (output started at /home/mmsa/test.php:1)
However, the exact same test works on a friend's machine. Same version of PHP and PHPUnit.
Run phpunit with the -stderr flag (newer versions may use --stderr instead), e.g.
phpunit -stderr mytest.php
# or
phpunit --stderr mytest.php
This directs PHPUnit's own output to stderr, so PHP no longer counts it as output already sent to the browser, and the session headers can still be set.
It's possible that the test works on your friend's machine because he has output buffering enabled (although I'm not sure if that's relevant in a CLI context).
I think a better way is to use
Zend_Session::$_unitTestEnabled = true;
Adding this to my test bootstrap prevents this error.
If the php binary being used by PHPUnit on your system is the CGI instead of the CLI version, then session_start is really going to try to set cookies and you'll get that error.
You can check to make sure what SAPI you're using by calling php_sapi_name.
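For example, a quick check (run it with the same binary PHPUnit uses):
<?php
// PHPUnit should report "cli"; under "cgi" or "cgi-fcgi",
// session_start() really does try to send cookie headers.
echo php_sapi_name(), PHP_EOL;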
I had the same problem with another project, and I found that the issue was PHPUnit causing output to start too soon, because it prints its welcome message before running your tests.
I added the following two lines to bootstrap.php:
ini_set('session.use_cookies', 0); // don't send a session cookie header
ini_set('session.cache_limiter', ''); // don't send cache-control headers either
This should prevent the headers from being sent before your test suite runs.
Like Makor said, you just need to add this in your bootstrap.php
Zend_Session::$_unitTestEnabled = true;
In my case, I put something like this in my bootstrap.php file:
if (APPLICATION_ENV == 'testing' && php_sapi_name() == 'cli') {
    Zend_Session::$_unitTestEnabled = true;
}
That way you don't have to change it manually anymore.
Related
Let me start by saying I am totally not a PHP programmer - this was dumped on me to fix.
I have a function that sits in a file by itself, that basically looks like this:
<?php
function UploadFile($source, $destination) {
    $debugLogPath = '/biglongpath/debug.log';
    file_put_contents($debugLogPath, PHP_EOL . "Beginning UploadFile function", FILE_APPEND);
    set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib');
    require_once('Net/SFTP.php');
    ...Rest of the ftp code here...
}
?>
It's using phpseclib. If I run the main PHP script (that calls this function...) via a web browser, everything works great. When I run that same script via a CRON job, it dies as soon as this function is called. I've verified this by writing out a debug log right before calling the function - the first statement before the function is written to the log, but the "Beginning UploadFile function" is never written.
I'm guessing that maybe it has something to do with the require_once statement - maybe either a path problem or permissions issue when it's executed via CRON?
I've tried wrapping the entire function contents in a try/catch and writing out the Exception, but it still just dies.
I wonder why there are three "helpful" flags when the question states that the file is being written. However, that debug log is not the CLI error log and therefore will not automagically record any errors. The second suggestion made in the question appears more likely, though there are even more possibilities:
Make sure that these modules are being loaded for the PHP CLI:
libsodium, openssl, mcrypt, gmp (as the composer.json hints at); a quick check is sketched after this list.
Running php --ini should show which INI files were loaded. Even if the corresponding INI files are there, make sure the instructions inside them are not commented out with a ;.
Manually run the script from the CLI, as the user that runs the cronjob, with error reporting enabled. If that doesn't help, single-step into it with Xdebug to see where exactly it bugs out (NetBeans, Eclipse, VS Code and a few other IDEs support PHP debugging). This requires some effort to set up, but then it provides a far better debugging methodology.
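As a rough sketch of the module check from the first point (the extension names come from that list; adjust to whatever your composer.json actually requires, and note the sodium extension is called sodium on PHP 7.2+):
<?php
// Run this with the same CLI binary the cronjob uses to see
// which of the required extensions are actually loaded.
foreach (array('libsodium', 'openssl', 'mcrypt', 'gmp') as $ext) {
    echo $ext, ': ', (extension_loaded($ext) ? 'loaded' : 'MISSING'), PHP_EOL;
}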
I have a backup script that runs from the browser without a problem. It extracts data from the database and writes it to a ZIP file that's under 2 MB.
It mostly runs from the server, but it fails (silently) when it hits a particular line:
require ('/absolute-path/filename'); // pseudo filespec
This is one of several such statements. These are library files that do nothing but 'put stuff in memory'. I have definitely eliminated any possibility that the path is the problem: I'm testing the file with a conditional is_readable(), outputting the result, and sending myself emails.
$fs = '/absolute-path/filename'; // pseudo filespec
if (is_readable($fs)) {
    mail('myaddress', 'cron', 'before require'); // this works reliably
    require($fs); // can be an empty file ie. <?php ?>
    mail('myaddress', 'cron', 'after require'); // this never works.
}
When I comment out the require($fs), the script continues (mostly, see below).
I've checked the line endings (invisible chars). Not on every single include-ed file, but certainly the one in question has newline (LF) endings (Unix-style), as opposed to carriage return + newline (CR LF, Windows-style).
I have tried requiring an empty file (just <?php ?>) to see if the script would get past that point. It doesn't.
I have tried calling mail(); from the included script. I get the mail. So again, I know the path is right. It is getting executed, but it never returns and I get no errors, at least not in the PHP log. The CRON job dies...
This is a new server. I just migrated the application from PHP 5.3.10 to PHP 7. Everything else works.
I don't think I am running out of memory. I haven't even gotten the data out of the database at this point in the script, but it seems like some sort of cumulative error because, when I comment out the offending line, the error moves on to another equally puzzling silent failure further down the code.
Are there any other useful tests, logs, or environment conditions I should be looking at? Anything I could be asking the web host?
This usually means that there is some fatal error being triggered in the included file. If you don't have all errors turned on, PHP may fail silently when including files with certain fatal errors.
PHP 7 throws fatal errors on certain things that PHP 5.3 did not, such as Division by Zero.
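A minimal illustration of that difference (the modulo case is one that changed):
<?php
// In PHP 5 this emits a warning and evaluates to false; in PHP 7 the
// modulo operator throws DivisionByZeroError, fatal if uncaught.
var_dump(1 % 0);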
If you have no access to server config to turn all errors on, then calling an undefined function will fail silently. You can try debugging by putting
die('test');
__halt_compiler();
at the top of the included file, on the line after the first <?php tag, and see if it loads. If it does, slowly move it down line by line (though don't cut a control structure!) and retest after each move; when it dies, you know the error is on the line above.
I believe the problem may be a PHP 7 bug. The code only broke when it was called by CRON, and the 'fix' was to remove the closing PHP tag ?>. Though it is hard to believe this could be an issue, I did a lot of unit testing, removing prior code, etc. I am running PHP 7.0.33. None of the other dozen or so (backup) scripts broke when run by CRON.
As nzn indicated, this is most likely caused by an error triggered from the included file; from the outside it is hard to diagnose. A likely case is a relative include/require within that file. A way to verify that is by running the script on the console from a different location. A fix might be to either call cd from cron before starting PHP or to do a chdir(__DIR__) within the primary file before doing further includes.
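A minimal sketch of the chdir(__DIR__) approach (lib/helpers.php is just a placeholder for the real library files):
<?php
// Pin the working directory to this file's directory so relative
// includes resolve the same under cron as in an interactive shell.
chdir(__DIR__);
require 'lib/helpers.php'; // hypothetical relative include
// ... rest of the backup script ...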
This particular PHP file works perfectly when executed via the browser. However, I'd like it to run on task scheduler in Windows so I set the scheduler to launch php.exe and point it to the correct file.
Task Scheduler is basically doing the same thing as if I typed it directly into the CLI, I believe. It seemed to work a few times, but now it repeatedly fails even when I manually call the task via the CLI.
The relevant code is:
include_once("simple_html_dom.php");
$results = ....Some CURL Commands to retrieve data....
$html = str_get_html($results);
foreach($html->find('tr') as $tr)
{
    ....do stuff....
}
In CLI it says
Fatal error: Call to a member function find() on a non-object in C:\php\report.php on line...
Why does the CLI find fault here and the browser does not? Again, this has worked once or twice on the CLI, so it might be some kind of time-out setting.
When you run the script on the CLI, did you check if str_get_html() is returning FALSE?
If that is the case, maybe the script can't reach the resource from the terminal using curl for some reason (e.g.: proxy settings).
Make sure to check that case on what you get from that function with something like:
$html = str_get_html($results);
if ($html !== FALSE) {
    // treat the success case.
}
All your answers led me to figure out the problem. I investigated the permissions angle, but that didn't solve it. There is another include file I have, called common_functions.php, which I also include. Permissions on that also didn't solve the problem.
However, the curl function is actually located in common_functions.php. Upon investigating that file, I found it contains references to cookies.txt where the path was not absolute. I had not set up my environment variables correctly, so the CLI could not find the cookie file, which made the cURL function fail. I corrected that and it works now.
Lesson learned. Thank you all for the clues you provided.
The welcome.js code is given below
console.log('Wellcome');
and the PHP file code is given below
$op = shell_exec('node welcome.js').PHP_EOL;
echo $op;
If I run the PHP file on the command line it prints Wellcome, but when I run it from the browser it does not print any output.
There are most likely errors here that you're not seeing.
Set 'error_reporting' to -1 and 'display_errors' to 1 in your php.ini and be sure to restart your webserver/fastcgi-listeners. This is more reliable than using ini_set() and error_reporting() in the script, which will fail if there are parse errors...see php:errorfunc.configuration for more detail.
Check the appropriate error log (depending on your settings and the Server API, this can be the httpd's error log, syslog, some independent file, or even going nowhere right now). Again, php:errorfunc.configuration can help you get things configured correctly, or suss out the current configuration.
The $PATH (or %PATH% on Win32) for an interactive login session is usually dramatically different than that of a running daemon. Try specifying the full path to the node binary.
I don't know off-hand which file-handle node's "console.log()" goes out to. Assuming you're using a bourne-style shell (such as bash) for the subshell here, you can try piping stderr to stdout, using something like: $op = shell_exec('/foo/bin/node welcome.js 2>&1').PHP_EOL; echo $op;
Make sure that 'welcome.js' is where you think it is in relation to the current working directory of your PHP process (although it's likely that node would warn you via one of the previous suggestions if this were not the case, it seemed worth pointing out as a potential pitfall.)
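Putting the full-path and stderr suggestions together, a sketch (assuming node lives at /usr/bin/node and an absolute script path; adjust both to your setup):
<?php
// Absolute paths sidestep the daemon's different $PATH and working
// directory; 2>&1 merges node's stderr into the captured output.
$op = shell_exec('/usr/bin/node /var/www/app/welcome.js 2>&1');
var_dump($op); // NULL means the command produced no output at all
echo $op, PHP_EOL;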
Is it possible within a PHP script to start a trace log and activate a debugging log?
I am not looking for eclipse + xdebug, but something like this use-case:
When script starts, it checks if $_GET["debugme"] is set. If yes, say start_trace_log().
Anything that happens after that in the rest of the script should be logged, e.g.
scriptA.php :10 include("anotherscript.php")
anotherscript.php:1 foo()
...
At the moment, I have to do this manually for any script I'm interested in logging, and everywhere the script has to check $_GET["debugme"] instead of simply debugging everything within that script run. Very inconvenient for occasionally checking scripts.
Any better ideas or comfortable ways of tracing php scripts from a start point to the last line?
Add this line to the end of the script or footer script:
if(isset($_GET["debugme"]))debug_print_backtrace();
That will print details like #... function-name() called at script-path.php:linenumber.
Don't forget to restrict the debugme feature to run on development systems only!
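One possible guard (the hostname check is only an illustration; use whatever environment flag your deployment already has):
if (isset($_GET['debugme']) && $_SERVER['SERVER_NAME'] === 'dev.example.local') {
    debug_print_backtrace();
}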
phptrace may be a better choice because you needn't change your script and you can trace at any time you want.
Although I find it highly annoying when this is used in production, you can throw this bit of code:
if (isset($_GET['DEBUGME'])) {
    start_trace_log();
}
into a file, and then add that file to your PHP auto_prepend_file setting so it gets run at the start of every PHP script.
Of course, this is assuming that you've already coded and included start_trace_log(). You should also only include this on a development server. Scripts with debug flags shouldn't make it to production.
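For completeness, here is one way start_trace_log() could look, assuming the Xdebug extension is available (with Xdebug 3 you also need xdebug.mode=trace):
<?php
// Hypothetical start_trace_log(): begins an Xdebug function trace
// and stops it automatically when the script finishes.
function start_trace_log()
{
    if (function_exists('xdebug_start_trace')) {
        xdebug_start_trace('/tmp/trace-' . date('Ymd-His'));
        register_shutdown_function('xdebug_stop_trace');
    }
}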