How to set maximal debugging/log level output from PHP command line? - php

I am starting a PHP daemon from command line in env A successfully. Yet, in env B, it stops right after being started. Same code. No error message.
I was wondering how I could set the maximum debug/log level output when launching the daemon. Is there a specific parameter I can pass? I'd like to collect as much information as possible.

You can enable reporting of all messages like this:
$ php -d error_log= -d log_errors=1 -d error_reporting=E_ALL e.php
HAI
where e.php contains:
<?php
error_log('HAI');
You should also inspect the exit code of the daemon; a standard Unix exit code may give you a clue about why it stopped.
php e.php ; echo $?
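It can also help to force errors onto the console for that single run. A minimal variant of the same command using the standard display_errors and display_startup_errors directives:
php -d display_errors=1 -d display_startup_errors=1 -d error_reporting=E_ALL e.php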
Refs
http://www.php.net/manual/en/errorfunc.configuration.php

Related

How to run a PHP script file in the background forever

My issue seems to have been asked before, but hold on, this one is a bit different.
I have 2 php files, I run the following commands:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
This basically creates 2 PHP processes.
My problem is that sometimes one of the files, or even both of them, gets killed by the server for an unknown reason and I have to re-enter the commands again. I tried 'forever' but it doesn't help.
If the server is rebooted I will have to enter those 2 commands again. I thought about a cron job, but I'm not sure whether it would launch the scripts twice, which would create more confusion.
My question is: how do I automatically restart the files if one or both of them get killed? What is the best way to check that file_1.php and file_2.php are actually running?
There are a couple of ways you can do this. As @Chris Haas mentioned in the comments, supervisord can do this for you, or you can run a watchdog shell script from cron at regular intervals that checks whether your script is running and, if not, starts it. Here's one I use (a crontab sketch follows the script).
#!/bin/bash
FIND_PROC=`ps -ef | grep "php queue_agent.php --env prod" | awk '{if ($8 !~ /grep/) print $2}'`
# if FIND_PROC is empty, the process has died; restart it
if [ -z "${FIND_PROC}" ]; then
    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
    echo "queue_agent.php failed at `date`"
    cd "${DIR}"
    nohup nice -n 10 php queue_agent.php --env prod -v > ../../sandbox/logs/queue_agent.log &
fi
exit 0
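As mentioned, cron can run such a watchdog at regular intervals. A minimal crontab sketch, assuming the script above is saved as /path/to/watchdog.sh (a placeholder path):
# run the watchdog every minute and append its output to a log
* * * * * /path/to/watchdog.sh >> /var/log/watchdog.log 2>&1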
As a first step, I think you should try to figure out why these two PHP scripts shut down. To do that, you can use this PHP function:
void register_shutdown_function(callable $callback [, mixed $parameter [, mixed $... ]]);
which registers a callback to be executed after script execution finishes or exit() is called. So you can log some info when the PHP files shut down, like this:
function myLogFunction() {
    // log some info here
}
register_shutdown_function('myLogFunction');
Instead of putting the standard output and error output into /dev/null, you can put them into log files (since we might get some helpful info from that output). So instead of using:
nohup php file_1.php >/dev/null 2>&1 &
nohup php file_2.php >/dev/null 2>&1 &
try:
nohup php file_1.php > file_1.log 2>&1 &
nohup php file_2.php > file_2.log 2>&1 &
If you want these two PHP files to run automatically when you boot the server, try editing /etc/rc.local (which is run automatically when the OS starts up). Add your PHP CLI command lines to this file.
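A minimal sketch of such an rc.local, with placeholder script and log paths:
#!/bin/sh
# /etc/rc.local -- executed once at boot
nohup php /path/to/file_1.php > /var/log/file_1.log 2>&1 &
nohup php /path/to/file_2.php > /var/log/file_2.log 2>&1 &
exit 0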
If you can't figure out why the PHP processes get shut down, try supervisor as @Chris Haas mentioned.

PHP isn't executing ps command (possible permission issue?)

I have a small script I tested on the command line using php test.php.
test.php
<?php
exec('ps -acux | grep test', $testvar);
print_r($testvar);
?>
This works fine. I am able to run the script and get the desired result. However, when I add the code to a file being run by my PHP server, the result is empty.
My OS is FreeBSD. Looking at the man page for ps the only restriction I see is on the -a option. It states:
If the security.bsd.see_other_uids sysctl is set to zero, this option
is honored only if the UID of the user is 0.
My security.bsd.see_other_uids is set to 1.
$ sysctl security.bsd.see_other_uids
security.bsd.see_other_uids: 1
The only thing I can think of is that the command is being run by my user when I run it via the command line, whereas when run by PHP it's being run by www. I'm not seeing anything in the manual for ps that indicates www shouldn't be able to run the command.
As per the comments, you need to redirect stderr to stdout. This is achieved by appending 2>&1 to your command; wrapping the pipeline in a subshell makes sure errors from ps are captured as well, not just errors from grep. So an example of the PHP exec() is as follows:
exec('(ps -acux | grep test) 2>&1', $testvar);
This allows you to see potential errors. In my case there was no error and it was likely a bug of some sort. However, this is useful in all cases where you execute a command: if any error occurs you can retrieve it and see what is happening with your command.
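If the redirected output still shows nothing, it can help to run the same pipeline under the web server's account and compare. A minimal sketch, assuming sudo is installed and the PHP server runs as the www user (both assumptions from the question):
# run the pipeline as the www user and compare with your own shell's result
sudo -u www ps -acux | grep test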

Use PHP over HTTP to run background bash with trap on exit

I'm trying to use PHP to trigger a bash script that should never stop running. It's not just that the command needs to run without my waiting for its output; it needs to continue running after PHP has finished. This has worked other times (and the question has been asked already); the difference seems to be that my bash script has a trap for when it's closed.
Here is my bash script:
#!/bin/bash
set -e
WAIT=5
FILE_LOCK="$1"
echo "Daemon started (PID $$)..."
echo "$$" > "$FILE_LOCK"
trap cleanup 0 1 2 3 6 15
cleanup()
{
    echo "Caught signal..."
    rm -rf "$FILE_LOCK"
    exit 1
}
while true; do
    # do things
    sleep "$WAIT"
done
And here is my PHP:
$command = '/path/to/script.sh /tmp/script.lock >> /tmp/script.log 2>&1 &';
$lastLine = exec($command, $output, $returnVal);
I see the script run, the lock file gets created, then it exits, and the trap removes the lock file. In my /tmp/script.log I see:
Daemon started (PID 55963)...
Caught signal...
What's odd is that this only happens when running the PHP via Apache. From command line it keeps running as expected.
The signal on the trap that's being caught is 0.
I've tried wrapping my command in a bash environment, like $command = '/bin/bash -c "' . addslashes($command) . '"';, and I also tried adding nohup to the beginning. Nothing seems to work. Is this possible to do for a never-ending script?
Found the problem thanks to @lxg.
My # do things command was giving errors, which was causing the script to exit. For some reason they were suppressed.
When removing set -e from the beginning of my bash script I started seeing the errors output to my log file. Not sure why they didn't show up before.
The issue was that my bash loop was running PHP commands. Even though my bash user and the Apache user are the same, for some reason they had different $PATHs. This meant that when running on the command line I was using a PHP 7 binary, but when Apache triggered the bash commands it was using a PHP 5 binary (even though Apache itself is configured to use PHP 7). So the application errored out, and that is what caused the script to die.
The solution was to explicitly set the PHP binary path in my bash loop.
I was doing this with
BIN_PHP=$(which php)
But on a true command line it would return one value (/path/to/php7/bin/php) vs. a command line initiated by Apache (/path/to/php5/bin/php). Despite Apache running as the same user as my command line, it didn't load the ~/.bashrc which specified my correct PHP path.
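A minimal sketch of that fix, using the PHP 7 path from above as a placeholder for whatever the real binary location is:
# pin the interpreter explicitly so Apache-triggered runs use the same binary as the command line
BIN_PHP="/path/to/php7/bin/php"
"$BIN_PHP" your_task.php   # your_task.php is a placeholder for the command run inside the loop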

How do I execute multiple PHP files in a single shell line?

I have some PHP files. Each of them starts a socket listener or runs an infinite loop. The scripts block (never return) when executed via the php command:
php sock_listener.php ...halt there
php listener2.php ... halt there
...
Currently I use the screen command to start all the listener PHP files every time the machine is rebooted. Is there a way I can start all the listener PHP files in a single shell line so that I can write a shell script to make it easier to use?
Using screen
Create a detached screen session for the first script:
session='php-test'
screen -S "$session" -d -m -t A php a.php
where the -d -m combination causes screen to create a new detached session.
Run the rest of the scripts in the same session in separate windows:
screen -S "$session" -X screen -t B php b.php
screen -S "$session" -X screen -t C php c.php
where
-X sends the built-in screen command to the running session;
-t sets the window title.
The session will be listed in the output of the screen -ls command:
There is a screen on:
8951.php-test (Detached)
Connect to the session using the -r option, e.g.:
screen -r 8951.php-test
List the windows within the screen session with Ctrl-a " shortcut, or windowlist -b command.
Forking Processes to Background
A less convenient way is to send the commands to the background by appending an ampersand at the end of each command:
nohup php a.php 2>a.php.err >a.php.out &
nohup php b.php 2>b.php.err >b.php.out &
nohup php c.php 2>c.php.err >c.php.out &
where
nohup prevents the commands from being terminated if the user logs out of the shell;
2>a.php.err redirects the standard error to a.php.err file;
>a.php.out redirects the standard output to a.php.out file.
Is there a way I can start all the listener PHP files in a single shell line so that I can write a shell script to make it easier to use?
You can put the above-mentioned commands into a shell script file, e.g.:
#!/bin/bash -
# Put the commands here
make it executable:
chmod +x /path/to/script
and call it when you need it:
/path/to/script
Modify the shebang as appropriate.
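For example, collecting the screen commands from above into such a script:
#!/bin/bash -
session='php-test'
screen -S "$session" -d -m -t A php a.php
screen -S "$session" -X screen -t B php b.php
screen -S "$session" -X screen -t C php c.php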
Just run them under circus. Circus lets you define a number of processes and how many instances of each you want to run, and it just keeps them running.
https://circus.readthedocs.io/en/latest/

Cronjob linux server logging

I have some cronjobs running on my linux server. These cronjobs just execute some PHP scripts. What I want to do is log any output these scripts produce.
I want to use the output as given by the command:
wget -O /logs/logfile /pathofthefolder/script.php
The problem is that this command overwrites the previous logfile, thus the logfile only contains the output of the last execution, which is kinda useless for logging.
I tried adding an -a for appending instead of overwriting, but that didn't work.
I also tried with only an -a like this:
wget -a /logs/logfile http://example.com/script.php
But that didn't work either; I get information about the download in the logfile, like this:
--2014-05-27 21:41:01-- http://example.com/script.php
Resolving example.com (example.com)... [ip address of my site]
Connecting to example.com (example.com)|[ip address of my site]|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `script.php.4'
0K
So the information about the HTTP request is being stored in the logfile, and the output of every request is saved in a separate file with an increasing number: script.php.1, script.php.2 and so on. That isn't quite what I want; I'd prefer to have it all in one file, and I don't need the HTTP info.
Update:
So I know that it would be easier via the php or lynx command, but those commands are not installed on the server. I'm kinda stuck with wget.
You can use this:
wget -qO- "http://example.com/script.php?parameter=value&extraparam=othervalue" >> /logs/logfile
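Run from cron, that could look like this (the five-minute schedule is just an example; the URL and log path are the ones from the question):
# every 5 minutes, fetch the script and append its output to the log
*/5 * * * * wget -qO- "http://example.com/script.php" >> /logs/logfile 2>&1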
You might want to try rewriting your cronjobs to use the php interpreter instead of wget:
php -f /path/to/file
This will execute a local (!) PHP file and write the output to the command line.
You can easily redirect this output to append to a file:
php -f /path/to/file >> /logs/logfile
(to overwrite instead of append, use a single >)
If you need the error messages as well, you need to redirect both stdout and stderr:
php -f /path/to/file >> /logs/logfile 2>&1
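A matching crontab entry might look like this (again, the schedule is an example; adjust the path to your script):
# every 5 minutes, run the script locally and append stdout and stderr to the log
*/5 * * * * php -f /path/to/file >> /logs/logfile 2>&1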
In case you don't have/cannot install the php executable, you can use (if installed) curl to get a similar result as with wget (see other answers):
curl http://localhost/your/file.php >> /logs/logfile
