PHP exec() freezes running a command with a "head" modifier

I have a directory with a lot of subdirectories and files in it.
I'm running this command using PHP's exec function:
exec('find /path/to/dir -type f | head -n 300');
Over SSH, this command gives me the result faster than an eye blink.
When I run it using PHP's exec function, the script freezes, and the process list looks like this:
sh -c find /path/to/dir [...] | head -n 300
|_ find /path/to/dir [...]
So it looks like the script is finding all the files in the directory first, and only then cutting the list down to the first 300.
Where is the problem? Why does it work fine in the terminal?
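A hedged workaround sketch: rather than relying on the pipeline's short-circuit, read find's output from PHP and stop after 300 lines yourself. The path and the limit come from the question; everything else is an assumption.

<?php
// "exec" in the command string makes sh replace itself with find, so the
// signal from proc_terminate() reaches find rather than the sh -c wrapper.
$proc = proc_open('exec find /path/to/dir -type f', [1 => ['pipe', 'w']], $pipes);
$files = [];
while (count($files) < 300 && ($line = fgets($pipes[1])) !== false) {
    $files[] = rtrim($line, "\n");
}
fclose($pipes[1]);       // find hits a closed pipe on its next write
proc_terminate($proc);   // and is terminated explicitly in case it lingers
proc_close($proc);
var_dump(count($files));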

Tailing named pipe in Docker to write to stdout

My Dockerfile:
FROM php:7.0-fpm
# Install dependencies, etc
RUN mkfifo /tmp/stdout \
 && chmod 777 /tmp/stdout
ADD docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
As you can see, I'm creating a named pipe at /tmp/stdout. And here is my docker-entrypoint.sh:
#!/usr/bin/env bash
# Some run-time configuration stuff...
exec "./my-app" "$#" | tail -f /tmp/stdout
My PHP application (an executable named my-app) writes its application logs to /tmp/stdout. I want those logs to then be captured by Docker so that I can do docker logs <container_id> and see the logs that the application wrote to /tmp/stdout. I am attempting to do this by running the my-app command and then tailing /tmp/stdout, which will then output the logs to stdout.
What I'm seeing happen is that when I run my application, it hangs when it writes the first log message. I believe this happens because there is nothing "reading" from the named pipe, and writing to a named pipe blocks until something reads from it. This is confirmed if I do docker exec -it <container_id> bash, and then do tail -f /tmp/stdout myself inside the container. Once I do that, the container immediately exits because the application has written its logs to the named pipe.
For reasons that I won't bloat this post with, it's not possible for my-app itself to write logs to stdout. It has to use /tmp/stdout.
Can anybody tell me why this isn't working, and what I need to change? I expect I have to change the exec call in docker-entrypoint.sh, but I'm not sure how. Thank you!
What I'm seeing happen is that when I run my application, it hangs when it writes the first log message. I believe this happens because there is nothing "reading" from the named pipe, and writing to a named pipe blocks until something reads from it.
This is correct, see fifo(7). But with your example code
exec "./my-app" "$#" | tail -f /tmp/stdout
this should actually work since the pipe will start ./my-app and tail simultaneously so that there is something reading from /tmp/stdout.
But one problem here is that tail -f will never terminate by itself, and so neither will your docker-entrypoint.sh/container. You could fix this with:
tail --pid=$$ -f /tmp/stdout &
exec "./my-app" "$#"
tail --pid will terminate as soon as the process with the given pid terminates; here $$ is the pid of the bash process (and, through exec, later the pid of ./my-app).
For reasons that I won't bloat this post with, it's not possible for my-app itself to write logs to stdout. It has to use /tmp/stdout.
Does this mean it has to write to a filesystem path or is the path /tmp/stdout hardcoded?
If you can use any path, you can use /dev/stdout / /proc/self/fd/1 / /proc/$$/fd/1 as the logging path to let your application write to stdout.
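A minimal sketch, assuming my-app's log path is configurable; any of those paths resolves to the process's own stdout, so no fifo is needed:

<?php
// Hypothetical logging call: the path is the only thing that matters here.
$log = fopen('/proc/self/fd/1', 'w');   // or '/dev/stdout'
fwrite($log, "application log line\n");
fclose($log);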
If /tmp/stdout is hardcoded try symlinking it to /dev/stdout:
ln -s /dev/stdout /tmp/stdout

PHP to exec casperjs/phantomjs script

I'm having trouble using PHP to execute a casperjs script:
<?php
putenv("PHANTOMJS_EXECUTABLE=/usr/local/bin/phantomjs");
var_dump(exec("echo \$PATH"));
exec("/usr/local/bin/casperjs hello.js website.com 2>&1",$output);
var_dump($output);
Which results in the following output:
string(43) "/usr/gnu/bin:/usr/local/bin:/bin:/usr/bin:."
array(1) {
[0]=>
string(36) "env: node: No such file or directory"
}
The only Stack Overflow posts I could find hinted that there's a problem with my paths, and that maybe the PHP user can't access what it needs.
I have also tried the following: sudo ln -s /usr/bin/nodejs /usr/bin/node
Does anyone know what I would need to do or change for this error to resolve?
Thanks
My guess is you have something, somewhere, that assumes node is installed.
First, are you running php from the commandline? I.e. as php test.php in a bash shell. If so, you can run the commands, below, as they are. If through a web server the environment can be different. I'd start with making a phpinfo(); script, and then run the troubleshooting commands through shell_exec() commands. But, as that is a pain, I'd get it working from the commandline first, and only mess around with this if the behaviour is different when run through a web server. (BTW, if you are running from a cron job, again, the environment can be slightly different. But only worry about this if it works from commandline but does not work from cron.)
Troubleshoot hello.js
The easy one. Make sure your script does not refer to node anywhere. Also remember you cannot use node modules. So look for require() commands that should not be there.
Troubleshoot your bash shell
Run printenv | grep -i node to see if anything is there. But when PHP runs a shell command, some other files get run too, so check what is in /etc/profile and ~/.bash_profile. Also check /etc/profile.d/, /etc/bashrc and ~/.bashrc. You're basically looking for anything that mentions node.
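To see what PHP itself sees rather than what your interactive shell sees, the same probes can be run through shell_exec (a sketch, assuming the CLI or web-server setups discussed above):

<?php
// The environment here is the one PHP's sh -c actually gets.
var_dump(shell_exec('printenv | grep -i node'));
var_dump(shell_exec('echo $PATH'));
var_dump(shell_exec('which node 2>&1'));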
Troubleshoot phantomjs/casperjs
How did you install phantomjs and casperjs? Are the actual binaries under /usr/local/bin, or symlinks, or are they wrapper scripts? E.g. on my machine:
cd /usr/local/bin
ls -l casperjs phantomjs
gives:
lrwxrwxrwx 1 darren darren 36 Apr 29 2014 casperjs -> /usr/local/src/casperjs/bin/casperjs
lrwxrwxrwx 1 darren darren 57 Apr 29 2014 phantomjs -> /usr/local/src/phantomjs-1.9.7-linux-x86_64/bin/phantomjs
And then to check each file:
head /usr/local/src/casperjs/bin/casperjs
head /usr/local/src/phantomjs-1.9.7-linux-x86_64/bin/phantomjs
The first tells me casper is actually a python script #!/usr/bin/env python, while the second fills the screen with junk, telling me it is a binary executable.

shell_exec not returning the same result as sudoed command line

I'm developing a PHP-FPM driven module in which I upload videos, then transcode them into several HTML5 formats in the background with ffmpeg. This PHP-FPM script runs under a specific non-root UID called tv25.
There is a variant in which I record a webcam stream through a streaming server (Wowza), which runs under the root UID and launches the conversion through a Java-written module.
In order to know the status of the processes, I make a GET request to a script which runs the following function:
function is_conversion_running($base_file_name) {
    $command = "sudo ps aux | grep {$base_file_name} | grep -v grep | wc -l";
    $lignes = shell_exec($command);
    return (bool) $lignes;
}
When I call this function through AJAX, it works for the PHP-FPM variant (the UID is the same, and it returns true while the conversion is running), but not with the Wowza variant (it returns false every time).
The strange thing is that if I run the command in a shell, with the non-root UID, it works like a charm, since the ps command has been allowed to be run by this UID.
The problem seems similar to the one in shell_exec returns empty string, but the solution listed there doesn't work for me.
My /etc/sudoers line is like this:
tv25 ALL = (root) NOPASSWD: /bin/ps
I really can't figure out what the deal is...
What does the command return: 0 or NULL? In the second case the command probably failed altogether. You can check with the exec function whether you get a non-zero exit code. Make sure to prefix your command with /bin/sh -c in that case.
PS: Do you really need the sudo for running ps? Normally you get all processes even without sudo.
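A sketch of that check, using the command from the question ($base_file_name is a stand-in):

<?php
$base_file_name = 'myvideo';   // stand-in for the real base file name
exec("sudo ps aux | grep {$base_file_name} | grep -v grep | wc -l", $output, $rc);
var_dump($output, $rc);        // a non-zero $rc means the command itself failed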
Well, I found another way to solve my problem: since I want to know whether the process is still running, I delegated the command to a shell script:
#!/bin/bash
BASE_NAME=$(basename "$0")
# Count matching processes, excluding the grep itself and this very script
LIGNES=$(/usr/bin/sudo /bin/ps aux | grep "$1" | grep -v grep | grep -v "$BASE_NAME" | wc -l)
[ "$LIGNES" -eq 0 ] && exit 1
exit 0
And then I call it with passthru. Its return value parameter is then converted to boolean, negated, and returned by the function.
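A hypothetical reconstruction of that wrapper (the script's path and name are assumptions):

function is_conversion_running($base_file_name) {
    passthru('/usr/local/bin/conversion_running.sh ' . escapeshellarg($base_file_name), $rc);
    return !(bool) $rc;   // the script exits 0 while the conversion is running
}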

How to get recursive directory path using inotify-tools in terminal

I'm using inotify-tools, and I want a notification for each file that gets created anywhere in a tree of directories, recursively.
Up to here I've been successful.
Now I want to get the directory path where a file has been created/dumped within the recursive folders.
For example, if abc.txt is dumped in the data/test folder,
I want the path to be data/test/abc.txt.
Below is the code I'm using in a .sh file:
inotifywait -m -r --format '%f' -e modify -e move -e create -e delete /var/www/cloud/data | while read LINE;
do
    php /var/www/cloud/scannner/watcher.php;
done
Please help me get the path of a dumped file in the recursive directories.
Cheers
Use %w in the format string:
inotifywait -m -r --format '%w%f' .......
To pass the output of inotifywait as an argument to a php script, which will read it from the argv variable, you could do this:
inotifywait -m -r --format '%w%f' ....... | while read -r line
do
    php script.php "$line"
done
Otherwise, if you want the php script to read the output of inotifywait from the standard input, then you can just pipe to your script:
inotifywait -m -r --format '%w%f' ....... | php script.php
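A sketch of a script.php that works with either invocation above, a single path passed as an argument or paths streamed on stdin (handle_path() is a stand-in for whatever the watcher should do):

<?php
function handle_path($path) {
    // stand-in: hand the changed path to the real processing code
    echo "changed: $path\n";
}

if ($argc > 1) {
    handle_path($argv[1]);                       // php script.php "$line"
} else {
    while (($line = fgets(STDIN)) !== false) {   // inotifywait ... | php script.php
        handle_path(rtrim($line, "\n"));
    }
}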

rsync in bash not parsing php-generated --exclude-from file

My rsync bash script isn't parsing my --exclude-from file that's generated via PHP, but it will if I manually create (as root) the same exact file locally. I've got a web interface on a Xubuntu 12.10 system that writes rsync --exclude-from files locally and then pushes them via rcp to our (CentOS 6) backup boxes that run the rsync script. (Please spare the finger-wagging about rcp... I know, but I don't have a choice in this case.)
Webpage writes file:
PHP:
file_put_contents($exclfile, $write_ex_val);
then pushes to the backup box from a local bash script on the webserver with:
Bash:
rcp -p /path/to/file/${servername}_${backupsource}.excl ${server}:/destination/path
I've compared the permissions and ownership of the hand-created file (that works) with the same file that's php/rcp'd (that doesn't), and they're both the same:
Bash:
stat -c '%a' server_backupsource_byhand.excl
644
stat -c '%a' server_backupsource_byphp.excl
644
ls -l server_backupsource_byhand.excl
-rw-r--r-- 1 root root 6 May 11 05:57 server_backupsource_byhand.excl
ls -l server_backupsource_byphp.excl
-rw-r--r-- 1 root root 6 May 11 05:58 server_backupsource_byphp.excl
In case it's relevant, here's my rsync line:
BASH:
rsync -vpaz -v --exclude-from=${exclfile} /mnt/${smbdir} /backup
I suspect PHP might be writing the file in a different format (e.g. UTF-8 instead of ANSI), but I can't figure out how to test this, and I have limited knowledge here.
Does anyone have any suggestions on how to get the PHP/rcp-generated file to parse?
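One way to test that suspicion from PHP is to dump both files' raw bytes (file names as in the question); encoding or line-ending differences show up immediately in the hex:

<?php
var_dump(bin2hex(file_get_contents('server_backupsource_byhand.excl')));
var_dump(bin2hex(file_get_contents('server_backupsource_byphp.excl')));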
Using diff and file, I found out that the PHP-written version wasn't writing the newlines. In my variable definition I was using a "." as "$write_ex_val ." Removing this "." let PHP write the newlines. I also removed the space between the variable and "\n", although I'm not sure whether this contributed to the solution. I'd upvote the comment, Earl, but I don't think I have enough rep. Thanks again.
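A minimal sketch of the resulting fix, assuming the patterns are collected into an array ($patterns and the path are stand-ins):

<?php
$patterns = ['cache/', '*.tmp'];                       // stand-in patterns
$exclfile = '/path/to/file/server_backupsource.excl';  // as in the question
$write_ex_val = implode("\n", $patterns) . "\n";       // final newline included
file_put_contents($exclfile, $write_ex_val);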
