The following PHP code takes about 3.5 seconds to run (measured multiple times and averaged):
$starttime = microtime(true);
exec('/usr/local/bin/convert 1.pdf -density 200 -quality 85% 1.jpg');
$endtime = microtime(true);
$time_taken = $endtime-$starttime;
When I run the same command over an SSH terminal, the runtime drops to about 0.6 seconds (measured with the command-line tool time).
The version of the imagemagick library is
Version: ImageMagick 6.7.0-10 2012-12-18 Q16 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2011 ImageMagick Studio LLC
Features: OpenMP
What could be the reason for this time difference?
One answer to a similar question here on Stack Overflow was that the overhead comes from the web server having to start a thread/shell. Could this really be the reason? I thought threads are lightweight and don't take long at all to start/terminate.
Prior to calling exec() I set the number of threads used by ImageMagick to 1 (because this was/is a bug in OpenMP?, Reference) with exec('env MAGICK_THREAD_LIMIT=1');. The runtime from PHP does not change much, no matter what value I set for MAGICK_THREAD_LIMIT. In any case there does not seem to be an OpenMP bug in this version, because the runtime of the command-line execution is fine.
Any suggestions on how I could improve the runtime of the above command would be greatly appreciated.
Thank you very much for your help.
When you log in to a Unix machine, either at the keyboard, or over ssh, you create a new instance of a shell. The shell is usually something like /bin/sh or /bin/bash. The shell allows you to execute commands.
When you use exec(), it also creates a new instance of a shell. That instance executes the commands you sent to it, and then exits.
When you create a new instance of a shell command, it has its own environment variables. So if you do this:
exec('env MAGICK_THREAD_LIMIT=1');
exec('/usr/local/bin/convert 1.pdf -density 200 -quality 85% 1.jpg');
Then you create two shells, and the setting in the first shell never gets to the second shell. To get the environment variable into the second shell, you need something like this:
exec('env MAGICK_THREAD_LIMIT=1; /usr/local/bin/convert 1.pdf -density 200 -quality 85% 1.jpg');
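Alternatively, you can set the variable from PHP itself: putenv() changes the environment of the PHP process, and child processes started by exec() inherit it. A minimal sketch (file names taken from the question):

```php
<?php
// putenv() alters the environment of the PHP process itself;
// anything started via exec() afterwards inherits the variable.
putenv('MAGICK_THREAD_LIMIT=1');
exec('/usr/local/bin/convert 1.pdf -density 200 -quality 85% 1.jpg');
```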
Now, if you think that the shell itself may be the problem, because it takes too long to make a shell, test it with something that you know takes almost no time:
$starttime = microtime(true);
exec('echo hi');
$endtime = microtime(true);
$time_taken = $endtime-$starttime;
At that point you know to try and find some way to make the shell instantiate faster.
Hope this helps!
I've been programming computers for over 56 years, but this is the first time I've encountered a bug like this. So I spent nearly a week trying to understand the 7X worse execution speed when executing a perl program from php via exec versus executing the perl program directly at the command line. As part of this effort, I also pored over all the times this issue was raised on the web. Here's what I found:
(1) This is a bug that was first reported in 2002 and has not been fixed in the subsequent 11 years.
(2) The bug is related to the way apache interacts with php, so both of those organizations pass the buck to the other.
(3) The bug is the same on exec, system, or any of the alternatives.
(4) The bug doesn't depend on whether the exec-ed program is perl, exe, or whatever.
(5) The bug is the same on UNIX and Windows.
(6) The bug has nothing to do with imagemagick or with images in general. I encountered the bug in a completely different setting.
(7) The bug has nothing to do with startup times for fork, shell, bash, whatever.
(8) The bug is not fixed by changing the owner of the apache service.
(9) I'm not sure, but I think it relates to vastly increased overhead in calling subroutines.
When I encountered this problem I had a perl program that would execute in 40 seconds but through exec took 304 seconds. My ultimate solution was to figure out how to optimize my program so that it would execute in 0.5 seconds directly or in 3.5 seconds through exec. So I never did solve the problem.
@Philipp: since you have SSH access and your server allows exec(), I will assume that you also have full root access to the machine.
Recommended for single file processing
Having root access to the machine means that you can change the /etc/php5/php.ini memory limit settings.
Even without direct access to /etc/php5/php.ini, you could check if your server supports overriding php.ini directives by creating a new php.ini file in your project's directory.
Even if the overrides are not permitted you can change your memory settings from .htaccess if AllowOverride is All.
Yet another means of changing the memory limit is setting it during the PHP runtime using ini_set('memory_limit', '256M'); (note that the value is a string with a unit suffix).
Recommended for batch file processing
The only good thing about running the convert via exec() is if you don't plan on getting a result back from exec() and allowing it to run asynchronously:
exec('convert --your-convert-options > /dev/null 2>/dev/null &');
The above approach is usually helpful if you're trying to batch process many files, you don't want to wait for them to finish processing and don't need confirmation regarding each having been processed.
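The fire-and-forget pattern above can be wrapped in a tiny helper. A sketch assuming a POSIX shell (the output redirection and the trailing & are what stop exec() from waiting):

```php
<?php
// Fire-and-forget: redirect all output and background the command,
// so exec() returns as soon as the shell has launched it.
function exec_async(string $cmd): void
{
    exec($cmd . ' > /dev/null 2>&1 &');
}

exec_async('convert big.pdf big.jpg'); // returns immediately
```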
Performance notes
Using the code above to make exec() run asynchronously for processing a single file will cost more processor time and more memory than using GD/Imagick within PHP. The time/memory is used by a different process and does not block the PHP process (making the site feel faster to visitors), but the memory consumption is still there, and when it comes to handling many connections, that will count.
This is not a PHP bug and has nothing to do with Apache/Nginx or any webserver.
I had the same issue lately and had a look at the PHP source code to check the exec() implementation.
Essentially, PHP's exec() calls Libc's popen() function.
The offender here is C's popen(), which seems to be very slow. A quick Google search for "c popen slow" will show you lots of questions such as yours.
I also found that someone implemented a function called popen_noshell() in C to overcome this performance problem:
https://blog.famzah.net/2009/11/20/a-much-faster-popen-and-system-implementation-for-linux/
Here is a screenshot showing the speed difference between popen() and popen_noshell():
PHP's exec() uses the regular popen() - the one at the right of the screenshot above. The CPU used by the system when executing C's popen() is very high as you can see.
I see 2 solutions to this problem:
Create a PHP extension that implements popen_noshell()
Ask the PHP team to create a new set of functions popen_noshell(), exec_noshell(), etc. - which is unlikely to happen, I guess...
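As of PHP 7.4 there is also a partial workaround in core, without writing an extension: proc_open() accepts an array of arguments, and in that case the program is executed directly, bypassing the shell. A sketch:

```php
<?php
// Passing an array (instead of a string) to proc_open() makes PHP
// execute the program directly - no /bin/sh in between (PHP >= 7.4).
$spec = [1 => ['pipe', 'w'], 2 => ['pipe', 'w']];
$proc = proc_open(['/bin/echo', 'hi'], $spec, $pipes);
$out  = stream_get_contents($pipes[1]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
// $out now contains "hi\n"
```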
Additional note:
While searching for this, I discovered the PHP function with the same name as C's: popen()
It is interesting because one can execute an external command asynchronously with: pclose(popen('your command', 'r'));
Which essentially has the same effect as exec('your command &');
I have experienced this problem: a graphics-processing command that would take about .025 seconds on the command line was taking about .3 seconds when called via exec() in PHP. Upon much research, it seems that most people believe this to be a problem with Apache or PHP. I then tried running the command via a CGI script, bypassing PHP altogether, and I got the same result.
It seemed therefore the problem must be Apache, so I installed lighttpd, and got the same result!
After some thought and experimentation I realized this must be a problem with processor priority. So if you want your commands to run at a speed similar to the command line, they must be executed as follows:
exec('echo "password" | sudo -S nice -n -20 command')
PLEASE NOTE: I know there will be all sorts of security objections to this. I just wanted to focus on the fact that all you have to do is add nice before your command.
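A related sketch in pure PHP: proc_nice() changes the priority of the current PHP process, and children started with exec() inherit it. Note that negative values (higher priority) still require root, just like nice -n -20:

```php
<?php
// Raising priority (negative niceness) needs root; suppress the
// warning and fall back to normal priority if we are not allowed.
if (!@proc_nice(-20)) {
    // not privileged - continue at the default niceness
}
exec('/usr/local/bin/convert 1.pdf -density 200 -quality 85% 1.jpg');
```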
When you call exec(), PHP does not create a thread; it creates a new child process. Creating a new process is a big overhead.
However, when you connect with SSH you are just passing a command to execute, and it executes as the user you connected with. For exec() it's the user who runs PHP.
Related
I have a PHP script which should use some preset system aliases, i.e. alias ll='ls -l'.
In the terminal "ll" works, but from PHP system("ll") outputs
sh: ll: command not found
How do I exit "sh" and execute my terminal commands?
P.S.: Maybe I misunderstood the basic Linux components shell and bash. In that case, please correct me/the post.
The PHP docs aren't clear about this, but presumably PHP's system is a reflection of Standard C's system(3), which hands the argument command to the command interpreter sh(1). If you want to avoid the shell, you'll need to use another feature of PHP besides system (like explicit fork/exec). That's context, but it won't help you solve your problem.
In your case it seems you just want the aliases in an rcfile. Scripts invoked by system calls aren't going to read your rcfile unless you take extraordinary steps to make that happen, and even if you do, it's going to be non-obvious to the maintenance programmer. I'd strongly advise you to put the alias in the script or command argument itself, or simply call the command explicitly (ls -al).
You can also manually source the rcfile from your script, or call system("csh -i 'yourcommands'") to force csh to be invoked as an interactive shell (which should cause your rcfile to be read). I think this is a bad idea because it effectively forces the shell to behave inconsistently with its environment, but I'm including it here for completeness.
Most of the above I got from a quick read through the Startup and shutdown section of the csh manual on my Mac (Mavericks). There are other potential solutions there that I haven't laid out, as well.
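Of the options above, calling the command explicitly is the least surprising; in PHP it is simply:

```php
<?php
// sh knows nothing about your interactive shell's aliases,
// so spell out what "ll" stands for:
system('ls -l');
```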
I am trying to print generated forms / receipts through PHP (the printers will be installed on the server, I am not attempting to print to a user's local printer). I have decided to try the following methodology:
IN PHP:
Generate a PDF file and save it on the server.
Call a perl script to print said PDF file.
IN perl:
Use system() to "open" Reader and print the given PDF silently.
What works:
I can generate PDFs in PHP.
I can call a perl script.
If the script has errors, they report to the browser window, i.e. if I purposely change file paths, it fails and reports the appropriate reason.
Functions such as printf seem to work fine, as the output displays in the browser.
The exact same perl script (with the "non-functioning" line mentioned below) works properly when executed from the command line or the IDE.
What doesn't work:
In perl: system('"C:\\Program Files (x86)\\Adobe\\Reader 10.0\\Reader\\AcroRd32.exe" /N /T "C:\\test.pdf" 0-XEROX');
What happens:
NOTHING! I get no errors. It just flat out refuses to open Adobe Reader. All code below this line seems to run fine. It's as if the function is being ignored. I am at a loss as to why, but I did try a few things.
What I've tried:
Changed permissions of the AcroRd32.exe to Everyone - Full Control.
Output the $? after the system() call. It is 1, but I don't know what 1 means in this case.
Verified that there are no disable_functions listed in PHP (though I think this is unrelated, as shell_exec seems to be working, since some of the Perl code is run).
Various other configurations that at least got me to the point where I can confirm that PHP is in fact calling the perl script, it just isn't running the system() call.
Other info:
Apache 2.2.1.7
PHP 5.35
Perl 5.12.3 built for MSWin32-x86-multi-thread
WampServer 2.1
I'm at a loss here, and while it seems like this is an Apache / permissions problem, I cannot be sure. My experience with Apache is limited, and most of what I find online is linux commands that don't work in my environment.
Try this:
my @args = ('C:/Program Files (x86)/Adobe/Reader 10.0/Reader/AcroRd32.exe');
if (system(@args) != 0) {
    # Can't run acroread. Oh Noes!!!
    die "Unable to launch acrobat reader!\n";
}
The thing about system() is that it does two different things depending on the number and type(s) of arguments it gets. If the argument is an array or if there are multiple arguments, Perl assumes the first is the program to run with the rest as its arguments, and it launches the program itself.
If, however, it's just one string, Perl handles it differently. It runs your command-line interpreter (typically CMD.EXE on Windows) on the string and lets it do what it wants with it. This becomes problematic pretty quickly.
Firstly, both Perl and the shell do various kinds of interpolation on the string (e.g. replace '//' with '/', tokenize by space, etc.) and it gets very easy to lose track of what does what. I'm not at all surprised that your command doesn't work--there are just so many things that can go wrong.
Secondly, it's hard to know for sure what shell actually gets run on Windows or what changes Perl makes to it first. On Unix, it usually doesn't matter--every shell does more or less the same with simple commands. But on Windows, you could be running raw CMD.EXE, GNU Bash or some intermediate program that provides Unix-shell-like behaviour. And since there are several different ports of Perl to Windows, it could well change if you switch.
But if you use the array form, it all stays in Perl and nothing else happens under the hood.
By the way, the documentation for system() and $? can be found here and here. It's well worth reading.
I have a StartServer.php file that basically starts a server if it is not already started. Everything works perfectly except that StartServer.php will hang forever waiting for the shell_exec()'d file Server.php to finish its execution, which it never does.
Is there a way to execute a PHP file and just forget about it -- and not wait for its execution to finish?
Edit: It has to work on Windows and Linux.
This should help you - Asynchronous shell exec in PHP
Basically, you shell_exec("php Server.php > /dev/null 2>/dev/null &")
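Since the question needs Windows too, here is a hedged cross-platform sketch (on Windows, start /B launches the process without waiting; on POSIX systems, the trailing & does):

```php
<?php
// Launch a command and return immediately on both Windows and POSIX.
function launch_background(string $cmd): void
{
    if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') {
        pclose(popen('start /B ' . $cmd, 'r'));  // Windows: no wait
    } else {
        exec($cmd . ' > /dev/null 2>&1 &');      // POSIX: background
    }
}

launch_background('php Server.php');
```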
You'll need something like the pcntl functions. The only problem is that this is a non-Windows extension, and not suitable to run inside a web server. The only other possibility I can think of is to use the exec() function and fork the current process manually (as suggested in dekomotes's answer), doing OS detection to figure out the command needed. Note that this approach isn't much different from using the pcntl functions (on Linux, at least) - the ampersand character causes the second script to be run in a different process. Multi-threaded / multi-process programming isn't well supported in PHP, particularly when running inside a web server.
I think it's traditional to let the server detach itself from the parent process, i.e. to "daemonize" itself, rather than having the script that starts the server detach it. Check the server you're starting to see if it has a daemon option.
If you've written the server yourself, in PHP, you need to detach it. It looks something like this:
if (pcntl_fork()) { exit(); } // Fork, and let the parent exit
posix_setsid();               // The child starts a new session
(Note: the pcntl and posix extensions are not available on Windows, so this part is Unix-only.)
Hello, I have a couple of questions about PHP exec() and passthru().
1)
I have never used exec() in PHP, but I have seen it sometimes used with ImageMagick. I am now curious: what are some other common uses where exec() is good in a web application?
2)
About 6 years ago, when I first started playing around with PHP, I did not really know anything beyond very basic stuff, and I had a site that got compromised: someone set up their own PHP file that was using the passthru() function to pass a bunch of traffic through my site to download free music or video, and I got hit with a $4,000 bandwidth charge from my host! 6 years later, I know so much more about how to use PHP, but I still don't know how this ever happened to me. How can someone be able to add a file to my server through bad code?
1] Exec() is really useful when you:
A) Want to run a program/utility on the server that php doesn't have a command equivalent for. For example ffmpeg is common utility run via an exec call (for all sorts of media conversion).
B) Running another process - which you can block or NOT block on - that's very powerful. Sometimes you want pcntl_fork though, or similar, along with the correct command-line args for non-blocking.
C) Another example is when I have to process XSLT 2.0 - I have to exec() a small java service I have running to handle the transformations. Very handy. PHP doesn't support XSLT 2.0 transformations.
2] Damn that's a shame.
Well, lots of ways. There's a family of vulnerabilities called "remote file include vulns" that basically allows an attacker to include arbitrary source and thus execute it on your server.
Take a look at: http://lwn.net/Articles/203904/
Also, as mentioned above, say you're doing something like (much simplified):
exec("someUnixUtility -f {$_GET['arg1']}");
Well, imagine the attacker requests url.com?arg1="blah;rm -rf /", and your code will basically boil down to:
exec("someUnixUtility -f blah; rm -rf /");
In Unix the ; separates commands, so yeah - that could be a lot of damage.
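The standard fix is escapeshellarg(), which quotes the user value so the ; never reaches the shell as a command separator. A sketch (the utility name is taken from the example above):

```php
<?php
// escapeshellarg() single-quotes the value and escapes any embedded
// quotes, so shell metacharacters like ; lose their meaning.
$hostile = 'blah; rm -rf /';
$cmd = 'someUnixUtility -f ' . escapeshellarg($hostile);
// $cmd is now: someUnixUtility -f 'blah; rm -rf /'  (a single argument)
```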
Same with a file upload: imagine you strip the last four chars (.ext) to find the extension.
Well, what about something like "exploit.php.gif"? You strip the extension, so you have exploit.php, and you move it into your /users/imgs/ folder. All the attacker has to do now is browse to users/imgs/exploit.php and they can run any code they want. You've been owned at that point.
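A hedged sketch of less fragile extension handling: take the part after the last dot with pathinfo() and compare it against an allow-list, instead of chopping a fixed number of characters. (Serving uploads from a directory where the web server won't execute scripts is still essential.)

```php
<?php
// pathinfo() returns whatever follows the LAST dot, so
// "exploit.php.gif" yields "gif" and "exploit.php" yields "php".
function is_allowed_image(string $name): bool
{
    $ext = strtolower(pathinfo($name, PATHINFO_EXTENSION));
    return in_array($ext, ['gif', 'jpg', 'jpeg', 'png'], true);
}
```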
Use exec() when you want to run a different program.
The documentation for passthru says:
Warning
When allowing user-supplied data to be passed to this function, use escapeshellarg() or escapeshellcmd() to ensure that users cannot trick the system into executing arbitrary commands.
Someone had probably found a security hole in your script which allowed them to run arbitrary commands. Use the given functions to sanitise your inputs next time. Remember, nothing sent from the client can ever be trusted.
exec() allows you to use compiled code that is on your server, which would run faster than php, which is interpreted.
So if you have a large amount of processing that needs to be done quickly, exec() could be useful.
I have a cron job - a PHP script which is called once every 5 minutes. I need to be sure that the previously called PHP script has finished execution - I do not want to mix data that's being processed.
There are three approaches I have used:
Creation of an auxiliary text file which contains a running-state flag. The executed script analyzes the contents of the file and breaks if the flag is set to true. It's the simplest solution, but every time I create such a script, I feel that I am reinventing the wheel one more time. Are there any well-known patterns or best practices which would satisfy most of the needs?
Adding a UNIX service. This approach is the best for cron jobs, but it's more time-consuming to develop and test a UNIX service: good bash scripting knowledge is required.
Tracking processes using a database. A good solution, but sometimes database usage is not encouraged, and again - I do not want to reinvent the wheel; hopefully there is a good, flexible solution already.
Maybe you have other suggestions on how to manage single-processing of PHP scripts? I would be glad to hear your thoughts about this.
I'd recommend using the file-locking mechanism. You create a text file and make your process lock it exclusively (see the PHP flock function: http://us3.php.net/flock). If it fails to lock, then you exit, because there is another instance running.
The advantage of using file locking is that if your PHP script dies unexpectedly or gets killed, it will automatically release the lock. This will not happen if you use plain text files for the status (if the script is set to update this file at the end of execution and it terminates unexpectedly, you will be left with untrue data).
http://php.net/flock with LOCK_EX should be enough in your case.
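A minimal sketch of the flock() pattern (the lock-file path is an arbitrary choice):

```php
<?php
// Try to take an exclusive, non-blocking lock; if another instance
// already holds it, bail out. The OS releases the lock automatically
// when this process exits, even if it crashes.
$fp = fopen('/tmp/my_cron_job.lock', 'c');
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    exit("Previous run still in progress.\n");
}
// ... do the actual work here ...
```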
You could check whether or not your script is currently running using the ps command, helped by the grep command. "man ps" and "man grep" will tell you all about these Unix/Linux commands if you need information about them.
Let's assume your script is called 'my_script.php'. This Unix command:
ps aux | grep my_script.php
...will tell you if your script is running. You can run this command with shell_exec() at the start of your script and exit() if it's already running.
The main advantage of this method is that it can't be wrong, whereas with a flag file the script could have crashed, leaving the file in a state that would make you think it's still running.
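A sketch of that check from within the script itself (the bracket trick in the grep pattern stops grep from matching its own process; the script name is from the example above):

```php
<?php
// Count processes whose command line mentions my_script.php,
// excluding the grep process itself via the [m] bracket trick.
$count = (int) trim(shell_exec('ps aux | grep "[m]y_script.php" | wc -l'));
if ($count > 1) {   // a count of 1 would be this very instance
    exit("Already running.\n");
}
```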
I'd stick to version number 1. It's simple and works out. As long as you only want to check whether the script has finished or not, it should be sufficient. If more complex data is to be remembered, I'd go for version 3 in order to be able to 'memorize' the relevant data...
hth
K