Is it possible to write a continuously running (non-stopping) program in PHP? For example, one that uses 2% of the processor and some memory all of the time. If that's not possible, can you tell me what direction I should look in for a continuously running C++ program (on a UNIX server), and how to pass variables from PHP to C++?
EDIT:
First: I have a max execution time which is stopping it (but I need that limit for my other scripts, in case of bugs).
Second: I don't want to burn the server, so a bare while(true) is not the best idea (it has to have some cap on memory and processor usage).
You can use CLI
Create your PHP file and run it on the command line; it won't stop unless the code ends.
You can limit the memory usage: php -d memory_limit=128M my_script.php. This overrides the php.ini directive, so you can also edit php.ini itself instead of defining it every time.
You can do something like this:
// run-forever.php
while (true) {
    // your executive code
    usleep(500); // time in microseconds - yields the CPU so the loop doesn't hog it
}
and then you can run: php run-forever.php
By the way, if you run this as web-based PHP, you'll have to call set_time_limit(0); before the while loop.
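If you literally want to stay near the 2% of processor mentioned in the question, one sketch (assuming your work comes in small chunks) is to sleep in proportion to how long each chunk of work took:

<?php
// run-forever-throttled.php - a sketch, not tested guidance: keep the loop
// near a 2% duty cycle by sleeping ~49x as long as each chunk of work took.
set_time_limit(0);

while (true) {
    $start = microtime(true);

    // ... your executive code here ...

    $workSeconds = microtime(true) - $start;
    // work is ~1/50th of wall-clock time => roughly 2% of one core
    usleep((int)max(1, $workSeconds * 49 * 1000000));
}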
I have read the other questions on SO with a similar title, but that's not what this question is about. I know HOW to execute a PHP script from another PHP script. The problem is, when I do so, it uses far too much CPU. I would like to know how to reduce this.
I have a simple front-controller-like script called index.php. It processes GET requests from a client and depending on the "action" parameter passed, it sends the request to the appropriate file to handle it. For example, this is a client request:
xhttp.open("GET", serverURL + "?action=doSomething" + "&userID=" + user.ID + "&time=" + lastServerTime, true);
index.php has an array that maps the "action" parameter to the appropriate file:
exec('php ' . $url_map[$action] . ' "' . $parameter1 . '"' . ' "' . $parameter2 . '" 2>&1', $output, $return_value);
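For illustration, here is a minimal sketch of what that map and the exec call might look like together; the action names and handler file names are hypothetical, not from the original post:

<?php
// index.php - hypothetical reconstruction of the dispatch; the action
// names and script file names are illustrative assumptions.
$url_map = array(
    'doSomething'     => 'do_something.php',
    'doSomethingElse' => 'do_something_else.php',
);

$action     = isset($_GET['action']) ? $_GET['action'] : '';
$parameter1 = isset($_GET['userID']) ? $_GET['userID'] : '';
$parameter2 = isset($_GET['time'])   ? $_GET['time']   : '';

if (isset($url_map[$action])) {
    // escapeshellarg() guards the GET parameters against shell injection
    exec('php ' . $url_map[$action] . ' '
        . escapeshellarg($parameter1) . ' '
        . escapeshellarg($parameter2) . ' 2>&1', $output, $return_value);
}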
For testing purposes, I have created a PHP script that does nothing except measure CPU utilisation and dump it to a log file:
<?php
function varDumpToFile($parameter1) {
    $file = 'log.txt';
    $output = print_r($parameter1, true);
    file_put_contents($file, $output, FILE_APPEND | LOCK_EX);
}

varDumpToFile(`ps -eo pcpu,pid,user,args --no-headers | sort -t. -nk1,2 -k4,4 -r | head -n 5`);
?>
This produces a log file that looks like this:
9.0 3123052 user /opt/cpanel/ea-php56/root/usr/bin/php cputest.php 10 147424 1537625595
Clearly, a PHP script shouldn't take 9% of CPU to execute. For comparison, I've run the same script directly accessing it via a GET request:
0.1 3186198 user lsphp:ic_html/dev/php/cputest.php
0.1% is more like it. But why does calling this PHP script from another PHP script use so much CPU? Is it because I have to execute a "new instance" of PHP when I exec PHP, which has a lot of overhead? If so, is there a way to exec a PHP script using an "already running" instance of PHP? Or is there another way of doing this?
I always say "when in doubt, look at PHP source code". In here, for instance. While doing exec, PHP has to fork the process, create a new stream, read from the input buffer, and so on.
Also, even though PHP compiles scripts to opcodes (instructions similar to Java bytecode), the newly forked process must run the opcode compiler again before executing them. You can read all about it here. In the end you run the compiler twice, once for each process.
Is it worth 9% of your CPU? I have no idea. Maybe. Maybe not. Who knows.
"Better solution"? Upgrade to latest version of PHP. PHP 5.6 is not supported anymore and security updates will cease in 3 months. Even better solution - keep a normal object-oriented and maintainable code without using exec. IMO, it's okay to play around with exec like you are. But if it's your production code, I pray for the souls of those, who would maintain your code after you.
Whichever way you run your application, be it mod_php or FPM, it relies on having worker processes ready to handle your request. Process management is built in: these systems do their best to keep as many idle workers around as you specify and to reuse them, precisely to avoid this problem of having to fork processes at the least desirable moment.
Not only is there overhead in executing new processes, the execution environment will be completely different too. If you look into your PHP configuration, there will be several php.ini files, one for each specific environment. This means one environment could have different modules enabled or an outright different configuration. It's not uncommon for CLI scripts to have max_execution_time or memory_limit set to unlimited. This can affect resource usage on your server, and it's also a pain to maintain.
Also, since your scripts will be running in a brand-new process in a different execution environment, they won't have access to some variables (like $_SERVER or $_POST) or capabilities like sending headers.
And there's this thing called shared memory. As @Alex mentions, scripts have to be compiled. If you have the opcode cache enabled (which you should), the bytecode gets cached when compiled, and this compilation step can be skipped if the resulting bytecode is already there. For this to work you need a persistent running process that keeps this memory around. If you are creating a new process, it can't access this shared area and has to do the compilation all by itself.
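The question about reusing an "already running" instance of PHP suggests one way out: dispatch in-process instead of exec'ing. A sketch, under the assumption that the handler scripts can simply be included (file names hypothetical):

<?php
// index.php - sketch of an exec-free dispatch; the handler files and their
// contract (reading $_GET themselves) are assumptions for illustration.
$url_map = array(
    'doSomething' => __DIR__ . '/handlers/do_something.php',
);

$action = isset($_GET['action']) ? $_GET['action'] : '';

if (isset($url_map[$action])) {
    // The handler runs inside the already-running, opcode-cached worker:
    // no fork, no second compilation, and $_GET/$_SERVER stay available.
    require $url_map[$action];
}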
I have developed an online Java code editor at http://joomla5.guru99.com/try-java-editor.html. I am invoking javac using the shell_exec function of PHP to compile the Java code:
$result = shell_exec('javac ' . $sourcejavafile . ' 2>&1');
and running the class file with:
$result = shell_exec('java ' . $classfile . ' 2>&1');
Now, for security purposes, I want to set a time limit for this Java code execution. For example, Java code execution should be stopped after some amount of time and all its processes must be killed.
I have tried the ulimit and ps commands but wasn't able to achieve this.
Please point me in the correct direction and help me make this possible.
Regards.
You can do it in 3 ways:
1) Call pcntl_fork in PHP and check the timeout in the parent process. Kill the child with the Linux kill command if it exceeds the limit.
2) Handle the timeout in a bash script that you invoke via shell_exec; see this example:
http://www.bashcookbook.com/bashinfo/source/bash-4.0/examples/scripts/timeout3
3) Use the proc_open / proc_terminate functions
Personally I would go with number 3; it's the cleanest. If you need quick and dirty, use number 2.
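A minimal sketch of option 3 might look like the following; the $classfile variable is taken from the question, while the 5-second limit and the polling interval are assumptions:

<?php
// Sketch: run the java command with a wall-clock timeout via proc_open.
$cmd = 'java ' . escapeshellarg($classfile) . ' 2>&1';
$pipes = array();
$process = proc_open($cmd, array(1 => array('pipe', 'w')), $pipes);

if (is_resource($process)) {
    stream_set_blocking($pipes[1], false);
    $start = time();
    $output = '';

    while (true) {
        $status = proc_get_status($process);
        if (!$status['running']) {
            break; // finished on its own
        }
        if (time() - $start > 5) {
            // SIGKILL after the time limit; note that if the command is
            // started through a shell, its children may need killing too
            proc_terminate($process, 9);
            break;
        }
        $output .= stream_get_contents($pipes[1]);
        usleep(100000); // poll every 100 ms
    }

    $output .= stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    proc_close($process);
}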
READ FURTHER BELOW AT "CLI" FOR THE CLI QUESTION, WHICH I JUST ADDED TO THE CONVERSATION! THX!
I have written a script which processes an XML file of around 160'000 entries (48.1MB) and a text file of 150'000 entries (31.1MB), including some directory searches for external files, heavy interlinking and recursive checks; the result is formatted and saved into HTML files.
I did review the program a couple of times and ended up with the most efficient code I could think of. This is a local program and the generator doesn't need to run regularly. One could argue that I should use another language than PHP, but PHP with simplexml, etc. just works best for me and for this purpose. Also, a set_time_limit('70000') doesn't bother me.
Here is my question, though: is it possible to make Apache2 on my Linux system use all 4 of my CPU cores to run my PHP script?
Even if I split the process and make several requests simultaneously, the CPU usage never goes above one core at a time.
I googled this topic but couldn't find a solution, so I may just have to run it overnight; even so, I would appreciate some help to boost that thing!!!
ADDED INFO - a screenshot of my process list was attached here (image not reproduced).
CLI:
I need to call my index.php in the Linux terminal to execute it. But I also want to send four POST variables ($_POST['example']) to the script. On top of that, I'd like my echo output written to some output file. Could anyone help quickly with the terminal command and the PHP code to pick up those 4 POST variables inside:
if (PHP_SAPI === 'cli')
{
// ...
}
? ...sorry but this is my first php-cli interaction. Thx!
No, a single PHP script will never use multiple threads and will thus always run on a single core.
Depending on how much the things you do depend on each other, you couldn't easily split them across multiple threads anyway.
EDIT: Author's response
This is not a real solution but a nice workaround: I clone my virtual machine (with the Linux/Apache2 install) and run the same process on different parts of the file on each VM, which lets the host system give one core to each virtual system. That way I cut the processing time down by roughly a factor of 4. Thanks for your posts!
===============
If it's local and you want to run it every now and then, you should probably just invoke it from a cron job. That way, you can spawn a process for each task you are doing. If you really do want to use PHP for it, you can even invoke PHP from the cron line.
Nonetheless, it sounds like you're doing an inherently single-threaded process anyway; if you want it faster, you should probably use something other than PHP for this.
Maybe you can use Spork! It's a PHP lib that allows you to fork the PHP process into multiple ones.
<?php

use Spork\Deferred\DeferredFactory;
use Spork\ProcessManager;

$manager = new ProcessManager(new DeferredFactory());
$manager->fork(function() {
    // do something in another process!
})->then(function($output, $status) {
    // do something in the parent process when it's done!
});
https://github.com/kriswallsmith/spork
SOLUTION, THX TO ThiefMaster and Zebediah49 for recommending CLI, and to my friend who supported me with these links: http://ch.php.net/manual/en/reserved.variables.argv.php / http://ch.php.net/manual/en/function.getopt.php
And here is how I call the PHP script through the CLI:
// whenRunFromCLI
// callCLI
// php index.php './data/xyfullFile1.xml' './data/xxfullFile2.utf' 0 60000
// php index.php './data/xyfullFile1.xml' './data/xxfullFile2.utf' 60000 120000
// php index.php './data/xyfullFile1.xml' './data/xxfullFile2.utf' 120000 all
if (PHP_SAPI === 'cli') {
    $_POST['xml']     = $argv[1];
    $_POST['example'] = $argv[2];
    #$_POST['rangeFrom'] = $argv[3];
    #$_POST['rangeTo']   = $argv[4];
}
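To also get the echo output into a file, as asked above, you can redirect stdout when invoking the script (the output file name is just an example):

php index.php './data/xyfullFile1.xml' './data/xxfullFile2.utf' 0 60000 > output1.txt 2>&1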
And here is the result of calling the PHP file in three terminals (screenshot not reproduced):
I know, I must give some more RAM to my virtual machine, lucky that I still have 8GB spare ;-)
Cheers and peace!
I created a script that runs in the background using the ignore_user_abort() function. However, I was foolish enough not to insert any sort of code to make the script stop and now it is sending e-mails every 30 seconds...
Is there any way to stop the script? I am in a shared hosting, so I don't have access to the command prompt, and I don't know the PID.
"Is there any way to stop the script? I am in a shared hosting, so I don't have access to the command prompt, and I don't know the PID."
Then no.
But are you sure you don't have any shell access? Even via PHP? If you do, you could try....
<?php
print `ps -ef | grep php`;
...and if you can identify the process from that then....
<?php
$pid=12345; // for example.
print `kill -9 $pid`;
And even if you don't have access to run shell commands, you may be able to find the pid in /proc (on a linux system) and terminate it using the POSIX extension....
<?php
$ps = glob('/proc/[0-9]*');
foreach ($ps as $p) {
    if (is_dir($p) && is_writeable($p)) {
        print "proc= " . basename($p);
        $cmd = file_get_contents($p . '/cmdline');
        print " / " . $cmd;
        if (preg_match('/(php).*(myscript.php)/', $cmd)) {
            // found our script - kill it by its PID (the directory name)
            posix_kill((int)basename($p), SIGKILL);
            print " xxxxx....";
            break;
        }
        print "\n";
    }
}
I came to this thread yesterday! I had, by mistake, an infinite loop in a page which was not supposed to be visited, and that pushed my I/O to 100 and CPU usage to 100. The I/O spike was because some PHP errors were being logged, and the log file size was growing beyond anything you can imagine.
None of the above tricks worked on my shared hosting.
MY SOLUTION
In cPanel, go to PHP Version.
Select any PHP version other than the current one, for the time being.
And then Apply Changes.
REASON WHY IT WORKED
The script with the infinite loop (and some PHP errors) was a process, so I just needed to kill it. Changing the PHP version forces a restart of services like PHP and Apache; since a restart was involved, the earlier processes were killed, and I could relax as the I/O and CPU usage stabilized. Also, I fixed that bug beforehand, before changing the PHP version back :)
How did you deploy the script? Surely you can just remove it (if that's an acceptable option). Otherwise, modify it and insert some logic that only allows it to send a mail once every n minutes/hours/days, based on the server time.
Re stopping the script from executing (or rather, the system trying to execute it): how did you schedule it for execution? Is it some type of GUI to a crontab or something? Can you not just undo what you did there (seeing as you have no access to the command line/terminal)?
rob ganly
Simply call the support and get it cancelled.
Next time, don't execute something you can't control.
I have a PHP website and I would like to execute a very long Python script in the background (300 MB of memory, 100 seconds). The process communication is done via the database: when the Python script finishes its job, it updates a field in the database, and then the website renders some graphics based on the results of the Python script.
I can execute "manually" the Python script from bash (any current directory) and it works. I would like to integrate it in PHP and I tried the function shell_exec:
shell_exec("python /full/path/to/my/script") but it's not working (I don't see any output)
Do you have any ideas or suggestions? It worths to mention that the python script is a wrapper over other polyglot tools (Java mixed with C++).
Thanks!
shell_exec returns a string; if you run it alone, it won't produce any output. So you can write:
$output = shell_exec(...);
print $output;
First off, set_time_limit(0); will make your script run forever, so the timeout shouldn't be an issue. Second, any *exec call in PHP does NOT use the PATH by default (this might depend on configuration), so your script will exit without giving any info on the problem; quite often it turns out it simply can't find the program, in this case python. So change it to:
shell_exec("/full/path/to/python /full/path/to/my/script");
If your Python script runs on its own without problems, then it's very likely this is the issue. As for the memory, I'm pretty sure PHP won't use the same memory Python is using. So if Python is using 300MB, PHP should stay at its default (say 1MB) and just wait for the end of shell_exec.
A problem could be that your script takes longer than the maximum request time defined for the server (this can be set in php.ini or httpd.conf).
Another issue could be that the server's account does not have the rights to execute or access the code or files needed for your script to run.
I found this a while back, and it helped me solve my background execution problem:
function background_exec($command)
{
    if (substr(php_uname(), 0, 7) == 'Windows') {
        // start a detached process on Windows
        pclose(popen('start "background_exec" ' . $command, 'r'));
    } else {
        // discard output and background the process on unix-like systems
        exec($command . ' > /dev/null &');
    }
}
Source:
http://www.warpturn.com/execute-a-background-process-on-windows-and-linux-with-php/
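With that helper, the Python wrapper from the question could presumably be launched like this (using the placeholder paths from the question):

background_exec('/full/path/to/python /full/path/to/my/script');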
Thanks for your answers, but none of them worked :(. I decided to implement it in a dirty way, using busy waiting, instead of triggering an event when a record is inserted.
I wrote a background process that runs forever and at each iteration checks whether there is something new in the database. When it finds a record, it executes the script and everything is fine. The idea is that I launch the background process from the shell.
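For what it's worth, a minimal sketch of such a polling worker could look like this; the PDO connection details and the jobs table schema are purely illustrative assumptions:

<?php
// Sketch of the busy-waiting background process described above.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password');

while (true) {
    $stmt = $pdo->query("SELECT id FROM jobs WHERE status = 'new' LIMIT 1");
    $job = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($job) {
        // run the long Python wrapper, then mark the record as processed
        shell_exec('/full/path/to/python /full/path/to/my/script');
        $upd = $pdo->prepare("UPDATE jobs SET status = 'done' WHERE id = ?");
        $upd->execute(array($job['id']));
    } else {
        sleep(5); // nothing new yet; don't hammer the database
    }
}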
I found that the issue, when I tried this, was the simple fact that I had not compiled the source on the server I was running it on. If you compile on your local machine and then upload the binary to your server, it can end up corrupted in some way. shell_exec() should work if you compile the source you are trying to run on the same server where your script runs.