I'm probably overthinking this, but I wanted others' input. I run a particular PHP script around 1 million times per day in the background to process incoming SNMP traps. I revise and test the script from the command line, and I have a lot of echo() calls throughout it so I can check the data as it's being processed during testing. I'm wondering whether I should wrap these echo() calls in a quick check that verifies the script is actually being run from the command line before echoing to the screen, in an effort to save CPU cycles when the script runs in the background. Or is the cost of the check versus the echo() negligible, making it not worth worrying about the few extra clock cycles wasted by calling echo() while the process runs in the background? Thanks in advance!
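A minimal sketch of the check in question: gate debug output on the SAPI name, so echo() only fires when the script is run from a terminal. The helper name debug_echo() is my own invention, not from the original.

```php
<?php
// php_sapi_name() returns "cli" for command-line runs; compute it once at
// startup so the per-call cost is a single boolean test.
define('IS_CLI', php_sapi_name() === 'cli');

function debug_echo(string $msg): void
{
    if (IS_CLI) {               // cheaper than the echo it guards
        echo $msg, "\n";
    }
}

debug_echo('processing trap...');   // visible on the CLI, silent in background
```

The comparison is so cheap relative to I/O that either choice is fine at 1 million runs/day; the check mainly keeps stray output from cluttering logs.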
I have some limitations with my host and my scripts can't run longer than 2 or 3 seconds. But the time it will take to finish will certainly increase as the database gets larger.
So I thought about making the script stop what it is doing and call itself after 2 seconds, for example.
Firstly I tried using cURL, and then I made some attempts with wget. But there is always a problem with waiting for the response and timeouts (with cURL, for example, I just need to ping the script, not wait for a response), or with permissions on the server (the functions used to run wget, such as exec(), seem to be disabled on my server, or something like that).
What do you think is the best idea to make a PHP script ping/call itself?
On Unix/Linux systems I would personally recommend scheduling cron jobs to keep running the script at set intervals.
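For example, a crontab entry along these lines runs the script once a minute (the paths here are placeholders, not from the original):

```
* * * * * /usr/bin/php /path/to/script.php >> /var/log/script.log 2>&1
```

Redirecting stdout and stderr to a log file keeps cron from mailing you the output on every run.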
PHP scripts generally don't call other PHP scripts. It is possible to spawn a background process as illustrated here, but I don't think that's what you're after. If so, you'd be better off using cron, as was discussed above.
Calling a function every X amount of seconds with the same script is certainly possible, but this does the opposite of what you want since it would only extend the run time of the script in question.
What you seem to be asking is, contrary to your comment, somewhat paradoxical. A process that calls method() every so often is still a long running process and is subject to the same restrictions as any other process on the server, regardless of the fact that it may be sitting idle for short intervals.
As far as I can see your options are:
Extend the php max_execution_time directive, or have your sysadmin do so if they are willing
Revise your script so that it completes within the time limit
Move to a new server
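A sketch of the first option, assuming the host lets you change the limit at runtime (shared hosts sometimes lock this down). The 300-second value is an example, not a recommendation:

```php
<?php
// Raise the limit for this one script instead of editing php.ini globally.
set_time_limit(300);                    // resets the timer; new cap is 300 s
ini_set('max_execution_time', '300');   // equivalent ini-level override
```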
I have a PHP script that loads upwards of 60 million strings from a SQL call, does some processing on them, and writes them to a file. Yeah, sounds horrid, but it does the job it needs to do.
The script takes about an hour to run (mostly loading from the SQL call) which is fine.
But when it's done, it takes upwards of another hour to exit. I can only presume it's freeing up all of the memory consumed.
I tried taking out the calls to free the resources from the SQL call to see if I could just let the script end and the process end, and it makes no difference.
Am I missing something obvious? Shouldn't the script be able to exit and let the OS kill the process? The memory comes back if I kill the process by hand (sudo kill) after I know the script is done. Is there a way to make this happen and say, "look, I'm all done, just nuke yourself?"
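One workaround sometimes used for exactly this (slow interpreter shutdown while millions of zvals are freed one by one) is to let the OS reclaim the process in a single step. This is a sketch, not an endorsement; it assumes the posix extension is loaded and skips all destructors and shutdown handlers, so flush anything important first:

```php
<?php
// fast_exit() is a hypothetical helper name. It bypasses PHP's shutdown
// sequence by sending the process SIGKILL, falling back to a normal exit.
function fast_exit(int $status = 0): void
{
    fflush(STDOUT);                            // don't lose buffered output
    if (function_exists('posix_kill')) {
        posix_kill(posix_getpid(), SIGKILL);   // never returns; no destructors
    }
    exit($status);                             // fallback: normal shutdown
}

// ... after the output file has been written and closed:
// fast_exit();
```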
I have a php script I need to run every 5 seconds (run, wait until it's done, wait 5 seconds, run again)
I have two ways of doing it: either have an infinite loop in the script with a sleep() call, which would look like this:
while (1)
{
    do_stuff();
    sleep(5);
}
or have a bash script that runs the PHP script and waits 5 seconds, which would look like this:
while [ 1 ]; do
php script.php &
wait
sleep 5
done
I was wondering which is the most efficient way to do it.
The PHP script I am running is a CodeIgniter controller that does a lot of database calls.
If you're doing a lot of DB calls, then do the sleep within PHP. That way you're not paying the PHP startup penalty, the connect-to-the-DB penalty, etc. that'll be incurred if you sleep in bash.
When you do the bash loop, you'll be starting/running/exiting your script each iteration, and that overhead adds up quickly on long-running scripts.
On the other hand, at least you'll start with a "clean" environment each time, and won't have to worry about memory leaks within your script. You may want to combine both, so that you loop/sleep within PHP (say) 100 times, then exit and loop around in Bash.
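A sketch of that combined approach: loop/sleep inside PHP for a bounded number of iterations, then exit so the outer bash loop restarts the process with a clean heap. do_stuff() and run_batch() are stand-ins for the real CodeIgniter work:

```php
<?php
$GLOBALS['iterations_done'] = 0;

function do_stuff(): void
{
    $GLOBALS['iterations_done']++;   // real database calls would go here
}

function run_batch(int $iterations, int $delaySeconds): void
{
    for ($i = 0; $i < $iterations; $i++) {
        do_stuff();
        if ($i < $iterations - 1) {
            sleep($delaySeconds);    // pause between runs, skip after the last
        }
    }
}

// run_batch(100, 5);  // then exit; bash's `sleep 5` loop starts a fresh process
```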
Typically all PHP scripts have a timeout - unless you are running from CLI - so your first method will work for 30 to 60 seconds (depending on your server configuration) and then will be forcibly terminated.
I'd tend to suggest either a command-line option or even (and this depends on how regularly you want the code to run) a cron command to execute it.
This might depend on what else the system is doing. I would think the bash solution could be handled better by the system, freeing up resources for other applications between runs.
The number of database calls seems less important, because you need to make those either way. The cost of running the script from bash is the time to build up and tear down the CodeIgniter framework, but CodeIgniter is already designed to do this on every normal web request and performs fast enough.
I have a PHP script running on my server via a cron job. The job runs every minute. In the PHP script I have a loop that executes, then waits one second and loops again, essentially creating a script that runs once every second.
Now I'm wondering: if I make the cron job run only once per hour and have the script loop for an entire hour, or possibly an entire day, would this have any impact on the server's CPU and/or memory, and if so, would it be positive or negative?
I spot a design flaw.
You can always have a PHP script permanently running in a loop performing whatever functionality you require, without dependency upon a webserver or clients.
You are obviously checking something with this script; any insight into what? There may be better solutions for you. For example, if it is a database, consider SQL triggers.
In my opinion it would have a negative impact, since the script keeps using resources the whole time.
cron's scheduler is already running on the server, so piggybacking on it costs almost nothing.
But a cron job can only run once a minute at most.
Another thing: if the script times out, fails, or crashes for whatever reason, you end up not running it for up to an hour. That would reduce server load, but it's probably not what you're looking for, I guess? :)
Maybe run it every 2 or even 5 minutes to spare server load?
Or change the script so it does not wait but just executes once, and call it from a cron job. That should have a positive impact on server load.
I think you should change the script logic if possible.
If the tasks your script executes are not periodic but are triggered by some events, then you can use a message queue (like Gearman).
Otherwise your solution is OK. Memory leaks can occur, but in newer PHP versions (5.3.x) the garbage collector is pretty good. Some extensions can lead to memory leaks, or your application design can lead to heavy memory usage (like Doctrine ORM's loaded-objects cache).
But you can control the script's memory usage with tools like monit, restarting the script when memory reaches some limit or starting it again when it unexpectedly shuts down.
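A hypothetical monit stanza for such a worker might look like this (the process name, pidfile path, and 256 MB threshold are all placeholders):

```
check process php-worker with pidfile /var/run/php-worker.pid
  start program = "/usr/bin/php /path/to/worker.php"
  if totalmem > 256 MB for 3 cycles then restart
```

This assumes the worker writes its own pidfile on startup; monit then polls it and restarts the process when the memory condition holds.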
I am looking for the PHP equivalent for VB doevents.
I have written a realtime analysis package in VB and used doevents to release to the operating system.
Doevents allows me to stay in memory and run continuously without filling up memory and allows me to respond to user input.
I have rewritten the package in PHP and I am looking for that same doevents feature.
If it doesn't exist I could reschedule myself and exit.
But I currently don't know how to do that and I think that would add a lot more overhead.
Thank you, gerardg
usleep() is what you are looking for. It delays program execution for the given number of microseconds.
http://php.net/manual/en/function.usleep.php
It's been almost 10 years since I last wrote anything in VB, but as I recall, the DoEvents() function allowed the application to yield the processor during intensive processing (usually to allow other system events to fire, the most common being WM_PAINT, so that your UI won't appear hung).
I don't think PHP has such functionality - your script will run as a single process and end (either when it's done or when it hits the default 30 second timeout).
If you are thinking in terms of threads (as most Windows programmers tend to do) and needing to spawn more than 1 instance of your script, perhaps you should take look at PHP's Process Control functions as a start.
I'm not entirely sure which aspects of doevents you're looking to emulate, so here's pretty much everything that could be useful for you.
You can use ob_implicit_flush(true) at the top of your script to enable implicit output buffer flushing. That means that whenever your script calls echo or print or whatever you use to display stuff, PHP will automatically send it all to the user's browser. You could also just use ob_flush() after each call to display something, which acts more like Application.DoEvents() in VB with regards to keeping your UI active, but must be called each time something is output.
Naturally if your script uses the output buffer already, you could build a copy of the buffer before flushing, with ob_get_contents().
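A sketch of that flushing setup, the closest analogue to keeping a VB UI responsive with DoEvents():

```php
<?php
ob_implicit_flush(true);        // flush after every output call from now on
while (ob_get_level() > 0) {    // close any buffers opened by config/SAPI
    ob_end_flush();
}

for ($i = 1; $i <= 3; $i++) {
    echo "step $i done\n";      // reaches the client immediately
    // ... slow work for this step ...
}
```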
If you need to allow the script to run for more time than usual, you can set a longer timeout with set_time_limit($time). If you need more memory, and you have access to edit your .htaccess file, place the following code and edit the value:
php_value memory_limit 64M
That sets the memory limit to 64 megabytes.
For running multiple scripts at once, you can pcntl_fork() and then call pcntl_exec() in the child; note that pcntl_exec() replaces the current process image rather than spawning a new process alongside it.
If I am missing something important about DoEvents(), let me know and I will try to help you make it work.
PHP is designed for short-lived, on-demand request processing. However, it can be forced to become a background task with a little hackery.
As PHP runs as a single thread, you do not have to worry about letting the CPU do other things; the operating system's scheduler already takes care of that. If it didn't, a web server would only be able to serve one page at a time and all other requests would have to sit in a queue. You will need to write some sort of loop that never exits until some detectable condition happens (like a "now please exit" flag you set in the DB or something).
As pointed out by others, you will need set_time_limit($something); perhaps with usleep() to stop the code from running "too fast" if each loop iteration eats a lot of CPU. However, if you are also using a database connection, most of your script's time is actually spent waiting for the database (by far the biggest overhead for a script).
I have seen PHP worker processes created by starting them under screen and detaching it as a background task. Other approaches also work, so long as you do not have a session that will time out or exit (say, when the web browser is closed). A cron job that checks every x minutes or hours whether the script is still running gives you automatic recovery from forced exits and/or system restarts.
TL;DR: doevents is "baked in" to PHP and you don't have to worry about it.
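A sketch of the worker loop described above. The "now please exit" condition is modelled here as a flag file; in practice it might be a row in the DB. STOP_FLAG, should_exit(), run_worker(), and the 200 ms interval are all assumptions, not from the original:

```php
<?php
define('STOP_FLAG', sys_get_temp_dir() . '/worker.stop');

set_time_limit(0);                  // a daemon must not hit the script timeout

function should_exit(): bool
{
    return file_exists(STOP_FLAG);  // e.g. SELECT flag FROM control LIMIT 1
}

function run_worker(): void
{
    while (!should_exit()) {
        // ... handle one unit of work (one trap, one queue item, ...) ...
        usleep(200000);             // 200 ms idle pause so we don't spin the CPU
    }
}

// run_worker();
```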