Why, in Chrome, when I hit the "STOP" button while a PHP script is executing, does it not stop execution? Even closing the tab doesn't stop it. ignore_user_abort() is false. Any ideas on how to force PHP to stop? I've got a large script that makes some big files and executes for about 10 minutes...
There is no way for a user to stop a PHP script. Since PHP runs on the server when a page is requested, you cannot stop it unless your script contains a condition that makes it quit when you want it to.
As I mentioned in a comment, this answer to a previous question has a lot of info on this.
But one takeaway that may be your issue: PHP may not know it's disconnected until it tries to send data and is refused, see this bit in the docs:
the next time your script tries to output something PHP will detect that the connection has been aborted
So depending on your required output, you may be able to send some sort of "heartbeat" data to the browser that will trigger the abort if the user disconnects. If it doesn't seem to be sending, try doing some explicit buffer-clearing with flush(). If you (or your framework) are using output buffering, you may have to work around it.
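As a rough sketch of that heartbeat idea (do_next_chunk() and $workRemaining are invented placeholders, not anything from your code):

ignore_user_abort(false);              // default: PHP should die when the client disconnects
while ($workRemaining) {
    $workRemaining = do_next_chunk();  // hypothetical: one slice of the real work
    echo ' ';                          // heartbeat byte; writing is what reveals the abort
    flush();                           // push it out of PHP's buffer toward the browser
    if (connection_aborted()) {        // belt-and-braces check after the write
        exit;
    }
}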
If you can't do that (if you're generating an output file or some such), you'll likely have to rearrange things, since there's no way PHP can know the connection is closed. My suggestion is to use a queueing system to offload the generation of things to a separate script that you can then cancel/kill manually - here's a good overview of queueing systems, I personally use beanstalkd with PHP - it's simple, easy, works splendidly, and has some good PHP libraries (I've used pheanstalk and davidpersson's beanstalk). Any time you're generating large files like that, you should probably be using a queueing system anyway.
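For a rough idea, producing a job with pheanstalk looks something like this (the tube name and payload are invented, and the constructor differs between library versions, so check its README):

$pheanstalk = new Pheanstalk\Pheanstalk('127.0.0.1');
$pheanstalk->useTube('file-generation')    // hypothetical tube name
           ->put(json_encode(array('source' => '/path/to/input')));
// a separate worker script reserves jobs from that tube and can be
// cancelled or killed without touching the web request at all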
Related
I'm looking for some ideas to do the following. I need a PHP script to perform a certain action for quite a long time. This is an extension for a CMS, and it can't be anything else but PHP. It also can't be a command-line script, because it will be used by ordinary people who have only the standard means of the CMS. One option is a cron job (most simple hostings have it) that triggers the script often, so that instead of working for a long time it performs the action step by step, preserving its state from one launch to the next. This is not perfect, but I can't think of any other solutions. If the script keeps redirecting to itself, the server will interrupt it. What other options could suit?
Thanks everyone in advance!
What you're talking about is a daemon: a long-running program that waits for calls from client programs, performs an action, provides a response, then keeps on waiting for more calls.
You might be familiar w/ these in the form of Apache & MySQL ;) Anyway, PHP is generally OK in this regard: it can work over raw sockets as well as fork sub-processes to handle multiple requests simultaneously.
Having said that, PHP daemons are a tool where YMMV. Some folks will say they work great; other folks like me will say they have issues with interprocess communication and leaking memory even amid a plethora of unset() calls.
Anyway you likely won't be able to deploy a daemon of any type on a shared hosting environment. You'll need to get a better server package or stick with a Cron based solution.
Here's a link about writing a PHP daemon.
Also, one more note: daemons do crash from time to time, so you may still need to store state about what's going on, just in case someone trips over the power cord to your shared server :)
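To make that concrete, a bare-bones daemon skeleton might look something like this (the state file and do_one_step() are invented, and the signal handling is deliberately minimal):

declare(ticks=1);                          // allow signal delivery on older PHP versions
pcntl_signal(SIGTERM, function () { exit; });
$state = @file_get_contents('/tmp/worker.state');    // hypothetical state file
while (true) {
    $state = do_one_step($state);          // hypothetical unit of work
    file_put_contents('/tmp/worker.state', $state);  // persist so a crash can resume
    usleep(200000);                        // sleep 0.2s between steps
}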
I would also suggest that you think about making it a daemon, but if not, you can simply use
set_time_limit(0);
ignore_user_abort(true);
at the top to tell it not to time out and not to be interrupted by anything. Then call it from cron to start it every day or whatever. I have this on many long-running daily tasks and it works great for me. However, it won't easily be able to talk to the outside world (other scripts can't query it or anything; if that is what you want, look into PHP services), so once you get it running, make sure it will stop, and have it print its progress to a logfile.
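For illustration, a rough skeleton of that setup (the crontab line, paths, and work functions are all placeholders):

// crontab entry, e.g.: 0 3 * * * /usr/bin/php /path/to/longtask.php
set_time_limit(0);                         // no execution time limit
ignore_user_abort(true);                   // keep going even if the caller disconnects
foreach (get_pending_items() as $item) {   // hypothetical work source
    process($item);                        // hypothetical
    file_put_contents('/tmp/longtask.log',
        date('c') . " finished $item\n", FILE_APPEND);  // progress log
}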
I am looking for the PHP equivalent for VB doevents.
I have written a realtime analysis package in VB and used doevents to release to the operating system.
Doevents allows me to stay in memory and run continuously without filling up memory and allows me to respond to user input.
I have rewritten the package in PHP and I am looking for that same doevents feature.
If it doesn't exist I could reschedule myself and exit.
But I currently don't know how to do that and I think that would add a lot more overhead.
Thank you, gerardg
usleep() is what you are looking for. It delays program execution for the given number of microseconds.
http://php.net/manual/en/function.usleep.php
It's been almost 10 years since I last wrote anything in VB, but as I recall, the doevents() function allowed the application to yield the processor during intensive processing (usually to allow other system events to fire, the most common being WM_PAINT, so that your UI wouldn't appear hung).
I don't think PHP has such functionality - your script will run as a single process and end (either when it's done or when it hits the default 30 second timeout).
If you are thinking in terms of threads (as most Windows programmers tend to do) and needing to spawn more than one instance of your script, perhaps you should take a look at PHP's Process Control functions as a start.
I'm not entirely sure which aspects of doevents you're looking to emulate, so here's pretty much everything that could be useful for you.
You can use ob_implicit_flush(true) at the top of your script to enable implicit output buffer flushing. That means that whenever your script calls echo or print or whatever you use to display stuff, PHP will automatically send it all to the user's browser. You could also just use ob_flush() after each call to display something, which acts more like Application.DoEvents() in VB with regards to keeping your UI active, but must be called each time something is output.
Naturally if your script uses the output buffer already, you could build a copy of the buffer before flushing, with ob_get_contents().
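For instance, a sketch of the flushing approach (heavy_step() is invented):

while (ob_get_level() > 0) { ob_end_flush(); }  // drop any buffers a framework started
ob_implicit_flush(true);                         // from here on, every echo is flushed
for ($i = 1; $i <= 10; $i++) {
    heavy_step($i);                              // hypothetical chunk of work
    echo "step $i done\n";                       // reaches the browser immediately
}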
If you need to allow the script to run for more time than usual, you can set a longer timeout with set_time_limit($time). If you need more memory, and you have access to edit your .htaccess file, place the following code and edit the value:
php_value memory_limit 64M
That sets the memory limit to 64 megabytes.
For running multiple scripts at once, you can use pcntl_fork() and then pcntl_exec() in the child (pcntl_exec() by itself replaces the current process rather than starting a new one).
If I am missing something important about DoEvents(), let me know and I will try to help you make it work.
PHP is designed for on-demand request processing; however, it can be forced to act as a background task with a little hackery.
As PHP runs as a single thread, you do not have to worry about letting the CPU do other things; that is already taken care of. If this were not the case, a web server would only be able to serve one page at a time and all other requests would have to sit in a queue. You will need to write some sort of loop that never exits until some detectable condition happens (like a "now please exit" message you set in the DB or something).
As pointed out by others, you will need to set_time_limit($something); with perhaps usleep() to stop the code from running "too fast" if it eats too much CPU each loop. However, if you are also using a database connection, most of your script's time is actually spent waiting on the database (by far the biggest overhead for a script).
I have seen PHP workers created by using screen and detaching it as a background task. Other approaches also work, so long as you do not have a session that will time out or exit (say, when the web browser is closed). A cron job that checks every x minutes or hours whether the script is still running gives you automatic recovery from forced exits and/or system restarts.
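As a sketch, such a loop might look like this (the flag lookup and work function are invented):

set_time_limit(0);                         // run until told to stop
while (true) {
    if (db_flag_is_set('please_exit')) {   // hypothetical: the detectable condition
        break;
    }
    do_pending_work();                     // hypothetical
    usleep(500000);                        // back off 0.5s so the loop isn't spinning hot
}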
TL;DR: doevents is "baked in" to PHP and you don't have to worry about it.
I'm trying to build a web interface for some python scripts. The thing is I have to use PHP (and not CGI) and some of the scripts I execute take quite some time to finish: 5-10 minutes. Is it possible for PHP to communicate with the scripts and display some sort of progress status? This should allow the user to use the webpage as the task runs and display some status in the meantime or just a message when it's done.
Currently I'm using exec(), and on completion I process the output. The server is running on a Windows machine, so pcntl_fork will not work.
LATER EDIT:
Using another PHP script to feed the main page information via AJAX doesn't seem to work, because the server kills it (it reaches max execution time, and I don't really want to increase this unless necessary).
I was thinking about socket-based communication, but I don't see how it is useful in my case (some hints, maybe?).
Thank you
You want inter-process communication. Sockets are the first thing that comes to mind; you'd need to set up a socket to listen for a connection (on the same machine) in PHP and set up a socket to connect to the listening socket in Python and send it its status.
Have a look at this socket programming overview from the Python documentation and the Python socket module's documentation (especially the examples at the end). I'm sure PHP has similar resources.
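On the PHP side, a bare-bones listener might look something like this (the port and one-line message format are invented):

$server = stream_socket_server('tcp://127.0.0.1:9999', $errno, $errstr);
if ($server === false) {
    die("$errstr ($errno)");
}
while ($conn = stream_socket_accept($server, 30)) {  // wait up to 30s for the worker
    $status = fgets($conn);       // one status line sent by the Python script
    fclose($conn);
    // store or display $status however suits the page
}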
Once you've got a more specific idea of what you want to build and need help, feel free to ask a new question on StackOverflow (if it isn't already answered).
I think you would have to use a meta refresh and maybe have the Python script write the status to a file, then have the PHP read from it.
You could use AJAX as well to make it more dynamic.
Also, probably shouldn't use exec()...that opens up a world of vulnerabilities.
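For what it's worth, the file-based idea above needs almost no PHP; a sketch (the path is invented, and the Python script is assumed to overwrite the file with its progress):

// status.php - polled by the page via AJAX or a meta refresh
$file = '/tmp/job-status.txt';    // hypothetical; written by the Python script
echo file_exists($file) ? file_get_contents($file) : 'not started';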
You could use a queuing service like Gearman, with a client in PHP and a worker in Python or vice versa.
Someone has created an example setup here.
https://github.com/dbaltas/gearman-python-worker
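With the pecl gearman extension, submitting a background job from PHP looks roughly like this (the function name "analyze" stands in for whatever your Python worker registers; 4730 is gearmand's default port):

$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$handle = $client->doBackground('analyze', json_encode(array('id' => 42)));
// doBackground() returns immediately with a job handle;
// the Python worker picks the job up and reports progress its own way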
Unfortunately my friend, I do believe you'll need to use Sockets as you requested. :( I have little experience working with them, but This Python Tutorial on Sockets/Network Programming may help you get the Python socket interaction you need. (Beau Martinez's links seem promising as well.)
You'd also need to get some PHP socket connections, too, so it can request the status.
Continuing on that, my thoughts would be that your Python script is likely going to run in a loop. Ergo, I'd put the "check for a status request" test at the beginning of that loop. It'd reply with one status, while a later iteration of the loop would reply with an increased status, etc.
Good luck!
Edit: I think that the file-writing recommendation from Thomas Schultz is probably the easiest to implement. The only downside is waiting for the file to be opened -- you'll need to make sure your PHP and Python scripts don't hang or return failure without trying again.
I'm currently running a Linux based VPS, with 768MB of Ram.
I have an application which collects details of domains and then connects to a service via cURL to retrieve pagerank details for these domains.
When I run a check on about 50 domains, it takes the remote page about 3 minutes to load with all the results before the script can parse the details and return them to my script. This causes a problem, as nothing else seems to function until the script has finished executing, so users on the site just get a timer / 'ball of death' while waiting for pages to load.
(The remote page retrieves the domain details and updates the page via AJAX, but the cURL request doesn't (rightly) return until the page has finished loading.)
Can anyone tell me if I'm doing anything obviously wrong, or if there is a better way of doing it? (There can be anything between 10 and 10,000 domains queued, so I need a process that can run in the background without affecting the rest of the site.)
Thanks
A more sensible approach would be to "batch process" the domain data via the use of a cron triggered PHP cli script.
As such, once you'd inserted the relevant domains into a database table with a "processed" flag set as false, the background script would then:
Scan the database for domains that aren't marked as processed.
Carry out the CURL lookup, etc.
Update the database record accordingly and mark it as processed.
...
To ensure no overlap with an already-executing batch-processing script, you should only invoke the PHP script every five minutes from cron and (within the PHP script itself) check at the start of the "scan" stage how long the script has been running, exiting if it's been running for four minutes or longer. (You might want to adjust these figures, but hopefully you can see where I'm going with this.)
By using this approach, you'll be able to leave the background script running indefinitely (as it's invoked via cron, it'll automatically start after reboots, etc.) and simply add domains to the database/review the results of processing, etc. via a separate web front end.
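To illustrate, the cron-invoked script might be structured roughly like this (every helper function here is an invented placeholder):

$start = time();
foreach (fetch_unprocessed_domains() as $domain) {   // hypothetical DB query
    if (time() - $start >= 240) {    // bail out after ~4 minutes;
        exit;                        // the next cron run picks up where we left off
    }
    $rank = curl_pagerank_lookup($domain);           // hypothetical cURL wrapper
    mark_processed($domain, $rank);                  // hypothetical DB update
}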
This isn't the ideal solution, but if you need to trigger this process based on a user request, you can add the following at the end of your script.
set_time_limit(0);
flush();
This will allow the PHP script to continue running while still returning output to the user. But seriously, you should use batch processing; it will give you much more control over what's going on.
Firstly, I'm sorry, but I'm an idiot! :)
I've loaded the site in another browser (FF) and it loads fine.
It seems Chrome puts some sort of lock on a domain while it's waiting for a server response, and I was testing the script manually through a browser.
Thanks for all your help and sorry for wasting your time.
CJ
While I agree with others that you should consider processing these tasks outside of your webserver, in a more controlled manner, I'll offer an explanation for the "server standstill".
If you're using native PHP sessions, PHP uses an exclusive locking scheme, so only a single PHP process can deal with a given session ID at a time. Having a long-running PHP script which uses sessions can certainly cause this.
You can search for combinations of terms like:
php session concurrency lock session_write_close()
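In short, the usual fix is to release the lock as soon as you've read what you need; a minimal sketch (the slow work function is invented):

session_start();
$userId = $_SESSION['user_id'];   // read whatever the script needs first
session_write_close();            // release the lock; other requests can now proceed
run_long_curl_batch();            // hypothetical slow work; the session is read-only now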
I'm sure it's been discussed many times here. I'm too lazy to search for you. Maybe someone else will come along and make an answer with bulleted lists and pretty hyperlinks in exchange for StackOverflow reputation :) But not me :)
Good luck.
I'm not sure how your code is structured but you could try using sleep(). That's what I use when batch processing.
Does anyone know how to close the connection (besides just flush()?) but keep executing some code afterwards?
I don't want the client to see the long process that may occur after the page is done.
You might want to look at pcntl_fork() -- it allows you to fork your current script and run the copy in a separate process.
I used it in a project where a user uploaded a file and the script then performed various operations on it, including communicating with a third-party server, which could take a long time. After the initial upload, the script forked and displayed the next page to the user, and the parent killed itself off. The child then continued executing, and was queried by the returned page for its status using AJAX. It made the application much more responsive, and the user got feedback on the status while it was executing.
This link has more on how to use it:
Thorough look at PHP's pcntl_fork() (Apr 2007; by Frans-Jan van Steenbeek)
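In outline, that fork-and-detach pattern looks something like this (error handling trimmed; the processing function and $path are hypothetical):

$pid = pcntl_fork();
if ($pid === -1) {
    die('could not fork');
} elseif ($pid > 0) {
    exit;    // parent: the response page has already gone out, so die off
} else {
    process_uploaded_file($path);   // child: hypothetical long third-party work
}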
If you can't use pcntl_fork, you can always fall back to returning a page quickly that fires an AJAX request to execute more items from a queue.
mvds reminds the following (which can apply in a specific server configuration): Don't fork the entire apache webserver, but start a separate process instead. Let that process fork off a child which lives on. Look for proc_open to get full fd interaction between your php script and the process.
I don't want the client to see the long process that may occur after the page is done.
Sadly, the page isn't done until after the long process has finished; hence what you ask for is impossible (to implement in the way you infer), I'm afraid.
The key here, pointed to by Jhong's answer and inversely suggested by animusen's comment, is that the whole point of what we do with HTTP as web developers is to respond to a request as quickly as possible and end there. That's it; so if you're doing anything else, it points to a design decision that could perhaps have been a little better :)
Typically, you take the additional task you are doing after returning the 'page' and hand it over to some other process, normally that means placing the task in a job queue and having a cli daemon or a cron job pick it up and do what's needed.
The exact solution is specific to what you're doing, and the answer to a different (set of) questions; but for this one it comes down to: no, you can't close the connection, and one would advise you to look at refactoring the long-running process out of that script/page.
Take a look at PHP's ignore_user_abort setting. You can set it using the ignore_user_abort() function.
An example of (optional) use has been given (and has been reported working by the OP) in the following duplicate question:
close a connection early (Sep 2008)
It basically gives reference to user-notes in the PHP manual. A central one is
Connection Handling user-note #71172 (Nov 2006)
which is also the base for the following two I'd like to suggest you to look into:
Connection Handling user-note #89177 (Feb 2009)
Connection Handling user-note #93441 (Sep 2009)
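The pattern those notes describe boils down to something like this sketch (it's sensitive to server config; output compression, for instance, tends to break it, and on PHP-FPM fastcgi_finish_request() is a cleaner route):

ignore_user_abort(true);                       // keep running after the client leaves
set_time_limit(0);
ob_start();
echo 'All done, check back later.';            // everything the client should see
header('Connection: close');
header('Content-Length: ' . ob_get_length());  // lets the browser stop waiting
ob_end_flush();
flush();
// the connection is (usually) closed at this point; the long work continues
do_long_running_work();                        // hypothetical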
Don't fork the entire apache webserver, but start a separate process instead. Let that process fork off a child which lives on. Look for proc_open to get full fd interaction between your php script and the process.
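For instance, proc_open() gives the controlling script full pipe access to that separate process (the worker command is a placeholder):

$spec = array(
    0 => array('pipe', 'r'),   // child's stdin
    1 => array('pipe', 'w'),   // child's stdout
    2 => array('pipe', 'w'),   // child's stderr
);
$proc = proc_open('php worker.php', $spec, $pipes);   // hypothetical worker script
if (is_resource($proc)) {
    fwrite($pipes[0], "start\n");   // full fd interaction both ways
    echo fgets($pipes[1]);
    fclose($pipes[0]); fclose($pipes[1]); fclose($pipes[2]);
    proc_close($proc);
}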
We solved this issue by inserting the work that needs to be done into a job queue, and then have a cron-script pick up the backend jobs regularly. Probably not exactly what you need, but it works very well for data-intensive processes.
(you could also use Zend Server's job queue, if you've got a wad of cash and want a tried-and-tested solution)