I have a function that imports data from Excel into a database. I made this function run on the server so that it no longer needs to interact with the client: the client's web browser just needs to upload the Excel file to the server, and after that the task runs entirely on the server, so even if the client closes the browser, the function keeps running. I have that part working. The problem is that when the client leaves the browser open, the page keeps loading for as long as the function is still active. How can I make the browser not wait for a response from the server, so the page isn't stuck loading while the process runs on the server? Please help me.
Use a message queue to offload the task of processing the file from the web server to another daemon running separately.
You can take the cheap and easy route of exec'ing a process with & on the command line, causing it to be backgrounded. However, that gives you little control over the task and no status reporting.
The right way to go about it, IMO, is to queue up these long-running tasks in a database with some status info associated with them, then have a dedicated process, running separately from your webserver, that checks the database for tasks, performs them, and updates the database with success/failure status.
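A minimal sketch of that pattern, assuming a hypothetical tasks table with id, payload, and status columns (all names invented for illustration):

    <?php
    // worker.php - the dedicated process; run it with: php worker.php
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    while (true) {
        // Claim the oldest pending task (assumes a single worker; with
        // several workers you would need locking here).
        $task = $pdo->query("SELECT id, payload FROM tasks WHERE status = 'pending' ORDER BY id LIMIT 1")
                    ->fetch(PDO::FETCH_ASSOC);
        if ($task === false) {
            sleep(5);        // nothing to do; poll again shortly
            continue;
        }

        $pdo->prepare("UPDATE tasks SET status = 'running' WHERE id = ?")->execute([$task['id']]);
        try {
            processTask($task['payload']);   // placeholder for the real work
            $status = 'done';
        } catch (Exception $e) {
            $status = 'failed';
        }
        $pdo->prepare("UPDATE tasks SET status = ? WHERE id = ?")->execute([$status, $task['id']]);
    }

The web request only INSERTs a 'pending' row and returns immediately; the browser never waits on the import itself.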
Look into using a queue such as Mseven's Queue Plugin.
Or, if you want a more daemon-based job, look into Beanstalkd. The queue plugin by mseven is pretty self-explanatory, though. Stay away from forking processes using &; it can get out of control.
I have a question that I hope you can help me solve.
I have created a PHP script that loads a CSV file containing a large amount of data (I upload the file with an AJAX request). The script extracts the data from the file, checks that the data is not already stored in the database, uses another script to obtain information about each record extracted from the file, and finally saves the records that pass all of that validation to a database table.
The process can last anywhere from a few seconds to many minutes, because some of the files I upload contain more than 100 thousand records, so I would rather not leave the browser open for the entire time the process runs.
What I want to know is how I can leave this process running internally on the server when I close the browser: something like putting it in a queue and letting it continue running after I close my browser.
Then, when I reopen the browser, I can open the script's page and see how the process is currently going. The idea is that the data processing is not interrupted when I close my browser.
Any suggestions or examples you could give me to achieve this?
Based on your description, I think you'd be better off running a dedicated daemon (either a third-party one or one written by yourself) that does the background work.
The rationale for why I don't think it's right to do this in your web-facing PHP code is:
If you fork it from your server code, you have to install something extra (e.g. the pcntl extension), and since it is a fork, the spawned process will inherit data from the parent process that is of no use to it at all.
With a dedicated daemon, it's easier for you to track the status of each job, and, more importantly, you avoid spawning a whole bunch of processes, which is what happens if you fork a new process for each job in the server code.
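As a rough illustration, such a daemon can be an ordinary PHP CLI script with its own loop, started once and left running (fetchNextJob() and handleJob() are placeholders for your queue and your CSV-processing logic):

    <?php
    // daemon.php - start once with: nohup php daemon.php &
    while (true) {
        $job = fetchNextJob();   // e.g. read from a DB table or message queue
        if ($job === null) {
            sleep(2);            // idle briefly instead of busy-waiting
            continue;
        }
        handleJob($job);         // process one CSV import at a time
    }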
I have a website, created using PHP and running on Apache. I want a subscriber to be able to log in and start a process on the server. They can then log out or close the browser without interrupting the process. Later they can log in and see the progress or see the results of the original process. What is the best way to accomplish this (having the process run until completion, after the browser is closed)?
Just looking for someone to point me in the right direction. A few people mentioned Gearman.
Gearman would be an ideal candidate, and I would use it for exactly the purpose you describe. It has everything you need out of the box to meet your requirements ("background" a long running, CPU-bound process to another machine, e.g. video encoding).
There is a Gearman PHP library, but you can write your worker code in a different language if it's better suited to doing the work.
For reporting progress information, I recommend having the worker write to Redis or Memcached - some kind of temporary storage that your web server can also access.
Check out the simple PHP example on the Gearman site. For learning, I recommend setting up a lab environment that contains three separate VMs: one for your web server (the client), one for the Gearman job queue (the server), and another for processing jobs (the workers).
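To make that concrete, here is a minimal sketch using the PHP Gearman extension; the function name 'encode_video' and the payload are made up for illustration:

    <?php
    // client.php - runs inside the web request and returns immediately.
    $client = new GearmanClient();
    $client->addServer('127.0.0.1', 4730);   // the Gearman job server
    $client->doBackground('encode_video', json_encode(['path' => '/uploads/video.mp4']));
    echo "Job queued\n";

    <?php
    // worker.php - a long-lived CLI process on the worker machine.
    $worker = new GearmanWorker();
    $worker->addServer('127.0.0.1', 4730);
    $worker->addFunction('encode_video', function (GearmanJob $job) {
        $params = json_decode($job->workload(), true);
        // ...do the CPU-bound work here, writing progress to Redis/Memcached...
    });
    while ($worker->work());   // block forever, handling jobs as they arrive

Because the client uses doBackground(), the web request does not wait for the result; progress reporting happens through the shared store mentioned above.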
We do a lot of processing for images, and a lot of the time this processing consumes all our CPU and causes our site to crash. What we want to do is move the image processing to another server, so that we can scale that server as necessary and not have our current server crash.
I'm wondering how to go about this though. Our current process is:
1) Users make an AJAX request to our image processing script.
2) We construct a string based on the user's input. This string contains the commands to perform an ImageMagick process.
3) We run the string through PHP's system() command.
4) We then send headers to the page and use PHP's imagecreatefrompng() function on the file to output the image to the user.
So what I'd like to know is: what's the best way to hand off the ImageMagick processing? I thought of connecting to the other server remotely via SSH, but I'm sure there is a limit on the number of connections that can be made over SSH. We have hundreds of users online at a time, so we need to be able to handle that many connections at once.
Anyone with any ideas on how best to transfer our image processing to another server would be greatly welcomed.
Thanks!
SSH would not be an appropriate protocol for distributing work requests to another server. A popular approach is to use a messaging queue to dispatch tasks to "worker" nodes. The implementation can vary greatly depending on design, needs, and resource constraints. Here's a quick bare-bones outline...
A web server receives a new image item.
Writes the image to a CDN, network mount, etc.
Publishes a task to a messaging queue, like RabbitMQ.
A worker node listens for new tasks.
Consumes and performs request.
Writes the result output next to the source on the CDN.
Notifies that the task is complete, either by updating a record in the DB or by publishing a message back to the MQ.
Check out the RabbitMQ/PHP "Hello World" and "Work Queues" tutorials for detailed examples.
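For the publish step in the outline above, a minimal sketch using the php-amqplib library; the queue name and message fields are invented:

    <?php
    // publisher.php - run inside the web request after the image is stored.
    require 'vendor/autoload.php';

    use PhpAmqpLib\Connection\AMQPStreamConnection;
    use PhpAmqpLib\Message\AMQPMessage;

    $connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
    $channel = $connection->channel();

    // Durable queue so queued tasks survive a broker restart.
    $channel->queue_declare('image_tasks', false, true, false, false);

    $payload = json_encode(['image' => 'cdn/path/source.png', 'ops' => '-resize 800x600']);
    $msg = new AMQPMessage($payload, ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]);
    $channel->basic_publish($msg, '', 'image_tasks');

    $channel->close();
    $connection->close();

The worker node runs a similar script that consumes from 'image_tasks', runs ImageMagick, and writes the result back to the CDN.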
I have a PHP script that processes my email subscriptions.
It does something like:
    foreach ($emails as $email) {
        $mailer->sendEmail($email);                  // the mailing library call
        print "Email sent to {$email['recipient']}.\n";
    }
I'm now encountering rate-limiting by my web host. The mailing library has a built-in throttler that will sleep to ensure I stay under the rate. However, this could result in the web page taking multiple hours to actually load.
Will the client side browser ever give up on the page loading? Any suggested better solutions to this?
Why is this being done on a webpage load? This should be an off-line back-end process which is scheduled to run. (Look into cron for scheduling tasks.)
Any long running process should be delegated to a back-end service to handle that process. Application interfaces (such as a web page) should respond back to the user as quickly as possible instead of forcing the user to wait (for upwards of an hour?) for a response.
The application can track progress, usually by means of some shared data source (a simple database, for example), of the back-end process and present that progress to the user. That's fine. But the process itself should happen outside of the application.
For example, at a high level...
Have a PHP script scheduled to run to process the emails.
When the script starts, save a record to a database indicating that it's started.
Each time the script reaches a milestone of some kind, update the database record to indicate this.
When the script finishes, update the database record to indicate this.
Have a web application which checks for that database record and shows the user the current status of the back-end process.
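A rough sketch of those steps, assuming a hypothetical job_status table and placeholder helper functions:

    <?php
    // process_emails.php - run by cron, never through the webserver.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    // Record that the run has started.
    $pdo->exec("INSERT INTO job_status (job, state, emails_sent) VALUES ('newsletter', 'started', 0)");
    $jobId = $pdo->lastInsertId();

    $sent = 0;
    foreach (getPendingEmails($pdo) as $email) {   // placeholder fetch
        sendEmail($email);                         // throttled by the mail library
        // Update the record at each milestone.
        if (++$sent % 100 === 0) {
            $pdo->prepare("UPDATE job_status SET emails_sent = ? WHERE id = ?")->execute([$sent, $jobId]);
        }
    }

    // Mark the run as finished.
    $pdo->prepare("UPDATE job_status SET state = 'finished', emails_sent = ? WHERE id = ?")->execute([$sent, $jobId]);

The web page then just SELECTs that row and renders the current state and count.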
You may not care, but even if you coerce this script into staying alive, you shouldn't purposely run a long-running script through the webserver. Webservers use resource-heavy threads or processes to run your script, and they have a finite number of them available to serve web requests. A long-running script essentially takes one of them out of the pool of processes that could be serving web visitors.
Instead, use a cron job which executes the php binary directly. Specifically, do not use wget, lynx, or any other browser-like program as part of the cron job, because those methods run the script through the webserver. The cron command should include something like
php /full/path/to/the/script.php
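For example, a crontab entry that runs the script every night at 2 a.m. (the schedule here is arbitrary) would look like:

    0 2 * * * php /full/path/to/the/script.php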
I have a website written in PHP (CakePHP) where certain resource-intensive tasks are handled by a background process. This is done through the Beanstalkd message queue. I need some way to retrieve the status of that background process so I can monitor it with Monit.
The background process is a CakePHP Shell (just a PHP CLI script) that communicates with Beanstalkd. It simply does a reserve() on Beanstalkd and waits for a new message. When it gets a message, it processes it. I want some way of monitoring this process with Monit so that it can restart the background process if something has gone wrong.
What I have been thinking about so far is writing a PHP CLI script that drops a message into Beanstalkd. The background process picks up the message and somehow communicates its internal status back to the CLI script. But how? Sockets? Shared memory? Some other IPC method?
Or am I perhaps being too complicated here and is there a much easier way to monitor such a process with Monit?
Thanks in advance!
Here's what I ended up doing.
The CLI script connects to beanstalkd, creates a new queue (tube), and starts watching it. Then it drops a highest-priority message into the queue that the background daemon is watching. That message contains the name of the new queue that the CLI script is monitoring.
The background process receives this message almost immediately (because it is highest priority), generates a status message and puts it in the queue that the CLI script is watching. The CLI script receives it and then closes the queue.
When the CLI script does not get a response within 30 seconds, it exits with an error indicating that the background daemon is (most likely) hung.
I tied all this into Monit. Monit can now check that the background daemon is running (via the pidfile and process list) and verify that it is actually still processing messages (by using the CLI tool to test that it responds to status requests).
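For illustration, the CLI side of that round trip might look roughly like this with the Pheanstalk library (tube names are invented, and method names vary between Pheanstalk versions):

    <?php
    // healthcheck.php - exits 0 if the daemon replies within 30 seconds.
    require 'vendor/autoload.php';

    use Pheanstalk\Pheanstalk;

    $pheanstalk = Pheanstalk::create('127.0.0.1');

    $replyTube = 'status-reply-' . getmypid();   // unique tube for the reply
    $pheanstalk->watch($replyTube);

    // Priority 0 is the highest, so the daemon sees this before normal jobs.
    $pheanstalk->useTube('worker-tube');
    $pheanstalk->put($replyTube, 0);

    $job = $pheanstalk->reserveWithTimeout(30);
    if ($job === null) {
        fwrite(STDERR, "daemon did not respond\n");
        exit(1);   // Monit treats a non-zero exit as failure
    }
    $pheanstalk->delete($job);
    exit(0);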
There is probably a plugin for Monit or Nagios that can connect, run the stats command, and alert if there are 'too many'. There isn't a 'protocol' written for this already, but it doesn't appear to be exceedingly difficult to modify an existing text-based one (like nntp or smtp) to do what you want. It does mean writing it in C, though, by the looks of it.
From a CLI PHP script, I would go about it through one (or both) of two different methods.
1/ Drop a (low-ish) priority message into the queue and make sure it comes back within a few seconds. Putting it into a dedicated queue, and making sure there's nothing in it beforehand, would be a good addition as well.
2/ Perform a 'stats' command and see how many jobs are waiting: 'current-jobs-ready'.
To get the information back to a website (either way), you can write it to a file, or into something like Memcached, which gets read and acted upon.
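A rough sketch of method 2 using the Pheanstalk library (field access and method names vary between Pheanstalk versions, and the threshold is arbitrary):

    <?php
    // check_backlog.php - exits non-zero when too many jobs are waiting.
    require 'vendor/autoload.php';

    use Pheanstalk\Pheanstalk;

    $pheanstalk = Pheanstalk::create('127.0.0.1');
    $stats = $pheanstalk->stats();

    if ((int) $stats['current-jobs-ready'] > 1000) {   // arbitrary threshold
        fwrite(STDERR, "backlog too large\n");
        exit(1);
    }
    exit(0);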