Can I create a server php variable? - php

I want to have my own variable (most likely an array) storing what my PHP application is up to right now.
The application can trigger a few background processes (like downloading files), and I want to have a list of what is currently being processed.
For example
if PHP calls exec() for something that will be downloading for 15 minutes
and then another download starts
and another download starts
then if I access my application I want to be able to see that 3 downloads are in progress (assuming none of them has finished yet).
Can I do that? Only in memory, without storing anything on disk?
I thought the solution would be some kind of server variable.

PHP doesn't have knowledge of previous processes. As soon as a PHP process is finished, everything it knows about itself goes with it.
I can think of two options. Write knowledge about spawned processes to a file or database and use it to sync all your PHP requests (store the PID of each spawned process); a rough sketch of this approach follows the architecture outline below.
Or
Create a daemon. The people behind PHP have worked hard to clean up PHP's memory handling and such to make this more feasible. Take a look at the PEAR package System_Daemon: http://pear.php.net/package/System_Daemon
Off the top of my head, a quick architecture would be composed of three pieces:
Part A) The web app that takes in requests for downloads and reports back the progress of all requests
Part B) Your daemon, which accepts requests for downloads, spawns processes, and reports back the status of all spawned requests
Part C) The spawned process that performs the download you need.
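As a rough sketch of the first option (file/database bookkeeping), here the PID of each spawned download is stored in SQLite and the active ones are listed by checking /proc. Every name here is illustrative, not an existing API; it assumes Linux and the shell's `echo $!` to capture the background PID.

```php
<?php
// Minimal sketch of option 1: record each spawned download's PID in a shared
// store (SQLite here, purely as an example) and list the ones still running.

$db = new PDO('sqlite:/tmp/downloads.sqlite');
$db->exec('CREATE TABLE IF NOT EXISTS downloads (pid INTEGER, url TEXT, started_at TEXT)');

function start_download(PDO $db, string $url): int
{
    // Launch the download in the background and echo the shell's PID ($!).
    $cmd = 'nohup wget -q ' . escapeshellarg($url) . ' > /dev/null 2>&1 & echo $!';
    $pid = (int) shell_exec($cmd);

    $stmt = $db->prepare('INSERT INTO downloads (pid, url, started_at) VALUES (?, ?, ?)');
    $stmt->execute([$pid, $url, date('c')]);

    return $pid;
}

function active_downloads(PDO $db): array
{
    $active = [];
    foreach ($db->query('SELECT pid, url FROM downloads') as $row) {
        // A PID counts as "active" if /proc/<pid> still exists (Linux-specific check).
        if (file_exists('/proc/' . (int) $row['pid'])) {
            $active[] = $row;
        }
    }
    return $active;
}

// Usage: start a download, then ask how many are still in progress.
start_download($db, 'http://example.com/big-file.zip');
printf("%d download(s) in progress\n", count(active_downloads($db)));
```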

Anyone for shared memory?
Obviously you would have to have some sort of daemon, but you could use the built-in semaphore and shared memory functions to easily have the scripts talk to each other. You need to be careful, though, because if you don't release the memory blocks properly you can end up with no blocks left.
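A rough sketch of that idea, assuming the sysvsem/sysvshm extensions are available; the keys, segment size, and variable slot are arbitrary choices, not fixed values.

```php
<?php
// A semaphore guards a small shared memory segment that holds the list of
// downloads in progress; any PHP process attaching the same keys can read it.

$semKey = ftok(__FILE__, 's');
$shmKey = ftok(__FILE__, 'm');

$sem = sem_get($semKey, 1);
$shm = shm_attach($shmKey, 65536);

const STATUS_VAR = 1; // slot inside the segment holding our status array

sem_acquire($sem);
$downloads = shm_has_var($shm, STATUS_VAR) ? shm_get_var($shm, STATUS_VAR) : [];
$downloads[] = ['url' => 'http://example.com/file.zip', 'started' => time()];
shm_put_var($shm, STATUS_VAR, $downloads);
sem_release($sem);

// Detach when done, but do not shm_remove() while other processes still use it.
shm_detach($shm);
```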

You can't store your own variables in $_SERVER. The best method would be to store your data in a database and query/update it as required.

Related

How to leave the execution of a very heavy task in queue (running in the background) when closing the browser?

This time I come with a question that I hope you can guide me through.
I have created a PHP script that loads a CSV file with a large amount of data (I upload it via an AJAX request). The script extracts the data from the file, checks that the data is not already stored in the database, uses another script to obtain information about each record extracted from the file, and finally saves the records that passed all that validation into a database table.
It is a process that can last a few seconds or many minutes, because some of the files I upload contain more than 100,000 records, so I would rather not leave the browser open for the whole time the process lasts.
What I want to know is how I could leave this process running internally on the server when I close the browser. Something like putting it in a queue and letting it continue running after I close my browser.
Once I reopen the browser, I would open a page that shows me how the process is currently going. The idea is that the data processing is not interrupted when I close my browser.
Any suggestions or examples you could give me to achieve this?
Based on your description, I think you'd be better off running a dedicated daemon (either a third-party one or one you write yourself) to do the background work.
The rationale for why I don't think it's right to do that in your PHP code is:
If you fork it from your server code, you have to install something extra, and since it is a fork, the process you spawn will inherit data from the parent process that is of no use to it.
With a dedicated daemon, it's easier to track the status of each job and, more importantly, you avoid spawning a pile of processes, which is what happens if you fork a new process for each job in the server code.
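As a minimal sketch of handing the work to a background process (whether or not you go as far as a full daemon): the upload handler records a job row and detaches a CLI worker, so the browser can close while the worker updates progress. The import_jobs table, worker path, and DSN are assumptions for illustration, not existing code.

```php
<?php
// upload.php -- record the job, detach a worker, return immediately.

$csvPath = '/tmp/upload_123.csv';   // wherever the AJAX upload was saved

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->prepare('INSERT INTO import_jobs (file, status, processed_rows) VALUES (?, ?, 0)')
    ->execute([$csvPath, 'queued']);
$jobId = (int) $pdo->lastInsertId();

// Detach the worker so the HTTP request (and the browser) can go away.
exec(sprintf('nohup php /var/www/workers/import_csv.php %d > /dev/null 2>&1 &', $jobId));

// The AJAX caller gets the job id back and can poll a status page later,
// which would simply SELECT status / processed_rows for this id.
echo json_encode(['job' => $jobId]);
```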

PHP or Java to handle long running background requests [closed]

This is a design question and I appreciate your insight/advice. I understand this question may have different answers based on experience, and I am merely trying to seek some guidance before I make a selection on how to proceed.
Background -
My application is primarily built on the LAMP stack - Linux, Apache, MySQL and PHP. I also use jQuery for client-side scripting, and the application is fairly simple and executes very fast. I am also using the CakePHP framework.
Scenario #1 -
The user clicks a link on the web page
The click triggers an AJAX call to a PHP script on the server
The PHP script makes a cURL request to another web address to process some information and usually returns in 4-5 seconds
Upon return the PHP script completes execution and terminates
Question -
I keep hearing that PHP is synchronous and will hang until this request is finished - so if multiple users make multiple requests in the above scenario will PHP hang until each request is processed sequentially or does Apache take care of spawning multiple threads to process each web request separately?
I am trying to figure out a way to better handle this - even if it means I should step outside of PHP. Would you recommend I use Perl to handle the cURL request and just have PHP fork a shell process and exit, or would it be better to create a Java servlet that the AJAX can call, since Java is multi-threaded and can handle this within the same process?
I am reading up on pthreads - is this a scenario where pthreads would be applicable?
Scenario 2
User uploads a zip file and clicks the process button and then quits the application
Upon clicking the process button, an AJAX request is sent to the server to process the zip file. The PHP script receiving this request has ignore_user_abort enabled, so it keeps executing even if the user quits.
However, processing of this zip file can take multiple minutes, as it involves multiple cURL calls and SOAP calls across web servers
Once processing is done, the PHP script updates the database and terminates
Question
Again similar to the above question, is this something that will be blocking in nature if multiple people upload files at the same time?
Assuming PHP would queue all the various requests - would this cause a timeout scenario and loss of requests?
Is this something better done with Perl/Java etc.?
Thank you for your advice and insight
The short answer is
Scenario #1
All / most languages are synchronous; that said, AJAX calls are asynchronous, and by extension running PHP via AJAX is asynchronous. The thing is, you are confusing "synchronous", which in this context means blocking until an operation is finished, with parallel processing or even multi-threading.
Again, multi-threading is quite different than parallel processing; PHP is quite capable of running dozens of parallel processes. Is it the best language for it? Probably not, but it can do it with as little effort as running a command with exec, such as exec('/usr/bin/php -f /pathtophpfile/index.php arg1 > /dev/null &'); on Linux. Multi-threading is defined as this:
Multithreading is the ability of a program or an operating system process to manage its use by more than one user at a time and to even manage multiple requests by the same user without having to have multiple copies of the programming running in the computer
Parallel processing is defined as this:
Parallel processing is the simultaneous use of more than one CPU or processor core to execute a program or multiple computational threads.
So while technically PHP can't do either of these, you can run multiple copies of PHP at the same time on the same machine, much in the same way as you can manually open multiple shell windows and run commands in each of them. Is it parallel processing or multi-threading? No, it's just running multiple copies of PHP at the same time.
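For example, here is a small sketch of launching several detached PHP workers from one script; the paths and batch arguments are placeholders.

```php
<?php
// The parent launches N detached CLI workers with exec(); & backgrounds each
// command so exec() returns immediately instead of blocking until it finishes.

$jobs = ['batch1', 'batch2', 'batch3'];

foreach ($jobs as $job) {
    exec(sprintf(
        '/usr/bin/php -f /path/to/worker.php %s > /dev/null 2>&1 &',
        escapeshellarg($job)
    ));
}

echo 'Spawned ' . count($jobs) . " workers\n";
```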
But the biggest challenge with any "multi-threaded or parallel process" is race conditions. If you are careful to avoid them, you will be fine. Race conditions look like this:
process1 loads text.txt
process1 makes changes
process2 loads text.txt - before process1 has saved its data
process2 makes changes
process2 saves changes
process1 saves changes
Now you will lose any changes made by process2, because process1 had the data in memory and never accounted for process2 changing it. This is also what I would call a concurrency issue; they are basically the same thing. Another thing to look out for, if you use cron or some other rudimentary queuing method, is not pulling the same job with multiple processes.
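One simple way to avoid that particular race in PHP is an advisory lock with flock() around the whole read-modify-write cycle. A sketch, assuming every process that touches the file honours the lock:

```php
<?php
// Take an exclusive lock for the full read-modify-write, so process2 cannot
// read the file while process1 is between reading and saving it.

$fh = fopen('/tmp/text.txt', 'c+');

if (flock($fh, LOCK_EX)) {          // blocks until no other process holds it
    $contents  = stream_get_contents($fh);
    $contents .= 'change from PID ' . getmypid() . "\n";

    ftruncate($fh, 0);
    rewind($fh);
    fwrite($fh, $contents);
    fflush($fh);

    flock($fh, LOCK_UN);            // release before closing
}
fclose($fh);
```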
Also, debugging can be a challenge; this is true of any background process and not specific to PHP. The simplest thing to do here is to log your output to a file, using output buffering (ob_start() and $var = ob_get_clean()) and recording what you capture. It's also useful to register a shutdown handler to log errors; see
http://php.net/manual/en/function.register-shutdown-function.php
Of course these are oversimplified examples and explanations, but that is the gist of it.
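A small sketch of that logging approach; the log path is arbitrary.

```php
<?php
// Buffer everything the background script prints and flush it to a log file
// on shutdown, including any fatal error reported by error_get_last().

ob_start();

register_shutdown_function(function () {
    $output = ob_get_clean();

    if ($error = error_get_last()) {
        $output .= sprintf(
            "\nFATAL: %s in %s:%d\n",
            $error['message'], $error['file'], $error['line']
        );
    }

    file_put_contents('/tmp/worker_' . getmypid() . '.log', $output, FILE_APPEND);
});

// ... the actual long-running work goes here ...
echo "starting job\n";
```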
Scenario #2
Why would it be? As I mentioned, PHP and Apache can serve over 200 clients at once; another request is just another connection to Apache (when using AJAX or cURL), and it's basically the same even when just using the CLI (command line interface). There is no inherent reason you can't run several dozen PHP processes at once.
Why would it queue them? They just execute concurrently, like opening multiple tabs in a browser. As for a timeout, there are always resource limits on a server no matter what language you use. You could use a queuing system to ensure that only a few files are processed at a given time; this could be as simple as cron and a database table with a status column, such as queued, running, complete. Then the cron script runs one job marked as queued, marks it as running while it runs, and marks it complete when done; rinse and repeat.
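As a rough illustration of that cron-plus-status-column queue (the table and column names are made up):

```php
<?php
// Run from cron every minute: claim one queued job, do the work, mark it done.

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Claim exactly one queued job. Doing it with a single UPDATE means two
// overlapping cron runs cannot both grab the same row.
$token = uniqid('worker_', true);
$pdo->prepare("UPDATE jobs SET status = 'running', claimed_by = ?
               WHERE status = 'queued' ORDER BY id LIMIT 1")
    ->execute([$token]);

$stmt = $pdo->prepare("SELECT * FROM jobs WHERE claimed_by = ? AND status = 'running'");
$stmt->execute([$token]);

if ($job = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // ... do the actual work for $job['file'] here ...

    $pdo->prepare("UPDATE jobs SET status = 'complete' WHERE id = ?")
        ->execute([$job['id']]);
}
```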
That is a matter of opinion and more so a matter of your ability with those languages.
I'm actually building a system in PHP that takes one CSV file and breaks it into 25,000-row chunks (without rewriting separate files, just reading from offsets in the same file with multiple processes). These chunks are then processed in parallel by up to 10 workers and aggregated back together, and then some reports and emails etc. are generated. Is it easy to do? No. Is it possible? Sure is.
The system I am building, for example, takes a file with 1 million+ rows and queries a database with over 700k records. It works a bit like this:
Job Preprocess (one process creates multiple chunks)
create a job file
calculate offsets
queue (in RabbitMQ) multiple jobs
Process (multiple processes, each handling one or more chunks)
load data from the queue
access input_file.csv at its offset and read to the end of the offset
generate a numbered result file such as 0.csv, 1.csv for each chunk
Aggregation (one process only, receives the pieces of the job)
load the previously saved job file (from step 1)
as each chunk completes, record that in the job file
when all chunks are done, compact all the results from the numbered files in order.
The trick here is that the multiple-process part (step 2) doesn't touch the job file from step 1 (or it would run into race conditions); further, only one process receives all of the chunks for a job. Once all the chunks are received, we compact them into one file, do some cleanup, and then send out emails etc.
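As a rough illustration of the offset-based reading in step 2 (the function, file names, and numbers are made up, and it assumes the preprocess step aligned each offset on a line boundary):

```php
<?php
// One worker reads its chunk straight out of the shared CSV by seeking to a
// byte offset, instead of splitting the input into separate files.

function process_chunk(string $file, int $offset, int $length, string $outFile): void
{
    $in  = fopen($file, 'r');
    $out = fopen($outFile, 'w');

    fseek($in, $offset);
    $end = $offset + $length;

    while (ftell($in) < $end && ($line = fgets($in)) !== false) {
        $row = str_getcsv($line);
        // ... per-row work (lookups, validation, etc.) would happen here ...
        fputcsv($out, $row);
    }

    fclose($in);
    fclose($out);
}

// e.g. worker 3 handles bytes 1500000..1999999 and writes 3.csv
process_chunk('input_file.csv', 1500000, 500000, '3.csv');
```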
With this I have run a file with 1 million rows in under 2 minutes. Using a single thread/process, it takes about 15 minutes to run the same file.
So (again) I assure you it can be done; it's tricky, and you have to be very careful about how you move your data around, but it's not impossible to do these things in PHP. PHP, and modern hardware for that matter, can handle thousands of operations a second. Usually the bottlenecks are bad indexing in a database or waiting on network connections, etc.
If you plan on doing some real heavy-duty work I'd suggest looking into a queuing or messaging system like I use (RabbitMQ), but that might be overkill in your case. I use the queuing system to help keep the process flow sane and avoid race conditions; basically its sole purpose for me is to organize the data flow.
Scenario #1
1) PHP is synchronous, but the question is confused: PHP executes instructions synchronously, normally; however, Apache defines the processing model. Apache will reuse or spawn a worker process or thread to handle the request, up to the configured limit.
2) The way you are handling it is fine, you might want to try and reduce the amount of time it takes to update the user interface, because 4-5 seconds is rather long.
3) I will talk a little about using threads at the frontend.
Using threads at the frontend doesn't make sense. As mentioned, your webserver has a defined processing model, it is designed to scale with that model, creating user threads as the result of a web request disrupts that model. Even if user code creates a reasonable number of threads, for example 8, if 100 clients come along at once, you will be asking your hardware to execute 800 threads concurrently.
That is clearly a bad idea !
Scenario #2
1) The same answer as #1.1, it's the processing model of the server that handles multiple clients.
2) The same answer as question 1 in both scenarios.
3) That's entirely a matter of opinion.
The problem you seem to have is essentially the same in both scenarios.
Advice
Don't make anything more complex than it has to be; in both scenarios, the problem is that your server-side code responds more slowly than is desirable.
In the case where you have many HTTP requests to make in order to process a request, your code is I/O-bound: don't go straight to multi-processing or multi-threading at all; try non-blocking I/O first. This is simpler, more accessible, more suitable, and scales with PHP.
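For the I/O-bound case, a minimal non-blocking sketch with curl_multi (the URLs are placeholders):

```php
<?php
// Issue all the outbound HTTP requests at once with curl_multi instead of
// one blocking cURL call after another.

$urls = ['http://api.example.com/a', 'http://api.example.com/b'];

$mh = curl_multi_init();
$handles = [];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);           // wait for activity instead of spinning
} while ($running > 0);

$responses = [];
foreach ($handles as $url => $ch) {
    $responses[$url] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
```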
In the case where you have code that is CPU bound, for example, you have solved the I/O problem, and are making all your requests using non-blocking I/O, but once data is downloaded, it requires considerable processing to be used. Then you might think about using multiple processes or threads.
Whatever happens, you should not use multi-threading at the frontend. What you want to do is isolate those parts of the application that require multi-threading and communicate with this isolated sub-application using some sane form of RPC.

How do php 'daemons' work?

I'm learning PHP and I'd like to write a simple forum monitor, but I came to a problem. How do I write a script that downloads a file regularly? When the page is loaded, the PHP is executed just once, and if I put it into a loop, it would all have to run before the page finished loading. But I want to, say, download a file every minute and make a notification on the page when the file changes. How do I do this?
Typically, you'll act in two steps:
First, you'll have a PHP script that will run every minute -- using the crontab
This script will do the heavy job : downloading and parsing the page
And storing some information in a shared location -- a database, typically
Then, your webpages will only have to check in that shared location (database) if the information is there.
This way, your webpages will always work:
Even if there are many users, only the cronjob will download the page
And even if the cronjob doesn't work for a while, the webpage will work; the worst possible thing is some information being outdated.
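A small sketch of the cron half of that approach, assuming a forum_snapshots table (all names are illustrative); the web page then only reads from that table.

```php
<?php
// monitor.php -- run every minute, e.g. "* * * * * php /path/to/monitor.php".

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$html = file_get_contents('http://example.com/forum/latest');
$hash = md5($html);

// Only record a new row when the content actually changed, so the page can
// show a "changed at ..." notification.
$last = $pdo->query('SELECT hash FROM forum_snapshots ORDER BY id DESC LIMIT 1')
            ->fetchColumn();

if ($hash !== $last) {
    $pdo->prepare('INSERT INTO forum_snapshots (hash, body, fetched_at) VALUES (?, ?, NOW())')
        ->execute([$hash, $html]);
}
```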
Others have already suggested using a periodic cron script, which I'd say is probably the better option, though as Paul mentions, it depends upon your use case.
However, I just wanted to address your question directly, which is to say, how does a daemon in PHP work? The answer is that it works in the same way as a daemon in any other language - you start a process which doesn't end immediately, and put it into the background. That process then polls files or accepts socket connections or somesuch, and in so doing, accepts some work to do.
(This is obviously a somewhat simplified overview, and of course you'd typically need to have mechanisms in place for process management, signalling the service to shut down gracefully, and perhaps integration into the operating system's daemon management, etc. but the basics are pretty much the same.)
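For concreteness, here is a bare-bones sketch of such a loop: started once from the CLI (or supervised by System_Daemon or systemd), it polls for work and handles SIGTERM so it can shut down cleanly. The polling body is a placeholder.

```php
<?php
// Minimal daemon skeleton; requires the pcntl extension (CLI only).

pcntl_async_signals(true);          // deliver signals without declare(ticks)

$running = true;
pcntl_signal(SIGTERM, function () use (&$running) {
    $running = false;
});

while ($running) {
    // ... check a queue / directory / socket for new work here,
    //     e.g. download the file and store the result in the database ...
    sleep(60);                      // wake up roughly once a minute
}

echo "daemon exiting cleanly\n";
```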
How do I write a script that downloads a file regularly?
There are schedulers to do that, like cron on Linux (or Unix).
When the page is loaded, the PHP is executed just once,
Just once, just like the index.php of your site...
If you want to update a page which is shown in a browser, then you should use some form of AJAX;
if you want something else, then your question is not clear to me...

Close connection in PHP but keep executing script

Does anyone know how to close the connection (besides just flush()?) but keep executing some code afterwards?
I don't want the client to see the long process that may occur after the page is done.
You might want to look at pcntl_fork() -- it allows you to fork your current script and run the copy as a separate process.
I used it in a project where a user uploaded a file and then the script performed various operations on it, including communicating with a third-party server, which could take a long time. After the initial upload, the script forked and displayed the next page to the user, and the parent killed itself off. The child then continued executing, and was queried by the returned page for its status using AJAX. It made the application much more responsive, and the user got feedback as to the status while it was executing.
This link has more on how to use it:
Thorough look at PHP's pcntl_fork() (Apr 2007; by Frans-Jan van Steenbeek)
If you can't use pcntl_fork, you can always fall back to returning a page quickly that fires an AJAX request to execute more items from a queue.
mvds adds the following reminder (which can apply in a specific server configuration): don't fork the entire Apache webserver, but start a separate process instead. Let that process fork off a child which lives on. Look for proc_open to get full fd interaction between your PHP script and the process.
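To make the fork-then-continue pattern concrete, here is a rough sketch; it assumes a CLI/CGI context (per the caveat above about not forking the Apache worker itself) and the pcntl and posix extensions. The status file is a placeholder.

```php
<?php
// The parent answers the user immediately; the child carries on with the slow work.

$pid = pcntl_fork();

if ($pid === -1) {
    die('could not fork');
}

if ($pid) {
    // Parent: respond to the user immediately and exit.
    echo "Upload accepted, processing has started.\n";
    exit(0);
}

// Child: detach from the session and continue the long job.
posix_setsid();
// ... talk to the third-party server, update the database, etc. ...
file_put_contents('/tmp/job_status.txt', 'done');
```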
I don't want the client to see the long process that may occur after the page is done.
Sadly, the page isn't done until after the long process has finished - hence what you ask for is impossible (to implement in the way you infer), I'm afraid.
The key here, pointed to by Jhong's answer and inversely suggested by animusen's comment, is that the whole point of what we do with HTTP as web developers is to respond to a request as quickly as possible - that's it. So if you're doing anything else, it points to some design decision that could perhaps have been a little better :)
Typically, you take the additional task you are doing after returning the 'page' and hand it over to some other process, normally that means placing the task in a job queue and having a cli daemon or a cron job pick it up and do what's needed.
The exact solution is specific to what you're doing, and the answer to a different (set of) questions; but for this one it comes down to: no, you can't close the connection, and one would advise you to look at refactoring the long-running process out of that script/page.
Take a look at PHP's ignore_user_abort-setting. You can set it using the ignore_user_abort() function.
An example of (optional) use has been given (and has been reported working by the OP) in the following duplicate question:
close a connection early (Sep 2008)
It basically gives reference to user-notes in the PHP manual. A central one is
Connection Handling user-note #71172 (Nov 2006)
which is also the base for the following two I'd like to suggest you to look into:
Connection Handling user-note #89177 (Feb 2009)
Connection Handling user-note #93441 (Sep 2009)
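Drawing on those user notes, a sketch of the "close the connection early" pattern looks roughly like this; the fastcgi_finish_request() branch only applies under PHP-FPM, and the trailing sleep() stands in for the real background work.

```php
<?php
// Tell the client the response is complete, flush it, and keep running.

ignore_user_abort(true);    // keep going even though the client is gone
set_time_limit(0);

ob_start();
echo 'Done -- the rest happens in the background.';
$size = ob_get_length();

header('Connection: close');
header('Content-Length: ' . $size);

ob_end_flush();
flush();

if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request();   // PHP-FPM: really closes the connection
}

// ... long-running work continues here, invisible to the client ...
sleep(30);
```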
Don't fork the entire apache webserver, but start a separate process instead. Let that process fork off a child which lives on. Look for proc_open to get full fd interaction between your php script and the process.
We solved this issue by inserting the work that needs to be done into a job queue, and then have a cron-script pick up the backend jobs regularly. Probably not exactly what you need, but it works very well for data-intensive processes.
(you could also use Zend Server's job queue, if you've got a wad of cash and want a tried-and-tested solution)

PHP: Multithreaded PHP / Web Services?

Greetings All!
I am having some trouble figuring out how to execute thousands upon thousands of requests to a web service (eBay). I have a limit of 5 million calls per day, so there are no problems on that end.
However, I'm trying to figure out how to process 1,000 - 10,000 requests every minute to every 5 minutes.
Basically the flow is:
1) Get list of items from database (1,000 to 10,000 items)
2) Make an API POST request for each item
3) Accept return data, process data, update database
Obviously a single PHP instance running this in a loop would be impossible.
I am aware that PHP is not a multithreaded language.
I tried the cURL solution, basically:
1) Get list of items from database
2) Initialize a multi-cURL session
3) For each item, add a cURL handle for the request
4) Execute the multi-cURL session
So you can imagine 1,000-10,000 GET requests occurring...
This was OK; around 100-200 requests were occurring in about a minute or two. However, only 100-200 of the 1,000 items actually got processed, so I am thinking that I'm hitting some sort of Apache or MySQL limit?
But this does add latency; it's almost like performing a DoS attack on myself.
I'm wondering how you would handle this problem? What if you had to make 10,000 web service requests and 10,000 MySQL updates from the return data from the web service... And this needs to be done in at least 5 minutes.
I am using PHP and MySQL with the Zend Framework.
Thanks!
I've had to do something similar, but with Facebook, updating 300,000+ profiles every hour. As suggested by grossvogel, you need to use many processes to speed things up, because the script is spending most of its time waiting for a response.
You can do this with forking, if your PHP install has support for forking, or you can just execute another PHP script via the command line.
exec('nohup /path/to/script.php >> /tmp/logfile 2>&1 & echo $!', $processId);
You can pass parameters (getopt) to the PHP script on the command line to tell it which "batch" to process (exec() puts the echoed PID into $processId[0]). You can have the master script do a sleep/check cycle to see if the scripts are still running by checking for the process IDs. I've tested up to 100 scripts running at once in this manner, at which point the CPU load can get quite high.
Combine multiple processes with multi-curl, and you should easily be able to do what you need.
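A hedged sketch of that master/worker pattern; the worker path and batch handling are illustrative, and posix_kill() needs the POSIX extension.

```php
<?php
// Spawn a worker per batch, collect the PIDs echoed by the shell, then poll
// them until everything has finished.

$pids = [];

foreach (range(1, 5) as $batch) {
    $output = [];
    exec(sprintf(
        'nohup php /path/to/script.php --batch=%d >> /tmp/logfile 2>&1 & echo $!',
        $batch
    ), $output);
    $pids[$batch] = (int) $output[0];
}

// posix_kill($pid, 0) sends no signal; it just tests whether the PID is alive.
while ($pids) {
    foreach ($pids as $batch => $pid) {
        if (!posix_kill($pid, 0)) {
            echo "batch $batch (pid $pid) finished\n";
            unset($pids[$batch]);
        }
    }
    sleep(5);
}
```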
My two suggestions are (a) do some benchmarking to find out where your real bottlenecks are, and (b) use batching and caching wherever possible.
Mysqli allows multiple-statement queries, so you could definitely batch those database updates.
The http requests to the web service are more likely the culprit, though. Check the API you're using to see if you can get more info from a single call, maybe? To break up the work, maybe you want a single master script to shell out to a bunch of individual processes, each of which makes an api call and stores the results in a file or memcached. The master can periodically read the results and update the db. (Careful to rotate the data store for safe reading and writing by multiple processes.)
To understand your requirements better: must you implement your solution only in PHP, or can you interface a PHP part with another part written in another language?
If you cannot go for another language, try to perform this update as a PHP script that runs in the background and not through Apache.
You can follow Brent Baisley advice for a simple use case.
If you want to build a robust solution, then you need to:
set up a representation of the actions in a database table that will be your process queue;
set up a script that pops this queue and processes your actions;
set up a cron job that runs this script every X.
This way you can have 1000 PHP scripts running, using your OS's parallelism capabilities and not hanging when eBay is taking too long to respond.
The real advantage of this system is that you can fully control the firepower you throw at your task by adjusting:
the number of requests one PHP script does;
the order / number / type / priority of the actions in the queue;
the number of scripts the cron job runs.
Thanks everyone for the awesome and quick answers!
The advice from Brent Baisley and e-satis works nicely. Rather than executing the sub-processes using cURL like I did before, the forking takes a massive load off; it also nicely gets around the issue of maxing out my Apache connection limit.
Thanks again!
It is true that PHP is not multithreaded, but it can certainly be set up with multiple processes.
I have created a system that resembles the one you are describing. It's running in a loop and is basically a background process. It uses up to 8 processes for batch processing and a single control process.
It is somewhat simplified because I do not need any communication between the processes. Everything resides in a database, so each process is spawned with the full context taken from the database.
Here is a basic description of the system.
1. Start control process
2. Check database for new jobs
3. Spawn child process with the job data as a parameter
4. Keep a table of the child processes to be able to control the number of simultaneous processes.
Unfortunately it does not appear to be a widespread idea to use PHP for this type of application, and I really had to write wrappers for the low-level functions.
The manual has a whole section on these functions, and it appears that there are methods for allowing IPC as well.
PCNTL has the functions to control forking/child processes, and Semaphore covers IPC.
The interesting part of this is that I'm able to fork off actual PHP code, not execute other programs.
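Here is a rough sketch of that kind of control loop using PCNTL; the job list and per-job work are placeholders, not the poster's actual code.

```php
<?php
// Fork up to 8 children, each handling one job, and reap them as they finish
// so the pool never exceeds its limit. Requires the pcntl extension (CLI).

$jobs       = range(1, 20);   // pretend these job ids came from the database
$maxWorkers = 8;
$children   = [];

while ($jobs || $children) {
    // Fill the pool.
    while ($jobs && count($children) < $maxWorkers) {
        $job = array_shift($jobs);
        $pid = pcntl_fork();

        if ($pid === -1) {
            exit("fork failed\n");
        }
        if ($pid === 0) {
            // Child: the full job context would be reloaded from the DB here.
            sleep(1);          // stand-in for the real work
            exit(0);
        }
        $children[$pid] = $job;   // parent keeps a table of child processes
    }

    // Reap any child that has exited, freeing a slot in the pool.
    $pid = pcntl_waitpid(-1, $status);
    if ($pid > 0) {
        unset($children[$pid]);
    }
}
```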
