Background:
I currently have a daemon written in PHP. I knew PHP wasn't the best solution to this problem when I wrote it, but it's what I had access to at the time, and for what I'm doing PHP is more than adequate.
Actually, I am using two daemons in PHP. Both are simple while(true) loops with set_time_limit(0). One likes to crash more than the other (which isn't a problem, because I have a cron job that restarts it if it ever crashes), and I'm guessing that's because of its increased network activity.
Anyway, the daemons:
Daemon 1:
This daemon requests information from an external server, loops very intensely through that data (some 10+ foreach loops), and inserts it into a database. It does this 24/7. It is critical that this daemon is running at 11:59pm each day.
Daemon 2:
This daemon requests the same data. However, when it loops through that data, it acts on certain data it finds and makes an external network request when it does. It makes requests like this fairly often, probably around once every few minutes if it's running properly (if it crashes and needs to be restarted, or freezes, the requests build up). This daemon absolutely loves to crash. Crashes are okay, though. This daemon also likes to freeze, in which case it must be killed before it starts working again.
The problem:
Well, requesting the same data twice (currently about twice per second per script) is extremely inefficient, so I need to merge the two into one daemon. However, daemon 1 is critical and needs to keep doing its job. If, after merging, the buggier daemon causes the combined daemon to crash, I could have problems.
So, the question:
I'm thinking I could create the new daemon so that it makes the network requests outside of the main script. What I mean is: when the new daemon needs to make a network request (which would really slow down the script and likely cause more issues), it calls another script that doesn't block the main one. So, for example, if the new daemon needs to make 20 network requests, it can send all 20 at the same time by calling another script to handle them. This takes the work off the daemon, will likely cause fewer crashes, and means I won't need to request the same data twice.
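For illustration, here is a minimal sketch of that hand-off. The helper script name (request_worker.php) and the /usr/bin/php path are assumptions, not anything from the question; the point is that the daemon backgrounds each request so its own loop never blocks.

    <?php
    // Minimal sketch: hand a slow network request to a separate PHP process.
    // 'request_worker.php' is a hypothetical helper script.
    function fireAndForget(string $script, array $args = []): void
    {
        $cmd = '/usr/bin/php ' . escapeshellarg($script);
        foreach ($args as $arg) {
            $cmd .= ' ' . escapeshellarg($arg);
        }
        // Redirecting output and appending '&' backgrounds the command,
        // so exec() returns immediately instead of waiting for the worker.
        exec($cmd . ' > /dev/null 2>&1 &');
    }

    // Inside the merged daemon's main loop:
    foreach ($pendingRequests as $request) {   // $pendingRequests built from the shared data
        fireAndForget('/path/to/request_worker.php', [$request['url']]);
    }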
Related
I have about 35 cron jobs right now. Most of them are PHP scripts that either scrape or do some calculations. The scripts also loop over 10-20 different servers to do those scrapes. (They are different countries so they have to be separate calls).
So we have 30 scripts, each of which loops over 20 servers and therefore takes about 5-15 minutes to run. I have the scripts spaced out right now.
But is it better to have 80 individual scripts run instead of 35 scripts that loop and take a while? Each script would take maybe 1-2 minutes instead of 10-15 minutes.
That would of course spawn a ton more PHP processes. Is there any issue or limit with 10-15 or more PHP processes running at once?
I'm running a performance cloud server on Rackspace.
Personally, if the jobs need to complete in a certain order, I would make the process as linear as possible. It might take longer, but I always err on the side of data accuracy.
It depends.
If you are creating more processes that will be running at the same time, you are going to increase your overall memory footprint. Each process carries its own memory overhead to run and to load any libraries it needs (aside from whatever it needs to do its actual work). You will also have more than twice as many scripts to monitor to make sure they are running successfully all the time.
However, in creating more processes you will be able to speed things up, since you are essentially getting multi-processing: one process can continue while another is blocked waiting on I/O.
If each script doesn't have a dependency on another, breaking them into smaller scripts should be fine. If you can handle monitoring more scripts, and the server can handle it, then I would do it.
If the scripts do have dependencies, or if you would have to run so many at the same time that your server usage maxes out, keep them together.
That being said, I would also try to optimize the scripts and make sure there isn't something you can do to make them faster without creating more processes.
Depending on how you have the servers set up, I would run them all at once. In addition, I would run them at night, during off hours when the web servers aren't in use and not during business operations, unless your web app depends on it. If you're on a cloud server on Rackspace, I wouldn't worry about bandwidth, although increasing your RAM could be an issue further down the road.
Spawning a ton more PHP processes shouldn't be a worry if you have a sufficient amount of RAM; there is no limitation on the Linux side.
a) Figure out which cron needs to run in which order
b) Schedule the crons to run at night, around midnight
c) Run and fire off the 80 scripts at once
It would also be a good idea to have cron email you the results, or a report that everything went through successfully, based on the batch as a whole rather than each individual cron.
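As a rough sketch of that idea (the job directory, binary path, and recipient address are placeholders): one cron entry runs a wrapper at night that fires all the scripts off at once and then sends a single summary email for the whole batch.

    <?php
    // Sketch of a batch wrapper: launch every job in parallel with proc_open(),
    // wait for them to finish, and email one report for the batch.
    $scripts = glob('/path/to/jobs/*.php');   // hypothetical job directory
    $procs   = [];

    foreach ($scripts as $script) {
        $cmd = '/usr/bin/php ' . escapeshellarg($script);
        $procs[$script] = proc_open(
            $cmd,
            [1 => ['file', '/dev/null', 'w'], 2 => ['file', '/dev/null', 'w']],
            $pipes
        );
    }

    $report = [];
    foreach ($procs as $script => $proc) {
        $exitCode = proc_close($proc);        // blocks until this job exits
        $report[] = basename($script) . ': ' . ($exitCode === 0 ? 'OK' : "FAILED ($exitCode)");
    }

    mail('you@example.com', 'Nightly batch report', implode("\n", $report));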
This is a design question and I appreciate your insight / advice. I understand this question may have different answers based on experience, and I am merely trying to seek some guidance before I make a decision on how I proceed.
Background -
My application is primarily built on the LAMP stack - Linux, Apache, MySQL and PHP. I also use jQuery for client-side scripting, and the application is fairly simple and executes very fast. I am also using the CakePHP framework.
Scenario #1 -
The user clicks a link on the web page
The click triggers an AJAX call to a PHP script on the server
The PHP script makes a cURL request to another web address to process some information and usually returns in 4-5 seconds
Upon return the PHP script completes execution and terminates
Question -
I keep hearing that PHP is synchronous and will hang until this request is finished - so if multiple users make multiple requests in the above scenario will PHP hang until each request is processed sequentially or does Apache take care of spawning multiple threads to process each web request separately?
I am trying to figure out a way to better handle this - even if it means stepping outside of PHP. Would you recommend I use a Perl script to handle the cURL request and just have PHP fork a shell process and exit, or would it be better to create a Java servlet that the AJAX can call, since Java is multi-threaded and can handle this on its own?
I am reading up on pThreads - is this a scenario where pThreads would be useful?
Scenario #2
User uploads a zip file and clicks the process button and then quits the application
Upon clicking the process button, an AJAX request is sent to the server to process the zip file. The PHP script receiving this request has ignore_user_abort enabled, so it keeps executing even if the user quits.
However, processing this zip file can take multiple minutes, as it involves multiple cURL calls and SOAP calls across web servers
Once processing is done, the PHP script updates the database and terminates
Question
Again similar to the above question, is this something that will be blocking in nature if multiple people upload files at the same time?
Assuming PHP would queue all the various requests - would this cause a timeout scenario and loss of requests?
Is this something better done with Perl/Java etc.?
Thank you for your advice and insight
The short answer is:
Scenario #1
All / most languages are synchronous; that said, running AJAX is asynchronous, and by extension running PHP via AJAX is asynchronous. The thing is, you are confusing "synchronous", which in this context means blocking until an operation is finished (process blocking), with parallel processing or even multi-threading.
Again, multi-threading is quite different than parallel processing. PHP is quite capable of running dozens of parallel processes. Is it the best language for it? Probably not, but it can do it with as little effort as running a shell command with exec, like this on Linux: exec('/usr/bin/php -f pathtophpfile/index.php arg1 > /dev/null &');. Multi-threading is defined as this:
Multithreading is the ability of a program or an operating system process
to manage its use by more than one user at a time and to even manage
multiple requests by the same user without having to have multiple
copies of the programming running in the computer
Parallel processing is defined as this:
Parallel processing is the simultaneous use of more than one CPU or
processor core to execute a program or multiple computational threads.
So while technically PHP can't do either of these, you can run multiple copies of PHP at the same time on the same machine, much in the same way as you can manually open multiple shell windows and run commands in each of them. Is it parallel processing or multi-threading? No, it's just running multiple copies of PHP at the same time.
But the biggest challenge with any "multi-threaded or parallel" process is race conditions. If you are careful to avoid them you will be fine. A race condition goes like this:
process1 loads text.txt
process1 makes changes
process2 loads text.txt - before process1 has saved its data
process2 makes changes
process2 saves changes
process1 saves changes
Now you will lose any changes made by process2, because process1 had the data in memory and never accounted for process2 changing it. This is also what I would call a concurrency issue; they are basically the same thing. Another thing to look out for, if using cron or some other rudimentary queuing method, is not pulling the same job with multiple processes.
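One common way to avoid the race above is an exclusive file lock, so a second process can't read the file until the first has finished writing. A minimal sketch, using the hypothetical text.txt from the example:

    <?php
    // Minimal sketch: serialize access to text.txt with flock().
    $fh = fopen('text.txt', 'c+');          // open for read/write, create if missing
    if (flock($fh, LOCK_EX)) {              // other processes block here until we release
        $data = stream_get_contents($fh);   // load current contents
        $data .= "change made by this process\n";
        ftruncate($fh, 0);                  // rewrite the file with the updated data
        rewind($fh);
        fwrite($fh, $data);
        fflush($fh);
        flock($fh, LOCK_UN);                // release so the next process sees our changes
    }
    fclose($fh);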
Also, debugging can be a challenge; this is true of any background process and not specific to PHP. The simplest thing to do here is to log your output to a file, using output buffering - things like ob_start() and $var = ob_get_clean() - and recording that. It's also useful to use a shutdown handler to log errors; see
http://php.net/manual/en/function.register-shutdown-function.php
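A minimal sketch of that logging approach (the log file path is an assumption):

    <?php
    // Log fatal errors from a background script via a shutdown handler.
    register_shutdown_function(function () {
        $error = error_get_last();
        if ($error !== null) {
            file_put_contents('/var/log/myjob.log',
                date('c') . ' FATAL: ' . print_r($error, true), FILE_APPEND);
        }
    });

    ob_start();

    // ... the actual work of the script, with its echo/print statements ...
    echo "processed 42 records\n";

    // Capture whatever the script printed and append it to the log file.
    $output = ob_get_clean();
    file_put_contents('/var/log/myjob.log', date('c') . ' ' . $output, FILE_APPEND);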
Of course these are oversimplified examples and explanations, but that is the gist of it.
Scenario #2
Why would it be? As I mentioned, PHP and Apache can serve over 200 clients at once; another request is just another connection to Apache (when using AJAX or cURL), and it's basically the same even when just using the CLI (command line interface). There is no inherent reason you can't run several dozen PHP processes at once.
Why would it queue them? They just execute again, like opening multiple tabs in a browser. As for a timeout, there are always resource limits on a server no matter what language you use. You could use a queuing system to ensure that only a few files are processed at a given time. This could be as simple as cron and a database table with a status column, with values such as queued, running, and complete: the cron script runs one job marked as queued, marks it as running while it runs, marks it complete when done, rinse and repeat.
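A rough sketch of that cron-plus-status-column queue; the jobs table, its columns, and processZipFile() are assumptions standing in for the real work:

    <?php
    // Cron runs this script every minute; it claims one queued job and processes it.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    // SELECT ... FOR UPDATE inside a transaction acts as a crude lock so two
    // overlapping cron runs don't grab the same row.
    $pdo->beginTransaction();
    $job = $pdo->query("SELECT id, file_path FROM jobs
                        WHERE status = 'queued' ORDER BY id LIMIT 1 FOR UPDATE")
               ->fetch(PDO::FETCH_ASSOC);
    if ($job) {
        $pdo->prepare("UPDATE jobs SET status = 'running' WHERE id = ?")
            ->execute([$job['id']]);
    }
    $pdo->commit();

    if ($job) {
        processZipFile($job['file_path']);   // hypothetical function doing the real work
        $pdo->prepare("UPDATE jobs SET status = 'complete' WHERE id = ?")
            ->execute([$job['id']]);
    }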
That is a matter of opinion, and more so a matter of your ability with those languages.
I'm actually building a system in PHP that takes one CSV file and breaks it into 25,000-row chunks (without re-writing separate files - just reading from offsets in the same file with multiple workers). These chunks are then processed in parallel by up to 10 workers and then aggregated back together, and then some reports, emails, etc. are generated. Is it easy to do? No. Is it possible? Sure is.
The system I am building, for example, takes a file with 1 million+ rows and queries a database with over 700k records. It works a bit like this:
Job Preprocess ( one process creates multiple chunks )
create a job file
calculate offsets
queue (in RabbitMQ) multiple jobs
Process ( multiple processes each handle one or more chunk )
load data from queue
access input_file.csv at offset and read to end of offset
generate a numbered result file such as 0.csv, 1.csv for each chunk
Aggregation ( one process only, receives the bits of the job )
load previously saved job file ( from step 1 )
as each chunk completes record that in job file
when all chunks are done, compact all the results from the numbered files in order.
The trick here is that the multiple-process part (step 2) doesn't touch the job file from step 1 (or it would encounter race conditions); further, only one process receives all of the chunks for a job. Once all the chunks are received, we compact them into one file, do some cleanup, and then send out emails etc.
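As a very rough sketch of the chunk-processing step (step 2): the offsets are assumed to arrive in the queued job message and to have been calculated on row boundaries during preprocessing; the names here are illustrative, not the actual system.

    <?php
    // Each worker reads its slice of the shared CSV and writes a numbered result file.
    function processChunk(string $inputFile, int $startOffset, int $endOffset, int $chunkNumber): void
    {
        $in  = fopen($inputFile, 'r');
        $out = fopen($chunkNumber . '.csv', 'w');   // numbered result file, e.g. 0.csv, 1.csv

        fseek($in, $startOffset);                   // jump straight to this chunk's start
        while (ftell($in) < $endOffset && ($row = fgetcsv($in)) !== false) {
            // ... look the row up against the database, transform it, etc. ...
            fputcsv($out, $row);
        }

        fclose($in);
        fclose($out);
    }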
With this I have run a file with 1 million rows in under 2 minutes. Using a single thread / process, it takes about 15 minutes to run the same file.
So (again) I assure you it can be done. It's tricky and you have to be very careful about how you move your data around, but it's not impossible to do these things in PHP. PHP, and modern hardware for that matter, can handle thousands of operations per second. Usually the bottlenecks are bad indexing in a database or waiting on network connections, etc.
If you plan on doing some real heavy-duty work, I'd suggest looking into a queuing or messaging system like the one I use (RabbitMQ), but that might be overkill in your case. I use the queuing system to help keep the process flow sane and avoid race conditions; basically, its sole purpose for me is to organize the data flow.
Scenario #1
1) PHP is synchronous, but the question is confused. PHP normally executes instructions synchronously; however, Apache defines the processing model. Apache will reuse or spawn a worker process or thread to handle the request, up to the configured limit.
2) The way you are handling it is fine. You might want to try to reduce the amount of time it takes to update the user interface, because 4-5 seconds is rather long.
3) I will talk a little about using threads at the frontend.
Using threads at the frontend doesn't make sense. As mentioned, your webserver has a defined processing model; it is designed to scale with that model, and creating user threads as the result of a web request disrupts it. Even if user code creates a reasonable number of threads, for example 8, if 100 clients come along at once you will be asking your hardware to execute 800 threads concurrently.
That is clearly a bad idea!
Scenario #2
1) The same answer as #1.1: it's the processing model of the server that handles multiple clients.
2) The same answer as question 1 in both scenarios.
3) That's entirely a matter of opinion.
The problem you seem to have is essentially the same in both scenarios.
Advice
Don't make anything more complex than it has to be; in both scenarios, the problem is that your server-side code responds more slowly than is desirable.
In the case where you have many HTTP requests to make in order to process a request, your code is I/O bound. Don't go straight to multi-processing or multi-threading at all; try non-blocking I/O first. This is simpler, more accessible, more suitable, and scales with PHP.
In the case where you have code that is CPU bound - for example, you have solved the I/O problem and are making all your requests using non-blocking I/O, but once the data is downloaded it requires considerable processing to be used - then you might think about using multiple processes or threads.
Whatever happens, you should not use multi-threading at the frontend. What you want to do is isolate those parts of the application that require multi-threading and communicate with that isolated sub-application using some sane form of RPC.
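To illustrate the non-blocking I/O suggestion, here is a minimal curl_multi sketch (the URLs are placeholders): all transfers run concurrently instead of one blocking cURL call after another.

    <?php
    // Fetch several URLs concurrently with curl_multi.
    $urls = ['https://example.com/a', 'https://example.com/b', 'https://example.com/c'];

    $multi   = curl_multi_init();
    $handles = [];
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($multi, $ch);
        $handles[] = $ch;
    }

    // Drive all transfers at once instead of waiting on each one in turn.
    do {
        $status = curl_multi_exec($multi, $running);
        if ($running) {
            curl_multi_select($multi);   // wait for activity on any handle
        }
    } while ($running && $status === CURLM_OK);

    foreach ($handles as $ch) {
        $body = curl_multi_getcontent($ch);
        // ... process $body ...
        curl_multi_remove_handle($multi, $ch);
        curl_close($ch);
    }
    curl_multi_close($multi);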
I have a PHP script on my Apache web server which starts another PHP script that runs for several hours. Right after the long-running script is started, no other PHP script requests are handled. The browser just hangs eternally.
The background script crawls other sites and gathers data from them, which is why it takes quite a long time.
At the same time, static pages are served without problems. Also, any PHP script started locally on the server from bash executes without problems.
CPU and RAM usage are low. In fact, it's a test server and my requests are the only ones being handled.
I tried to decrease the number of Apache processes in order to be able to trace all of them and see where the requests were hanging. But when I decreased the number of processes to 2, the problem went away.
I found no errors in either syslog or apache/error.log.
What else can I check?
Though I didn't find the reason for Apache hanging, I have solved the task in a different way.
I've set up a schedule to run a script every 5 minutes. The web script just creates a file with the necessary parameters. The scheduled script checks for the existence of that file; if it exists, it reads its contents and deletes the file to prevent a further scheduled start.
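A minimal sketch of that trigger-file pattern; the file path and runLongCrawl() are assumptions. This is the script cron runs every 5 minutes:

    <?php
    // Check for a trigger file written by the web script; exit if there is none.
    $trigger = '/var/run/myapp/crawl-request.json';

    if (!file_exists($trigger)) {
        exit;                                  // nothing to do this run
    }

    $params = json_decode(file_get_contents($trigger), true);
    unlink($trigger);                          // delete so the next scheduled run doesn't start it again

    runLongCrawl($params);                     // hypothetical function containing the multi-hour work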
I have a PHP script that processes my email subscriptions.
It does something like:
    foreach ($subscribers as $email) {
        $mailer->sendEmail($email);
        echo "Email sent to {$email}.";
    }
I'm now encountering rate limiting by my web host. The mailing library has a built-in throttler that will sleep to ensure I stay under the rate. However, this could result in the web page taking multiple hours to actually load.
Will the client side browser ever give up on the page loading? Any suggested better solutions to this?
Why is this being done on a webpage load? This should be an off-line back-end process which is scheduled to run. (Look into cron for scheduling tasks.)
Any long running process should be delegated to a back-end service to handle that process. Application interfaces (such as a web page) should respond back to the user as quickly as possible instead of forcing the user to wait (for upwards of an hour?) for a response.
The application can track progress, usually by means of some shared data source (a simple database, for example), of the back-end process and present that progress to the user. That's fine. But the process itself should happen outside of the application.
For example, at a high level...
Have a PHP script scheduled to run to process the emails.
When the script starts, save a record to a database indicating that it's started.
Each time the script reaches a milestone of some kind, update the database record to indicate this.
When the script finishes, update the database record to indicate this.
Have a web application which checks for that database record and shows the user the current status of the back-end process.
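A rough sketch of such a scheduled back-end script, assuming hypothetical email_runs and subscriptions tables and using PHP's mail() in place of the real mailing library:

    <?php
    // Record that the run started, send the emails, and update progress at milestones.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    $pdo->exec("INSERT INTO email_runs (started_at, status, sent) VALUES (NOW(), 'running', 0)");
    $runId = $pdo->lastInsertId();

    $emails = $pdo->query("SELECT email FROM subscriptions")->fetchAll(PDO::FETCH_COLUMN);

    $sent = 0;
    foreach ($emails as $email) {
        mail($email, 'Newsletter', 'Hello!');   // the real mailer (with throttling) goes here
        $sent++;
        if ($sent % 100 === 0) {                // milestone: record progress every 100 emails
            $pdo->prepare("UPDATE email_runs SET sent = ? WHERE id = ?")->execute([$sent, $runId]);
        }
    }

    $pdo->prepare("UPDATE email_runs SET status = 'complete', sent = ? WHERE id = ?")
        ->execute([$sent, $runId]);

The web application then only has to read the latest email_runs row to show the user how far along the run is.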
You may not care, but even if you coerce this script into staying alive, you shouldn't purposely run a long-running script through the webserver. Webservers use resource-heavy threads or processes to run your script, and they have a finite number of them available to serve web requests. A long-running script basically takes one of them out of the pool of processes that can be used to serve web visitors.
Instead, use a cron job which executes the PHP binary directly. Specifically, do not use wget or lynx or any other web-browser-like program as part of the cron job, because those methods run the script through the webserver. The cron command should include something like
php /full/path/to/the/script.php
When executing proc_nice(), is it actually nice'ing Apache's thread?
If so, and if the current user (a non-superuser) can't renice back to the original priority, is killing the Apache thread (apache_child_terminate) appropriate on an Apache 2.0.x server?
The issue is that I am trying to limit the impact of an app that allows the user to run ad-hoc queries. The queries can be massive, and the resultant transform on the data requires a lot of memory and CPU.
I've already re-written the process to be more stream-based, helping with the memory consumption, but I would also like the process to run at a lower priority. However, I can't leave the Apache thread at low priority, as we have a lot of high-priority web services running on this same box.
TIA
In that kind of situation, a solution is often to not do that kind of heavy work within the Apache processes, but instead to either:
run an external PHP process, using something like shell_exec, for instance - this is if you must work in synchronous mode (i.e., if you cannot execute the task a couple of minutes later)
or push the task to a FIFO system, and immediately return a message to the user saying "your task will be processed soon"
and have some other process (launched via a crontab every minute, for instance) check that FIFO queue
and do the processing if there is something in the queue
That process, itself, can run in low priority mode.
As often as possible, especially if the heavy calculations take some time, I would go for the second solution:
It allows users to get some feedback immediately: "the server has received your request and will process it soon"
It doesn't keep Apache's processes "working" for long: the heavy stuff is done by other processes
If, one day, you need so much processing power that one server is not enough anymore, this kind of system will be easier to scale: just add a second server that picks from the same FIFO queue
If your server is really too loaded, you can stop processing from the queue, at least for some time, so the load can recover - for instance, this can be useful if your critical web services are used a lot in a specific time-frame.
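A minimal sketch of that second approach, assuming a hypothetical task_queue table and a runAdHocQuery() helper; the worker is launched from cron every minute and lowers its own priority with proc_nice(), so Apache's processes are unaffected:

    <?php
    // Background worker: lower our own priority, then drain the queue.
    proc_nice(10);                             // nice this worker, not Apache

    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    while ($task = $pdo->query("SELECT id, query FROM task_queue
                                WHERE status = 'pending' ORDER BY id LIMIT 1")
                       ->fetch(PDO::FETCH_ASSOC)) {
        $pdo->prepare("UPDATE task_queue SET status = 'running' WHERE id = ?")
            ->execute([$task['id']]);

        runAdHocQuery($task['query']);         // hypothetical heavy transform

        $pdo->prepare("UPDATE task_queue SET status = 'done' WHERE id = ?")
            ->execute([$task['id']]);
    }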
Another (nice-looking, but I haven't tried it yet) solution would be to use some kind of tool like, for instance, Gearman:
Gearman provides a generic application framework to farm out work to other machines or processes that are better suited to do the work. It allows you to do work in parallel, to load balance processing, and to call functions between languages. It can be used in a variety of applications, from high-availability web sites to the transport of database replication events. In other words, it is the nervous system for how distributed processing communicates.