I currently have a PHP script that collects similar data from various sources; each data source is scraped and parsed every 120 seconds. At the moment I have 20 data sources, but I expect to integrate another 100 over the coming weeks.
Currently each data source is scraped in its own thread: one main PHP script executes other scripts to perform the scraping work. This method allows all sources to be scraped at the same time, but it also puts a strain on the server and creates a bottleneck on the database (MySQL).
I'm looking for a way to scale my current application; could I do something like this with AWS? Perhaps each of these scraping scripts could run in its own small server instance, with each instance created automatically by a "main" instance and then dying once the script has finished. I don't have any experience with AWS, so I'm not entirely sure if this is possible, or if it's just a bad idea.
The main question here is: How can I scale my current scraping script to allow for many new data sources? I'm interested in any solution even if I need to buy additional services.
You need a queueing system
You're describing a sort of worker / queue pattern, with your main server performing both the en-queueing and the worker execution, which of course is going to be a huge strain on your server.
First and foremost, your workers need to be asynchronous: you shouldn't be waiting for something that may or may not come back. You really should take a look at ZeroMQ which, I might add, contains some of the best documentation on the planet. If you're willing to learn, take a look at how this works and follow some tutorials; there are plenty out there. Host the queue on your main server, have it take on new jobs, and dispatch them elsewhere (i.e. to other boxes).
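To make that concrete, here is a minimal sketch of ZeroMQ's push/pull (pipeline) pattern using the php-zmq extension. The endpoint addresses, the JSON payload and scrape_and_store() are assumptions for illustration, not part of your current code:

// dispatcher (main server): hand jobs off without waiting for results
$context = new ZMQContext();
$push = new ZMQSocket($context, ZMQ::SOCKET_PUSH);
$push->bind('tcp://*:5557');

foreach ($dataSources as $source) {
    $push->send(json_encode(['url' => $source['url']]));
}

// worker (any box): pull jobs as they arrive
$context = new ZMQContext();
$pull = new ZMQSocket($context, ZMQ::SOCKET_PULL);
$pull->connect('tcp://mainserver:5557');

while (true) {
    $job = json_decode($pull->recv(), true);   // blocks until a job arrives
    scrape_and_store($job['url']);             // hypothetical scraping function
}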
Horizontal Scaling
You can create some sort of Instance Controller to handle AWS instances. You really just need to sit down and think about your logic (when do I want this many boxes, when do I want to shut them down). The API is pretty simple to use once you get your head around it. Here's some code I wrote a while back to wrap Amazon's SDK for PHP. I'm not sure if it's working 100% with the latest version (I used it around a year ago), but the concepts are there - you have simple methods like startBox() or stopBox() that you call from your queue, and have your box automatically start doing its stuff once it starts up.
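If you go with the current AWS SDK for PHP (v3), the core of startBox()/stopBox() could look roughly like this. This is a sketch rather than the wrapper linked above; the region, the AMI ID and the credential setup are placeholders:

require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client([
    'region'  => 'us-east-1',   // placeholder region
    'version' => 'latest',
]);

function startBox(Ec2Client $ec2, $amiId)
{
    // launch one worker instance from a prepared AMI with your scraping code baked in
    $result = $ec2->runInstances([
        'ImageId'      => $amiId,       // placeholder AMI
        'InstanceType' => 't1.micro',
        'MinCount'     => 1,
        'MaxCount'     => 1,
    ]);
    return $result['Instances'][0]['InstanceId'];
}

function stopBox(Ec2Client $ec2, $instanceId)
{
    // terminate the worker once its job is done
    $ec2->terminateInstances(['InstanceIds' => [$instanceId]]);
}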
You could use Amazon's t1.micro instances (pricing here), which are covered by the free tier (info here) up to a certain limit.
Get it working properly, with a loop on your main server deciding how many boxes you need working at any one time given certain conditions (the number of jobs in your database table, for example), and you'll have theoretically infinite scaling. Here's how I did it for my code (a rough sketch of the loop follows the tiers below):
Tier 1: > 5 jobs, < 10 jobs = 1 box
Tier 2: > 10 jobs, < 20 jobs = 2 boxes
etc. etc.
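As mentioned above, a rough sketch of that decision loop. count_pending_jobs(), count_running_boxes(), startBox() and stopBox() stand in for your own queue and instance-controller code, and the thresholds are only examples:

while (true) {
    $jobs  = count_pending_jobs();     // e.g. rows waiting in your jobs table
    $boxes = count_running_boxes();

    // crude tiers: roughly one box per 10 pending jobs, capped at 10 boxes
    $wanted = min(10, (int) ceil($jobs / 10));

    while ($boxes < $wanted) { startBox(); $boxes++; }
    while ($boxes > $wanted) { stopBox();  $boxes--; }

    sleep(30);
}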
Advice
Log everything. Log every box coming up, every box coming down. Calculate your costs in your code and store them, maybe in a database, or log them, so you know exactly how much you're spending - you don't want things to get out of hand.
Make sure you open up your DB ports so your instances can talk to your DB to report when a job is done, or to pass anything else between your "master" box and your "slave" boxes.
Also, if you're paying for web servers, AWS bills you by the hour, so record the time you start each box and, when it's time to shut one down, only actually shut it down once 55 minutes or so have passed - you might as well get those extra minutes for what you're paying.
I can't really think of anything else. Do your research, figure out the best way to build a queueing system, and build it with scalability in mind (it can react and change to numbers that you control).
Split your scraping up across multiple instances (say 5 per server) and have them talk to a central DB like Amazon RDS.
No need to kill the instances after you have finished scraping if you're doing this every 120 seconds.
Related
I have an application where I intend users to be able to add events at any time, that is, chunks of code that should run only at a specific time in the future, determined by user input. It's similar to cron jobs, except that at any point there may be thousands of these events to process, each at its own specific due time. As far as I understand, crontab would not be able to handle them, since it is not meant to hold a massive number of cron jobs; additionally, I need precision to the second, not the minute. I am aware it is possible to programmatically add cron jobs to crontab, but again, that would not be enough for what I'm trying to accomplish.
Also, I need these to be real time; faking them by simply checking for due items whenever a page is visited is not a solution, as they should fire even if no pages are visited by their due time. I've been doing some research looking for a sane solution. I read a bit about queue systems such as Gearman and RabbitMQ, but a FIFO system would not work for me either (the order in which events are added is irrelevant, since it's perfectly possible to add an event that fires in an hour, and right after it one that should trigger in 10 seconds).
So far the best solution I have found is to build a daemon, that is, a script that runs continuously checking for new events to fire. I'm aware PHP is the devil, leaks memory and whatnot, but I'm still hoping it is possible to have a PHP daemon running stably for weeks with occasional restarts, as long as I spawn new independent processes to do the "heavy lifting", i.e. the actual processing of the events when they fire.
So anyway, the obvious questions:
1) Does this sound sane? Is there a better way that I may be missing?
2) Assuming I do implement the daemon idea, the code naturally needs to retrieve which events are due. Here's roughly what it could look like (get_due_events() and delete_event() stand in for real queries):
while (true) {
    // read the event list and get only the events that are due
    $dueEvents = get_due_events();
    foreach ($dueEvents as $event) {
        // spawn a new, independent PHP process to run the event
        exec('php run_event.php ' . escapeshellarg($event['id']) . ' > /dev/null 2>&1 &');
        // delete the entry so the event is not run twice
        delete_event($event['id']);
    }
    usleep(50000); // 50 ms
}
If I were to store this list in a MySQL DB (and it certainly seems the best way, since I need to be able to query the list with something along the lines of SELECT * FROM eventlist WHERE duetime <= NOW();), is it crazy to have the daemon doing a SELECT every 50 or 100 milliseconds? Or am I just being over-paranoid, and the server should handle it just fine? The amount of data retrieved in each iteration should be relatively small, perhaps a few hundred rows; I don't think it will amount to more than a few KB of memory. Also, the daemon and the MySQL server would run on the same machine.
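For illustration, the kind of poll I have in mind would be something like this (the PDO setup and the LIMIT are just placeholders); with an index on duetime, each poll should only ever touch the handful of rows that are actually due:

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');   // placeholder credentials

// runs every 50-100 ms inside the daemon loop
$due = $pdo->query(
    'SELECT * FROM eventlist WHERE duetime <= NOW() ORDER BY duetime LIMIT 500'
)->fetchAll(PDO::FETCH_ASSOC);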
3) If I do use everything described above, including the table in a MySQL DB, what are some things I could do to optimize it? I thought about storing the table in memory, but I don't like the idea of losing its contents whenever the server crashes or is restarted. The closest thing I can think of would be a standard InnoDB table where writes and updates are done, and a 1:1 mirror MEMORY table where reads are performed. Using triggers it should be doable to have the memory table mirror everything, but on the other hand it sounds like a pain in the ass to maintain (fubar situations can easily happen if for some reason the tables get desynchronized).
I have a challenge I don't seem to get a good grip on.
I am working on an application that generates reports (big analysis from database but that's not relevant here). I have 3 identical scripts that I call "process scripts".
A user can select multiple variables to generate a report. Once that's done, I need one of the three scripts to pick up the task and start generating the report. I use multiple servers so all three of them can work simultaneously. When there is too much work, a queue builds up so that the first "process script" to become ready can pick up the next task, and so on.
I don't want to have these scripts go to the database all the time, so I have a small file "thereiswork.txt". I want the three scripts to read the file and, if there is something to do, go do it. If not, do nothing.
At first, I just randomly chose a "process script", and they all have their own queue. However, I now see that in some cases one process script has a queue of hours while the other two are doing nothing, just because they had the "luck" of not getting very big reports to generate. So I need a fairer solution to balance the work equally.
How can I do this? Have a queue multiple scripts can work on?
PS
I use set_time_limit(0); for these scripts and they all currently are in a while() loop, and sleep(5) all the time...
No, no, no.
PHP does not have the kind of sophisticated lock management facilities to support concurrent raw file access. Few languages do. That's not to say it's impossible to implement them (most easily with mutexes).
I don't want to have these scripts go to the database all the time
DBMSs provide great support for concurrent access. And while there is an overhead in performing an operation on the DB, it's very small in comparison to the amount of work each request will generate. It's also a very convenient substrate for managing the queue of jobs.
they all have their own queue
Why? Using a shared queue on a first-come, first-served basis will ensure the best use of resources.
At first, I just randomly chose a "process script"
This is only going to distribute work evenly with a very large number of jobs and a good random number generator. One approach is to shard the data (e.g. instance 1 picks up jobs where mod(job_number, number_of_instances)=0, instance 2 picks up jobs where mod(job_number, number_of_instances)=1, ...), but even then it doesn't make the best use of available resources.
they all currently are in a while() loop, and sleep(5) all the time
No - this is wrong too.
It's inefficient to have the instances constantly polling an empty queue, so you implement a back-off plan, e.g.:
$maxsleeptime = 100;
$sleeptime = 0;
while (true) {
    $next_job = get_available_job_from_db_queue();
    if (!$next_job) {
        // queue is empty: back off exponentially, up to $maxsleeptime seconds
        $sleeptime = min(max($sleeptime * 2, 1), $maxsleeptime);
        sleep($sleeptime);
    } else {
        // there is work: reset the back-off and process the job
        $sleeptime = 0;
        process_job($next_job);
        mark_job_finished($next_job);
    }
}
No job is destined for a particular processor until that processor picks it up from the queue. By logging the sleep time (or the start and end of processing) it's also a lot easier to see when you need to add more processor scripts - and if you handle the concurrency on the database, you don't need to worry about configuring each script to know about the number of other scripts running - you can add and retire instances as required.
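For example, get_available_job_from_db_queue() could claim a job atomically so that two instances never grab the same row. This is a sketch only; the table, columns and PDO connection are assumed rather than taken from your schema:

function get_available_job_from_db_queue(PDO $pdo, $workerId)
{
    // MySQL allows ORDER BY / LIMIT on a single-table UPDATE, so the claim is atomic
    $claim = $pdo->prepare(
        "UPDATE jobs SET status = 'claimed', claimed_by = ?
         WHERE status = 'queued'
         ORDER BY id LIMIT 1"
    );
    $claim->execute([$workerId]);

    if ($claim->rowCount() === 0) {
        return null;                    // nothing queued
    }

    // fetch the row this worker just claimed
    $stmt = $pdo->prepare(
        "SELECT * FROM jobs WHERE status = 'claimed' AND claimed_by = ? ORDER BY id DESC LIMIT 1"
    );
    $stmt->execute([$workerId]);
    return $stmt->fetch(PDO::FETCH_ASSOC);
}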
For this task, I use the Gearman job server. Your PHP code sends out jobs and you have a background script running to pick them up. It comes down to a solution similar to symcbean's, but the dispatching does not require arbitrary sleeps. It waits for events instead and essentially wakes up exactly when needed.
It comes with an excellent PHP extension and is very well documented. Most examples are in PHP too, although it works transparently with other languages as well.
http://gearman.org/
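For a sense of what that looks like with the pecl gearman extension, here is a minimal sketch; the function name generate_report, the payload format and build_report() are assumptions:

// client side: where the user requests a report
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('generate_report', json_encode($selectedVariables));

// worker side: run as many of these as you have capacity for
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('generate_report', function (GearmanJob $job) {
    $vars = json_decode($job->workload(), true);
    build_report($vars);               // your existing "process script" logic
});

while ($worker->work()) {
    // blocks until a job arrives - no sleep() polling needed
}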
Background
In one of our projects, we occasionally need to run some massive tasks, e.g. generating reports or sending large numbers of notification emails. Sometimes this causes noticeable lag while such a task is running, so we are thinking about one possible solution.
Some thoughts
Set crontab to run a backend script every 10 minutes.
Collect CPU usage info. I found http://phpsysinfo.sourceforge.net/phpsysinfo/index.php?disp=dynamic , but I'm not sure if there is a better way?
If CPU usage stays below a specific value for a sustained period, or the first task in the queue reaches its deadline, the script will take a certain number of tasks from the queue and run them.
There are different types of massive task: e.g.,
User can request certain type of report
Notification emails
Cleaning data in database
...
I am wondering if this idea is worth trying?
Is there any problem, or is there some other better solution?
This works up to a point but struggles if you are running anything where access is required 24 hours a day (like an internationally used site).
You may wish to replicate your database and then run your heavy queries off of that - or investigate a form of data warehousing.
What is a data warehouse?
Here's the situation. I am scraping a website to get the data from its articles, using a robots page supplied by that website (a list of URLs pointing to every article posted on the site). So far, I do a database merge to 'upsert' the URLs into my table. I know that each scraping run will take a good while because there are over 1400 articles to parse. I need to write an algorithm that will only do a small chunk of the jobs on cron at a time so it doesn't overload my server, etc.
Edit: I think I should mention that I'm using Drupal 7. Also, this has to be an ongoing script that runs over time; I'm not so worried about the time it takes for the initial fill of the database. The robots page is dynamic: URLs get added to it periodically as articles are added. I'm currently using hook_cron() for this, but I'm open to better methods if there's something better suited for the job.
You can use the Drupal queue operations API to enqueue each page to scrape as a queue item. You can, but are not required to, declare your queue as cron-executed. Drupal will then take care of executing as many queue items as possible at each cron run without exceeding the queue's declared maximum execution time.
See aggregator_cron for an example of item enqueuing, and aggregator_cron_queue_info for the declaration that lets Drupal process these queued items during its cron.
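A rough Drupal 7 sketch of those two hooks; the module name mymodule, the queue name and mymodule_new_article_urls() are made up for illustration:

/**
 * Implements hook_cron(): enqueue any new article URLs from the robots page.
 */
function mymodule_cron() {
  $queue = DrupalQueue::get('mymodule_scrape');
  foreach (mymodule_new_article_urls() as $url) {   // hypothetical helper
    $queue->createItem($url);
  }
}

/**
 * Implements hook_cron_queue_info(): let cron work through the queue within a time budget.
 */
function mymodule_cron_queue_info() {
  return array(
    'mymodule_scrape' => array(
      'worker callback' => 'mymodule_scrape_article',
      'time' => 60,   // seconds per cron run spent on this queue
    ),
  );
}

function mymodule_scrape_article($url) {
  // fetch and parse a single article, then upsert it
}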
If queue processing during normal Drupal cron is an issue, you can process your queue independently with the help of modules like Waiting Queue or Beanstalkd integration.
Most likely the http overhead of fetching each article will vastly outweigh the overhead of doing the database operations. Just don't fetch too many articles in parallel and you should be fine. Most webmasters frown on scrapers, especially when they're doing 10, 20, 500+ parallel fetches.
So, you already have the URLs in your database. Add a status column to that table - scraped or not. The cron can kick off every so often, grabbing the next URL that has not been scraped and marking it as scraped.
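One possible shape for that cron pass, sketched with PDO (the table, columns and scrape_article() are assumptions): grab a small batch of unscraped URLs, fetch them one at a time, and flag each as done:

$stmt = $pdo->query("SELECT id, url FROM articles WHERE scraped = 0 ORDER BY id LIMIT 20");
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    scrape_article($row['url']);                        // hypothetical fetch-and-parse
    $pdo->prepare("UPDATE articles SET scraped = 1 WHERE id = ?")
        ->execute([$row['id']]);
}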
I have a PHP script that grabs data from an external service and saves it to my database. I need this script to run once every minute for every user in the system (of which I expect thousands). My question is: what's the most efficient way to run this per user, per minute? At first I thought I would have a function that grabs all the user IDs from my database, iterates over the IDs and performs the task for each one, but I suspect that as the number of users grows, this will take longer and no longer fit within one-minute intervals. Perhaps I should queue the user IDs and perform the task individually for each one? In that case, I'm actually unsure how to proceed.
Thanks in advance for any advice.
Edit
To answer Oddthinking's question:
I would like to start the processes for each user at the same time. When the process for each user completes, I want to wait 1 minute, then begin the process again. So I suppose each process for each user should be asynchronous - the process for user 1 shouldn't care about the process for user 2.
To answer sims' question:
I have no control over the external service, and the users of the external service are not the same as the users in my database. I'm afraid I don't know any other scripting languages, so I need to use PHP to do this.
Am I summarising correctly?
You want to do thousands of tasks per minute, but you are not sure if you can finish them all in time?
You need to decide what to do when you start running over your schedule.
Do you keep going until you finish, and then immediately start over?
Do you keep going until you finish, then wait one minute, and then start over?
Do you abort the process, wherever it got to, and then start over?
Do you slow down the frequency (e.g. from now on, just every 2 minutes)?
Do you have two processes running at the same time, and hope that the next run will be faster (this might work if you are clearing up a backlog the first time, so the second run will run quickly.)
The answers to these questions depend on the application. Cron might not be the right tool for you, depending on the answer. You might be better off having a permanently running process that schedules itself.
So, let me get this straight: You are querying an external service (what? SOAP? MYSQL?) every minute for every user in the database and storing the results in the same database. Is that correct?
It seems like a design problem.
If the users on the external service are the same as the users in your database, perhaps the two should be more closely integrated. I don't know if PHP is the way to go for syncing this data. If you give more detail, we could think about another solution. If you are in control of the external service, you may want to have that service dump its data or even write directly to the database. Some other syncing mechanism might be better.
EDIT
It seems that you are making an application that stores data for a user that can then be viewed chronologically. Otherwise you may as well just fetch the data when the user requests it.
Fetch all the user IDs in one go.
Iterate over them one by one (assuming the data being fetched is unique to each user) and (you'll have to be creative here, as PHP threads do not exist AFAIK) launch a separate process for each request, since you want them all to execute at the same time and not be delayed if one user does not return data.
Each process should insert the returned data into the DB as soon as it arrives.
As for cron being right for the job: As long as you have a powerful enough server that can handle thousands of the above cron jobs running simultaneously, you should be fine.
You could get creative with several PHP scripts. I'm not sure, but if every CLI call to PHP starts a new PHP process, then you could do it like this:
foreach ($users as $user) {
    // escape the argument and background the process so the loop doesn't block on each user
    shell_exec('php fetchdata.php ' . escapeshellarg($user) . ' > /dev/null 2>&1 &');
}
This is all very heavy and you should not expect to get it done snappy with PHP. Do some tests. Don't take my word for it.
Databases are made to process bulk sets of records at once. If you're processing them one by one, you're asking for trouble. You need to find a way to batch up your "every minute" task so that, by executing a SINGLE (complicated) query, all of the affected users' info is retrieved; then you do the PHP processing on the result; then, in another single query, you push the results back into the DB.
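A sketch of that batch shape (table and column names invented, fetch_external_data() hypothetical): one SELECT for every user due this minute, the PHP work on the whole result, then one multi-row INSERT instead of thousands of single-row writes:

$users = $pdo->query(
    "SELECT id, api_token FROM users WHERE next_run <= NOW()"
)->fetchAll(PDO::FETCH_ASSOC);

$rows = [];
foreach ($users as $u) {
    $data   = fetch_external_data($u['api_token']);     // the per-user external call
    $rows[] = sprintf('(%d, %s, NOW())', $u['id'], $pdo->quote(json_encode($data)));
}

if ($rows) {
    // one bulk write instead of one INSERT per user
    $pdo->exec('INSERT INTO user_data (user_id, payload, fetched_at) VALUES ' . implode(',', $rows));
}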
Based on your big-picture description it sounds like you have a dead-end design. If you are able to get it working right now, it'll most likely be very fragile and it won't scale at all.
I'm guessing that if you have no control over the external service, then that external service might not be happy about getting hammered by your script like this. Have you approached them with your general plan?
Do you really need to do all users every time? Is there any sort of timestamp you can use to be more selective about which users need "updates"? Perhaps if you could describe the goal a little better we might be able to give more specific advice.
Given your clarification of wanting to run the processing of users simultaneously...
The simplest solution that jumps to mind is to have one thread per user. On Windows, threads are significantly cheaper than processes.
However, whether you use threads or processes, having thousands running at the same time is almost certainly unworkable.
Instead, have a pool of threads. The size of the pool is determined by how many threads your machine can comfortably handle at a time. I would expect numbers like 30-150 to be about as far as you might want to go, but it depends very much on the hardware's capacity, and I might be out by another order of magnitude.
Each thread would grab the next user due to be processed from a shared queue, process it, and put it back at the end of the queue, perhaps with a date before which it shouldn't be processed.
(Depending on the amount and type of processing, this might be done on a separate box to the database, to ensure the database isn't overloaded by non-database-related processing.)
This solution ensures that you are always processing as many users as you can, without overloading the machine. As the number of users increases, they are processed less frequently, but always as quickly as the hardware will allow.
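Since stock PHP has no threads, the same pool idea is usually approximated with worker processes. A minimal sketch with pcntl_fork, where the pool size and the helpers fetch_next_due_user(), process_user() and reschedule_user() are all assumptions:

$poolSize = 30;                                   // tune to what the hardware can handle
$status   = 0;

for ($i = 0; $i < $poolSize; $i++) {
    $pid = pcntl_fork();
    if ($pid === 0) {                             // child: become a worker
        while (true) {
            $user = fetch_next_due_user();        // atomically claims one user row
            if ($user === null) {
                sleep(1);                         // queue empty, wait briefly
                continue;
            }
            process_user($user);                  // fetch external data, save to DB
            reschedule_user($user, time() + 60);  // not due again for another minute
        }
        exit(0);
    }
}

// parent: keep the pool alive; a real controller might re-fork dead workers here
while (pcntl_wait($status) > 0) {
}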