Migrate MySQL data, speed & efficiency - php

I had to change the blueprint of my web application to decrease loading time (http://stackoverflow.com/questions/5096127/best-way-to-scale-data-decrease-loading-time-make-my-webhost-happy).
This change of blueprint means the data of my application has to be migrated to the new structure (otherwise my app won't work). To migrate all my MySQL records (thousands of them), I wrote a PHP/MySQL script.
Opening this script in my browser doesn't work. I've set the script's time limit to 0 for unlimited execution time, but after a few minutes the script stops loading. A cron job is not really an option either: 1) strangely enough, it doesn't run, but the bigger problem is 2) I'm afraid it will consume too many resources on my shared server.
Do you know a fast and efficient way to migrate all my MySQL records, using this PHP/MySQL script?

You could try PHP's ignore_user_abort(). It's a little dangerous in that you need SOME way to end its execution, but it's possible your browser is aborting the request after the script takes too long.

I solved the problem!
Yes, it will take a lot of time, yes, it will cause an increase in server load, but it just needs to be done. I use the errorlog to check for errors while migrating.
How?
1) I added ignore_user_abort(true); and set_time_limit(0); to make sure the script keeps running on the server (it stops when the while() loop is completed).
2) Within the while() loop, I added some code so I can stop the migration script by creating a small text file called stop.txt:
if (file_exists(dirname(__FILE__)."/stop.txt")) {
    error_log('Migration Stopped By User ('.date("d-m-Y H:i:s",time()).')');
    break;
}
3) Migration errors and duplicates are logged into my errorlog:
error_log('Migration Fail => UID: '.$uid.' - '.$email.' ('.date("d-m-Y H:i:s",time()).')');
4) Once the migration is completed, I receive an email (sent with mail()) with the result of the migration, so I don't have to check it manually.
This might not be the best solution, but it's a good solution to work with!
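For reference, a minimal sketch of how those pieces fit together into one runner script; fetch_next_unmigrated_row(), migrate_record() and the recipient address are hypothetical placeholders for the actual migration logic.
<?php
// Sketch of the migration runner described above; the fetch/migrate helpers
// and the email address are hypothetical placeholders.
ignore_user_abort(true); // keep running even if the browser disconnects
set_time_limit(0);       // no execution time limit

$migrated = 0;
$failed   = 0;

while ($row = fetch_next_unmigrated_row()) {            // hypothetical helper
    // allow a manual stop by creating stop.txt next to this script
    if (file_exists(dirname(__FILE__)."/stop.txt")) {
        error_log('Migration Stopped By User ('.date("d-m-Y H:i:s").')');
        break;
    }

    if (migrate_record($row)) {                          // hypothetical helper
        $migrated++;
    } else {
        error_log('Migration Fail => UID: '.$row['uid'].' - '.$row['email'].' ('.date("d-m-Y H:i:s").')');
        $failed++;
    }
}

// report the result by email so it doesn't have to be checked manually
mail('you@example.com', 'Migration finished', "Migrated: $migrated, failed: $failed");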

Related

Prevent PHP script using up all resources while it runs?

I have a daily cron job which takes about 5 minutes to run (it does some data gathering and then various database updates). It works fine, but the problem is that, during those 5 minutes, the site is completely unresponsive to any requests, HTTP or otherwise.
It would appear that the cron job script takes up all the resources while it runs. I couldn't find anything in the PHP docs to help me out here - how can I make the script know to only use up, say, 50% of available resources? I'd much rather have it run for 10 minutes and have the site available to users during that time, than have it run for 5 minutes and have user complaints about downtime every single day.
I'm sure I could come up with a way to configure the server itself to make this happen, but I would much prefer if there was a built-in approach in PHP to resolving this issue. Is there?
Alternatively, as plan B, we could redirect all user requests to a static downtime page while the script is running (as opposed to what's happening now, which is the page loading indefinitely or eventually timing out).
A normal script can't hog 100% of the resources; resources get split over the processes. It can slow everything down intensely, but not lock all resources up (without doing some funky stuff). You could get a hint by running top -s in your command line and seeing which process takes up a lot.
That leads me to conclude that something is locking out all further processes. As Arkascha comments, there is a fair chance that your database gets locked. This answer explains which table type you should use; if you do not have it set to InnoDB, you probably want that, at least for the tables that get locked.
It could also be disk I/O if you write huge files; try to split the work into smaller reads/writes, or move some of the info (e.g. list files) into your database (assuming it has room to spare).
It could also be CPU. To fix that, you need to make your code more efficient. Recheck your code and see if you do any heavy operations, then try to make those smaller. Normally you want this as fast as possible; now you want it as lightweight as possible, which changes the way you write code.
If it still locks up, it's time to debug. Turn off a large part of your code and check if the locking still happens. Keep turning code back on until you notice the locking, then fix that. Try to figure out what is costing you so much. Only a few scripts require intense resources; it is now time to optimize. One option might be splitting the job into two (or more) steps: run one cron that prepares/sanitizes the data and another that processes it. They don't have to run synchronously; there can be a few minutes between them.
If that is not an option, benchmark your code and improve it as much as you can. If you have a heavy query, it might help to select only IDs in the heavy query and use a second query just to fetch the data. If you can, use your database to filter, sort and manage data; don't do that in PHP.
Something I have also implemented once is a sleep every N actions.
If your script really is that extreme, another solution could be moving it to a time when few or no visitors are on your site. Even if you remove the bottleneck, nobody likes a slow website.
And there is always the option of increasing your hardware.
You don't mention which resource is your bottleneck: CPU, memory or disk I/O.
However, if it is CPU or memory, you can do something like this in your script:
http://php.net/manual/en/function.sys-getloadavg.php
http://php.net/manual/en/function.memory-get-usage.php
$yourlimit = 100000000; // memory ceiling in bytes (~100 MB)
$load = sys_getloadavg();
if ($load[0] > 0.80 || memory_get_usage() > $yourlimit) {
    sleep(5); // back off while the 1-minute load average or memory usage is high
}
Another thing to try would be to set your process priority in your script.
Note that lowering the priority (a positive nice increment) does not require superuser rights; only raising it does, so this is fine for a cron job.
http://php.net/manual/en/function.proc-nice.php
proc_nice(19); // lowest priority (niceness ranges from -20 to 19)
I did a quick test of both and they work like a charm. Thanks for asking; I have a cron job like that as well and will implement it. It looks like proc_nice alone will do fine.
My test code:
proc_nice(19); // lowest priority (niceness ranges from -20 to 19)
$yourlimit = 100000000; // memory ceiling in bytes (~100 MB)
$x = 0;
while (1) {
    $x = $x + 1;
    $load = sys_getloadavg();
    if ($load[0] > 0.80 || memory_get_usage() > $yourlimit) {
        sleep(5); // back off while the machine is busy
    }
    echo $x."\n";
}
It really depends on your environment.
On a Unix base, there are built-in tools to limit the CPU/priority of a given process.
You can limit the server or PHP as a whole, which is probably not what you are looking for.
What you can do first is move your task into a separate process.
There is popen() for that, but I found it much easier to make the process a standalone executable script. Let's name it hugetask for the example.
#!/usr/bin/php
<?php
// Huge task here
Then to call from the command line (or cron):
nice -n 15 ./hugetask
This limits the scheduling: it lowers the priority of the task relative to others, and the system does the rest.
You can also call it from your PHP directly:
exec("nice -n 15 ./hugetask &");
Usage: nice [OPTION] [COMMAND [ARG]...] Run COMMAND with an adjusted
niceness, which affects process scheduling. With no COMMAND, print the
current niceness. Niceness values range from
-20 (most favorable to the process) to 19 (least favorable to the process).
To create a cpu limit, see the tool cpulimit which has more options.
That said, usually I just put some usleep() calls in my scripts to slow them down and avoid creating a funnel of data. This is fine if you are using loops in your script. If you slow your task down so it runs in, say, 30 minutes, there won't be many issues.
See also proc_nice http://php.net/manual/en/function.proc-nice.php
proc_nice() changes the priority of the current process by the amount
specified in increment. A positive increment will lower the priority
of the current process, whereas a negative increment will raise the
priority.
And sys_getloadavg() can also help. It returns an array with the system load over the last 1, 5 and 15 minutes.
It can be used as a test condition before launching the huge task.
Or to log the average to find the best time of day to launch the huge task. It can be surprising!
print_r(sys_getloadavg());
http://php.net/manual/en/function.sys-getloadavg.php
You could try to delay execution using sleep. Just cause your script to pause between several updates of your database.
sleep(60); // stop execution for 60 seconds
Although this depends a lot on the kind of processing your script does, it may or may not help in your case. It is worth a try, so you could:
Split your queries
Do the updates in steps with a sleep in between
References
Using sleep for cron process
I could not describe it better than the quote in the above answer:
Maybe you're walking the database of 9,000,000 book titles and updating about 10% of them. That process has to run in the middle of the day, but there are so many updates to be done that running your batch program drags the database server down to a crawl for other users.
So modify the batch process to submit, say, 1000 updates, then sleep for 5 seconds to give the database server a chance to finish processing any requests from other users that have backed up.
Sleep and server resources
sleep resources depend on OS
adding sleep to alleviate server resources
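As a rough illustration of the batch-then-sleep idea quoted above, here is a sketch; the DSN, credentials, table and column names are placeholders.
<?php
// Sketch of the "batch then sleep" pattern: update a limited number of rows,
// pause, repeat. Connection details and the query are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

while (true) {
    // update one batch of rows (placeholder query)
    $updated = $pdo->exec(
        'UPDATE book_titles SET needs_update = 0 WHERE needs_update = 1 LIMIT 1000');

    if ($updated === false || $updated === 0) {
        break; // nothing left to update (or the query failed)
    }

    sleep(5); // give the database a chance to serve other users between batches
}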
Probably, to minimize your memory usage, you should process heavy and lengthy operations in batches. If you query the database using an ORM like Doctrine, you can easily use its existing batch-processing functions:
http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/batch-processing.html
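For illustration, a rough sketch of the batch pattern from those docs (flush and clear every N entities to keep memory flat); the User entity, $entityManager and the data source are placeholders.
<?php
// Rough sketch of Doctrine's batch-processing pattern: flush and detach
// entities every $batchSize iterations so memory usage stays flat.
$batchSize = 100;
$i = 0;

foreach ($rowsToImport as $row) {       // placeholder data source
    $user = new User();                 // placeholder entity
    $user->setName($row['name']);
    $entityManager->persist($user);

    if ((++$i % $batchSize) === 0) {
        $entityManager->flush();        // execute the pending inserts
        $entityManager->clear();        // detach managed entities, free memory
    }
}
$entityManager->flush();                // flush the remaining entities
$entityManager->clear();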
It's hard to tell what exactly the issue may be without having a look at your code (cron script). But to confirm that the issue is caused by the cron job you can run the script manually and check website responsiveness. If you notice the site being down when running the cron job then we would have to have a look at your script in order to come up with a solution.
Many loops in your cron script might consume a lot of CPU resources.
To prevent that and reduce CPU usage simply put some delays in your script, for example:
while($long_time_condition) {
    //Do something here
    usleep(100000);
}
Basically, you are giving the processor some time to do something else.
Also you can use the proc_nice() function to change the process priority, for example proc_nice(19); // lowest priority. Look at this question.
If you want to find the bottlenecks in your code you can try the Xdebug profiler.
Just set it up in your dev environment, start the cron manually and then profile any page. You can also profile the cron script itself: php -d xdebug.profiler_enable=On script.php (look at this question).
If you suspect that the database is your bottleneck, then import a pretty large dataset (or the entire database) into your local database and repeat the steps, logging and inspecting all the queries.
Alternatively, if possible, set up Xdebug on a staging server that is as close as possible to production and profile the page during cron execution.

Cron job on a large database with PHP, suggestions needed

I've heard about cron jobs and don't think actually creating one will be that hard, but I have some concerns about how this will work with a large script.
Without going too much off-topic about my project, I will stick with the basics of my situation. I need to make a script that, every day, performs a cURL fetch of data from a remote website and updates a database entry for each featured member on my website. In short, the script currently needs to be executed approximately 1,000 times, and that number will grow as time goes by.
As you can guess, this will take a long time to perform, so I'm worried about how to run it without it crashing in the middle.
My first thought was to split the users into groups and run the job on a small number of users each time, but I don't know how manageable that is (I will read further on the topic once I get some confirmation on this).
So, to my question: do you think there is a way to make this happen, and do you have any suggestions on how to make it work efficiently? Any help is appreciated. Thank you for your time.
Bigger cron jobs with PHP and MySQL need to be fragmented, since there is no way for you to 'nice' them (reduce their OS priority) completely: even if you nice the script, the MySQL queries will be executed without this concern.
From what you're describing, there are two aspects to consider:
Congestion of network bandwidth
Congestion of database throughput
I'd recommend a fragmented solution where you call your script from cron more often and let each run execute only a small part of the total job. A run should also be canceled (postponed to the next run) if I/O bandwidth or CPU usage is above any limit that may affect response time for visitors.
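A rough sketch of one such fragmented run, assuming a hypothetical members table with a last_checked column and a hypothetical fetch_remote_data() cURL helper; each cron invocation handles only a small batch and bails out early if the load is too high.
<?php
// One fragmented cron run: process a small batch, skip if the box is busy.
// Table/column names, credentials and fetch_remote_data() are placeholders.
$load = sys_getloadavg();
if ($load[0] > 2.0) {
    exit; // postpone to the next cron run when the server is calmer
}

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// grab the members that were checked least recently
$rows = $pdo->query(
    'SELECT id, profile_url FROM members ORDER BY last_checked ASC LIMIT 50'
)->fetchAll(PDO::FETCH_ASSOC);

$update = $pdo->prepare(
    'UPDATE members SET data = :data, last_checked = NOW() WHERE id = :id');

foreach ($rows as $row) {
    $data = fetch_remote_data($row['profile_url']);   // hypothetical cURL helper
    $update->execute(array(':data' => $data, ':id' => $row['id']));
}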
One Way:
I'm usually against putting logic in the database, but in this case a stored procedure might help. It will run your job faster (since it's a large one), and you also want to lock the tables as you do it. That way, if the script that calls the stored procedure gets hit by cron before the original job has finished, it won't edit your database while the first one is still running.
I cannot give a straight answer on the actual time, but based on previous experience this will take longer than the max execution time.
So solve that problem. There's a reason you can have a different php.ini for the command line interface. Then you can simply focus on processing all users in one script.
I solved this problem by splitting the work into several smaller cron jobs. If you are using PHP you can point one cron job at domain/cronjob1.php and another at domain/cronjob2.php, limiting the database query to, let's say, 10 rows with
$sql = "SELECT * FROM table LIMIT 10";
for cron job 1, and handle the rest in cron job 2.

Running a PHP app on windows - daemon or cron?

I need some implementation advice. I have a MySQL DB that will be written to remotely with tasks to process locally, and I need my application, which is written in PHP, to execute these tasks immediately as they come in.
But of course my PHP app needs to be told when to run. I thought about using cron jobs, but my app is on a Windows machine. Secondly, I need to check constantly, every few seconds, and cron can only run every minute.
I thought of writing a PHP daemon, but I am getting confused about how it's going to work and whether it's even a good idea!
I would appreciate any advice on the best way to do this.
pyCron is a good CRON alternative for Windows:
Since this task is quite simple I would just set up pyCron to run the following script every minute:
set_time_limit(60); // one minute, same as CRON ;)
ignore_user_abort(false); // you might wanna set this to true
while (true)
{
    $jobs = getPendingJobs();
    if ((is_array($jobs) === true) && (count($jobs) > 0))
    {
        foreach ($jobs as $job)
        {
            if (executeJob($job) === true)
            {
                markCompleted($job);
            }
        }
    }
    sleep(1); // avoid eating unnecessary CPU cycles
}
This way, if the computer goes down, you'll have a worst case delay of 60 seconds.
You might also want to look into semaphores or some kind of locking strategy like using an APC variable or checking for the existence of a locking file to avoid race conditions, using APC for example:
set_time_limit(60); // one minute, same as CRON ;)
ignore_user_abort(false); // you might wanna set this to true
if (apc_exists('lock') === false) // not locked
{
    apc_add('lock', true, 60); // lock with a ttl of 60 secs, same as set_time_limit
    while (true)
    {
        $jobs = getPendingJobs();
        if ((is_array($jobs) === true) && (count($jobs) > 0))
        {
            foreach ($jobs as $job)
            {
                if (executeJob($job) === true)
                {
                    markCompleted($job);
                }
            }
        }
        sleep(1); // avoid eating unnecessary CPU cycles
    }
}
If you're sticking with the PHP daemon, do yourself a favor and drop that idea; use Gearman instead.
EDIT: I asked a related question once that might interest you: Anatomy of a Distributed System in PHP.
I'll suggest something out of the ordinary: you said you need to run the task at the point the data is written to MySQL. That implies MySQL "knows" something should be executed.
It sounds like a perfect scenario for MySQL's sys_exec UDF.
Basically, it would be nice if MySQL could invoke an external program once something happened to it.
If you use the mentioned UDF, you can execute a php script from within - let's say, INSERT or UPDATE trigger.
On the other hand, you can make it more resource-friendly and create MySQL Event (assuming you're using appropriate version) that would use sys_exec to invoke a PHP script that does certain updates at predefined intervals - that reduces the need for Cron or any similar program that can execute something at predefined intervals.
I would definitely not advise using cron jobs for this.
Cron jobs are a good thing, very useful and easy for many purposes, but as you describe your needs, I think they can produce more complications than they do good. Here are some things to consider:
What happens if jobs overlap, i.e. one takes longer than a minute to execute? Are there any shared resources/deadlocks/temp files? The most common method is to use a lock file and stop execution right at the start of the program if it is occupied, but the program then also has to look for further jobs right before it completes. This can also get complicated on Windows machines because, AFAIK, they don't support write locks out of the box.
Cron jobs are a pain in the ass to maintain. If you want to monitor them you have to implement additional logic, like a check on when the program last ran. This, however, can get difficult if your program should run only on demand. The best way would be some sort of "job completed" field in the database, or deleting rows that have been processed.
On most Unix-based systems cron jobs are pretty stable now, but there are a lot of situations where you can break your cron setup, most of them based on human error. For example, a sysadmin not exiting the crontab editor properly in edit mode can cause all cron jobs to be deleted. A lot of companies also have no proper monitoring system, for the reasons stated above, and only notice once their services experience problems. At that point, often nobody has written down or put under version control which cron jobs should run, and wild guessing and reconstruction work begins.
Cron job maintenance can be further complicated when external tools are used and the environment is not a native Unix system. Sysadmins have to gain knowledge of more programs, and those can have their own potential errors.
I honestly think a small script that you start from the console and leave open is just fine:
<?php
while (true) {
    $job = fetch_from_db();
    if (!$job) {
        sleep(10); // nothing to do yet, wait before polling again
    } else {
        $job->process();
    }
}
You can also touch a file (update its modification timestamp) in every loop iteration, and write a Nagios check that alerts when that timestamp gets out of date, so you know your job is still running.
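A tiny sketch of such a heartbeat; the file path and the five-minute threshold are placeholders.
<?php
// In the worker loop: refresh the heartbeat file's mtime on every iteration.
touch('/tmp/worker.heartbeat');

// In the monitoring check (e.g. called by Nagios): alert if the heartbeat is stale.
clearstatcache();
if (time() - filemtime('/tmp/worker.heartbeat') > 300) {
    echo "WARNING: worker heartbeat is older than 5 minutes\n";
    exit(1);
}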
If you want it to start up with the system, I recommend a daemon.
PS: In the company where I work there is a lot of background activity for our website (crawling, update processes, calculations, etc.) and the cron jobs were a real mess when I started there. They were spread over different servers responsible for different tasks. Databases were accessed wildly across the internet. A ton of NFS filesystems, Samba shares, etc. were in place to share resources. The place was full of single points of failure and bottlenecks, and something constantly broke. There were so many technologies involved that it was very difficult to maintain, and when something didn't work it took hours to track down the problem and another hour to figure out what that part was even supposed to do.
Now we have one unified update program that is responsible for literally everything. It runs on several servers, and they have a config file that defines the jobs to run. Everything gets dispatched from one parent process running an infinite loop. It is easy to monitor, customize and synchronize, and everything runs smoothly. It is redundant, it is synchronized, and the granularity is fine, so it runs in parallel and we can scale up to as many servers as we like.
I really suggest sitting down for long enough to think about everything as a whole and get a picture of the complete system, then investing the time and effort to implement a solution that will serve you well in the future and doesn't spread tons of different programs throughout your system.
PPS: I read a lot about the minimum interval of 1/5 minutes for cron jobs/scheduled tasks. You can easily work around that with a wrapper script that subdivides the interval:
// run every 5 minutes = 300 secs
// desired interval: 30 secs
$runs = 300 / 30; // the parent interval needs to be a multiple of the desired interval
for ($i = 0; $i < $runs; $i++) {
    $start = time();
    system('myscript.php');
    // compensate for the time the script needed to run; you still need some logic
    // for cases where the script takes longer than your interval (technique and
    // problem described above)
    sleep(30 - (time() - $start));
}
This looks like a job for a job server ;) Have a look at Gearman. The additional benefit of this approach is that it is triggered by the remote side when, and only when, there is something to do, instead of polling. Especially at intervals smaller than (let's say) 5 minutes, polling is not very effective any more, depending on the tasks the job performs.
The quick and dirty way is to create a loop that continuously checks if there is new work.
Pseudo-code
set_ini("max_execution_time", "3600000000");
$keeplooping = true;
while($keeplooping){
if(check_for_work()){
process_work();
}
else{
sleep(5);
}
// some way to change $keeplooping to false
// you don't want to just kill the process, because it might still be doing something
}
Have you tried the Windows Task Scheduler (it comes with Windows by default)? You will need to provide the path to the PHP executable and the path to your PHP file. It works well.
Can't you just write a Java/C++ program that queries for you at a set time interval? You can include it in the list of startup programs so it's always running as well. Once a task is found, it can even handle it on a separate thread, process more requests, and mark others complete.
The simplest way is to use the built-in Windows Task Scheduler.
Run your script with the PHP CLI executable, with a php.ini that has the extensions your script needs enabled.
But I should say that in practice you don't need such a short interval to run your scheduled jobs. Just run some tests to find the best interval for yours; setting an interval of less than a minute is not recommended.
And another little piece of advice: create a lock file at the beginning of your script (PHP's flock() function) and check whether the lock can be acquired, to prevent two or more copies from running at the same time; at the end of your script, unlock and unlink it.
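A rough sketch of that lock-file guard; the lock path is a placeholder.
<?php
// Prevent two copies of the script from running at once; the lock path is a placeholder.
$lockFile = fopen(__DIR__ . '/worker.lock', 'c');
if ($lockFile === false || !flock($lockFile, LOCK_EX | LOCK_NB)) {
    exit("Another copy is already running\n");
}

// ... do the actual work here ...

// release the lock and remove the file when done
flock($lockFile, LOCK_UN);
fclose($lockFile);
unlink(__DIR__ . '/worker.lock');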
If you have to write the result to the DB, try to use MySQL triggers instead of PHP, or use MySQL events.

Why is it so bad to run a PHP script continuously?

I have a map. On this map I want to show live data collected from several tables, some of which have astounding amounts of rows. Needless to say, fetching this information takes a long time. Also, pinging is involved. Depending on servers being offline or far away, the collection of this data could vary from 1 to 10 minutes.
I want the map to be snappy and responsive, so I've decided to add a new table to my database containing only the data the map needs. That means I need a background process to update the information in my new table continuously. Cron jobs are of course a possibility, but I want the refreshing of data to happen as soon as the previous interval has completed. And what if the number of offline IP addresses suddenly spikes and the loop takes longer to run than the interval of the cron job?
My own solution is to create an infinite loop in PHP that runs by the command line. This loop would refresh the data for the map into MySQL as well as record other useful data such as loop time and failed attempts at pings etc, then restart after a short pause (a few seconds).
However - I'm being repeatedly told by people that a PHP script running for ever is BAD. After a while it will hog gigabytes of RAM (and other terrible things)
Partly I'm writing this question to confirm if this is in fact the case, but some tips and tricks on how I would go about writing a clean loop that doesn't leak memory (If that is possible) wouldn't go amiss. Opinions on the matter would also be appreciated.
The reply I feel sheds the most light on the issue I will mark as correct.
The loop should be in one script which activates/calls the actual worker script as a different process, much like cron does.
That way, even if memory leaks and uncollected memory accumulates, it will/should be freed after each cycle.
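A minimal sketch of that supervisor pattern; worker.php is a hypothetical name for the script that actually refreshes the map table.
<?php
// Supervisor loop: run the worker as a separate process each cycle so any
// leaked memory is released when the child exits. worker.php is a placeholder.
while (true) {
    passthru('php ' . __DIR__ . '/worker.php', $exitCode);
    if ($exitCode !== 0) {
        error_log('worker.php exited with code ' . $exitCode);
    }

    // short pause before the next refresh cycle
    sleep(5);
}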
However - I'm being repeatedly told by people that a PHP script running for ever is BAD. After a while it will hog gigabytes of RAM (and other terrible things)
This used to be very true. Previous versions of PHP had horrible garbage collection, so long-running scripts could easily accidentally consume far more memory than they were actually using. PHP 5.3 introduced a new garbage collector that can understand and clean up circular references, the number one cause of "memory leaks." It's enabled by default. Check out that link for more info and pretty graphs.
As long as your code takes steps to allow variables to go out of scope at proper times and otherwise unset variables that will no longer be used, your script should not consume unnecessary amounts of memory just because it's PHP.
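For illustration, a sketch of that kind of housekeeping in a long-running loop; load_chunk(), process_chunk() and save_result() are hypothetical placeholders.
<?php
// Sketch: keep memory flat in a long-running loop by releasing large
// structures explicitly; the three helpers are placeholders.
while ($chunk = load_chunk()) {
    $result = process_chunk($chunk);
    save_result($result);

    unset($chunk, $result);   // let the data be freed before the next iteration
    gc_collect_cycles();      // optional: force collection of circular references (PHP >= 5.3)
}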
I don't think it's bad; as with anything that you want to run continuously, you just have to be more careful.
There are libraries out there to help you with the task. Have a look at System_Daemon, which released RC 1 just over a month ago and allows you to "Set options like max RAM usage".
Rather than running an infinite loop I'd be tempted to go with the cron option you mention in conjunction with a database table entry or flat-file that you'd use to store a "currently active" status bit to ensure that you didn't have overlapping processes attempting to run at the same time.
Whilst I realise that this would mean a minor delay before you perform the next iteration, this is probably a better idea anyway as:
It'll let the RDBMS perform any pending low-priority updates, etc. that may well have been on hold due to the amount of activity that you've been carrying out.
Even if you neatly unset all the temporary variables you've been using, it's still possible that PHP will "leak" memory, although recent improvements (5.2 introduced a new memory management system and garbage collection was overhauled in 5.3) should hopefully mean that this is less of an issue.
In general, it'll also be easier to deal with other issues (if the DB connection temporarily goes down due to a config change and restart for example) if you use the cron approach, although in an ideal world you'd cater for such eventualities in your code anyway. (That said, the last time I checked, this was far from an ideal world.)
First, I fail to see why you need a daemon script in order to provide the functionality you describe.
Cron jobs are of course a possibility, but I want the refreshing of data to happen as soon as the previous interval has completed
Then neither a cron job nor a daemon is the way to solve the problem (unless the daemon becomes the data sink for the scripts). I'd spawn a dissociated process when the data is available, using a locking strategy to avoid concurrency issues.
Long-running PHP scripts are not intrinsically bad, but the reference-counting garbage collector does not deal with all possible scenarios for cleaning up memory; more recent implementations have a more advanced collector (a circular-reference checker) which should clean up a lot more.

crawling scraping and threading? with php

I have a personal web site that crawls and collects MP3s from my favorite music blogs for later listening...
The way it works is a cron job runs a .php script once every minute that crawls the next blog in the DB. The results are put into the DB, and then a second .php script crawls the collected links.
The scripts only crawl two levels down into the page, so: the main page www.url.com and the links on that page, www.url.com/post1, www.url.com/post2.
My problem is that as I start to get a larger collection of blogs, they are only scanned once every 20 to 30 minutes, and when I add a new blog to the script there is a backlog in scanning the links, as only one is processed every minute.
Due to how PHP works, it seems I cannot just allow the scripts to process more than one, or more than a limited number of, links, because of script execution times, memory limits, timeouts, etc.
Also, I cannot run multiple instances of the same script, as they will overwrite each other in the DB.
What is the best way I could speed this process up?
Is there a way I can have multiple scripts affecting the DB but write them so they do not overwrite each other but queue the results?
Is there some way to create threading in PHP so that a script can process links at its own pace?
Any ideas?
Thanks.
USE CURL MULTI!
curl_multi will let you process the pages in parallel.
http://us3.php.net/curl
Most of the time you are waiting on the websites; doing the DB insertions and HTML parsing is orders of magnitude faster.
You create a list of the blogs you want to scrape, send them out to curl_multi, wait, and then serially process the results of all the calls. You can then do a second pass on the next level down.
http://www.developertutorials.com/blog/php/parallel-web-scraping-in-php-curl-multi-functions-375/
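For example, a rough sketch of fetching a batch of blog URLs in parallel with curl_multi; the URL list and the parse_and_store() helper are placeholders.
<?php
// Fetch a batch of pages in parallel with curl_multi; $urls is a placeholder list.
$urls = array('http://blog-one.example/', 'http://blog-two.example/');

$multi = curl_multi_init();
$handles = array();

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    curl_multi_add_handle($multi, $ch);
    $handles[$url] = $ch;
}

// run all transfers until they are finished
$running = null;
do {
    curl_multi_exec($multi, $running);
    if (curl_multi_select($multi) === -1) {
        usleep(100000); // avoid busy-looping if select fails
    }
} while ($running > 0);

// now process the results serially (parse HTML, insert links into the DB, ...)
foreach ($handles as $url => $ch) {
    $html = curl_multi_getcontent($ch);
    // parse_and_store($url, $html);   // hypothetical helper
    curl_multi_remove_handle($multi, $ch);
    curl_close($ch);
}
curl_multi_close($multi);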
pseudo code for running parallel scanners:
start_a_scan(){
//Start mysql transaction (needs InnoDB afaik)
BEGIN
//Get first entry that has timed out and is not being scanned by someone
//(And acquire an exclusive lock on affected rows)
$row = SELECT * FROM scan_targets WHERE being_scanned = false AND \
(scanned_at + 60) < (NOW()+0) ORDER BY scanned_at ASC \
LIMIT 1 FOR UPDATE
//let everyone know we're scanning this one, so they'll keep out
UPDATE scan_targets SET being_scanned = true WHERE id = $row['id']
//Commit transaction
COMMIT
//scan
scan_target($row['url'])
//update entry state to allow it to be scanned in the future again
UPDATE scan_targets SET being_scanned = false, \
scanned_at = NOW() WHERE id = $row['id']
}
You'd probably need a 'cleaner' that checks periodically whether there are any aborted scans hanging around too, and resets their state so they can be scanned again.
And then you can have several scan processes running in parallel! Yey!
cheers!
EDIT: I forgot that you need to make the first SELECT with FOR UPDATE. Read more here
This surely isn't the answer to your question, but if you're willing to learn Python I recommend you look at Scrapy, an open-source web crawler/scraper framework which should fill your needs. Again, it's not PHP but Python. It is, however, very distributable, etc. I use it myself.
Due to how PHP works it seems I cannot just allow the scripts to process more than one or a limited amount of links due to script execution times. Memory limits. Timeouts etc.
Memory limit is only a problem if your code leaks memory. You should fix that, rather than raising the memory limit. Script execution time is a security measure, which you can simply disable for your CLI scripts.
Also I cannot run multiple instances of the same script as they will overwrite each other in the DB.
You can construct your application in such a way that instances don't overwrite each other. A typical way to do it would be to partition per site; e.g. start a separate script for each site you want to crawl, as in the sketch below.
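A minimal sketch of that partitioning; crawl_site.php, the sites table and the credentials are placeholders, and the background launch assumes a Unix-like shell.
<?php
// Launcher: start one background crawler per site so instances never touch
// the same rows. crawl_site.php and the sites table are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=crawler', 'user', 'pass');
$siteIds = $pdo->query('SELECT id FROM sites')->fetchAll(PDO::FETCH_COLUMN);

foreach ($siteIds as $id) {
    // each worker only reads/writes rows belonging to its own site id
    exec('php crawl_site.php ' . (int)$id . ' > /dev/null 2>&1 &');
}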
CLI scripts are not limited by max execution times. Memory limits are not normally a problem unless you have large sets of data in memory at any one time. Timeouts should be handled gracefully by your application.
It should be possible to change your code so that you can run several instances at once - you would have to post the script for anyone to advise further though. As Peter says, you probably need to look at the design. Providing the code in a pastebin will help us to help you :)
