PHPList parallel processing - php

I've been using phpList and I'm trying to get parallel processing to work, but it seems that even with batch processing, mail queue throttling, and parallel processing all set, only one campaign runs, even after the batch limit is reached. Here's the mail processing section of my config.php file below. Please note I used the config_extended file to further customise this. I also have a cron job that runs every 5 minutes to check and process the queue.
/* =========================================================================
Queue and Load management
========================================================================= */
// If you set up your system to send the message automatically (from commandline),
// you can set this value to 0, so "Process Queue" will disappear from the site
// this will also stop users from loading the page on the web frontend, so you will
// have to make sure that you run the queue from the commandline
// check README.commandline how to do this
define('MANUALLY_PROCESS_QUEUE', 1);
// This setting will activate an initial setup choice for processing the queue
// When "true" it will allow a choice between remote queue processing with the
// phpList.com service or processing it locally in the browser.
// when the value is "false", you can use remote processing in your own way
// as explained at https://resources.phplist.com/system/remote_processing
define('SHOW_PQCHOICE',false);
// batch processing
// if you are on a shared host, it will probably be appreciated if you don't send
// out loads of emails in one go. To do this, you can configure batch processing.
// Please note, the following two values can be overridden by your ISP by using
// a server wide configuration. So if you notice these values to be different
// in reality, that may be the case
// max messages to process
// if there are multiple messages in the queue, set a maximum to work on
define('MAX_PROCESS_MESSAGE', 999);
// process parallel
// if there are multiple messages in the queue, divide the max batch across them
// instead of sending them one by one.
// this only works if you use batch processing. It will divide the batch between the
// campaigns that need sending.
define('PROCESSCAMPAIGNS_PARALLEL',true);
// define the amount of emails you want to send per period. If 0, batch processing
// is disabled and messages are sent out as fast as possible
define('MAILQUEUE_BATCH_SIZE', 360);
// define the length of one batch processing period, in seconds (3600 is an hour)
define('MAILQUEUE_BATCH_PERIOD', 3600);
// to avoid overloading the server that sends your email, you can add a little delay
// between messages that will spread the load of sending
// you will need to find a good value for your own server
// value is in seconds, and you can use fractions, eg "0.5" is half a second
// (or you can play with the autothrottle below)
define('MAILQUEUE_THROTTLE', 10);
// Mailqueue autothrottle. This will try to automatically change the delay
// between messages to make sure that the MAILQUEUE_BATCH_SIZE (above) is spread evenly over
// MAILQUEUE_BATCH_PERIOD, instead of firing the Batch in the first few minutes of the period
// and then waiting for the next period. This only works with mailqueue_throttle off
// and MAILQUEUE_BATCH_PERIOD being a positive value
// it still needs tweaking, so send your feedback to mantis.phplist.com if you find
// any issues with it
define('MAILQUEUE_AUTOTHROTTLE', 0);
// Domain Throttling
// You can activate domain throttling, by setting USE_DOMAIN_THROTTLE to 1
// define the maximum amount of emails you want to allow sending to any domain and the number
// of seconds for that amount. This will make sure you don't send too many emails to one domain
// which may cause blacklisting. Particularly the big ones are tricky about this.
// it may cause a dramatic increase in the amount of time to send a message, depending on how
// many users you have that have the same domain (eg hotmail.com)
// if too many failures for throttling occur, the send process will automatically add an extra
// delay to try to improve that. The example sends 1 message every 2 minutes.
define('USE_DOMAIN_THROTTLE', 0);
define('DOMAIN_BATCH_SIZE', 1);
define('DOMAIN_BATCH_PERIOD', 120);
// if you have very large numbers of users on the same domains, this may result in the need
// to run processqueue many times, when you use domain throttling. You can also tell phplist
// to simply delay a bit between messages to increase the number of messages sent per queue run
// if you want to use that set this to 1, otherwise simply run the queue many times. A cron
// process every 10 or 15 minutes is recommended.
define('DOMAIN_AUTO_THROTTLE', 0);
// MAX_PROCESSQUEUE_TIME
// to limit the time, regardless of batch processing or other throttling of a single run of "processqueue"
// you can set the MAX_PROCESSQUEUE_TIME in seconds
// if a single queue run exceeds this amount, it will stop, just to pick up from where it left off next time
// this allows multiple installations each to run the queue, but slow installations (eg with large emails)
// set to 0 to disable this feature.
define('MAX_PROCESSQUEUE_TIME', 0);
My goal is to have 2 campaigns sending out mail. I know it can't send both campaigns at once unless I have 2 installs, but even after sending the max batch it doesn't switch to the second campaign. Does anyone have experience with this?
Thanks

After much tinkering and trying to figure out why emails were being sent out so slowly, it came down to a bandwidth problem. I thought it was something to do with phpList and the scripts, but after checking with our hosting provider I can confirm they have been having bandwidth issues. So it's all fixed and working now.

Related

How do I observe a rate limit in PHP?

So, I'm requesting data from an API.
So far, my API key is limited to:
10 requests every 10 seconds
500 requests every 10 minutes
Basically, I want to request a specific value from every game the user has played.
That is, for example, about 300 games.
So I have to make 300 requests from my PHP script. How can I slow them down to observe the rate limit?
(It can take time, site does not have to be fast)
I tried sleep(), which resulted in my script crashing. Are there any other ways to do this?
I suggest setting up a cron job that executes every minute, or better yet using Laravel scheduling, rather than using sleep or usleep to imitate a cron.
Here is some information on both:
https://laravel.com/docs/5.1/scheduling
http://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
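For illustration, here is a minimal sketch of the Laravel scheduling route, assuming Laravel 5.x and a hypothetical artisan command (api:fetch-batch) that you would write to process the next slice of API requests:

<?php
// app/Console/Kernel.php -- a minimal sketch, assuming Laravel 5.x.
namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        // One batch per minute stays comfortably under the 500-requests-per-
        // 10-minutes cap, as long as each batch makes fewer than 50 requests.
        $schedule->command('api:fetch-batch')->everyMinute();
    }
}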
This sounds like a perfect use for the set_time_limit() function. This function allows you to specify how long your script can execute, in seconds. For example, if you say set_time_limit(45); at the beginning of your script, then the script will be allowed to run for at most 45 seconds. One great feature of this function is that you can allow your script to execute indefinitely (no time limit) by saying set_time_limit(0);.
You may want to write your script using the following general structure:
<?php
// Ignore user aborts and allow the script
// to run forever
ignore_user_abort(true);
set_time_limit(0);

// Define a constant for how much time must pass between batches of connections:
define('TIME_LIMIT', 10); // Seconds between batches of API requests

$tLast = 0;
$requestsRemaining = true; // hypothetical flag: set to false once every API call has been made
while ($requestsRemaining) {
    if (time() >= $tLast + TIME_LIMIT) {
        // TIME_LIMIT seconds have passed since the last batch of connections
        /* Use cURL multi to make 10 asynchronous connections to the API */
        // Once all of those connections are made and processed, save the current time:
        $tLast = time();
    } else {
        // TIME_LIMIT seconds have not yet passed
        // Calculate the number of seconds remaining until TIME_LIMIT seconds have passed:
        $timeDifference = $tLast + TIME_LIMIT - time();
        sleep($timeDifference); // Sleep for the calculated number of seconds
    }
} // END WHILE-LOOP

/* Do any additional processing, computing, and output */
?>
Note: In this code snippet, I am also using the ignore_user_abort() function. As noted in the comment on the code, this function just allows the script to ignore a user abort, so if the user closes the browser (or connection) while your script is still executing, the script will continue retrieving and processing the data from the API anyway. You may want to disable that in your implementation, but I will leave that up to you.
Obviously this code is very incomplete, but it should give you a decent understanding of how you could possibly implement a solution for this problem.
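To make the cURL-multi placeholder concrete, here is a minimal sketch of one batch of parallel requests; fetchBatch() is a hypothetical helper, the URLs come from elsewhere, and error handling is omitted:

<?php
// Fire all the given URLs in parallel and collect the response bodies.
function fetchBatch(array $urls)
{
    $multi   = curl_multi_init();
    $handles = [];
    foreach ($urls as $key => $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($multi, $ch);
        $handles[$key] = $ch;
    }
    // Run all handles until every transfer has finished.
    do {
        $status = curl_multi_exec($multi, $active);
        if ($active) {
            curl_multi_select($multi); // wait for activity instead of busy-looping
        }
    } while ($active && $status == CURLM_OK);

    $responses = [];
    foreach ($handles as $key => $ch) {
        $responses[$key] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($multi, $ch);
        curl_close($ch);
    }
    curl_multi_close($multi);
    return $responses;
}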
Don't slow the individual requests down.
Instead, you'd typically use something like Redis to keep track of requests per-IP or per-user. Once the limit is hit for a time period, reject (with an HTTP 429 status code, perhaps) until the count resets.
http://redis.io/commands/INCR coupled with http://redis.io/commands/expire would easily do the trick.
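For example, a minimal sketch of that INCR/EXPIRE pattern, assuming the phpredis extension (the key name and limits are illustrative):

<?php
// Allow at most $limit requests per $window seconds for the given key.
function allowRequest(Redis $redis, $key, $limit, $window)
{
    $count = $redis->incr($key);       // atomically bump the counter
    if ($count === 1) {
        $redis->expire($key, $window); // first hit starts the window
    }
    return $count <= $limit;
}

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
if (!allowRequest($redis, 'api:user42', 10, 10)) { // 10 requests / 10 seconds
    http_response_code(429); // Too Many Requests
    exit;
}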

Deleting Running Jobs from Beanstalkd Queue in laravel

Here is the scenario: I'm using a Beanstalkd queue to send an email to a huge list of addresses (50,000+). Each email has to have some unique content, so the fired job loops over all the addresses, generates the content, and sends the mail.
Sometimes the user may want to cancel the operation in the middle of sending. For example, while the job is running and after the mail has been sent to, say, 20,000 addresses, the user clicks "Stop", which should delete the job.
What I have done so far: I managed to get the running job instance. Queue::push returns the job ID, so I save this ID in the DB, and when I want to stop the job, this is what I tried:
$phean = Queue::getPheanstalk();
$res = $phean->peek($Job_ID); // returns a Pheanstalk_Job
$job = new \Illuminate\Queue\Jobs\BeanstalkdJob(app(), $phean, $res, 'default');
$res = $job->delete(); // returns NOT_FOUND ??
$data = $job->getRawBody(); // returns correct data, so I'm sure this is the right job instance
so why do I get NOT_FOUND, although when I use
supervisorctl tail -f queuename
I can see that the job is still running and outputting content
Any help? If there is a better approach than trying to get the job and delete it this way, I'm open to suggestions. I thought about saving the job ID in the database as (ID, status); when I want to stop the job I change its status, and the loop running inside the job checks it on every iteration (or maybe every 10), and if the status equals 1, for example, it calls $job->delete(). But this would be slow, as it hits the DB on every loop.
So you have a main job, which you reserve() and hold open, and in that job you create many emails directly.
Since the job you are trying to delete is currently reserved, you can't delete it. Even if you could, how would the currently running job be informed by Beanstalkd?
Instead, I would have the main loop check for any jobs on a separate control tube (you could do a quick check every, say, 10 or 100 emails sent) - just requesting a new job, but not waiting if there isn't anything there. If there is a job there, then the main process cleans up and exits.
Another idea is not to actually send email in the main loop, but instead put the details of what emails to send, one per address, into the queue. Other processes read that mass-queue and start sending emails, but again, also read a control tube (with a higher priority message that would be returned ahead of the lower-priority email/details message). If there's anything in the control tube, stop sending emails. You would need at least as many STOP messages in the control tube as you have workers.
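A minimal sketch of the control-tube polling from the first suggestion, reusing the asker's Queue::getPheanstalk() handle; the tube name, check interval, and sendEmail() helper are illustrative, and the exact reserve() signature varies between Pheanstalk versions, so check yours:

<?php
// Inside the main job's send loop -- a sketch, not drop-in code.
$pheanstalk = Queue::getPheanstalk();

foreach ($addresses as $i => $address) {
    sendEmail($address); // hypothetical per-address sender

    if ($i % 100 === 0) { // poll the control tube every 100 emails
        // reserve with a 0-second timeout: returns a job or false, never blocks
        $stop = $pheanstalk->watchOnly('control')->reserve(0);
        if ($stop !== false) {
            $pheanstalk->delete($stop); // consume the STOP message
            break;                      // clean up and exit the main job
        }
    }
}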

Endless loop in cron job php

I have to send bulk emails to the users. I'm thinking of having an endless loop in a cron job, where I fetch a few dozen or a few hundred users and send emails one by one, updating the table as each email is sent. I should also sleep for some interval once each packet of a dozen (or a hundred) users has received the email. Basically it looks like this:
while (true) {
    $notifications = fetchUnsentNotifications(); // hypothetical: rows where email not yet sent
    foreach ($notifications as $notification) {
        // 1) send email
        // 2) update table - email was sent
    }
    sleep(5);
}
Now, is this all right to use, or is it considered bad practice?
I know I can also use multiple crons, let's say every minute, but to prevent overlapping with a lock file: as soon as a cron starts and the lock file exists (so another cron is still running), it should either
a) wait for some time for the first cron to finish before starting,
or
b) just return empty, letting the next cron do the job as soon as the ongoing one is done.
The problem with a) is: what if the crons take a lot more time than expected? After a while I will have a bunch of crons in a "waiting" state. In case b), what if the first cron ends immediately after the second cron returns empty? Then I have a gap of roughly one minute, and I need to send emails to users as soon as possible.
Also, question 2: which performs better, one cron in a loop or multiple crons?
Thanks
What you are describing is a daemon, not a cron task.
There are lots of daemons that run continuously, so no, it's not a bad practice to do that.
If you want the daemon automatically restarted if it crashes, you could have a watchdog task, which continuously checks that the daemon is running, and starts a daemon process if one isn't running.
Another alternative (as you describe) is to have a cron task that occasionally attempts to start the daemon; the startup should detect whether the daemon process is already running. If it's already running, leave it be and just exit. If it's not running, start another one (in the background, as a detached process). Either way, the cron task completes quickly.
(And it doesn't matter one whit whether the daemon connects to MySQL.)
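A minimal sketch of that cron-started guard, assuming the posix extension is available; the pidfile path and loop body are illustrative, and proper detaching into the background is left out for brevity:

<?php
// Run this from cron every few minutes; it exits at once if the daemon
// is already alive, otherwise it becomes the daemon.
$pidFile = '/var/run/mail-daemon.pid'; // illustrative path

if (file_exists($pidFile)) {
    $pid = (int) file_get_contents($pidFile);
    if ($pid > 0 && posix_kill($pid, 0)) { // signal 0 just tests existence
        exit; // daemon already running, leave it be
    }
}
file_put_contents($pidFile, getmypid());

while (true) {
    // ... fetch unsent notifications, send, mark as sent ...
    sleep(5);
}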
Personally, I dislike endless loops. I prefer a cron job running every 5 minutes, for example.
And you can optimize your script to send the maximum number of emails within the cron job's time window.
You need to estimate how many emails you will send per minute. I'll assume 1 email per second.
So my idea is:
Query for 290 notifications [leaving a 10-second margin to fetch and update notifications] and mark them with a "sending" status (so the next cron run doesn't pick them up).
Send emails and save result in array (for later update).
When finished, update notifications status (sent or error).
Just my 2 cents.
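A minimal sketch of that claim-then-send flow, assuming PDO with MySQL; the table, columns, and sendEmail() helper are illustrative:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Claim a batch up front so an overlapping cron run won't pick the same rows.
$batchId = uniqid('batch', true);
$pdo->prepare("UPDATE notifications SET status = 'sending', batch_id = ?
               WHERE status = 'pending' LIMIT 290")
    ->execute([$batchId]);

$rows = $pdo->prepare("SELECT * FROM notifications WHERE batch_id = ?");
$rows->execute([$batchId]);

// Send, collecting results for a single update pass at the end.
$results = [];
foreach ($rows as $row) {
    $ok = sendEmail($row['address'], $row['body']); // hypothetical sender
    $results[$row['id']] = $ok ? 'sent' : 'error';
}
foreach ($results as $id => $status) {
    $pdo->prepare("UPDATE notifications SET status = ? WHERE id = ?")
        ->execute([$status, $id]);
}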

Throttling PHPMailer for use in Elgg

I'll be using the social networking software Elgg for an organization that needs to send mass emails to specific groups. The number of emails can range from 10 to 1000 depending on the group. My web host only allows 500 emails per hour, so I need to throttle the script to send one email every 8 seconds.
I'm using PHPMailer with Elgg. PHPMailer says that I should use these two scripts (code below) in conjunction with each other to throttle the mailing. I know how I'm going to use the code in the mailing script; I'm just unsure about a couple of things.
1) I don't really understand the purpose of the safe mode check.
2) After looking up set_time_limit(), it looks like I should set this to an amount of time that allows all potential emails to be sent, whether it's 10 or 1000? Or is it a max of 30 seconds per loop iteration in case it needs to time out?
3) How should I set this to get what I need?
Links to PHPmailer describing code:
http://phpmailer.worxware.com/index.php?pg=tip_ext
http://phpmailer.worxware.com/index.php?pg=tip_pause
<?php
/* The following code snippet will set the maximum execution time
 * of your script to 300 seconds (5 minutes)
 * Note: set_time_limit() does not work with safe_mode enabled
 */
$safeMode = ( @ini_get("safe_mode") == 'On' || @ini_get("safe_mode") === 1 ) ? TRUE : FALSE;
if ($safeMode === FALSE) {
    set_time_limit(300); // Sets maximum execution time to 5 minutes (300 seconds)
    // ini_set("max_execution_time", "300"); // this does the same as set_time_limit(300)
}
echo "max_execution_time " . ini_get('max_execution_time') . "<br>";
/* if you are using a loop to execute your mailing list (for example, from a database),
 * put the command in the loop
 */
while (1 == 1) {
    set_time_limit(30); // sets (or resets) maximum execution time to 30 seconds
    // .... put code to process in here
    if (1 != 1) {
        break;
    }
}
?>
and
<?php
/* Note: set_time_limit() does not work with safe_mode enabled */
while (1 == 1) {
    set_time_limit(30); // sets (or resets) maximum execution time to 30 seconds
    // .... put code to process in here
    usleep(1000000); // sleep for 1 million microseconds - will not work with Windows servers / PHP4
    // sleep(1); // sleep for 1 second (use with Windows servers / PHP4)
    if (1 != 1) {
        break;
    }
}
?>
Safe mode is deprecated as of PHP 5.3 and removed in PHP 5.4, so if your install is relatively recent, it's a moot point: http://php.net/manual/en/ini.sect.safe-mode.php#ini.safe-mode
Doing a set_time_limit() will reset the counter, so as long as your code reaches the set_time_limit() call in less time than the limit was set previously (e.g. gets there in 29 seconds, leaving 1 second on clock), the code will reset the timer and get another 30 seconds. However, since you don't want your code to be racy, you should simply disable the time limit entirely.
Personally, I wouldn't dump out one email every 8 seconds. I'd blast out the 500 we're allowed, then have a scheduled job fire up the script once an hour to resume from where the blast left off. This will make things a bit bursty for the mail server, but potentially more efficient in the long run, as it can batch together emails for the same recipient domains. E.g. all @aol.com mails in the group of 500 can go together, rather than forcing the server to connect to AOL multiple times to deliver individual mails.
As well, if you're batching like this, a server failure will only be 'bad' during the few seconds when the script's actually running and building emails. The rest of the time the PHP script won't even be running, and it'll be up to the smtp server to do its thing.
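A minimal sketch of that hourly-blast approach, assuming PDO with MySQL; the recipients table, the is_sent flag, and the sendWithPhpMailer() wrapper are illustrative. Ordering by domain groups same-domain recipients together, as described:

<?php
// Run from an hourly cron; PHPMailer setup is omitted for brevity.
$pdo = new PDO('mysql:host=localhost;dbname=elgg', 'user', 'pass');

$rows = $pdo->query(
    "SELECT id, email FROM recipients
     WHERE is_sent = 0
     ORDER BY SUBSTRING_INDEX(email, '@', -1)  -- group by domain
     LIMIT 500"
);

foreach ($rows as $row) {
    if (sendWithPhpMailer($row['email'])) { // hypothetical wrapper around PHPMailer
        $pdo->prepare("UPDATE recipients SET is_sent = 1 WHERE id = ?")
            ->execute([$row['id']]);
    }
}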
I might not be of quick and particular help, but I would consider an asynchronous approach.
This involves storing the task to send an email in a queue and having workers which process those tasks.
The simplest way is to just store emails in the database and have a cronjob running on the server which sends the emails in batches.
The better (but more complex) solution would be to use some sort of message queue system, like zeromq or the heavy-weight rabbitmq.
The last and maybe most comfortable option from the top of my head is to use a web service like MailChimp or Postmark.

Processing large amounts of data in PHP without a browser timeout

I have an array of about 50,000 mobile numbers. I'm trying to process them and send bulk SMS to these numbers using a third-party API, but the browser freezes for several minutes. I'm looking for a better option.
Processing of the data involves checking the mobile number type (e.g. CDMA), assigning unique IDs to all the numbers for further referencing, checking for network/country-specific charges, etc.
I thought of queuing the data in the database and using cron to send about 5k per batch every minute, but that will take time if there are many messages. What are my other options?
I'm using Codeigniter 2 on XAMPP server.
I would write two scripts:
File index.php:
<iframe src="job.php" frameborder="0" scrolling="no" width="1" height="1"></iframe>
<script type="text/javascript">
function progress(percent){
    document.getElementById('done').innerHTML = percent + '%';
}
</script>
<div id="done">0%</div>
File job.php:
<?php
set_time_limit(0);       // ignore php timeout
ignore_user_abort(true); // keep on going even if user pulls the plug*
while (ob_get_level()) ob_end_clean(); // remove output buffers
ob_implicit_flush(true); // output stuff directly
// * This absolutely depends on whether you want the user to be able to stop
// the process or not. For example, you might create a stop button in index.php:
// Stop!
// Start
// But of course, you will need that line of code commented out for this feature to work.
function progress($percent){
    echo '<script type="text/javascript">parent.progress('.$percent.');</script>';
}
$total = count($mobiles); // $mobiles: the array of numbers, loaded elsewhere
echo '<!DOCTYPE html><html><head></head><body>'; // webkit hotfix
foreach ($mobiles as $i => $mobile) {
    // send sms
    progress($i / $total * 100);
}
progress(100);
echo '</body></html>'; // webkit hotfix
I'm assuming these numbers are in a database; if so, you should add a new column titled isSent (or whatever you fancy).
This next paragraph you typed should be queued and possibly done nightly/weekly/whenever appropriate. Unless you have a specific reason to, it shouldn't be done in bulk on demand. You can even add a column to the DB recording when each number was last checked, so that if a number hasn't been checked in at least X days, you can perform an on-demand check for that number.
Processing of the data involves checking the mobile number type (e.g. CDMA), assigning unique IDs to all the numbers for further referencing, checking for network/country-specific charges, etc.
But that still leads you back to the same question of how to do this for 50,000 numbers at once. Since you mentioned cron jobs, I'm assuming you have SSH access to your server which means you don't need a browser. These cron jobs can be executed via the command line as such:
/usr/bin/php /home/username/example.com/myscript.php
My recommendation is to process 1,000 numbers at a time every 10 minutes via cron, to time how long this takes, and then save the timing to a DB. Since you're using a cron job, it doesn't seem like these are time-sensitive SMS messages, so they can be spread out. Once you know how long it took for this script to run 50 times (50*1000 = 50k), you can update your cron job to run more or less frequently.
$time_start = microtime(true);
set_time_limit(0);
doSendSMS($phoneNum, $msg, $blah); // call your send function for each number
$time_end = microtime(true);
$time = $time_end - $time_start;
saveTimeRequiredToSendMessagesInDB($time);
Also, you might have noticed the set_time_limit(0); this tells PHP not to time out after the default 30 seconds. If you are able to modify the php.ini file then you don't need this line of code. Even if you are able to edit php.ini, I would still recommend leaving that setting alone globally, since you might want other pages to time out.
http://php.net/manual/en/function.set-time-limit.php
If this isn't a one-off type of situation, consider engineering a better solution.
What you basically want is a queue that your browser-bound process can write to, and then 1-N worker processes can read from and update.
Putting work in the queue should be rather inexpensive - perhaps a bunch of simple INSERT statements to a SQL RDBMS.
Then you can have a daemon or two (or 100, distributed across multiple servers) that read from the queue and process stuff. You'll want to be careful here and avoid two workers taking on the same task, but that's not hard to code around.
So your browser-bound workflow is: click some button that causes a bunch of stuff to get added to the queue, then redirect to some "queue status" interface, where the user can watch the system chew through all their work.
A system like this is nice, because it's easy to scale horizontally quite a ways.
EDIT: Christian Sciberras' answer is going in this direction, except the browser ends up driving both sides (it adds to the queue, then drives the worker process)
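For the "avoid two workers taking on the same task" point, a minimal sketch of an atomic claim via UPDATE plus a row-count check, assuming PDO with MySQL; the table, columns, and sendSms() helper are illustrative:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=sms', 'user', 'pass');
$workerId = gethostname() . ':' . getmypid();

// Atomically claim one pending task; only one worker's UPDATE can win.
$claim = $pdo->prepare(
    "UPDATE sms_queue SET status = 'working', worker = ?
     WHERE status = 'pending' ORDER BY id LIMIT 1"
);
$claim->execute([$workerId]);

if ($claim->rowCount() === 1) {
    $task = $pdo->prepare("SELECT * FROM sms_queue WHERE worker = ? AND status = 'working'");
    $task->execute([$workerId]);
    foreach ($task as $row) {
        sendSms($row['number'], $row['message']); // hypothetical third-party API call
        $pdo->prepare("UPDATE sms_queue SET status = 'done' WHERE id = ?")
            ->execute([$row['id']]);
    }
}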
A cron job would be your best bet; I don't see why it would take any longer than doing it in the browser, if your only problem at the moment is the browser timing out.
If you insist on doing it via the browser, then another solution would be doing it in batches of, say, 1000, and redirecting to the same script with a reference to where it got up to last time in a $_GET variable.
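A minimal sketch of that redirect-batching idea; fetchNumbers() and sendSms() are hypothetical helpers:

<?php
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;

$numbers = fetchNumbers($offset, 1000); // hypothetical: next 1000 numbers
foreach ($numbers as $number) {
    sendSms($number); // hypothetical third-party API call
}

if (count($numbers) === 1000) { // more to do: hand off to a fresh request
    header('Location: ' . $_SERVER['PHP_SELF'] . '?offset=' . ($offset + 1000));
    exit;
}
echo 'All done.';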
