I am trying to send a notification one hour before a certain event happens.
For testing purposes I currently have the cron process run once a minute, but I suspect there is a more efficient way to go about this.
I want to avoid keeping track of notifications, so I am trying to build in some logic that gives me exactly one trigger per notification.
Here is my current process:
function webinar_starts_onehour() {
    // Get all lessons whose webinar is still in the future
    $today = time();
    $args = array(
        'post_type' => 'lessons',
        'post_status' => 'publish',
        'posts_per_page' => -1,
        // Note: meta_query clauses must each be wrapped in their own array;
        // a flat key/value/compare array is silently ignored by WP_Query.
        'meta_query' => array(
            array(
                'key' => 'webinar_time',
                'value' => $today,
                'compare' => '>='
            )
        )
    );
    $lessons = get_posts( $args );
    $notifications = get_notifications( 'webinar_starts_onehour' );
    // For each lesson, work out how far away the webinar is
    foreach ( $lessons as $lesson ) {
        $webinar_time = strtotime( $lesson->webinar_time );
        $difference = round( ( $webinar_time - $today ) / 3600, 2 );
        if ( ( $difference > .98 ) && ( $difference < 1.017 ) ) {
            // do something
        }
    }
}
So what I am trying to do is have it trigger if the webinar is a little less than an hour away or a little more (that is, ± one minute).
I suspect my condition can fire twice in some situations, so I am trying to figure out a more solid way to make sure that, with a cron that fires every minute, this condition is triggered only once.
Ideas?
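One idea: instead of the fuzzy ± range, use a half-open window exactly one cron interval wide, so each webinar can only ever fall into it on a single run. A minimal sketch (assuming webinar_time resolves to a Unix timestamp):
// Fire when the webinar falls inside [now + 60 min, now + 61 min).
// A cron that runs once a minute sees each webinar in this window
// exactly once, so the notification cannot trigger twice.
$now          = time();
$window_start = $now + HOUR_IN_SECONDS;
$window_end   = $window_start + MINUTE_IN_SECONDS; // one cron interval
if ( $webinar_time >= $window_start && $webinar_time < $window_end ) {
    // send the notification
}
This still assumes the cron really does fire every minute; if a run is skipped, the webinar is missed entirely, which is where the tracking table below comes in.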
And if you think this is really unreliable (which I know it is), what would be a pragmatic way to add a table to track this sort of notification? Would I just create a table, say sent_notifications, with user_id, notification_id, lesson_id, and status,
and then check whether there was already a successful notification for this particular lesson, and for failed sends use another cron to continuously retry the failed ones?
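To make that concrete, the table could look something like this (a sketch; the names are illustrative), with a unique key so a send can only ever be recorded once per user, notification, and lesson:
global $wpdb;
$table = $wpdb->prefix . 'sent_notifications'; // hypothetical table name
$wpdb->query( "CREATE TABLE IF NOT EXISTS {$table} (
    id              BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id         BIGINT UNSIGNED NOT NULL,
    notification_id VARCHAR(64)     NOT NULL,
    lesson_id       BIGINT UNSIGNED NOT NULL,
    status          VARCHAR(20)     NOT NULL DEFAULT 'pending',
    PRIMARY KEY (id),
    UNIQUE KEY uniq_send (user_id, notification_id, lesson_id)
)" );
With the unique key in place, inserting a row doubles as the "has this already been sent?" check: a second insert for the same user/notification/lesson simply fails, and the retry cron can SELECT the rows whose status is 'failed' and try them again.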
thanks,
Brian
I'm working inside a CakePHP 2 web application. I have a database table called jobs where data is stored, and a console command that runs on a cron every minute; when it runs, it grabs data from my jobs table in a function called getJobsByQueuePriority and then does something with it.
The issue I'm facing is that I have multiple cron jobs that need to run every minute, at the same time, and when they run they both grab the same rows from the database table. How can I prevent this and ensure that if a row was already picked up by one cron, the other cron picks a different row?
I initially tried adding 'lock' => true to my queries as per the docs, but this isn't achieving the result I need: when logging to a file, both running crons still pull the same database entry IDs.
I then tried using transactions: I put a begin before the queries and a commit afterwards. Maybe this is what I need, but I am using it slightly wrong?
The function which performs the required query, with my attempt at transactions, is:
/**
 * Get queues in order of priority
 */
public function getJobsByQueuePriority($maxWorkers = 0)
{
    $jobs = [];
    $queues = explode(',', $this->param('queue'));
    // how many queues have been set for processing?
    $queueCount = count($queues);
    $this->QueueManagerJob = ClassRegistry::init('QueueManagerJob');
    $this->QueueManagerJob->begin();
    // let's first figure out how many jobs are in each of our queues,
    // so that if a queue has no jobs we can reassign how many jobs
    // can be allocated based on our maximum worker count.
    foreach ($queues as $queue) {
        // count jobs in this queue
        $jobCountInQueue = $this->QueueManagerJob->find('count', array(
            'conditions' => array(
                'QueueManagerJob.reserved_at' => null,
                'QueueManagerJob.queue' => $queue
            )
        ));
        // if there are no jobs in the queue, subtract a queue
        // from our queue count.
        if ($jobCountInQueue <= 0) {
            $queueCount = $queueCount - 1;
        }
    }
    // just in case we end up on zero.
    if ($queueCount <= 0) {
        $queueCount = 1;
    }
    // the number of jobs we should grab
    $limit = round($maxWorkers / $queueCount);
    // now let's get the jobs in each queue, up to our limit.
    foreach ($queues as $queue) {
        $job = $this->QueueManagerJob->find('all', array(
            'conditions' => array(
                'QueueManagerJob.reserved_at' => null,
                'QueueManagerJob.queue' => $queue
            ),
            'order' => array(
                'QueueManagerJob.available_at' => 'desc'
            ),
            'limit' => $limit
        ));
        // if there's no job for this queue, skip to the next so that
        // we don't add an empty item to our jobs array.
        if (!$job) {
            continue;
        }
        // add the job to the list of jobs
        array_push($jobs, $job);
    }
    $this->QueueManagerJob->commit();
    // return the jobs
    return $jobs[0];
}
What am I missing or is there a small change I need to tweak in my function to prevent multiple crons picking the same entries?
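For reference, a transaction by itself will not stop two crons from reading the same unreserved rows; both SELECTs can still see them. The usual pattern is to claim rows atomically first, e.g. with an UPDATE that stamps reserved_at and a per-worker token, and then read back only what was claimed. A rough sketch of the idea (the reserved_by column, the raw table name, and the token are illustrative, not part of the original schema):
$db    = $this->QueueManagerJob->getDataSource();
$token = uniqid('worker_', true); // identifies this particular cron run

// Claim up to $limit unreserved jobs in one atomic UPDATE; because a
// single UPDATE is atomic, two crons can never stamp the same row.
$this->QueueManagerJob->query(
    'UPDATE queue_manager_jobs
        SET reserved_at = NOW(), reserved_by = ' . $db->value($token) . '
      WHERE reserved_at IS NULL AND queue = ' . $db->value($queue) . '
      ORDER BY available_at DESC
      LIMIT ' . (int) $limit
);

// Read back only the rows this worker claimed.
$claimed = $this->QueueManagerJob->find('all', array(
    'conditions' => array('QueueManagerJob.reserved_by' => $token)
));
An alternative is SELECT ... FOR UPDATE inside the transaction, but claim-by-UPDATE keeps the lock window short and works even when the two crons are entirely separate processes.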
Currently, I have the following problem:
I have created a WordPress environment that sends personalized emails to subscribers based on their preferences. This has worked for quite some time, but for a couple of months we have been experiencing some inconsistencies. These inconsistencies are as follows:
Once in a while, the foreach loop that sends the emails stops in the middle of its execution. For example, we have a newsletter with 4000 subscribers; once in a while, the program randomly stops its sending procedure at around 2500 emails. When this happens, there are literally no signs of any errors, and there is also nothing to be seen in the debug log.
I have tried the following things to fix the issue:
Different sender: we switched from SendGrid to SMTPeter (a Dutch SMTP service).
Delays: we tried placing a wait after every x emails, in case there were too many requests per minute, but this had no impact.
Disabling plugins: for 5 weeks we thought we had found the problem in WordFence, but unfortunately the send function stopped again last week, so that was not the cause. Just to show how unstable it really is: it can go well for 5 weeks and then fail for 2 weeks.
Rewriting of functions.
Logging: we write values to a txt file after every important step to keep track of where the send function stops, to see which users have received an email and which still need to receive it, so that we can resume sending from there.
Debug log: the annoying thing is that even with WP_DEBUG on, nothing comes up that indicates a cause of the crash.
To schedule the sender I use WP-Cron to run the task in the background. From there the following function is triggered.
Below is the code I wrote, in stripped form. I removed all the $message additions, as this is just HTML with some ACF variables for the email. I translated it so it's easier to understand.
<?php
function send_email($edition_id, $post)
{
    require_once('SMTPeter.php'); // Init SMTPeter sender
    $myfile = fopen("log.txt", "a") or die("Unable to open file!"); // Open custom logfile
    $editionmeta = get_post_meta($edition_id); // Get data of edition
    $users = get_users();
    $args = array(
        'post_type' => 'articles',
        'post_status' => 'publish',
        'posts_per_page' => -1,
        'order' => 'asc',
        'meta_key' => 'position',
        'orderby' => 'meta_value_num',
        'meta_query' => array(
            array(
                'key' => 'edition_id',
                'value' => $edition_id,
                'compare' => 'LIKE',
            ),
        ),
    );
    $all_articles = new WP_Query($args); // Get all articles of the edition
    $i = 0; // Counter: users interested in the topic
    $j = 0; // Counter: sent emails
    foreach ($users as $user) { // Loop over all users <-- this is the loop that does not always finish all iterations
        $topic_ids = get_field('topicselect_', 'user_' . $user->ID);
        $topic_id = $editionmeta['topic_id'][0];
        if (in_array($editionmeta['topic_id'][0], $topic_ids)) { // Check if the user is interested in the topic
            $i++; // Counter: interested in topic +1
            // Header info
            $headerid = $editionmeta['header_id'][0];
            $headerimage = get_field('header_image', $headerid);
            $headerimagesmall = get_field('header_image_small', $headerid);
            // Footer info
            $footerid = $editionmeta['footer_id'][0];
            $footer1 = get_field('footerblock_1', $footerid);
            $footer2 = get_field('footerblock_2', $footerid);
            $footer3 = get_field('footerblock_3', $footerid);
            $message = '*HTML header newsletter*'; // First piece of email content
            $articlecount = 0; // Article count, to check for empty newsletters
                               // (initialized before the have_posts() check so it
                               // is always defined when tested below)
            if ($all_articles->have_posts()) :
                while ($all_articles->have_posts()) : $all_articles->the_post();
                    global $post;
                    $art_categories = get_the_category($post->ID); // Categories of the article
                    $user_categories = get_field('user_categories_', 'user_' . $user->ID); // Categories the user is interested in
                    $user_cats = array();
                    foreach ($user_categories as $user_category) {
                        $user_cats[] = $user_category->name; // Right format for comparison
                    }
                    $art_cats = array();
                    foreach ($art_categories as $art_category) {
                        $art_cats[] = $art_category->name; // Right format for comparison
                    }
                    $catcheck = array_intersect($user_cats, $art_cats); // Check if one of the article's categories matches one of the user's categories
                    if (count($catcheck) > 0) { // At least one category matches, so the article is added to the newsletter
                        $message .= "*Content of article*"; // Append article to the newsletter content
                        $articlecount++;
                    }
                endwhile;
            endif;
            if ($articlecount > 0) { // The newsletter contains at least one article, so send it
                $j++; // Sent-email counter
                $mailtitle = $editionmeta['mail_subject'][0]; // Subject of the email
                $sender = new SMTPeter("*API Key*"); // SMTPeter sender class
                $output = $sender->post("send", array(
                    'recipients' => $user->user_email, // The receiving email address
                    'subject' => $mailtitle, // MIME subject
                    'from' => "*Sender*", // MIME sending email address
                    'html' => $message,
                    'replyto' => "*Reply To*",
                    'trackclicks' => true,
                    'trackopens' => true,
                    'trackbounces' => true,
                    'tags' => array("$edition_id")
                ));
                error_log(print_r($output, TRUE));
                fwrite($myfile, print_r($output, true));
            }
        }
    }
    fclose($myfile);
}
All I want to know is the following:
Why doesn't my code run the foreach to completion, every time? It's quite frustrating to see that it sometimes works like a charm, and the next time it gets stuck again.
Some things I thought about but have not yet implemented:
Rewrite parts of the function into separate functions. Retrieving the content and setting up the HTML for the newsletter could be done in a separate function. Besides being an obvious improvement towards cleaner code, I wonder whether the current structure could actually be the problem.
Can a foreach crash because an fwrite tries to write to a file that is already being written to? In other words, does our log cause the function to not run properly? (Concurrency, but is this a thing in PHP with its workers?)
Could the entire sending process be written in a different way? (One direction is sketched below.)
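For instance, the send could be made resumable by processing users in chunks, with each WP-Cron run sending one chunk and scheduling the next. A rough sketch of that direction (the hook name, chunk size, and the omitted send logic are illustrative, not the current implementation):
// Each run sends one chunk of users, then schedules the next run.
// If a run dies mid-chunk, only that chunk has to be retried.
add_action('newsletter_send_chunk', 'newsletter_send_chunk_handler', 10, 2);

function newsletter_send_chunk_handler($edition_id, $offset)
{
    $chunk_size = 200; // tune to what one request reliably finishes
    $users = get_users(array(
        'number' => $chunk_size,
        'offset' => $offset,
    ));
    if (empty($users)) {
        return; // all chunks sent
    }
    foreach ($users as $user) {
        // ... build and send the email for this user, as in send_email() ...
    }
    // Schedule the next chunk one minute from now.
    wp_schedule_single_event(
        time() + 60,
        'newsletter_send_chunk',
        array($edition_id, $offset + $chunk_size)
    );
}
Each chunk then runs well within PHP's execution-time and memory limits, which are a common reason for a long loop dying silently with nothing in the debug log.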
Thanks in advance,
Really looking forward to your feedback and findings
My code is based on the sample mentioned on this page:
use Google\Cloud\Logging\LoggingClient;

$filter = sprintf(
    'resource.type="gae_app" severity="%s" logName="%s"',
    strtoupper($level),
    sprintf('projects/%s/logs/app', 'MY_PROJECT_ID')
);
$logOptions = [
    'pageSize' => 20,
    'resultLimit' => 20,
    'filter' => $filter,
];
$logging = new LoggingClient();
$logs = $logging->entries($logOptions);
foreach ($logs as $log) {
    /* Do something with the logs */
}
This code is (at best) slow to complete, and (at worst) times out on the foreach loop with a DEADLINE_EXCEEDED error.
How can I fix this?
If your query does not match the first few logs it finds, Cloud Logging will attempt to search your entire logging history for the matching logs.
If there are too many logs to filter through, the search will time out with a DEADLINE_EXCEEDED message.
You can fix this by specifying a time frame to search from in your filter clause:
// Specify a time frame to search (e.g. last 5 minutes)
$fiveMinAgo = date(\DateTime::RFC3339, strtotime('-5 minutes'));
// Add the time frame constraint to the filter clause
$filter = sprintf(
    'resource.type="gae_app" severity="%s" logName="%s" timestamp>="%s"',
    strtoupper($level),
    sprintf('projects/%s/logs/app', 'MY_PROJECT_ID'),
    $fiveMinAgo
);
I'm working on a project that connects to an external API. I have already made the connection and implemented several functions to retrieve the data; that's all working fine.
The following function, however, works exactly like it should, except that it slows down my website significantly (25+ seconds).
Is this because of the nested foreach loop? And what can I do to refactor the code?
/**
 * @param $acti
 */
function getBijeenkomstenFromAct($acti) {
    $acties = array();
    foreach ($acti as $act) {
        $bijeenkomsten = $this->getBijeenkomstenFromID($act['id']);
        if (in_array('Doorlopende activiteit', $act['type'])) {
            foreach ($bijeenkomsten as $bijeenkomst) {
                $acties[] = array(
                    'id' => $act['id'],
                    'Naam' => $act['titel'],
                    'interval' => $act['interval'],
                    'activiteit' => $bijeenkomst['activiteit'],
                    'datum' => $bijeenkomst['datum']
                );
            }
        } else {
            $acties[] = array(
                'id' => $act['id'],
                'type' => $act['type'],
                'activiteit' => $act['titel'],
                'interval' => $act['interval'],
                'dag' => $act['dag'],
                'starttijd' => $act['starttijd'],
                'eindtijd' => $act['eindtijd']
            );
        }
    }
    return $acties;
}
The function "getBijeenkomstenfromID" is working fine and on it's own not slow at all. Just to be sure, here is the function:
/**
 * @param $activitieitID
 *
 * @return mixed
 */
public function getBijeenkomstenFromID($activitieitID) {
    $options = array(
        //'datumVan' => date('Y-m-d'),
        'activiteit' => array(
            'activiteit' => $activitieitID
        ),
        'limit' => 5,
        'datumVan' => date('Y-m-d')
    );
    $bijeenkomsten = $this->webservice->activiteitBijeenkomstOverzicht($this->api_key, $options);
    return $bijeenkomsten;
}
It looks like you're calling the API from within the first foreach loop, which is not efficient.
Every time you do this:
$bijeenkomsten = $this->getBijeenkomstenFromID($act['id']);
you're adding a lot of "dead" time to your script, since you have to put up with network latency plus the time the API needs to actually do the work and transmit the result back to you. Even if each call is quick (say 100 ms total), a first foreach loop that iterates 100 times has already accumulated 10 seconds of waiting, and that's before getBijeenkomstenFromAct($acti) has done any real processing.
The best practice here would be to split this up if possible. My suggestion:
Make getBijeenkomstenFromID($activitieitID) run asynchronously on its own for all the IDs you need to look up in the API. The key is for it to run as a separate process and then pass the array it constructs to getBijeenkomstenFromAct, so that it can loop over and process it happily.
So yes, basically I'm suggesting that you orchestrate your process backwards for efficiency's sake.
Look into curl_multi: http://php.net/manual/en/function.curl-multi-exec.php
It will let you call an external API asynchronously and process the returns all at once. Be aware that APIs often have their own limitations on asynchronous calls, and common sense dictates that you probably shouldn't be hammering a website with 200 separate calls. But if your number of calls is under a dozen or two (and the API allows it), curl_multi will do nicely.
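As a rough illustration of the curl_multi pattern (the endpoint URL and response handling are made up; adapt them to the actual API):
// Build one handle per activity ID and run all transfers concurrently.
$ids = array(/* activity IDs to look up */);
$mh = curl_multi_init();
$handles = array();
foreach ($ids as $id) {
    $ch = curl_init('https://api.example.com/bijeenkomsten/' . urlencode($id)); // illustrative URL
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_multi_add_handle($mh, $ch);
    $handles[$id] = $ch;
}

// Drive the transfers; curl_multi_select() waits for activity
// instead of busy-looping.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh);
    }
} while ($running && $status === CURLM_OK);

// Collect each response and clean up.
$responses = array();
foreach ($handles as $id => $ch) {
    $responses[$id] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);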
I am working with the Facebook Ads API to get the account's campaign data. I get the list of all campaigns, then loop over each campaign to get its stats:
$campaignSets = $account->getCampaigns(array(
    CampaignFields::ID,
    CampaignFields::NAME
));
foreach ($campaignSets as $campaign) {
    $campaign = new Campaign($campaign->id);
    $fields = array(
        InsightsFields::CAMPAIGN_NAME,
        InsightsFields::IMPRESSIONS,
        InsightsFields::UNIQUE_CLICKS,
        InsightsFields::REACH,
        InsightsFields::SPEND,
        InsightsFields::TOTAL_ACTIONS,
        InsightsFields::TOTAL_ACTION_VALUE
    );
    $params = array(
        'date_preset' => InsightsPresets::TODAY
    );
    $insights = $campaign->getInsights($fields, $params);
}
When executing the above code I am getting the error (#17) User request limit reached.
Can anyone help me with how to solve this kind of error?
Thanks,
Ronak Shah
You should consider generating a single report against the ad account, which returns insights for all of your campaigns; this should reduce the number of requests required significantly.
Cursor::setDefaultUseImplicitFetch(true);
$account = new AdAccount($account_id);
$fields = array(
    InsightsFields::CAMPAIGN_NAME,
    InsightsFields::CAMPAIGN_ID,
    InsightsFields::IMPRESSIONS,
    InsightsFields::UNIQUE_CLICKS,
    InsightsFields::REACH,
    InsightsFields::SPEND,
    InsightsFields::TOTAL_ACTIONS,
    InsightsFields::TOTAL_ACTION_VALUE,
);
$params = array(
    'date_preset' => InsightsPresets::TODAY,
    'level' => 'ad',
    'limit' => 1000,
);
$insights = $account->getInsights($fields, $params);
foreach ($insights as $i) {
    echo $i->campaign_id.PHP_EOL;
}
If you run into API limits, your only option is to reduce calls. You can do this easily by delaying API calls. I assume you are already using a cron job, so implement a counter that stores the last campaign you requested data for; when the cron job runs again, request the data for the next 1-x campaigns (you have to test how many are possible per cron job run) and store the last one again, as sketched below.
Also, you should batch the API calls. It will not avoid limits, but it will be a lot faster: as fast as the slowest API call in the batch.
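A minimal sketch of that counter idea (the cursor file, chunk size, and ID list are illustrative):
// Process only the next chunk of campaigns per cron run, and remember
// where we stopped so the following run continues from there.
$cursorFile = __DIR__ . '/campaign_cursor.txt'; // hypothetical location
$chunkSize = 5; // tune against your observed rate limit
$campaignIds = array(/* IDs collected from $account->getCampaigns() */);

$offset = is_file($cursorFile) ? (int) file_get_contents($cursorFile) : 0;
$batch = array_slice($campaignIds, $offset, $chunkSize);

foreach ($batch as $id) {
    // ... request insights for campaign $id, as in the question ...
}

// Advance the cursor, wrapping around once every campaign is handled.
$next = ($offset + $chunkSize >= count($campaignIds)) ? 0 : $offset + $chunkSize;
file_put_contents($cursorFile, (string) $next);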
Add this to your code and you'll never have to worry about Facebook's rate limiting / user limit again.
Your script will automatically sleep as soon as you approach the limit, and then pick up from where it left off after the cooldown. Enjoy :) (Note: this snippet is Python; the same approach, polling the x-business-use-case-usage response header, works from any language.)
import logging
import time
import requests as rq

# account_number and my_access_token are assumed to be defined elsewhere

# Function to find the string between two strings or characters
def find_between(s, first, last):
    try:
        start = s.index(first) + len(first)
        end = s.index(last, start)
        return s[start:end]
    except ValueError:
        return ""

# Function to check how close you are to the FB rate limit
def check_limit():
    check = rq.get('https://graph.facebook.com/v3.3/act_' + account_number + '/insights?access_token=' + my_access_token)
    call = float(find_between(check.headers['x-business-use-case-usage'], 'call_count":', '}'))
    cpu = float(find_between(check.headers['x-business-use-case-usage'], 'total_cputime":', '}'))
    total = float(find_between(check.headers['x-business-use-case-usage'], 'total_time":', ','))
    usage = max(call, cpu, total)
    return usage

# Check if you reached 75% of the limit; if yes, back off for 5 minutes
# (put this chunk in your loop, every 200-500 iterations)
if check_limit() > 75:
    print('75% Rate Limit Reached. Cooling Time 5 Minutes.')
    logging.debug('75% Rate Limit Reached. Cooling Time 5 Minutes.')
    time.sleep(300)