How can I prevent PHP from executing code multiple times?

I have a WordPress plugin with a backup script that executes on a schedule. The catch is, if someone hits the page multiple times in succession it can execute the backup script multiple times. Any thoughts on how to prevent multiple executions?
global $bwpsoptions;

if ( get_transient( 'bit51_bwps_backup' ) === false ) {

    set_transient( 'bit51_bwps_backup', '1', 300 );

    if ( $bwpsoptions['backup_enabled'] == 1 ) {

        $nextbackup = $bwpsoptions['backup_next']; // get next scheduled backup
        $lastbackup = $bwpsoptions['backup_last']; // get last backup

        switch ( $bwpsoptions['backup_interval'] ) { // schedule backup at the appropriate time
            case '0':
                $next = 60 * 60 * $bwpsoptions['backup_time'];
                break;
            case '1':
                $next = 60 * 60 * 24 * $bwpsoptions['backup_time'];
                break;
            case '2':
                $next = 60 * 60 * 24 * 7 * $bwpsoptions['backup_time'];
                break;
        }

        if ( ( $lastbackup == '' || $nextbackup < time() ) && get_transient( 'bit51_bwps_backup' ) === false ) {

            $bwpsoptions['backup_last'] = time();

            if ( $lastbackup == '' ) {
                $bwpsoptions['backup_next'] = ( time() + $next );
            } else {
                $bwpsoptions['backup_next'] = ( $lastbackup + $next );
            }

            update_option( $this->primarysettings, $bwpsoptions );

            $this->execute_backup(); // execute backup
        }
    }
}

Create a file at the start of the code.
When the code finishes running, delete the file.
At the beginning of the code, make sure the file doesn't exist before running.
Sort of like the apt-get lock on Linux.
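A minimal sketch of that idea (the lock-file name is illustrative, not from the plugin):

<?php
$lockfile = __DIR__ . '/backup.lock'; // illustrative name and location

// fopen() with mode 'x' fails if the file already exists, which makes
// the existence check and the creation a single atomic step.
$fh = @fopen( $lockfile, 'x' );
if ( $fh === false ) {
    exit; // another run is in progress (or a previous run crashed)
}
fclose( $fh );

// ... run the backup here ...

// When the work is done, delete the lock file.
unlink( $lockfile );

Using mode 'x' avoids the small race window you would get from a plain file_exists() check followed by a separate write.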

If your site is very busy and a basic locking mechanism isn't working (I personally can't imagine that, but oh well!), you can try the approach PHP's session garbage collector uses.
Just randomly choose a number between 0 and 10, and if the number is 0, do the backup. If 10 users now call your backup script at nearly the same time, statistically only one will actually execute the backup.
define("BACKUP_PROBABILITY", 10);
if (mt_rand(0, BACKUP_PROBABILITY) == 0)
    doBackup();
You can increase the maximum (the 10) if your site is very highly frequented.
If none of those 10 visits drew the 0, the next 10 visitors get their chance.
You will of course still need some kind of locking mechanism, and it is still possible (though implausible) that you will end up with more than one, or even 10, backups.
I found this question about mutexes (locks) in PHP. Might be helpful: PHP mutual exclusion (mutex)

Store the last backup date/time in some external file on the server or in a database, and check against that value before running the backup!
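A minimal sketch of the file-based variant (file name and interval are illustrative):

<?php
$statefile = __DIR__ . '/last_backup.txt'; // illustrative location
$interval  = 60 * 60 * 24;                 // one day, as an example

$last = file_exists( $statefile ) ? (int) file_get_contents( $statefile ) : 0;

if ( time() - $last >= $interval ) {
    // Record the new run time first so parallel hits see it immediately.
    file_put_contents( $statefile, time(), LOCK_EX );
    // ... run the backup ...
}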

I assume that this backup thing writes a backup somewhere.
So check the metadata on the latest backup, and if its creation time is not far enough in the past, don't do the backup.
I assume there's a good reason why this isn't a cron job?
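That check could look something like this (the backup directory, file pattern, and interval are assumptions):

<?php
$backups = glob( '/path/to/backups/*.zip' ); // assumed backup location

if ( $backups ) {
    $latest = max( array_map( 'filemtime', $backups ) );
    if ( time() - $latest < 60 * 60 * 24 ) { // newest backup is less than a day old
        return; // too recent, skip this run
    }
}
// ... run the backup ...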

Related

Updating php script one time per day

I am making a Covid-19 statistics website - https://e-server24.eu/ . Every time somebody enters the website, the PHP script decodes JSON from 3 URLs and stores the data in some variables.
I want to make my website more optimized, so my question is: is there any script that can update the variables' data once per day, instead of every time someone accesses the website?
Thanks,
I suggest looking into memory object caching.
Many high-performance PHP web apps use caching extensions (e.g. Memcached, APCu, WinCache), accelerators (e.g. APC, Varnish) and caching DBs like Redis. The setup can be a bit involved, but you can get started with a simple roll-your-own solution (inspired by this):
<?php
function cache_set($key, $val) {
    $val = var_export($val, true);
    // HHVM fails at __set_state, so just use object cast for now
    $val = str_replace('stdClass::__set_state', '(object)', $val);
    // Write to a temp file first to ensure atomicity
    $tmp = sys_get_temp_dir()."/$key." . uniqid('', true) . '.tmp';
    file_put_contents($tmp, '<?php $val = ' . $val . ';', LOCK_EX);
    rename($tmp, sys_get_temp_dir()."/$key");
}

function cache_get($key) {
    // @include suppresses the warning when the cache file doesn't exist yet
    @include sys_get_temp_dir()."/$key";
    return isset($val) ? $val : false;
}
$ttl_hours = 24;
$now = new DateTime();

// Get the result from the cache if possible; otherwise, retrieve it.
$data = cache_get('my_key');
$last_change = cache_get('my_key_last_mod');

// DateInterval::$h only holds the hour component (0-23), so compare full
// timestamps to decide whether the TTL has expired.
if ($data === false || $last_change === false
    || $now->getTimestamp() - $last_change->getTimestamp() >= $ttl_hours * 3600) {

    // Expensive call to get the actual data; we simply create an object
    // to demonstrate the concept.
    $myObj = new stdClass();
    $myObj->name = "John";
    $myObj->age = 30;
    $myObj->city = "New York";
    $data = json_encode($myObj);

    // Add to the cache
    cache_set('my_key', $data);
    $last_change = new DateTime(); // now

    // Add the timestamp to the cache
    cache_set('my_key_last_mod', $last_change);
}

echo $data;
Voila.
Furthermore, you could look into client-side caching and many other things, but this should give you an idea.
PS: Most memory cache systems let you define a time-to-live (TTL), which makes this more concise; I wanted to keep this example dependency-free. Cache cleaning was also omitted here; simply delete the temp file.
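With the APCu extension installed, for example, the same idea collapses to a couple of calls. A minimal sketch (the key name and fetch_expensive_data() helper are hypothetical):

<?php
$data = apcu_fetch('covid_stats', $hit);

if (!$hit) {
    // fetch_expensive_data() stands in for your existing JSON-decoding work
    $data = json_encode(fetch_expensive_data());
    apcu_store('covid_stats', $data, 86400); // TTL of one day, in seconds
}

echo $data;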
A simple way to do that:
Create a script which will fetch and decode the JSON data and store it in your database.
Then set up a cron job with a time lapse of 24 hours.
When a user visits your site, fetch the data from your database instead of from your API provider.
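Hypothetically, the fetcher might look like this (the URL, credentials, and table name are assumptions, not from the question):

<?php
// fetch_stats.php - run from cron once a day, e.g.:
//   0 3 * * * /usr/bin/php /var/www/site/fetch_stats.php

// Decode the upstream JSON (illustrative URL).
$data = json_decode(file_get_contents('https://example.com/api/stats.json'), true);

// Overwrite the single cached row (assumed schema).
$pdo = new PDO('mysql:host=localhost;dbname=covid', 'user', 'pass');
$stmt = $pdo->prepare(
    'REPLACE INTO stats_cache (id, payload, fetched_at) VALUES (1, ?, NOW())'
);
$stmt->execute([json_encode($data)]);

The page itself then just reads stats_cache, which is a single fast query.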

php: time limit exceeded `Success' # fatal/cache.c/GetImagePixelCache/2042

I'm importing data from a CRM server by JSON into WordPress.
I know the load may take several minutes, so the script runs outside WordPress and I execute "php load_data.php".
But when the script reaches the part where we upload the images, it throws an error:
php: time limit exceeded `Success' # fatal/cache.c/GetImagePixelCache/2042.
and it stops.
This is my code to upload an image to the media library:
<?php
function upload_image_to_media( $postid, $image_url, $set_featured = 0 ) {
    $tmp = download_url( $image_url );

    // Fix the filename for query strings.
    preg_match( '/[^\?]+\.(jpg|jpe|jpeg|gif|png)/i', $image_url, $matches );
    $before_name = $postid == 0 ? 'upload' : $postid;

    $file_array = array(
        'name'     => $before_name . '_' . basename( $matches[0] ),
        'tmp_name' => $tmp,
    );

    // Check for download errors.
    if ( is_wp_error( $tmp ) ) {
        @unlink( $file_array['tmp_name'] ); // @ suppresses the warning if the temp file is missing
        return false;
    }

    $media_id = media_handle_sideload( $file_array, $postid );

    // Check for sideload errors.
    if ( is_wp_error( $media_id ) ) {
        @unlink( $file_array['tmp_name'] );
        return false;
    }

    if ( $postid != 0 && $set_featured == 1 ) {
        set_post_thumbnail( $postid, $media_id );
    }

    return $media_id;
}
?>
There are about 50 posts and each one has 10 large images.
Regards
The default execution time is 30 seconds, so it looks like you are exceeding that. We have a similar script that downloads up to a couple thousand photos per run. Adding set_time_limit(60) to reset the timer on each loop iteration fixed the timeout issues. In your case you can probably just add it at the beginning of the function. Just be very careful you don't get any infinite loops, as they will run forever (or until the next reboot).
To make sure it works, you can add the line below as the first line inside your upload function:
set_time_limit(0);
This will allow it to run until it's finished, but watch it, as this will let it run forever, which WILL hurt your server's available memory. But to see whether the script works, put that in there, then adjust to a proper time limit if need be.
If you get another error, or the same one, it will at least verify that it's not a time issue (error messages are not always factual).
The other possibility is that you are on a shared server and are exceeding the host's time allotment for your server (continuous processor use for more than 30 seconds, for example).
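A minimal sketch of the per-iteration reset (the loop structure and variable names are illustrative):

<?php
foreach ( $posts as $post ) {
    foreach ( $post['image_urls'] as $url ) {
        // Give each download a fresh 60-second budget so one slow
        // image can't exhaust the overall script time limit.
        set_time_limit( 60 );
        upload_image_to_media( $post['id'], $url );
    }
}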

Incrementing variable between PHP requests

I am writing a PHP script which, upon a request, will make a call to a SOAP service with various parameters, some of which are taken from the request.
However, the particular SOAP service I am using requires that each request includes a unique ID, which in this case needs to increment for each request. It must not be based on time, and must be unique for each request, however it does not matter if values are skipped.
Using a MySQL database to store a single value seems like massive overkill. I have thought about storing it in and loading it from a file, but the issue of race conditions springs to mind.
I do have complete access to the server, which will be some kind of Linux flavour dedicated to this task.
Is there a simple way this can be achieved?
Before any new request, get an incremental value using PHP's time() function, since the time will be unique for each request.
$increment_id = time();
If your application runs on a single server you can store the incremental ID in APC:
$key = 'soap_service_name';
if (!apc_exists($key)) {
    apc_store($key, 0);
}
$id = apc_inc($key);
You need to check whether the key exists in the APC cache and seed it with 0 first; otherwise apc_inc() fails and returns false.
If you have a multi-server application you can store the incremental ID in Memcache/Redis (which requires running an additional service):
$key = 'soap_service_name';
$memcache = memcache_connect('memcache_host', 11211);
// memcache_add() only succeeds if the key does not exist yet, so this
// seeds the counter exactly once, without a check-then-set race.
memcache_add($memcache, $key, 0);
$id = memcache_increment($memcache, $key);
Same situation as with APC: memcache_increment() fails if the key doesn't exist yet.
If the incremental ID should be stored persistently, Redis would be more useful because it writes all its data to disk. It's essentially Memcache with disk persistence.
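With the phpredis extension, for instance, the equivalent is a single atomic call (host and key name are assumptions):

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// INCR is atomic and creates the key (starting from 0) if it is missing,
// so no existence check is needed.
$id = $redis->incr('soap_service_name');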
This is how I achieved it in the end. After considering the various options, databases and the various caching options seemed like overkill. In addition, caching, cookies and sessions are designed to be relatively temporary, whereas I was really looking for a non-volatile solution.
This is what I came up with: a simple file-locking solution. I hadn't realised PHP could deal with file locks, but on discovering this, it seemed the best way to go.
This example acquires an exclusive lock on the file before reading and updating the value. If it hits PHP_INT_MAX, it resets to zero. Then it waits for 5 seconds. If the script is called a few times in quick succession, observe that each request waits for the lock to be released by the previous one before continuing.
What's nice is that, as this is PHP, a non-existent file, invalid contents, etc. will just cause the value to default to zero.
<?php
// 'c+' creates the file if it doesn't exist (without truncating it),
// so a missing file simply yields a starting value of zero.
$f = fopen('sequence_num.txt', 'c+');

echo "Acquiring lock<br />\n";
flock($f, LOCK_EX);

echo "Lock acquired, updating value<br />\n";
$num = intval(fread($f, strlen(PHP_INT_MAX)));
echo "Old val = " . $num;

if ($num >= PHP_INT_MAX) {
    $num = 0;
} else {
    $num++;
}
echo " New val = " . $num;

echo "<br />Waiting 5 seconds<br />\n";
rewind($f);
ftruncate($f, 0);
fwrite($f, $num);
sleep(5);

echo "Releasing lock<br />\n";
flock($f, LOCK_UN);
fclose($f);
If you're happy to use a float as a unique value, use:
$unique_id = microtime(true);
If you wish to simply increment, you may do so using a session var:
/**
 * Get session increment.
 *
 * @param string $id
 * @param int    $default
 * @return int
 */
function get_increment($id, $default = 0)
{
    if (array_key_exists($id, $_SESSION)) $_SESSION[$id] += 1;
    else $_SESSION[$id] = $default;
    return $_SESSION[$id];
}
var_dump(get_increment('unique_id'));

Restrict multiple php script instances

I have a script that is running multiple times because the validation takes too long, allowing multiple instances of the script. It is supposed to run about once a day, but yesterday script_start() ran 18 times, all right around the same time.
add_action('init', 'time_validator');

function time_validator() {
    $last     = get_option( 'last_update' );
    $interval = get_option( 'interval' );
    $slop     = get_option( 'interval_slop' );

    if ( ( time() - $last ) > ( $interval + rand( 0, $slop ) ) ) {
        update_option( 'last_update', time() );
        script_start();
    }
}
It sounds messy that you've detected 18 instances of your script running although you don't want that. You should fix the code which launches those script instances.
However, you can also implement the check in the script itself. To make sure that the script runs only once at a time, you should use flock(). I'll give an example:
Add this to the top of the code that should run only once at a time:
// open the lock file
$fd = fopen('lock.file', 'w+');

// Try to obtain an exclusive lock. If another instance currently
// holds the lock, we'll just exit. (LOCK_NB makes flock non-blocking.)
if (!flock($fd, LOCK_EX | LOCK_NB)) {
    die('process is already running');
}
... and this at the end of the critical code:
// release the lock
flock($fd, LOCK_UN);
// close the file
fclose($fd);
The method described is safe against race conditions; it really makes sure that the critical section runs only once at a time.

PHP Prevent simultaneous function execution (throttling limits via PHP)

I have a function that sends an HTTP request via cURL to www.server.com.
My task is to make sure that www.server.com gets no more than one request every 2 seconds.
Possible solution:
Create a function checktime() that stores the current call time in the database, checks against the stored value on every subsequent call, and makes the system pause for 2 seconds:
$oldTime = $this->getTimeFromDatabase();

if ($oldTime < (time() - 2)) { // if it's been 2 seconds
    $this->setNewTimeInDatabase();
    return true;
} else {
    sleep(2);
    return false;
}
The problem/question:
Let's say the last request to www.server.com was at 1361951000. Then 10 other users attempt a request at 1361951001 (1 second later), and the checktime() function is called.
Since only 1 second has passed, the function will return false and all 10 users will wait 2 seconds. Does that mean that at 1361951003 all 10 requests will be sent simultaneously? And is it possible that the time of the last request will never be updated in the database, because $this->setNewTimeInDatabase() is never called in checktime()?
Thank you!
UPDATE:
I have just been told that using a loop might solve the problem:
for ($i = 0; $i < 300; $i++) {
    $oldTime = $this->getTimeFromDatabase();

    if ($oldTime < (time() - 2)) { // if it's been 2 seconds
        $this->setNewTimeInDatabase();
        return true;
    } else {
        sleep(2);
        return false;
    }
}
But I don't really see the logic in it.
I believe you need some implementation of a semaphore. The database could work, as long as you can guarantee that only one thread gets to write to the DB and then make the request.
For example, you might issue an UPDATE to the DB and then check the number of affected rows (in order to check whether the update actually happened). If the update was successful you can assume you got the mutex lock, and then make the request (assuming the time is right to make it). Something like this:
$oldTime = $this->getTimeFromDatabase();

if ($oldTime < (time() - 2) && $this->getLock()) { // if it's been 2 seconds
    $this->setNewTimeInDatabase();
    $this->releaseLock();
    return true;
} else {
    sleep(2);
    return false;
}

function getLock()
{
    // the lock is acquired only if this UPDATE actually changed a row
    $mysqli->query('UPDATE locktable SET locked = 1 WHERE locked = 0');
    return $mysqli->affected_rows > 0;
}

function releaseLock()
{
    $mysqli->query('UPDATE locktable SET locked = 0');
}
I'm not sure about the mysql functions, but I believe it's ok to get the general idea.
Watch out when using a database. For example, MySQL is not always 100% in sync with its sessions, and for that reason it is not safe to rely on it for locking purposes.
You could use a file lock via flock(), storing the access time in the locked file. Then you could be sure to lock the file, so no two or more processes would ever access it at the same time.
It would probably go something like this:
$filepath = "lockfile_for_resource";
touch($filepath);

// open the same file read/write so the stored time can be read and updated
$fp = fopen($filepath, "c+") or die("Could not open file.");

while (true) {
    while (!flock($fp, LOCK_EX)) {
        usleep(250000); // wait 0.25s for the file lock (sleep() only takes whole seconds)
    }

    $time = (int) stream_get_contents($fp, -1, 0); // read the stored time from offset 0
    $diff = time() - $time;

    if ($diff >= 2) {
        break;
    } else {
        flock($fp, LOCK_UN);
    }
}

// The following code is never executed simultaneously by two scripts.
// Access and use your resource here.

ftruncate($fp, 0);
rewind($fp);
fwrite($fp, time());
fflush($fp);

flock($fp, LOCK_UN); // remove the lock on the file
fclose($fp);
Please be aware that I have not tested the code.
