How to lock a file in PHP?

I'm trying to create a PHP file, which wouldn't run if it's already running. Here's the code I'm using:
<?php
class Test {
    private $tmpfile;

    public function action_run() {
        $this->die_if_running();
        $this->run();
    }

    private function die_if_running() {
        $this->tmpfile = @fopen('.refresher2.pid', "w");
        $locked = @flock($this->tmpfile, LOCK_EX|LOCK_NB);
        if (! $locked) {
            @fclose($this->tmpfile);
            die("Running 2");
        }
    }

    private function run() {
        echo "NOT RUNNING";
        sleep(100);
    }
}
$test = new Test();
$test->action_run();
The problem is, when I run this from the console, it works great. But when I try to run it from the browser, many instances can run simultaneously. This is on Windows 7, XAMPP, PHP 5.3.2. I guess the OS treats the requests as coming from the same process, so the lock check fails. Is there a cross-platform way to write a PHP script of this type?

Not really anything too promising. You can't use flock for that like this.
You could use system() to start another (PHP) process that does the locking for you. But there are drawbacks:
You need to do interprocess communication. Think about a way to tell the other program when to release the lock, etc. You can use stdin for messaging and use 3 constants or something. In this case it's still rather simple.
It's bad for performance because you keep creating processes, which is expensive.
Another way would be to start another program that runs all the time. You connect to it using some means of IPC (probably just a TCP channel, because that's cross-platform) and let this program manage file access. That program could be a PHP script in an endless loop as well, but it will probably be simpler to code in Java or another language that has multithreading support.
Another way would be to leverage existing resources. Create a dummy database table for locks, create an entry for the file and then do table-row locking (a sketch using MySQL's GET_LOCK() follows below).
Another way would be not to use files at all, but a database.
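For the database route, a minimal sketch using MySQL's GET_LOCK() (connection details and the lock name are placeholders, not from your code). The lock lives in the MySQL server, so it behaves the same from the console and from the browser, Windows or not:
<?php
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

// Try to take the named lock without waiting (timeout 0):
// returns 1 on success, 0 if another connection already holds it.
$got = $pdo->query("SELECT GET_LOCK('refresher2', 0)")->fetchColumn();
if (!$got) {
    die("Running 2");
}

// ... long-running work ...
sleep(100);

// Released explicitly here, and automatically when the connection closes.
$pdo->query("SELECT RELEASE_LOCK('refresher2')");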

I had a similar problem a while ago.
I needed to have a counter where the number returned was unique.
I used a lock file, and only if this instance was able to create the lock file was it allowed to read the file containing the current number.
Instead of counting up, perhaps you can simply allow the script to run.
The trick is to let it try a few times (like 5) with a small wait/sleep in between.
function GetNextNumber()
{
    $lockFile = "lockFile.txt";
    $myFile   = "counter.txt"; // (example name) the file holding the current number
    $lfh = @fopen($lockFile, "x");
    if (!$lfh)
    {
        $lockOkay = false;
        $count = 0;
        $countMax = 5;
        // Retry once every second, for up to 5 seconds
        while (!$lockOkay && $count < $countMax)
        {
            $lfh = @fopen($lockFile, "x");
            if ($lfh)
            {
                $lockOkay = true;
            }
            else
            {
                $count++;
                sleep(1);
            }
        }
    }
    if ($lfh)
    {
        $fh = fopen($myFile, 'r+') or die("Too many users. ");
        flock($fh, LOCK_EX);
        $O_nextNumber = fread($fh, 15);
        $O_nextNumber = $O_nextNumber + 1;
        rewind($fh);
        fwrite($fh, $O_nextNumber);
        flock($fh, LOCK_UN);
        fclose($fh);
        unlink($lockFile); // delete the lock file
    }
    return $O_nextNumber;
}

Related

Using a file lock to ensure a PHP Daemon script is only running once

I wrote a simple function used to determine if a script is already running, and prevent duplicate processes:
class SomeClass{
    public static function processLock($name)
    {
        $lockFile = "/tmp/" . $name;
        $fp = fopen($lockFile, "w+");
        if ($fp === false)
        {
            echo "Already running.\n";
            exit;
        }
        else
        {
            if ( ! flock($fp, LOCK_EX | LOCK_NB))
            {
                echo "Already running.\n";
                exit;
            }
        }
        return $fp;
    }
}
I can then call the function like this at the top of the php script:
Script_A:
<?
include_once "/SomeClass.php";
$lock = SomeClass::processLock("script_A_lock");
and this works great..... 99% of the time. However, sometimes I'll discover that "script_A" isn't running (it's supposed to always be running).
I then run lsof /tmp/script_A_lock to see why "script_A" isn't starting.
The results make no sense! I get something like this:
COMMAND PID ......... NAME
Script_B 234 ......... script_A_lock
An unrelated script "script_B" has somehow stolen the file lock!
So:
How is that happening? The word "script_A_lock" ONLY appears in Script_A (I searched the entire project) and Script_A isn't included ANYWHERE.
How can I prevent this from happening? Obviously, only Script A should hold the "script_A_lock".
This is the approach I usually use for a lock file. If the file doesn't exist, create it, which indicates the process is executing. Be sure to unlink it at the end.
#!/usr/bin/php
<?php
define('LOCK_FILE',__FILE__.'lock');
if (!is_file(LOCK_FILE)) {
    touch(LOCK_FILE);
    echo 'Do stuff'.PHP_EOL;
    unlink(LOCK_FILE);
} else {
    echo 'Lock file found'.PHP_EOL;
}
The flock function is intended to coordinate multiple scripts accessing a single file; it's not meant to be used as a lock file. http://php.net/manual/en/function.flock.php
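A small variation on the snippet above that closes the gap between the is_file() check and touch(): fopen() with mode 'x' fails if the file already exists, so creating the lock and testing for it become one atomic step.
#!/usr/bin/php
<?php
define('LOCK_FILE', __FILE__.'lock');
$fp = @fopen(LOCK_FILE, 'x'); // 'x': create the file, fail if it already exists
if ($fp === false) {
    echo 'Lock file found'.PHP_EOL;
} else {
    fclose($fp);
    echo 'Do stuff'.PHP_EOL;
    unlink(LOCK_FILE);
}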

Don't wait for file_get_contents() to finish

Is there a way to call file_get_contents() multiple times without waiting for the first call to finish? I have a few PHP scripts that do some heavy work, and the execution time of one is more than 1.5 minutes, so I want to call them all at the same time to reduce the overall execution time. Currently I have multiple file_get_contents() calls, but the next one can only run after the previous one finishes. I tried the exec approach, but it's blocked on my server (shared hosting).
Well, there are ways of running tasks in parallel if that's what you're trying to achieve, but bear in mind that PHP is a high-level scripting language whose original purpose was to serve stateless HTTP requests.
There are extensions out there like Gearman; it allows applications to complete tasks in parallel (a minimal sketch follows below).
This will probably be in vain for you, though, as you're likely using a plain hosting service. Get something like a VPS; a VPS from OVH is cheaper than most web hosting services.
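For reference, a minimal Gearman sketch (assumes the PECL gearman extension and a gearmand server reachable on localhost; the 'heavy_work' function name and do_heavy_work() helper are made up for the example). The worker side:
<?php
// worker.php - run one or more of these as long-lived background processes
$worker = new GearmanWorker();
$worker->addServer(); // defaults to 127.0.0.1:4730
$worker->addFunction('heavy_work', function (GearmanJob $job) {
    return do_heavy_work($job->workload()); // your 1.5-minute task
});
while ($worker->work());
And the client side, which hands out jobs without waiting for them:
<?php
// client.php
$client = new GearmanClient();
$client->addServer();
foreach ($urls as $url) {                        // $urls: whatever you currently feed to file_get_contents()
    $client->doBackground('heavy_work', $url);   // returns immediately
}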
Yes, it is possible, but it is a little bit tricky!
You have to play with forks.
I agree that queues and the like are better, but just for the sake of showing it can be done, here is an example:
<?php
declare(ticks = 1);

$a = [
    'https://twitter.com',
    'https://facebook.com',
    'https://stackoverflow.com',
    'https://linkedin.com',
    'https://github.com',
];

// Maximum number of concurrent PHP child processes (forks).
$max = 3;
// Running child counter.
$child = 0;

// Reap finished children so the counter stays accurate.
pcntl_signal(SIGCHLD, function ($signo) {
    global $child;
    if ($signo === SIGCHLD) {
        while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
            $exitCode = pcntl_wexitstatus($status);
            $child--;
        }
    }
});

foreach ($a as $item) {
    // Throttle: wait while the maximum number of children is running.
    while ($child >= $max) {
        sleep(1);
    }
    $child++;
    $pid = pcntl_fork();
    if ($pid) {
        // Parent: continue with the next URL.
    } else {
        // Child fork.
        sleep(1);
        // HERE YOUR CODE:
        echo file_get_contents($item);
        exit(0);
    }
}

// Wait for the remaining children to finish.
while ($child != 0) {
    sleep(1);
}

Best way to offload one-shot worker threads in PHP? pthreads? fcntl?

How should I multithread some php-cli code that needs a timeout?
I'm using PHP 5.6 on Centos 6.6 from the command line.
I'm not very familiar with multithreading terminology or code. I'll simplify the code here but it is 100% representative of what I want to do.
The non-threaded code currently looks something like this:
$datasets = MyLibrary::getAllRawDataFromDBasArrays();
foreach ($datasets as $dataset) {
    MyLibrary::processRawDataAndStoreResultInDB($dataset);
}
exit; // just for clarity
I need to prefetch all my datasets, and each processRawDataAndStoreResultInDB() cannot fetch its own dataset. Sometimes processRawDataAndStoreResultInDB() takes too long to process a dataset, so I want to limit the amount of time it has to process it.
So you can see that making it multithreaded would
Speed it up by allowing multiple processRawDataAndStoreResultInDB() to execute at the same time
Use set_time_limit() to limit the amount of time each one has to process each dataset
Notice that I don't need to come back to my main program. Since this is a simplification, you can trust that I don't want to collect all the processed datasets and do a single save into the DB after they are all done.
I'd like to do something like:
class MyWorkerThread extends SomeThreadType {
    public function __construct($timeout, $dataset) {
        $this->timeout = $timeout;
        $this->dataset = $dataset;
    }
    public function run() {
        set_time_limit($this->timeout);
        MyLibrary::processRawDataAndStoreResultInDB($this->dataset);
    }
}

$numberOfThreads = 4;
$pool = somePoolClass($numberOfThreads);
$pool->start();

$datasets = MyLibrary::getAllRawDataFromDBasArrays();
$timeoutForEachThread = 5; // seconds

foreach ($datasets as $dataset) {
    $thread = new MyWorkerThread($timeoutForEachThread, $dataset);
    $thread->addCallbackOnTerminated(function() {
        if ($this->isTimeout()) {
            MyLibrary::saveBadDatasetToDb($dataset);
        }
    });
    $pool->addToQueue($thread);
}

$pool->waitUntilAllWorkersAreFinished();
exit; // for clarity
From my research online I've found the PHP extension pthreads which I can use with my thread-safe php CLI, or I could use the PCNTL extension or a wrapper library around it (say, Arara/Process)
https://github.com/krakjoe/pthreads (and the example directory)
https://github.com/Arara/Process (pcntl wrapper)
When I look at them and their examples though (especially the pthreads pool example) I get confused quickly by the terminology and which classes I should use to achieve the kind of multithreading I'm looking for.
I even wouldn't mind creating the pool class myself, if I had isRunning(), isTerminated(), getTerminationStatus() and execute() functions on a thread class, as it would be a simple queue.
Can someone with more experience please direct me to which library, classes and functions I should be using to map to my example above? Am I taking the wrong approach completely?
Thanks in advance.
Here comes an example using worker processes. I'm using the pcntl extension.
/**
 * Spawns a worker process and returns its pid, or -1
 * if something goes wrong.
 *
 * @param callable $callback function, closure or method to call
 * @return integer
 */
function worker($callback) {
    $pid = pcntl_fork();
    if($pid === 0) {
        // Child process
        exit($callback());
    } else {
        // Main process or an error
        return $pid;
    }
}

$datasets = array(
    array('test', '123'),
    array('foo', 'bar')
);

$maxWorkers = 1;
$numWorkers = 0;

foreach($datasets as $dataset) {
    $pid = worker(function () use ($dataset) {
        // Do DB stuff here
        var_dump($dataset);
        return 0;
    });

    if($pid !== -1) {
        $numWorkers++;
    } else {
        // Handle fork errors here
        echo 'Failed to spawn worker';
    }

    // If $maxWorkers is reached we need to wait
    // for at least one child to return
    if($numWorkers === $maxWorkers) {
        // $status is passed by reference
        $pid = pcntl_wait($status);
        echo "child process $pid returned $status\n";
        $numWorkers--;
    }
}

// (Non blocking) wait for the remaining children
while(true) {
    // $status is passed by reference
    $pid = pcntl_wait($status, WNOHANG);
    if(is_null($pid) || $pid === -1) {
        break;
    }
    if($pid === 0) {
        // No child has exited yet, be patient ...
        usleep(50000);
        continue;
    }
    echo "child process $pid returned $status\n";
}
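To get the per-dataset timeout the question asks for, here is a sketch (the workerWithTimeout name is mine, not part of the answer above): arm SIGALRM inside each child instead of relying on set_time_limit(), which doesn't count sleep() or time spent in external calls.
function workerWithTimeout($callback, $timeoutSeconds) {
    $pid = pcntl_fork();
    if ($pid === 0) {
        // Child: if the alarm fires before we finish, the default SIGALRM
        // action kills this process; the parent can detect that with
        // pcntl_wifsignaled($status) and pcntl_wtermsig($status) === SIGALRM.
        pcntl_alarm($timeoutSeconds);
        $exitCode = $callback();
        pcntl_alarm(0); // finished in time, cancel the alarm
        exit((int) $exitCode);
    }
    return $pid; // parent: child pid, or -1 if the fork failed
}
In the parent's pcntl_wait() loop you could then call something like MyLibrary::saveBadDatasetToDb() for children that were killed by the alarm.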

Executing functions in parallel

I have a function that needs to go over around 20K rows from an array, and apply an external script to each. This is a slow process, as PHP is waiting for the script to be executed before continuing with the next row.
In order to make this process faster I was thinking on running the function in different parts, at the same time. So, for example, rows 0 to 2000 as one function, 2001 to 4000 on another one, and so on. How can I do this in a neat way? I could make different cron jobs, one for each function with different params: myFunction(0, 2000), then another cron job with myFunction(2001, 4000), etc. but that doesn't seem too clean. What's a good way of doing this?
If you'd like to execute parallel tasks in PHP, I would consider using Gearman. Another approach would be to use pcntl_fork(), but I'd prefer actual workers when it's task based.
The only waiting time you suffer is between getting the data and processing it. Processing the data is completely blocking anyway (you simply have to wait for it), and you are unlikely to gain anything by running more processes than you have cores. Basically this means the number of processes is small, so scheduling the execution of 2-8 processes doesn't sound that hideous. If you are worried about not being able to process data while retrieving it, you could in theory fetch your data from the database in small blocks and then distribute the processing load between a few processes, one for each core.
I think I align more with the forking child processes approach for actually running the processing threads. There is a brilliant demonstration in the comments on the pcntl_fork doc page showing an implementation of a job daemon class
http://php.net/manual/en/function.pcntl-fork.php
<?php
declare(ticks=1);

//A very basic job daemon that you can extend to your needs.
class JobDaemon{

    public $maxProcesses = 25;
    protected $jobsStarted = 0;
    protected $currentJobs = array();
    protected $signalQueue = array();
    protected $parentPID;

    public function __construct(){
        echo "constructed \n";
        $this->parentPID = getmypid();
        pcntl_signal(SIGCHLD, array($this, "childSignalHandler"));
    }

    /**
     * Run the Daemon
     */
    public function run(){
        echo "Running \n";
        for($i=0; $i<10000; $i++){
            $jobID = rand(0,10000000000000);

            while(count($this->currentJobs) >= $this->maxProcesses){
                echo "Maximum children allowed, waiting...\n";
                sleep(1);
            }

            $launched = $this->launchJob($jobID);
        }

        //Wait for child processes to finish before exiting here
        while(count($this->currentJobs)){
            echo "Waiting for current jobs to finish... \n";
            sleep(1);
        }
    }

    /**
     * Launch a job from the job queue
     */
    protected function launchJob($jobID){
        $pid = pcntl_fork();
        if($pid == -1){
            //Problem launching the job
            error_log('Could not launch new job, exiting');
            return false;
        }
        else if ($pid){
            // Parent process
            // Sometimes you can receive a signal to the childSignalHandler function before this code executes if
            // the child script executes quickly enough!
            //
            $this->currentJobs[$pid] = $jobID;

            // In the event that a signal for this pid was caught before we get here, it will be in our signalQueue array
            // So let's go ahead and process it now as if we'd just received the signal
            if(isset($this->signalQueue[$pid])){
                echo "found $pid in the signal queue, processing it now \n";
                $this->childSignalHandler(SIGCHLD, $pid, $this->signalQueue[$pid]);
                unset($this->signalQueue[$pid]);
            }
        }
        else{
            //Forked child, do your deeds....
            $exitStatus = 0; //Error code if you need to or whatever
            echo "Doing something fun in pid ".getmypid()."\n";
            exit($exitStatus);
        }
        return true;
    }

    public function childSignalHandler($signo, $pid=null, $status=null){
        //If no pid is provided, that means we're getting the signal from the system. Let's figure out
        //which child process ended
        if(!$pid){
            $pid = pcntl_waitpid(-1, $status, WNOHANG);
        }

        //Make sure we get all of the exited children
        while($pid > 0){
            if($pid && isset($this->currentJobs[$pid])){
                $exitCode = pcntl_wexitstatus($status);
                if($exitCode != 0){
                    echo "$pid exited with status ".$exitCode."\n";
                }
                unset($this->currentJobs[$pid]);
            }
            else if($pid){
                //Oh no, our job has finished before this parent process could even note that it had been launched!
                //Let's make note of it and handle it when the parent process is ready for it
                echo "..... Adding $pid to the signal queue ..... \n";
                $this->signalQueue[$pid] = $status;
            }
            $pid = pcntl_waitpid(-1, $status, WNOHANG);
        }
        return true;
    }
}
you can use "PTHREADS"
very easy to install and works great on windows
download from here -> http://windows.php.net/downloads/pecl/releases/pthreads/2.0.4/
Extract the zip file and then
move the file 'php_pthreads.dll' to php\ext\ directory.
move the file 'pthreadVC2.dll' to php\ directory.
then add this line in your 'php.ini' file:
extension=php_pthreads.dll
save the file.
you just done :-)
now lets see example of how to use it:
class ChildThread extends Thread {
    public $data;

    public function run() {
        /* Do some expensive work */
        $this->data = 'result of expensive work';
    }
}

$thread = new ChildThread();

if ($thread->start()) {
    /*
     * Do some expensive work, while already doing other
     * work in the child thread.
     */

    // wait until thread is finished
    $thread->join();

    // we can now even access $thread->data
}
For more information about pthreads, read the PHP docs here:
PHP DOCS PTHREADS
If you're using WAMP like me, then you should also add 'pthreadVC2.dll' into
\wamp\bin\apache\ApacheX.X.X\bin
and edit the 'php.ini' file (same path) to add the same line as before:
extension=php_pthreads.dll
GOOD LUCK!
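For many small tasks, pthreads also ships a Pool class so you don't have to spawn one Thread per work item. A rough sketch of that pattern, assuming pthreads 2+ (the DatasetTask class and $datasets variable are made up for the example; treat this as an outline of the API, not a drop-in solution):
class DatasetTask extends Threaded {
    public $dataset;
    public function __construct($dataset) {
        $this->dataset = $dataset;
    }
    public function run() {
        /* expensive work on $this->dataset */
    }
}

$pool = new Pool(4);                 // 4 worker threads
foreach ($datasets as $dataset) {
    $pool->submit(new DatasetTask($dataset));
}
$pool->shutdown();                   // waits for all submitted tasks to finish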
What you are looking for is parallel which is a succinct concurrency API for PHP 7.2+
$runtime = new \parallel\Runtime();

$future = $runtime->run(function() {
    for ($i = 0; $i < 500; $i++) {
        echo "*";
    }
    return "easy";
});

for ($i = 0; $i < 500; $i++) {
    echo ".";
}

printf("\nUsing \\parallel\\Runtime is %s\n", $future->value());
Output:
.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*..*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*
Using \parallel\Runtime is easy
Have a look at pcntl_fork. This allows you to spawn child processes which can then do the separate work that you need.
Not sure if it's a solution for your situation, but you can redirect the output of system calls to a file; PHP will then not wait until the program finishes. This may, however, result in overloading your server.
http://www.php.net/manual/en/function.exec.php - If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
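On hosts where exec() is allowed, that looks roughly like this (the worker script path and log file are placeholders): redirect both output streams and append & so the shell returns immediately.
<?php
// Fire-and-forget: PHP gets control back right away because the
// launched program's output no longer flows back to this process.
exec('php /path/to/worker.php > /tmp/worker.log 2>&1 &');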
There's Guzzle with its concurrent requests
use GuzzleHttp\Client;
use GuzzleHttp\Promise;

$client = new Client(['base_uri' => 'http://httpbin.org/']);

$promises = [
    'image' => $client->getAsync('/image'),
    'png'   => $client->getAsync('/image/png'),
    'jpeg'  => $client->getAsync('/image/jpeg'),
    'webp'  => $client->getAsync('/image/webp')
];

$responses = Promise\Utils::unwrap($promises);
There's the overhead of promises; but more importantly, Guzzle only works with HTTP requests, and the example above targets Guzzle 7+, the version frameworks like Laravel ship with.

best way to obtain a lock in php

I'm trying to update a variable in APC, and there will be many processes trying to do that.
APC doesn't provide locking functionality, so I'm considering other mechanisms... What I've found so far is MySQL's GET_LOCK() and PHP's flock(). Anything else worth considering?
Update: I've found sem_acquire, but it seems to be a blocking lock.
/*
CLASS ExclusiveLock
Description
==================================================================
This is a pseudo implementation of mutex since php does not have
any thread synchronization objects
This class uses flock() as a base to provide locking functionality.
Lock will be released in following cases
1 - user calls unlock
2 - when this lock object gets deleted
3 - when request or script ends
==================================================================
Usage:
//get the lock
$lock = new ExclusiveLock( "mylock" );
//lock
if( $lock->lock( ) == FALSE )
error("Locking failed");
//--
//Do your work here
//--
//unlock
$lock->unlock();
===================================================================
*/
class ExclusiveLock
{
    protected $key  = null;  //user given value
    protected $file = null;  //resource to lock
    protected $own  = FALSE; //have we locked resource

    function __construct( $key )
    {
        $this->key = $key;
        //create a new resource or get existing with same key
        $this->file = fopen("$key.lockfile", 'w+');
    }

    function __destruct()
    {
        if( $this->own == TRUE )
            $this->unlock( );
    }

    function lock( )
    {
        if( !flock($this->file, LOCK_EX | LOCK_NB))
        { //failed
            $key = $this->key;
            error_log("ExclusiveLock::acquire_lock FAILED to acquire lock [$key]");
            return FALSE;
        }
        ftruncate($this->file, 0); // truncate file
        //write something to just help debugging
        fwrite( $this->file, "Locked\n");
        fflush( $this->file );

        $this->own = TRUE;
        return TRUE; // success
    }

    function unlock( )
    {
        $key = $this->key;
        if( $this->own == TRUE )
        {
            if( !flock($this->file, LOCK_UN) )
            { //failed
                error_log("ExclusiveLock::lock FAILED to release lock [$key]");
                return FALSE;
            }
            ftruncate($this->file, 0); // truncate file
            //write something to just help debugging
            fwrite( $this->file, "Unlocked\n");
            fflush( $this->file );
            $this->own = FALSE;
        }
        else
        {
            error_log("ExclusiveLock::unlock called on [$key] but it's not acquired by caller");
        }
        return TRUE; // success
    }
};
You can use the apc_add function to achieve this without resorting to the file system or MySQL. apc_add only succeeds when the variable is not already stored, thus providing a locking mechanism. A TTL can be used to ensure that failed lock holders won't keep holding the lock forever.
The reason apc_add is the correct solution is that it avoids the race condition that would otherwise exist between checking the lock and setting it to 'locked by you'. Since apc_add only sets the value if it's not already set ("adds" it to the cache), it ensures that the lock can't be acquired by two calls at once, regardless of their proximity in time. No solution that doesn't check and set the lock at the same time escapes this race condition; one atomic operation is required to lock successfully without it.
Since APC locks only exist in the context of that PHP instance, it's probably not the best solution for general locking, as it doesn't support locks between hosts. Memcache also provides an atomic add function and can be used with the same technique, which is one method of locking between hosts. Redis likewise supports an atomic SETNX command and TTLs, and is a very common method of locking and synchronization between hosts. However, the OP asks for a solution for APC in particular (a minimal sketch follows below).
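A minimal sketch of that technique (the 'lock:' key prefix, the 'counter' key and the TTL are arbitrary):
function acquire_lock($key, $ttl = 10) {
    // apc_add() is atomic: it only succeeds if the key does not exist yet,
    // so exactly one caller wins. The TTL frees the lock if the holder dies.
    return apc_add('lock:'.$key, 1, $ttl);
}

function release_lock($key) {
    return apc_delete('lock:'.$key);
}

if (acquire_lock('counter')) {
    // safely update the APC variable here
    release_lock('counter');
}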
If the point of the lock is to prevent multiple processes from trying to populate an empty cache key, why wouldn't you want to have a blocking lock?
$value = apc_fetch($KEY);
if ($value === FALSE) {
    // $SEMAPHORE is obtained earlier, e.g. $SEMAPHORE = sem_get(ftok(__FILE__, 'l'));
    sem_acquire($SEMAPHORE);
    // Re-check: another process may have populated the cache while we waited for the lock.
    $recheck_value = apc_fetch($KEY);
    if ($recheck_value === FALSE) {
        $new_value = expensive_operation();
        apc_store($KEY, $new_value);
        $value = $new_value;
    } else {
        $value = $recheck_value;
    }
    sem_release($SEMAPHORE);
}
If the cache is good, you just roll with it. If there's nothing in the cache, you get a lock. Once you have the lock, you'll need to double-check the cache to make sure that, while you were waiting to get the lock, the cache wasn't repopulated. If the cache was repopulated, use that value & release the lock, otherwise, you do the computation, populate the cache & then release your lock.
Actually, check to see if this will work better than Peter's suggestion.
http://us2.php.net/flock
Use an exclusive lock and, if you're comfortable with it, put everything else that attempted to lock the file into a 2-3 second sleep. If done right, your site will experience a short hang on the locked resource, but not a horde of scripts fighting to cache the same thing.
If you don't mind basing your lock on the filesystem, then you could use fopen() with mode 'x'. Here is an example:
$f = fopen("lockFile.txt", 'x');
if($f) {
$me = getmypid();
$now = date('Y-m-d H:i:s');
fwrite($f, "Locked by $me at $now\n");
fclose($f);
doStuffInLock();
unlink("lockFile.txt"); // unlock
}
else {
echo "File is locked: " . file_get_contents("lockFile.txt");
exit;
}
See www.php.net/fopen
I realize this is a year old, but I just stumbled upon the question while doing some research myself on locking in PHP.
It occurs to me that a solution might be possible using APC itself. Call me crazy, but this might be a workable approach:
function acquire_lock($key, $expire=60) {
    if (is_locked($key)) {
        return null;
    }
    return apc_store($key, true, $expire);
}

function release_lock($key) {
    if (!is_locked($key)) {
        return null;
    }
    return apc_delete($key);
}

function is_locked($key) {
    return apc_fetch($key);
}

// example use
if (acquire_lock("foo")) {
    do_something_that_requires_a_lock();
    release_lock("foo");
}
In practice I might throw another function in there to generate a key to use here, just to prevent collision with an existing APC key, e.g.:
function key_for_lock($str) {
    return md5($str."locked");
}
The $expire parameter is a nice feature of APC to use, since it prevents your lock from being held forever if your script dies or something like that.
Hopefully this answer is helpful for anyone else who stumbles here a year later.
EAccelerator has methods for it; eaccelerator_lock and eaccelerator_unlock.
Can't say if this is the best way to handle the job, but at least it is convenient.
function WhileLocked($pathname, callable $function, $proj = ' ')
{
    // create a semaphore for a given pathname and optional project id
    $semaphore = sem_get(ftok($pathname, $proj)); // see ftok for details
    sem_acquire($semaphore);
    try {
        // capture result
        $result = call_user_func($function);
    } catch (Exception $e) {
        // release lock and pass on all errors
        sem_release($semaphore);
        throw $e;
    }
    // also release lock if all is good
    sem_release($semaphore);
    return $result;
}
Usage is as simple as this.
$result = WhileLocked(__FILE__, function () use ($that) {
    $this->doSomethingNonsimultaneously($that->getFoo());
});
The third, optional argument can come in handy if you use this function more than once per file.
Last but not least, it isn't hard to modify this function (while keeping its signature) to use any other kind of locking mechanism at a later date, e.g. if you find yourself working with multiple servers.
APC is now considered unmaintained and dead. Its successor APCu offers locking via apcu_entry. But be aware that it also prohibits the concurrent execution of any other APCu functions. Depending on your use case, this might be OK for you.
From the manual:
Note: When control enters apcu_entry() the lock for the cache is acquired exclusively, it is released when control leaves apcu_entry(): In effect, this turns the body of generator into a critical section, disallowing two processes from executing the same code paths concurrently. In addition, it prohibits the concurrent execution of any other APCu functions, since they will acquire the same lock.
APCu has had apcu_entry since 5.1.0, so you can implement a lock mechanism with it now:
/** get a lock; will wait until the lock is available,
 * make sure you handle deadlocks yourself :p
 *
 * usage : $lock = lock('THE_LOCK_KEY', uniqid(), 50);
 *
 * @param $lock_key : the lock you want to get
 * @param $lock_value : the unique value identifying the lock owner
 * @param $retry_millis : how long to wait before retrying
 * @return ['lock_key'=>$lock_key, 'lock_value'=>$lock_value]
 */
function lock($lock_key, $lock_value, $retry_millis) {
    $got_lock = false;
    while (!$got_lock) {
        $fetched_lock_value = apcu_entry($lock_key, function ($key) use ($lock_value) {
            return $lock_value;
        }, 100);
        $got_lock = ($fetched_lock_value == $lock_value);
        if (!$got_lock) usleep($retry_millis*1000);
    }
    return ['lock_key'=>$lock_key, 'lock_value'=>$lock_value];
}

/** release a lock
 *
 * usage : unlock($lock);
 *
 * @param $lock : return value of function lock
 */
function unlock($lock) {
    apcu_delete($lock['lock_key']);
}
What I've found, actually, is that I don't need any locking at all: given that what I'm trying to create is a map of all the class => path associations for autoloading, it doesn't matter if one process overwrites what another one has found (which is highly unlikely anyway, if coded properly), because the data will get there eventually. So the solution turned out to be "no locks".
