My script sometimes receives 2 identical requests at practically the same time (milliseconds apart) from an external system.
On each incoming request, the script makes a request to the external system, checks whether an entry already exists there, and creates it if it does not.
The problem is that with simultaneous requests the uniqueness check fails, and as a result 2 records are created.
I tried a random sleep, but it didn't help:
$sleep = rand(1,5); sleep($sleep);
I would suggest using a fast caching system, like memcached or redis:
Check whether the system is busy.
If it is not busy, mark it as busy by setting a flag in the cache.
Process the request.
Clear the busy flag.
While processing, if another request comes in, it checks the busy flag in memcache/redis. If the system is busy, it simply does nothing.
I'm going to try some pseudo code here:
function processData($requestData)
{
    $isSystemBusy = Cache::get('systemBusy');
    if ($isSystemBusy === true) {
        exit();
    }
    Cache::set('systemBusy', true);
    // do your logic here
    Cache::set('systemBusy', false);
}
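Note that the get-then-set above is not atomic: two simultaneous requests can both read a stale flag before either one sets it. A minimal sketch of an atomic variant, assuming the Memcached PECL extension (Memcached::add() only succeeds if the key does not exist yet; the server address and the 30-second TTL are just example values):

function processData($requestData)
{
    // Sketch only: assumes a memcached server on localhost:11211
    $cache = new Memcached();
    $cache->addServer('127.0.0.1', 11211);

    // add() is atomic: it fails if 'systemBusy' already exists,
    // so only one of two simultaneous requests gets past this point.
    // The TTL keeps a crashed request from blocking everything forever.
    if (!$cache->add('systemBusy', 1, 30)) {
        return; // another request is already processing
    }

    // do your logic here

    $cache->delete('systemBusy'); // release the "lock"
}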
The solution was to write a lock file named after the ID:
$tmp_file = __DIR__.'/tmp/'.$origDealId.'.lock';
if (file_exists($tmp_file)) {
    // duplicate request
    return null;
} else {
    touch($tmp_file); // mark this ID as being processed
    // do something
}
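Note that the file_exists check followed by creating the file still leaves a tiny race window. A sketch of an atomic variant uses fopen() with the 'x' mode, which creates the file and fails if it already exists, so only one of two simultaneous requests can win:

$tmp_file = __DIR__.'/tmp/'.$origDealId.'.lock';
// 'x' mode: create the file, fail if it already exists (atomic on a local filesystem)
$fh = @fopen($tmp_file, 'x');
if ($fh === false) {
    // duplicate request: another process already created the lock file
    return null;
}
fclose($fh);
// do something, then optionally unlink($tmp_file) once the entry is confirmed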
In certain instances I want to cancel calls from users that already have an open session.
I use session_start to make sure a logged-in user can only execute one request at a time, and that works fine. But all subsequent calls simply block indefinitely until the previous calls have gone through, which is unsatisfying in certain circumstances, such as misbehaving users.
Normally all blocking calls I know of have a timeout parameter you can pass along. Is there something like this for session_start?
Or is there a call in the spirit of session_opened_by_other_script that I can make before calling session_start?
For now my solution is to check whether there is already a lock on the session file, using exec and shell scripting. I don't recommend this to anyone who does not fully understand it.
Basically it tries to get a lock on the session file for the specified timeout value using flock. If it fails to do so, it exits with 408 Request Timeout (or 429 Too Many Requests, if available).
For this to work you need to...
know your session ID at that point in time
have file based sessions
Note that this is not atomic. It can still happen that multiple requests end up waiting in session_start, but that should be a rare event. Most calls should be canceled correctly, which was my goal.
class Session {
    static public function openWhenClosed() {
        if (session_status() == PHP_SESSION_NONE) {
            $sessionId = session_id();
            if ($sessionId == null && isset($_COOKIE[session_name()]))
                $sessionId = $_COOKIE[session_name()];
            if ($sessionId != null) {
                $sessFile = session_save_path()."/sess_".$sessionId;
                if (file_exists($sessFile)) {
                    $timeout = 30; // How long to try to get hold of the session
                    $fd = 9;       // File descriptor to use to try locking the session file
                    /*
                     * This 'trick' is not atomic!!
                     * After exec() returns and session_start() is called, there is a time window
                     * in which other waiting calls can get a successful lock, proceed, and then be
                     * blocked by session_start(). The longer the sleep value, the less likely this
                     * is to happen, but also the longer the extra delay for the call.
                     */
                    $sleep = "0.01"; // 10 ms
                    // Check if the session file is already locked by trying to get a lock on it.
                    // If it is, retry every $sleep seconds for up to $timeout seconds.
                    exec("
                        exec $fd>>$sessFile;
                        while [ \$SECONDS -lt $timeout ]; do
                            flock -n $fd;
                            if [ \$? -eq 0 ]; then exit 0; fi;
                            sleep $sleep;
                        done;
                        exit 1;
                    ", $null, $timedOut);
                    if ($timedOut) {
                        http_response_code(408); // 408: Request Timeout, or even better 429 if your Apache supports it
                        die("Request canceled because another request is still running");
                    }
                }
            }
            session_start();
        }
    }
}
Additional thoughts:
It is tempting to use flock -w <timeout>, but that way far more waiting calls will manage to use the time window between exec and session_start to obtain a lock and end up blocking in session_start.
If you use the browser for testing this, be aware that most browsers queue requests and reuse a limited number of connections, so they do not start sending your request before others finish. This can lead to seemingly strange results if you are not aware of it. You can test more reliably using several parallel wget commands.
I do not recommend activating this for normal browser requests. As mentioned in 2), this is already handled by the browser anyway in most cases. I only use it to protect my API against rogue implementations that do not wait for an answer before sending the next request.
The performance hit was negligible in my tests for my overall load, but I would advise testing in your own environment using microtime() calls.
I'm trying to test a race condition in PHP. I'd like to have N PHP processes get ready to do something, then block. When I say "go", they should all execute the action at the same time. Hopefully this will demonstrate the race.
In Java, I would use Object.wait() and Object.notifyAll(). What can I use in PHP?
(Either Windows- or Linux-native answers are acceptable.)
Create a file "wait.txt"
Start N processes, each with the code shown below
Delete the "wait.txt" file.
...
<?php
while (file_exists('wait.txt')) {}
runRaceTest();
Usually a file-lock approach is used with PHP. One creates a RUN_LOCK or similar file and checks file_exists("RUN_LOCK"). This technique is also used as a safeguard against potential endless loops in recursive code.
I decided to require the file's presence for execution. The other approach would be for the existence of the file to invoke the blocking algorithm. That depends on your situation; the safer state should always be the one that is easier to reach.
Wait code:
/* Prepare the program */
/* ... */

/* Block until it's time to go */
define("LOCK_FILE", "RUN_UNLOCK"); // I'd define this in some config.php
while (!file_exists(LOCK_FILE)) {
    usleep(1); // No sleep at all would eat lots of CPU
}

/* Execute the main code */
/* ... */

/* Delete the "run" file so that no further executions are allowed */
usleep(1); // Just to be sure - we want the other processes to reach the execution phase too
if (file_exists(LOCK_FILE))
    unlink(LOCK_FILE);
I guess it would be nice to have a blocking function for that, like this one:
function wait_for_file($filename, $timeout = -1) {
    if ($timeout >= 0) {
        $start = microtime(true) * 1000; // Remember the start time in milliseconds
    }
    while (!file_exists($filename)) {    // Check the file existence
        if ($timeout >= 0) {             // Only calculate when a timeout is set
            if ((microtime(true) * 1000 - $start) > $timeout) // Compare elapsed time with the timeout
                return false;            // Return failure
        }
        usleep(1); // Save some CPU
    }
    return true; // Return success
}
It implements a timeout. You may not need it, but maybe someone else will.
Usage:
header("Content-Type: text/plain; charset=utf-8");
ob_implicit_flush(true); while (@ob_end_clean()); // Flush buffers so the output will be a live stream
define("LOCK_FILE", "RUN_FOREST_RUN"); // Define the lock file name again
echo "Starting the blocking algorithm. Waiting for file: ".LOCK_FILE."\n";
if (wait_for_file(LOCK_FILE, 10000)) { // Wait for 10 seconds
    echo "File found and deleted!\n";
    if (file_exists(LOCK_FILE)) // May have been deleted by other processes
        unlink(LOCK_FILE);
} else {
    echo "Wait failed!\n";
}
This will output:
Starting the blocking algorithm. Waiting for file: RUN_FOREST_RUN
Wait failed!
~or~
Starting the blocking algorithm. Waiting for file: RUN_FOREST_RUN
File found and deleted!
PHP doesn't have multithreading, and it is not planned to be implemented either.
You can try hacks with sockets, though, or use 0MQ to communicate between multiple processes.
See Why does PHP not support multithreading?
Php multithread
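As a rough illustration of the socket idea (a sketch only, using plain PHP stream functions; the local port 9999 and the worker count are arbitrary example values): a coordinator process accepts N connections, the workers block on a read, and the coordinator then broadcasts a "go" line to release them all at roughly the same moment.

// coordinator.php - sketch: start this, then start the N workers
$n = 5;
$server = stream_socket_server('tcp://127.0.0.1:9999', $errno, $errstr);
$clients = [];
while (count($clients) < $n) {
    $clients[] = stream_socket_accept($server, 60); // wait up to 60s for each worker
}
foreach ($clients as $c) {
    fwrite($c, "go\n"); // release everyone at (nearly) the same time
    fclose($c);
}

// worker.php - run N copies of this
$conn = stream_socket_client('tcp://127.0.0.1:9999', $errno, $errstr, 60);
fgets($conn); // block here until the coordinator sends "go"
runRaceTest();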
I'm building a chat function using Zend Framework.
In JavaScript, I use AJAX to send a request to http://mydomain.com/chat/pull, whose pullAction function looks like this:
public function pullAction() {
    while (true) {
        try {
            $chat = Eezy_Chat::getNewMessage();
            if ($chat) {
                $chat->printMessage();
                break;
            }
            sleep(1); // sleep 1 second between each loop
        } catch (Zend_Db_Adapter_Exception $ex) {
            if ($ex->getCode() == 2006) { // reconnect db if timeout
                $dbAdapter = Zend_Db_Table::getDefaultAdapter();
                $dbAdapter->closeConnection();
                $dbAdapter->getConnection();
            }
        }
    }
}
This action keeps running until another user sends a message.
But while this request is running, I cannot go to any other page on my site. All of them wait for http://mydomain.com/chat/pull to finish its execution.
I have been searching for a solution all over Google but still have not found one.
Thanks for your help.
This sounds like Session locking.
When you use sessions stored on the file system, PHP locks the session file on each request and only releases it when that request is finished. While the file is locked, any other request wanting to access that file will hang and wait.
Since your chat script loops forever, checking for new messages, the session file will be locked forever, too, preventing the same user from accessing other sections of the site that also require session access.
A solution is to load all the session data required to fulfill a request into memory and then use Zend_Session::writeClose as soon as possible to release the lock.
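Applied to the pullAction above, a sketch might look like this (assuming you read the user identity via Zend_Auth; read everything you need from the session first, because writes after Zend_Session::writeClose will not be persisted):

public function pullAction() {
    // Read whatever you need from the session first...
    $identity = Zend_Auth::getInstance()->getIdentity();

    // ...then release the session lock so other requests from the same user can proceed.
    Zend_Session::writeClose(true);

    while (true) {
        $chat = Eezy_Chat::getNewMessage();
        if ($chat) {
            $chat->printMessage();
            break;
        }
        sleep(1);
    }
}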
I have a simple problem. I use PHP on the server side and produce HTML output. My site shows the status of another server, so the flow is:
Browser user goes on www.example.com/status
Browser contacts www.example.com/status
PHP server receives the request and asks for the status on www.statusserver.com/status
PHP receives the data, transforms it into readable HTML output and sends it back to the client
Browser user can see the status.
Now, I've created a singleton class in PHP which accesses the status server only every 8 seconds, so it updates the status every 8 seconds. If a user requests an update in between, the server returns the locally (on www.example.com) stored status.
That's nice, isn't it? But then I did a simple test and opened 5 browser windows to see if it works. And here it comes: the PHP server created a singleton instance for each request. So now 5 clients are each polling the status server every 8 seconds, which means I get 5 calls to the status server every 8 seconds instead of one!
Isn't there a possibility to provide only one instance to all users within an Apache server? That would solve the problem in case 1000 users are connecting to www.example.com/status....
Thanks for any hints.
=============================
EDIT:
I already use caching on the hard drive:
public function getFile($filename)
{
    $diff = (time() - filemtime($filename));
    //echo "diff:$diff<br/>";
    if ($diff > 8) {
        //echo 'greater than 8<br/>';
        self::updateFile($filename);
    }
    if (is_readable($filename)) {
        try {
            $returnValue = @ImageCreateFromPNG($filename);
            if ($returnValue == '') {
                sleep(1);
                return self::getFile($filename);
            } else {
                return $returnValue;
            }
        } catch (Exception $e) {
            sleep(1);
            return self::getFile($filename);
        }
    } else {
        sleep(1);
        return self::getFile($filename);
    }
}
This is the call in the singleton. I ask for a file and save it to the hard drive, but all the requests call it at the same time and start requesting the status server.
I think the only solution would be a standalone application which updates the file every 8 seconds... All requests should then just read the file and no longer be able to update it.
This standalone could be a Perl script or something similar...
PHP requests are handled by different processes, and each of them has its own state; there isn't any resident process like in other web development frameworks. You should handle that behavior directly in your class, for instance with some caching.
The method which queries the server status should have this logic:
public function getStatus() {
    if (!$status = $cache->load()) {
        // cache miss
        $status = $this->queryStatusServer(); // do your query here (placeholder method)
        $cache->save($status); // store the result in cache
    }
    return $status;
}
In this way only one request out of X will fetch the real status; the value of X depends on your cache configuration.
Some cache library you can use:
APC
Memcached
Zend_Cache which is just a wrapper for actual caching engines
Or you can store the result in a plain text file, and on every request check the mtime of the file itself and rewrite it if more than xx seconds have passed.
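A minimal sketch of that plain-file variant (the cache file name and the refreshStatus() helper are placeholders):

function getStatus() {
    $cacheFile = __DIR__ . '/status.cache';
    // Rewrite the file if it is missing or older than 8 seconds
    if (!file_exists($cacheFile) || (time() - filemtime($cacheFile)) > 8) {
        $status = refreshStatus(); // placeholder: query www.statusserver.com/status here
        file_put_contents($cacheFile, $status, LOCK_EX);
    }
    return file_get_contents($cacheFile);
}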
Update
Your code is pretty strange: why all those sleep calls? And why a try/catch block when ImageCreateFromPNG does not throw?
You're asking a different question now; since PHP is not an application server and cannot store state across processes, your approach is correct. I suggest you use APC (it uses shared memory, so it would be at least 10x faster than reading a file) to share the status across different processes. With this approach your code could become:
public function getFile($filename)
{
    $latest_update = apc_fetch('latest_update');
    if (false == $latest_update) {
        // cache expired or first request
        apc_store('latest_update', time(), 8); // 8 is the ttl in seconds
        // fetch file here and save on local storage
        self::updateFile($filename);
    }
    // here you can process the file
    return $your_processed_file;
}
With this approach the code in the if part will be executed by two different processes only if a process is blocked just after the if line, which should not happen because it is almost an atomic operation.
Furthermore, if you want to guarantee it, you could use something like semaphores to handle that, but it would be an oversized solution for this kind of requirement.
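For completeness, a sketch of the semaphore variant using PHP's System V semaphore functions (sem_get/sem_acquire/sem_release, available on POSIX systems; the key 0xCAFE is an arbitrary example):

$sem = sem_get(0xCAFE, 1); // one slot: only one process may hold it at a time
sem_acquire($sem);         // blocks until the semaphore is free
$latest_update = apc_fetch('latest_update');
if (false == $latest_update) {
    apc_store('latest_update', time(), 8); // 8 is the ttl in seconds
    self::updateFile($filename);           // only one process refreshes the file
}
sem_release($sem);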
Finally, IMHO 8 seconds is a small interval; I'd use something bigger, at least 30 seconds, but this depends on your requirements.
As far as I know it is not possible in PHP. However, you surely can serialize and cache the object instance.
Check out http://php.net/manual/en/language.oop5.serialization.php
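A sketch of that idea, assuming APC is available (the 'status_singleton' key and the StatusChecker class name are placeholders):

// Rebuild the instance from the cache if possible, otherwise create and cache it
$cached = apc_fetch('status_singleton');
if ($cached !== false) {
    $statusChecker = unserialize($cached);
} else {
    $statusChecker = new StatusChecker(); // placeholder for your singleton class
    apc_store('status_singleton', serialize($statusChecker), 8); // 8-second ttl
}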
I have several time-consuming database queries to run. Each is triggered from an option chosen on a web page. I thought I was being quite cunning by firing off the work via several AJAX requests.
I presumed that multiple requests would be split over multiple processes/threads, meaning the work would be completed relatively quickly for the user.
However, the requests seem to be processed serially, meaning that no speed benefit is felt by the user.
Worse still, AJAX requests to update the page also wait in line, meaning they fail to respond until the previous requests have all completed.
I have read that this may be caused by the PHP sessions being locked.
What is the usual approach for this kind of issue?
Is there a way to force AJAX requests to work asynchronously?
Can I stop PHP from locking the sessions?
Should I use a separate process via cron to run the background work?
Thanks!
NB This project has been built using the symfony framework.
AJAX uses jQuery
// Get the content
$.get('/ajax/itemInformation/slug/'+slug, function(data) {
$('#modal-more-information').html(data);
});
If you are using sessions at all during any of the given AJAX requests, they will effectively execute serially, in order of request. This is due to locking of the session data file at the operating system level. The key to getting those requests to be asynchronous is to close (or never start) the session as quickly as possible.
You can use session_write_close (docs) to close the session as soon as possible. I like to use a couple of helper functions for this; the set_session_var function below will open the session, write the var, then close the session - in and out as quickly as possible. When the page loads, you can call session_start to get the $_SESSION variable populated, then immediately call session_write_close. From then on, only use the set function below to write.
The get function is completely optional, since you could simply refer to the $_SESSION global, but I like to use this because it provides for a default value and I can have one less ternary in the main body of the code.
function get_session_var($key=false, $default=null) {
    if ($key == false || strlen($key) == 0)
        return false;
    if (isset($_SESSION[$key]))
        $ret = $_SESSION[$key];
    else
        $ret = $default;
    return $ret;
}

function set_session_var($key=false, $value=null) {
    if ($key == false || strlen($key) == 0)
        return false;
    session_start();
    if ($value === null)
        unset($_SESSION[$key]);
    else
        $_SESSION[$key] = $value;
    session_write_close();
}
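A sketch of how a request might use these helpers (the key names are just examples):

// At the top of the request: populate $_SESSION, then release the lock immediately
session_start();
session_write_close();

// Reads work directly off the already-populated $_SESSION copy
$userId = get_session_var('user_id', 0);

// Writes reopen and close the session briefly, keeping the lock window tiny
set_session_var('last_ajax_call', time());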
Be aware that there is a whole new set of considerations once the AJAX requests are truly asynchronous. Now you have to watch out for race conditions (you have to be wary of one request setting a variable that can impact another request). With the sessions closed, one request's changes to $_SESSION will not be visible to another request until it rebuilds the values. You can help avoid this by "rebuilding" the $_SESSION variable immediately before a critical use:
function rebuild_session() {
    session_start();
    session_write_close();
}
... but this is still susceptible to a race condition.