In PHP, I create a cache file to store complex result variables: one variable, one cache file. So far it works fine.
The problem lies in the lifetime of the cache. At the moment I write both the timeout and the variable into the file, but that is not optimal because I have to open the file just to check the timeout.
I would like (if possible) to check the timeout against a file property (like the last-modified date returned by filemtime()). Can custom properties be added to a file?
The other option is to put the timeout in the filename, which is not my favorite solution.
[Edit]
final class Cache_Var extends Cache {
public static function put($key, $value, $timeout=0) {
// different timeout by variable (if 0, infinite timeout)
}
public static function get($key) {
// no timeout to get a var cache
// return null if file not found, or if timeout expire
// return var otherwise
}
}
filectime() can really help you:
$validity = 60 * 60; // 3600s = 1 hour
if(filectime($filename) > time() - $validity) {
// cache is valid
} else {
// cache is invalid: recreate it
}
There are some caching frameworks around that use exactly this mechanism.
Edit:
If you need different timeouts per cache item, then use touch() to set the modification time of the cache files. You could even set the modification time to a future value and directly compare filectime() with the current time.
final class Cache_Var extends Cache {
public static function put($key, $value, $timeout=0) {
// different timeout by variable (if 0, infinite timeout)
// ...
touch($filename, time() + $timeout);
// For static files with unlimited lifetime I would simply store
// them in a separate folder
}
}
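For completeness, a matching get() might look like the sketch below. It assumes put() serialized the value into the file and pushed the file's mtime to the expiry time with touch() as shown above; cacheFilename() is a hypothetical helper mapping a key to its file, and infinite-lifetime entries would live in the separate folder mentioned in the comment.
public static function get($key) {
    $filename = self::cacheFilename($key); // hypothetical helper mapping a key to its file
    if (!is_file($filename)) {
        return null; // cache miss
    }
    // put() pushed the mtime into the future via touch(), so an mtime
    // in the past means the entry has expired.
    if (filemtime($filename) < time()) {
        unlink($filename); // drop the stale file
        return null;
    }
    return unserialize(file_get_contents($filename));
}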
I want to prevent a user from making the same request twice by using the Symfony Lock component, because right now users can click a link twice (by accident?) and duplicate entities are created. I want to use the Unique Entity Constraint, which does not protect against race conditions by itself.
The Symfony Lock component does not seem to work as expected. When I create a lock at the beginning of a page and open the page twice at the same time, the lock can be acquired by both requests. When I open the test page in a standard and an incognito browser window, the second request doesn't acquire the lock, but I can't find anything in the docs about locks being tied to a session. I have created a small test file in a fresh project to isolate the problem. This uses PHP 7.4, Symfony 5.3, and the Lock component:
<?php
namespace App\Controller;
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Template;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\Lock\LockFactory;
use Symfony\Component\Routing\Annotation\Route;
class LockTest extends AbstractController
{
/**
* #Route("/test")
* #Template("lock/test.html.twig")
*/
public function test(LockFactory $factory): array
{
$lock = $factory->createLock("test");
$acquired = $lock->acquire();
dump($lock, $acquired);
sleep(2);
dump($lock->isAcquired());
return ["message" => "testing"];
}
}
I slightly rewrote your controller like this (with Symfony 5.4 and PHP 8.1):
class LockTestController extends AbstractController
{
#[Route("/test")]
public function test(LockFactory $factory): JsonResponse
{
$lock = $factory->createLock("test");
$t0 = microtime(true);
$acquired = $lock->acquire(true);
$acquireTime = microtime(true) - $t0;
sleep(2);
return new JsonResponse(["acquired" => $acquired, "acquireTime" => $acquireTime]);
}
}
It waits for the lock to be released and it counts the time the controller waits for the lock to be acquired.
I ran two requests with curl against a caddy server.
curl -k 'https://localhost/test' & curl -k 'https://localhost/test'
The output confirms one request was delayed while the first one slept with the acquired lock.
{"acquired":true,"acquireTime":0.0006971359252929688}
{"acquired":true,"acquireTime":2.087146043777466}
So, the lock works to guard against concurrent requests.
If the lock is not blocking:
$acquired = $lock->acquire(false);
The output is:
{"acquired":true,"acquireTime":0.0007710456848144531}
{"acquired":false,"acquireTime":0.00048804283142089844}
Notice how the second lock is not acquired. You should use this flag to reject the user's request with an error instead of creating the duplicate entity.
If the two requests are sufficiently spaced apart to each get the lock in turn, you can check that the entity exists (because it had time to be fully committed to the db) and return an error.
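As a rough sketch of that approach (Order, its token field, the route and the lock name are all made up for illustration, and the usual use statements for Request, Response and EntityManagerInterface are assumed):
#[Route("/order", methods: ["POST"])]
public function createOrder(Request $request, LockFactory $factory, EntityManagerInterface $em): Response
{
    // One lock per logical action; here the client sends a token identifying the submission.
    $token = (string) $request->request->get('token', '');
    $lock = $factory->createLock('create-order-'.$token);

    if (!$lock->acquire(false)) {
        // Another request is already handling this submission: reject instead of duplicating.
        return new Response('Request already in progress', Response::HTTP_CONFLICT);
    }

    try {
        // Second line of defense: an earlier request may have already committed the entity.
        if ($em->getRepository(Order::class)->findOneBy(['token' => $token])) {
            return new Response('Already created', Response::HTTP_CONFLICT);
        }

        $order = new Order($token);
        $em->persist($order);
        $em->flush();

        return new Response('Created', Response::HTTP_CREATED);
    } finally {
        $lock->release();
    }
}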
Despite those encouraging results, the doc mentions this note:
Unlike other implementations, the Lock Component distinguishes lock instances even when they are created for the same resource. It means that for a given scope and resource one lock instance can be acquired multiple times. If a lock has to be used by several services, they should share the same Lock instance returned by the LockFactory::createLock method.
I understand this to mean that two locks acquired from two distinct factories should not block each other. Unless the note is outdated or poorly phrased, it seems possible to end up with non-working locks under some circumstances, but not with the above test code.
StreamedResponse
A lock is released when it goes out of scope.
As a special case, when a StreamedResponse is returned, the lock goes out of scope when the response is returned by the controller. But the StreamedResponse has yet to return anything!
To keep the lock while the response is generated, it must be passed to the function executed by the StreamedResponse:
public function export(LockFactory $factory): Response
{
// create a lock with a TTL of 60s
$lock = $factory->createLock("test", 60);
if (!$lock->acquire(false)) {
return new Response("Too many downloads", Response::HTTP_TOO_MANY_REQUESTS);
}
$response = new StreamedResponse(function () use ($lock) {
// now $lock is still alive when this function is executed
$lockTime = time();
while (have_some_data_to_output()) {
if (time() - $lockTime > 50) {
// refresh the lock well before it expires to be on safe side
$lock->refresh();
$lockTime = time();
}
output_data();
}
$lock->release();
});
$response->headers->set('Content-Type', 'text/csv');
// lock would be released here if it wasn't passed to the StreamedResponse
return $response;
}
The above code refreshes the lock every 50s to cut down on communication time with the storage engine (such as redis).
The lock remains locked for at most 60s should the php process suddenly die.
I wrote a class that syncs the DB from an XML file and reports any alerts by email.
The XML contains product prices and stock.
The sync method only runs if the XML file's time is newer than that of the last file synced.
Here is the first problem: I suspect the server (randomly) changes the file time for some reason, because the sync method runs even though no new XML file was produced.
The XML file is exported from a local server and uploaded to the remote server through an FTP client
(SyncBack).
The second problem is that during heavy-traffic hours the do_sync method runs more than once, because I receive the alerts more than once in my email.
I understand why it is called multiple times, so I created a syncing_now flag to prevent the repeated execution.
The flaw is that the flag is stored in the DB, and since the first call still has to update the DB, all the other calls can run the method in the meantime.
<?php
class Sync extends Model {

    public function __construct() {
        parent::__construct();
        $this->syncing_now = $this->db->get('syncing_now');
    } // END constructor

    public function index() {
        if ($this->determine_sync()) {
            $this->do_sync();
        } else {
            return FALSE;
        }
    }

    public function determine_sync() {
        // $file: path to the uploaded XML file
        if (filemtime($file) <= $this->db->last_sync() or !$this->syncing_now) {
            return FALSE;
        } else {
            return TRUE;
        }
    }

    public function do_sync() {
        $this->db->update('syncing_now', TRUE);
        // the sync code works fine..
        $this->db->update('syncing_now', FALSE);
    }
}
So what can I do to run the method only once, and how can I track down why the file time changes occur?
Thanks all, any help is appreciated.
I suggest you use a table that stores your synchronisations:
id | md5_of_xml_file | synched_date
Now use LOCK TABLES to ensure that only one process at a time may process your sync files.
Lock the synchronisations table. If locking it fails, just quit:
if (!$mysqli->query('LOCK TABLES synchronisations WRITE')) {
    die(); // quit
}
If an entry with the hash of the XML sync file already exists, just quit:
$md5Hash = md5_file('yourXmlSyncFile.xml');
$result = null;
$stmt= $mysqli->prepare("SELECT md5_of_xml_file FROM synchronisations
WHERE md5_of_xml_file=?");
$stmt->bind_param("s", $md5Hash);
$stmt->execute();
$stmt->bind_result($result);
$stmt->fetch();
$stmt->close();
if ($result == $md5Hash) {
die();//quit;
}
Otherwise, attempt to sync the file. If that works, add an entry storing when you did this and the hash of the file used for the synchronisation.
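A sketch of that last step, reusing the $mysqli connection and $md5Hash from above (do_sync() stands in for whatever your actual sync routine is):
if (do_sync('yourXmlSyncFile.xml')) {
    $stmt = $mysqli->prepare(
        "INSERT INTO synchronisations (md5_of_xml_file, synched_date) VALUES (?, NOW())"
    );
    $stmt->bind_param("s", $md5Hash);
    $stmt->execute();
    $stmt->close();
}
// Release the table lock whether or not the sync succeeded.
$mysqli->query('UNLOCK TABLES');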
I have one page where I do long polling, and at the beginning of that page I have to use this:
session_start();
session_write_close();
Because:
to prevent concurrent writes only one script may operate on a session at any time
If I don't, then while the long polling is running the user will not be able to load another page.
Accessing my session data from this polling page is therefore possible, but at some point in my script I have to save my session back to the server because I made some changes to it.
What's the right way to do that?
It would be very nice if there were a way to do something like:
session_write_open();
//do stuff
session_write_close();
But session_write_open() doesn't exist!
Thanks
Before you make some change to the session, call session_start() again. Make the changes, and if you still do not want to exit, call session_write_close() once more. You can do this as many times as you like.
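A minimal sketch of that pattern (the 'progress' key is just an example):
session_start();            // read the session, then release the lock right away
session_write_close();

// ... long-polling work ...

session_start();            // reopen the session when something must be saved
$_SESSION['progress'] = 42; // example change
session_write_close();      // write it back and release the lock again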
The previous solution will create new session IDs and cookies... I wouldn't use it as is:
Session is created every time you call session_start(). If you want
to avoid multiple cookie, write better code. Multiple session_start()
especially for the same names in the same script seems like a really
bad idea.
see here : https://bugs.php.net/bug.php?id=38104
I am looking for a solution right now too and I can't find one. I agree with those who say this is a "bug".
You should be able to reopen a php session, but as you said session_write_open() does not exist...
I found a workaround in the above thread. It involves manually sending a header specifying the session ID's cookie after processing the request. Luckily, I am working with a home-brewed front controller designed so that no sub-controller ever sends data on its own.
In a nutshell, it works perfectly in my case. To use this you might just have to use ob_start() and ob_get_clean(). Here's the magic line:
if (SID) header('Set-Cookie: '.SID.'; path=/', true);
EDIT : see CMCDragonkai's answer below, seems good!?
All of the answers here seem to be saying to use the session methods in ways that they were clearly not intended to be used...namely calling session_start() more than once.
The PHP website offers an example SessionHandlerInterface implementation that will work just like existing sessions but without locking the file. Just implementing their example interface fixed my locking issue to allow for concurrent connections on the same session without limiting my ability to add vars to the session. To prevent some race conditions, since the app's session isn't fully stateless, I did have to make a way to save the session mid-request without closing it so that important changes could save immediately after change and less important session vars could just save at the end of the request. See the below example for usage:
Session::start();
echo("<pre>Vars Stored in Session Were:\n");print_r($_SESSION);echo("</pre>");
$_SESSION['one'] = 'one';
$_SESSION['two'] = 'two';
//save won't close session and subsequent request will show 'three'
Session::save();
$_SESSION['three'] = 'three';
If you replace that Session::start() with session_start() and Session::save() with session_write_close(), you'll notice that subsequent requests will never print out the third variable...it will be lost. However, using the SessionHandler (below), no data is lost.
The OOP implementation requires PHP 5.4+. However, you can provide individual callback methods in older versions of PHP. See docs.
namespace {
class Session implements SessionHandlerInterface {
/** @var Session */
private static $_instance;
private $savePath;
public static function start() {
if( empty(self::$_instance) ) {
self::$_instance = new self();
session_set_save_handler(self::$_instance,true);
session_start();
}
}
public static function save() {
if( empty(self::$_instance) ) {
throw new \Exception("You cannot save a session before starting the session");
}
self::$_instance->write(session_id(),session_encode());
}
public function open($savePath, $sessionName) {
$this->savePath = $savePath;
if (!is_dir($this->savePath)) {
mkdir($this->savePath, 0777);
}
return true;
}
public function close() {
return true;
}
public function read($id) {
return (string)#file_get_contents("$this->savePath/sess_$id");
}
public function write($id, $data) {
return file_put_contents("$this->savePath/sess_$id", $data) === false ? false : true;
}
public function destroy($id) {
$file = "$this->savePath/sess_$id";
if (file_exists($file)) {
unlink($file);
}
return true;
}
public function gc($maxlifetime) {
foreach (glob("$this->savePath/sess_*") as $file) {
if (filemtime($file) + $maxlifetime < time() && file_exists($file)) {
unlink($file);
}
}
return true;
}
} // end of class Session
} // close the global namespace block opened above
The other answers here present pretty good solutions. As mentioned by @Jon, the trick is to call session_start() again before you want to make changes. Then, when you are done making changes, call session_write_close() again.
As mentioned by @Armel Larcier, the problem with this is that PHP attempts to generate new headers and will likely generate warnings (e.g. if you've already written non-header data to the client). Of course, you can simply prefix the session_start() with "@" (@session_start()), but there's a better approach.
Another Stack Overflow question, provided by @VolkerK, reveals the best answer:
session_start(); // first session_start
...
session_write_close();
...
ini_set('session.use_only_cookies', false);
ini_set('session.use_cookies', false);
//ini_set('session.use_trans_sid', false); //May be necessary in some situations
ini_set('session.cache_limiter', null);
session_start(); // second session_start
This prevents PHP from attempting to send the headers again. You could even write a helper function to wrap the ini_set() functions to make this a bit more convenient:
function session_reopen() {
ini_set('session.use_only_cookies', false);
ini_set('session.use_cookies', false);
//ini_set('session.use_trans_sid', false); //May be necessary in some situations
ini_set('session.cache_limiter', null);
session_start(); //Reopen the (previously closed) session for writing.
}
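Usage would then look something like this (the 'foo' key is just an example):
session_start();          // initial open, sends the session cookie/headers
// ...
session_write_close();    // release the session lock
// ...
session_reopen();         // reopen later without resending headers
$_SESSION['foo'] = 'bar'; // example change
session_write_close();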
Original related SO question/answer: https://stackoverflow.com/a/12315542/114558
After testing out Armel Larcier's workaround, here's my proposed solution to this problem:
ob_start();
session_start();
session_write_close();
session_start();
session_write_close();
session_start();
session_write_close();
session_start();
session_write_close();
if(SID){
$headers = array_unique(headers_list());
$cookie_strings = array();
foreach($headers as $header){
if(preg_match('/^Set-Cookie: (.+)/', $header, $matches)){
$cookie_strings[] = $matches[1];
}
}
header_remove('Set-Cookie');
foreach($cookie_strings as $cookie){
header('Set-Cookie: ' . $cookie, false);
}
}
ob_flush();
This will preserve any cookies that were created prior to working with sessions.
BTW, you may wish to register the above code as a function with register_shutdown_function(). Make sure to run ob_start() before registering it, and keep the ob_flush() inside the function.
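For example (a minimal sketch; the closure body is the Set-Cookie deduplication shown above):
ob_start();

register_shutdown_function(function () {
    // ... the SID check and Set-Cookie deduplication from above ...
    ob_flush();
});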
A question with respect to Session Expiration in PHP.
I need my server to throw away session information if that user has been inactive for a while (for testing purposes, 5 seconds).
I've looked at this question, and in particular at the answer by Gumbo (+28 votes), and I've been wondering about the feasibility of this answer with respect to inactive users. On my site I have already implemented this suggestion and it works fine, as long as the user requests some data at least once after the session has expired. But the problem with inactive users is that they don't request new data, so the expiration code is never called.
I've been looking at session.gc_maxlifetime and associated parameters in my php.ini, but I couldn't make this work the way I wanted it to.
Any suggestions on this problem?
The session expiration logic I mentioned does already do what you’re expecting: The session cannot be used once it has expired.
That the session data is still in the storage doesn’t matter as it cannot be used after expiry; it will be removed when the garbage collector is running the next time. And that happens with a probability of session.gc_probability divided by session.gc_divisor on every session_start call (see also How long will my session last?).
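For reference, the settings involved can be inspected like this (a tiny sketch; the shipped defaults are typically 1, 100 and 1440):
// Probability of GC running on session_start() is gc_probability / gc_divisor;
// data idle longer than gc_maxlifetime seconds is then eligible for removal.
printf(
    "GC chance: %d/%d, max lifetime: %ds\n",
    ini_get('session.gc_probability'),
    ini_get('session.gc_divisor'),
    ini_get('session.gc_maxlifetime')
);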
Edit: Since you want to perform some additional tasks on an expired session, I would rather recommend using a custom session save handler.
When using a class for the session save handler, you could write two classes: one for the basic save handler and one with an extended garbage collector that performs the additional tasks, e.g.:
interface SessionSaveHandler {
public function open();
public function close();
public function read($id);
public function write($id, $data);
public function destroy($id);
public function gc($callback=null);
}
class SessionSaveHandler_WithAdditionalTasks implements SessionSaveHandler {
// …
public function gc($callback=null) {
if (!is_null($callback) && (!is_array($callback) || !is_callable($callback))) return false;
while (/* … */) {
if ($callback) call_user_func($callback, $id);
// destroy expired sessions
// …
}
}
public static function doAdditionalTasksOn($id) {
// additional tasks with $id
}
}
session_set_save_handler(
    array('SessionSaveHandler_WithAdditionalTasks', 'open'),
    array('SessionSaveHandler_WithAdditionalTasks', 'close'),
    array('SessionSaveHandler_WithAdditionalTasks', 'read'),
    array('SessionSaveHandler_WithAdditionalTasks', 'write'),
    array('SessionSaveHandler_WithAdditionalTasks', 'destroy'),
    array('SessionSaveHandler_WithAdditionalTasks', 'gc')
);
If you need to call specific expiration logic (for example, in order to update a database) and want independence from requests, then it would make sense to implement an external session handler daemon that looks at the access times of the session files. The daemon script should execute whatever is necessary for every session file that has not been accessed for a specified time.
This solution has two prerequisites: the server's filesystem supports access times (Windows does not) and you can read files from the session save path.
The garbage collector isn't called per session; it has a chance to be called based on the gc_* values, and one run can invalidate multiple sessions. So as long as someone triggers it, other people can be logged out. If you need a more reliable method: if your timeout is in minutes, use a cron job; if your timeout is in seconds, you'll have to use some kind of daemon process.
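For file-based sessions, such a cron job could be as simple as the sketch below (the session path and the 15-minute timeout are examples; adjust them to your setup):
<?php
// cleanup_sessions.php - run from cron, e.g. every minute:
// * * * * * php /path/to/cleanup_sessions.php
$timeout = 15 * 60; // example: 15 minutes of inactivity
foreach (glob('/var/lib/php/sessions/sess_*') as $file) {
    if (filemtime($file) + $timeout < time()) {
        unlink($file); // session expired: remove it
    }
}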
One way you can do this is to use JavaScript to refresh the page a little after the timeout period. Granted, your users will have to have JavaScript enabled for this to work. You can also add extra features like having JavaScript pop up a timeout notice with a countdown, etc. So essentially what happens is: the session expires due to your settings, then the refresh hits, cleanup runs, and you're done.
<html>
<head>
<script type="text/JavaScript">
<!--
function timedRefresh(timeoutPeriod) {
setTimeout("location.reload(true);",timeoutPeriod);
}
// -->
</script>
</head>
<body onload="JavaScript:timedRefresh(5000);">
</body>
</html>
Storing sessions on disk is very slow and painful for me; I have very high traffic. I want to store sessions in APC (Alternative PHP Cache). How can I do this?
<?php
// to enable paste this line right before session_start():
// new Session_APC;
class Session_APC
{
protected $_prefix;
protected $_ttl;
protected $_lockTimeout = 10; // if empty, no session locking, otherwise seconds to lock timeout
public function __construct($params=array())
{
$def = session_get_cookie_params();
$this->_ttl = $def['lifetime'];
if (isset($params['ttl'])) {
$this->_ttl = $params['ttl'];
}
if (isset($params['lock_timeout'])) {
$this->_lockTimeout = $params['lock_timeout'];
}
session_set_save_handler(
array($this, 'open'), array($this, 'close'),
array($this, 'read'), array($this, 'write'),
array($this, 'destroy'), array($this, 'gc')
);
}
public function open($savePath, $sessionName)
{
$this->_prefix = 'BSession/'.$sessionName;
if (!apc_exists($this->_prefix.'/TS')) {
// creating non-empty array #see http://us.php.net/manual/en/function.apc-store.php#107359
apc_store($this->_prefix.'/TS', array(''));
apc_store($this->_prefix.'/LOCK', array(''));
}
return true;
}
public function close()
{
return true;
}
public function read($id)
{
$key = $this->_prefix.'/'.$id;
if (!apc_exists($key)) {
return ''; // no session
}
// redundant check for ttl before read
if ($this->_ttl) {
$ts = apc_fetch($this->_prefix.'/TS');
if (empty($ts[$id])) {
return ''; // no session
} elseif (!empty($ts[$id]) && $ts[$id] + $this->_ttl < time()) {
unset($ts[$id]);
apc_delete($key);
apc_store($this->_prefix.'/TS', $ts);
return ''; // session expired
}
}
if ($this->_lockTimeout) {
$locks = apc_fetch($this->_prefix.'/LOCK');
if (!empty($locks[$id])) {
while (!empty($locks[$id]) && $locks[$id] + $this->_lockTimeout >= time()) {
usleep(10000); // sleep 10ms
$locks = apc_fetch($this->_prefix.'/LOCK');
}
}
/*
// by default will overwrite session after lock expired to allow smooth site function
// alternative handling is to abort current process
if (!empty($locks[$id])) {
return false; // abort read of waiting for lock timed out
}
*/
$locks[$id] = time(); // set session lock
apc_store($this->_prefix.'/LOCK', $locks);
}
return apc_fetch($key); // if no data returns empty string per doc
}
public function write($id, $data)
{
$ts = apc_fetch($this->_prefix.'/TS');
$ts[$id] = time();
apc_store($this->_prefix.'/TS', $ts);
$locks = apc_fetch($this->_prefix.'/LOCK');
unset($locks[$id]);
apc_store($this->_prefix.'/LOCK', $locks);
return apc_store($this->_prefix.'/'.$id, $data, $this->_ttl);
}
public function destroy($id)
{
$ts = apc_fetch($this->_prefix.'/TS');
unset($ts[$id]);
apc_store($this->_prefix.'/TS', $ts);
$locks = apc_fetch($this->_prefix.'/LOCK');
unset($locks[$id]);
apc_store($this->_prefix.'/LOCK', $locks);
return apc_delete($this->_prefix.'/'.$id);
}
public function gc($lifetime)
{
if ($this->_ttl) {
$lifetime = min($lifetime, $this->_ttl);
}
$ts = apc_fetch($this->_prefix.'/TS');
foreach ($ts as $id=>$time) {
if ($time + $lifetime < time()) {
apc_delete($this->_prefix.'/'.$id);
unset($ts[$id]);
}
}
return apc_store($this->_prefix.'/TS', $ts);
}
}
I tried to lure better answers by offering 100 points as a bounty, but none of the answers were really satisfying.
I would aggregate the recommended solutions like this:
Using APC as a session storage
APC cannot really be used as a session store, because there is no mechanism available to APC that allows proper locking. But this locking is essential to ensure nobody alters the initially read session data before writing it back.
Bottom line: Avoid it, it won't work.
Alternatives
A number of session handlers might be available. Check the output of phpinfo() at the Session section for "Registered save handlers".
File storage on RAM disk
Works out-of-the-box, but needs a file system mounted as RAM disk for obvious reasons.
Shared memory (mm)
Available when PHP is compiled with mm enabled. This is built in on Windows.
Memcache(d)
PHP comes with a dedicated session save handler for this. Requires installed memcache server and PHP client. Depending on which of the two memcache extensions is installed, the save handler is either called memcache or memcached.
In theory, you ought to be able to write a custom session handler which uses APC to do this transparently for you. However, I haven't actually been able to find anything really promising in a quick five-minute search; most people seem to be using APC for the bytecode cache and putting their sessions in memcached.
Simply putting your /tmp disk (or, wherever PHP session files are stored) onto a RAM disk such as tmpfs or ramfs would also have serious performance gains, and would be a much more transparent switch, with zero code changes.
The performance gain may be significantly less, but it will still be significantly faster than on-disk sessions.
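For example, if a tmpfs mount already exists (the /dev/shm path below is just an example, and the directory must be writable by the web server), it is only a php.ini change:
session.save_handler = files
session.save_path = "/dev/shm/php-sessions"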
Store it in cookies (encrypted) or MongoDB. APC isn't really intended for that purpose.
You can store your session data in PHP's internal shared memory (mm):
session.save_handler = mm
But it needs to be available: http://php.net/manual/en/session.installation.php
Another good solution is to store PHP sessions in memcached
session.save_handler = memcache
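With the memcache extension, session.save_path then points at the memcache server, for example:
session.save_handler = memcache
session.save_path = "tcp://127.0.0.1:11211"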
Explicitly closing the session immediately after starting, opening and writing should solve the locking problem in Unirgy's answer (where session access is always cyclic: start/open, write, close). I also imagine a second class, APC_journaling or something similar, used in conjunction with sessions would ultimately be better: a session starts and is written to with a unique external ID assigned to each session, that session is closed, and a journal (an array in the APC cache via apc_store and apc_add) is opened/created for any other writes intended for the session, which can then be read, validated and written to the session (identified by that unique ID!) in APC at the next convenient opportunity.
I found a good blog post explaining that the locking havoc Sven refers to comes from the session blocking until it's closed or script execution ends. Closing the session immediately doesn't prevent reading, just writing.
http://konrness.com/php5/how-to-prevent-blocking-php-requests - link to the blog post.
Hope this helps.
Caching external data in PHP: http://www.gayadesign.com/diy/caching-external-data-in-php/
How to Use APC Caching with PHP: http://www.script-tutorials.com/how-to-use-apc-caching-with-php/