I'm writing a temporary file by running a couple of external Unix tools over a PDF file (basically I'm using QPDF and sed to alter the colour values; don't ask):
// Uncompress PDF using QPDF (it doesn't read from stdin, so it needs a temp file)
$compressed_file_path = tempnam(sys_get_temp_dir(), 'cruciverbal');
file_put_contents($compressed_file_path, $response->getBody());

$uncompressed_file_path = tempnam(sys_get_temp_dir(), 'cruciverbal');
$command = "qpdf --qdf --object-streams=disable '$compressed_file_path' '$uncompressed_file_path'";
exec($command, $output, $return_value);

// Run through sed (could do this bit with streaming stdin/stdout)
$fixed_file_path = tempnam(sys_get_temp_dir(), 'cruciverbal');
$command = "sed 's/0\.298039215/0.0/g' < '$uncompressed_file_path' > '$fixed_file_path'";
exec($command, $output, $return_value);
So, when this is done I'm left with a temporary file on disk at $fixed_file_path. (NB: While I could do the whole sed process streamed in-memory without a tempfile, the QPDF utility requires an actual file as input, for good reasons.)
In my existing process, I then read the whole $fixed_file_path file in as a string, delete it, and hand the string off to another class to go do things with.
I'd now like to change that last part to use a PSR-7 stream, i.e. a \GuzzleHttp\Psr7\Stream object. I figure it'll be more memory-efficient (I might have a few of these in the air at once), and it'll need to be a stream in the end anyway.
However, I'm not sure then how I'd delete the temporary file when the (third-party) class I'd handed the stream off to is finished with it. Is there a method of saying "...and delete that when you're finished with it"? Or auto-cleaning my temporary files in some other way, without keeping track of them manually?
I'd been vaguely considering rolling my own SelfDestructingFileStream, but that seemed like overkill and I thought I might be missing something.
Sounds like what you want is something like this:
<?php

class TempFile implements \Psr\Http\Message\StreamInterface {

    private $resource;

    public function __construct() {
        $this->resource = tmpfile();
    }

    public function __destruct() {
        $this->close();
    }

    public function getFilename() {
        return $this->getMetadata('uri');
    }

    public function getMetadata($key = null) {
        $data = stream_get_meta_data($this->resource);
        if ($key) {
            return isset($data[$key]) ? $data[$key] : null;
        }
        return $data;
    }

    public function close() {
        // Guard against a double close (explicit close() plus __destruct()).
        if (is_resource($this->resource)) {
            fclose($this->resource);
        }
    }

    // TODO: implement the remaining methods from
    // https://github.com/php-fig/http-message/blob/master/src/StreamInterface.php
}
Have QPDF write to $tmpFile->getFilename(), then you can pass the whole object off to Guzzle for your POST since it's PSR-7 compliant, and the file will delete itself when the object goes out of scope.
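To sketch how this could slot into the question's pipeline (untested, and assuming the remaining StreamInterface methods are filled in; $response is the HTTP response from the question, and $thirdParty->consume() is a hypothetical stand-in for whatever class receives the stream):

```php
$uncompressed = new TempFile();
$fixed = new TempFile();

// QPDF needs real paths, so hand it the temp files' names.
$compressed_path = tempnam(sys_get_temp_dir(), 'cruciverbal');
file_put_contents($compressed_path, $response->getBody());
exec(sprintf(
    'qpdf --qdf --object-streams=disable %s %s',
    escapeshellarg($compressed_path),
    escapeshellarg($uncompressed->getFilename())
));
exec(sprintf(
    "sed 's/0\\.298039215/0.0/g' < %s > %s",
    escapeshellarg($uncompressed->getFilename()),
    escapeshellarg($fixed->getFilename())
));
unlink($compressed_path);

// $fixed is a PSR-7 stream; once the consumer drops its last reference,
// __destruct() closes the tmpfile() handle and the OS removes the file.
$thirdParty->consume($fixed);
```

The key point is that tmpfile() ties the file's lifetime to the resource, so no manual bookkeeping of paths is needed.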
$p = new Pool(10);

for ($i = 0; $i < 1000; $i++) {
    $tasks[$i] = new workerThread($i);
}

foreach ($tasks as $task) {
    $p->submit($task);
}

// shutdown will wait for the current queue to be completed
$p->shutdown();

// garbage collection check / read results
$p->collect(function($checkingTask){
    return $checkingTask->isGarbage;
});

class workerThread extends Collectable {
    public function __construct($i){
        $this->i = $i;
    }

    public function run(){
        echo $this->i;
        ob_flush();
        flush();
    }
}
The code above is a simple example that can cause a crash. I'm trying to update the page in real time by putting ob_flush() and flush() in the Threaded object, and it mostly works as expected. The code isn't guaranteed to crash every time, but if you run it a few more times, sometimes the script stops and Apache restarts with the error message "httpd.exe Application error The instruction at "0x006fb17f" referenced memory at "0x028a1e20". The memory could not be "Written". Click on OK."
I think it's caused by a flushing conflict when multiple threads try to flush at about the same time? What can I do to work around it and still flush whenever there's new output?
Multiple threads should not write to standard output; there is no safe way to do this.
Zend provides no facility to make it safe: when it works, it works by coincidence, and it will always be unsafe.
We have a PHP application, and were thinking it might be advantageous to have the application know if there was a change in its makeup since the last execution. Mainly due to managing caches and such, and knowing that our applications are sometimes accessed by people who don't remember to clear the cache on changes. (Changing the people is the obvious answer, but alas, not really achievable)
We've come up with this, which is the fastest we've managed to eke out, averaging 0.08 seconds on a developer machine for a typical project. We've experimented with shasum, md5 and crc32, and this is the fastest. We are basically md5ing the contents of every file, then md5ing that output. Security isn't a concern; we're just interested in detecting filesystem changes via a differing checksum.
time (find application/ -path '*/.svn' -prune -o -type f -print0 | xargs -0 md5 | md5)
I suppose the question is, can this be optimised any further?
(I realise that pruning svn will have a cost, but find takes the least amount of time out of the components, so it will be pretty minimal. We're testing this on a working copy at the moment.)
You can be notified of filesystem modifications using the inotify extension.
It can be installed with pecl:
pecl install inotify
Or manually (download, phpize && ./configure && make && make install as usual).
This is a raw binding over the Linux inotify syscalls, and is probably the fastest solution on Linux.
See this example of a simple tail implementation: http://svn.php.net/viewvc/pecl/inotify/trunk/tail.php?revision=262896&view=markup
If you want a higher-level library, or support for non-Linux systems, take a look at Lurker.
It works on any system, and can use inotify when it's available.
See the example from the README:
$watcher = new ResourceWatcher;
$watcher->track('an arbitrary id', '/path/to/views');
$watcher->addListener('an arbitrary id', function (FilesystemEvent $event) {
    echo $event->getResource() . ' was ' . $event->getTypeString();
});
$watcher->start();
$watcher->start();
Instead of going by file contents, you can use the same technique with filenames and timestamps:
find . -name '.svn' -prune -o -type f -printf '%m %c %p\n' | md5sum
This is much faster than reading and hashing the contents of each file.
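To detect a change between runs, the checksum has to be persisted somewhere and compared on the next invocation. A minimal sketch (assuming GNU find, and using a hypothetical `.tree.md5` state file, which is excluded from the scan so it doesn't perturb its own hash):

```shell
# Hash only metadata (mode, ctime, path); exclude the state file itself.
new=$(find . -name '.svn' -prune -o -name .tree.md5 -prune -o -type f -printf '%m %c %p\n' | md5sum | cut -d' ' -f1)
old=$(cat .tree.md5 2>/dev/null || true)

if [ "$new" != "$old" ]; then
    echo "tree changed"
    printf '%s\n' "$new" > .tree.md5   # remember the new state
else
    echo "tree unchanged"
fi
```

In a real deployment the state file would live outside the scanned tree (or the tree path would be passed explicitly) to keep the scan self-consistent.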
Instead of actively searching for changes, why not get notified when something changes? Have a look at PHP's FAM - the File Alteration Monitor API:
FAM monitors files and directories, notifying interested applications of changes. More information about FAM is available at » http://oss.sgi.com/projects/fam/. A PHP script may specify a list of files for FAM to monitor using the functions provided by this extension. The FAM process is started when the first connection from any application to it is opened. It exits after all connections to it have been closed.
CAVEAT: requires an additional daemon on the machine and the PECL extension is unmaintained.
We didn't want to use FAM, since we would need to install it on the server, and that's not always possible. Sometimes clients insist we deploy on their broken infrastructure. Since it's discontinued, it's also hard to get that change approved through the red tape.
The only way to improve the speed of the original version in the question is to make sure your file list is as succinct as possible, i.e. only hash the directories/files that really matter if changed. Cutting out directories that aren't relevant can give big speed boosts.
Past that, the application was using the function to check whether there were changes, in order to clear a cache if so. Since we don't really want to halt the application while it's doing this, this sort of thing is best farmed out carefully as an asynchronous process using fsockopen. That gives the best 'speed boost' overall; just be careful of race conditions.
Marking this as the 'answer' and upvoting the FAM answer.
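For what it's worth, the fire-and-forget fsockopen pattern mentioned above looks roughly like this (the host, port and endpoint path are hypothetical; the real URL would be whatever script performs the checksum-and-clear work):

```php
// Kick off the cache check asynchronously: open a socket back to ourselves,
// send the request, and close without waiting for the response.
$fp = fsockopen('127.0.0.1', 80, $errno, $errstr, 1);
if ($fp) {
    fwrite($fp, "GET /internal/check-cache HTTP/1.0\r\n"
              . "Host: localhost\r\n"
              . "Connection: Close\r\n\r\n");
    fclose($fp); // the worker request continues server-side
}
```

The caller returns immediately; the race conditions mentioned above arise if two requests trigger the check concurrently, so the worker endpoint should take a lock.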
Since you have svn, why don't you go by revisions? I realise you are skipping svn folders, but I suppose you did that for the speed advantage, and that you do not have modified files on your production servers...
That being said, you do not have to reinvent the wheel.
You could speed up the process by only looking at metadata read from the directory indexes (modification timestamp, file size, etc).
You could also stop once you've spotted a difference (which should theoretically halve the time on average). There is a lot you could do.
But I honestly think the best way in this case is to just use the tools already available.
The Linux tool diff has a -q (quick) option.
You will need to use it with the recursive parameter -r as well:
diff -r -q dir1/ dir2/
It uses a lot of optimisations, and I seriously doubt you can significantly improve upon it without considerable effort.
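Since diff -q reports via its exit status (0 = identical, 1 = differences found), a script can branch on it directly. A small sketch with throwaway directory names:

```shell
# Two small trees to compare (contents identical).
mkdir -p dir1 dir2
echo "same" > dir1/a.txt
echo "same" > dir2/a.txt

# diff -rq exits 0 when the trees match, 1 when they differ.
if diff -rq dir1 dir2 > /dev/null; then
    result="identical"
else
    result="changed"
fi
echo "$result"   # prints "identical"
```

For the cache use case, "changed" would be the trigger to clear the cache; note this needs a pristine copy of the tree to diff against, unlike the checksum approach.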
What you should definitely be using is inotify: it's fast and easy to configure, with multiple options usable directly from bash or PHP, or you can dedicate a simple node-inotify instance to this task.
inotify does not work on Windows, but there you can easily write a command-line application with FileSystemWatcher or FindFirstChangeNotification and call it via exec().
If you are looking for a PHP-only solution, then it's pretty difficult, and you might not get the performance you want, because the only option is to scan the folder continuously.
Here is a simple experiment:
Don't use it in production
Cannot manage large file sets
Does not support file monitoring
Only supports NEW, DELETED and MODIFIED
Does not support recursion
Example
if (php_sapi_name() !== 'cli')
    die("CLI ONLY");

date_default_timezone_set("America/Los_Angeles");

$sm = new Monitor(__DIR__ . "/test");

// Add hook for when new files are created
$sm->hook(function ($file) {
    // Send a mail or log to a file
    printf("#EMAIL NEW FILE %s\n", $file);
}, Monitor::NOTIFY_NEW);

// Add hook for when files are modified
$sm->hook(function ($file) {
    // Do something meaningful, like calling curl
    printf("#HTTP POST MODIFIED FILE %s\n", $file);
}, Monitor::NOTIFY_MODIFIED);

// Add hook for when files are deleted
$sm->hook(function ($file) {
    // Crazy ... send an SMS, or IVR the boss that you messed up
    printf("#SMS DELETED FILE %s\n", $file);
}, Monitor::NOTIFY_DELETED);

// Start the monitor
$sm->start();
Cache Used
// Simple cache
// Can be replaced with Memcache
class Cache {
    public $f;

    function __construct() {
        $this->f = fopen("php://temp", "w+");
    }

    function get($k) {
        rewind($this->f);
        return json_decode(stream_get_contents($this->f), true);
    }

    function set($k, $data) {
        // Truncate first so a shorter payload doesn't leave a stale tail.
        ftruncate($this->f, 0);
        rewind($this->f);
        fwrite($this->f, json_encode($data));
        return $k;
    }

    function run() {
    }
}
The Experiment Class
// The Experiment
class Monitor {
    private $dir;
    private $info;
    private $timeout = 1; // sec
    private $timeoutStat = 60; // sec
    private $cache;
    private $current, $stable, $hook = array();

    const NOTIFY_NEW = 1;
    const NOTIFY_MODIFIED = 2;
    const NOTIFY_DELETED = 4;
    const NOTIFY_ALL = 7;

    function __construct($dir) {
        $this->cache = new Cache();
        $this->dir = $dir;
        $this->info = new SplFileInfo($this->dir);
        $this->scan(true);
    }

    public function start() {
        $i = 0;
        $this->stable = (array) $this->cache->get(md5($this->dir));
        while (true) {
            // Clear the stat cache at intervals
            if ($i % $this->timeoutStat == 0) {
                clearstatcache();
            }
            $this->scan(false);
            if ($this->stable != $this->current) {
                $this->cache->set(md5($this->dir), $this->current);
                $this->stable = $this->current;
            }
            sleep($this->timeout);
            $i++;
            // printf("Memory Usage : %0.4f \n", memory_get_peak_usage(false) / 1024);
        }
    }

    private function scan($new = false) {
        $rdi = new FilesystemIterator($this->dir, FilesystemIterator::SKIP_DOTS);
        $this->current = array();
        foreach ($rdi as $file) {
            // Skip files that are not readable
            if (! $file->isReadable())
                continue;
            $path = addslashes($file->getRealPath());
            $keyHash = md5($path);
            $fileHash = $file->isFile() ? md5_file($path) : "#";
            $hash["t"] = $file->getMTime();
            $hash["h"] = $fileHash;
            $hash["f"] = $path;
            $this->current[$keyHash] = json_encode($hash);
        }
        if ($new === false) {
            $this->process();
        }
    }

    public function hook(callable $call, $type = Monitor::NOTIFY_ALL) {
        $this->hook[$type][] = $call;
    }

    private function process() {
        if (isset($this->hook[self::NOTIFY_NEW])) {
            $diff = array_flip(array_diff(array_keys($this->current), array_keys($this->stable)));
            $this->notify(array_intersect_key($this->current, $diff), self::NOTIFY_NEW);
            unset($diff);
        }
        if (isset($this->hook[self::NOTIFY_DELETED])) {
            $deleted = array_flip(array_diff(array_keys($this->stable), array_keys($this->current)));
            $this->notify(array_intersect_key($this->stable, $deleted), self::NOTIFY_DELETED);
        }
        if (isset($this->hook[self::NOTIFY_MODIFIED])) {
            $this->notify(array_diff_assoc(array_intersect_key($this->stable, $this->current), array_intersect_key($this->current, $this->stable)), self::NOTIFY_MODIFIED);
        }
    }

    private function notify(array $files, $type) {
        if (empty($files))
            return;
        foreach ($this->hook as $t => $hooks) {
            if ($t & $type) {
                foreach ($hooks as $hook) {
                    foreach ($files as $file) {
                        $info = json_decode($file, true);
                        $hook($info['f'], $type);
                    }
                }
            }
        }
    }
}
Storing sessions on disk is very slow and painful for me, and I have very high traffic. I want to store sessions in the Alternative PHP Cache (APC); how can I do this?
<?php
// To enable, paste this line right before session_start():
// new Session_APC;
class Session_APC
{
    protected $_prefix;
    protected $_ttl;
    protected $_lockTimeout = 10; // if empty, no session locking, otherwise seconds to lock timeout

    public function __construct($params = array())
    {
        $def = session_get_cookie_params();
        $this->_ttl = $def['lifetime'];
        if (isset($params['ttl'])) {
            $this->_ttl = $params['ttl'];
        }
        if (isset($params['lock_timeout'])) {
            $this->_lockTimeout = $params['lock_timeout'];
        }
        session_set_save_handler(
            array($this, 'open'), array($this, 'close'),
            array($this, 'read'), array($this, 'write'),
            array($this, 'destroy'), array($this, 'gc')
        );
    }

    public function open($savePath, $sessionName)
    {
        $this->_prefix = 'BSession/'.$sessionName;
        if (!apc_exists($this->_prefix.'/TS')) {
            // creating non-empty array #see http://us.php.net/manual/en/function.apc-store.php#107359
            apc_store($this->_prefix.'/TS', array(''));
            apc_store($this->_prefix.'/LOCK', array(''));
        }
        return true;
    }

    public function close()
    {
        return true;
    }

    public function read($id)
    {
        $key = $this->_prefix.'/'.$id;
        if (!apc_exists($key)) {
            return ''; // no session
        }
        // redundant check for ttl before read
        if ($this->_ttl) {
            $ts = apc_fetch($this->_prefix.'/TS');
            if (empty($ts[$id])) {
                return ''; // no session
            } elseif (!empty($ts[$id]) && $ts[$id] + $this->_ttl < time()) {
                unset($ts[$id]);
                apc_delete($key);
                apc_store($this->_prefix.'/TS', $ts);
                return ''; // session expired
            }
        }
        if ($this->_lockTimeout) {
            $locks = apc_fetch($this->_prefix.'/LOCK');
            if (!empty($locks[$id])) {
                while (!empty($locks[$id]) && $locks[$id] + $this->_lockTimeout >= time()) {
                    usleep(10000); // sleep 10ms
                    $locks = apc_fetch($this->_prefix.'/LOCK');
                }
            }
            /*
            // by default will overwrite session after lock expired to allow smooth site function
            // alternative handling is to abort current process
            if (!empty($locks[$id])) {
                return false; // abort read of waiting for lock timed out
            }
            */
            $locks[$id] = time(); // set session lock
            apc_store($this->_prefix.'/LOCK', $locks);
        }
        return apc_fetch($key); // if no data returns empty string per doc
    }

    public function write($id, $data)
    {
        $ts = apc_fetch($this->_prefix.'/TS');
        $ts[$id] = time();
        apc_store($this->_prefix.'/TS', $ts);

        $locks = apc_fetch($this->_prefix.'/LOCK');
        unset($locks[$id]);
        apc_store($this->_prefix.'/LOCK', $locks);

        return apc_store($this->_prefix.'/'.$id, $data, $this->_ttl);
    }

    public function destroy($id)
    {
        $ts = apc_fetch($this->_prefix.'/TS');
        unset($ts[$id]);
        apc_store($this->_prefix.'/TS', $ts);

        $locks = apc_fetch($this->_prefix.'/LOCK');
        unset($locks[$id]);
        apc_store($this->_prefix.'/LOCK', $locks);

        return apc_delete($this->_prefix.'/'.$id);
    }

    public function gc($lifetime)
    {
        if ($this->_ttl) {
            $lifetime = min($lifetime, $this->_ttl);
        }
        $ts = apc_fetch($this->_prefix.'/TS');
        foreach ($ts as $id => $time) {
            if ($time + $lifetime < time()) {
                apc_delete($this->_prefix.'/'.$id);
                unset($ts[$id]);
            }
        }
        return apc_store($this->_prefix.'/TS', $ts);
    }
}
I tried to lure better answers by offering 100 points as a bounty, but none of the answers were really satisfying.
I would aggregate the recommended solutions like this:
Using APC as a session storage
APC cannot really be used as a session store, because it provides no mechanism for proper locking. But this locking is essential to ensure nobody alters the initially read session data before it is written back.
Bottom line: avoid it, it won't work.
Alternatives
A number of session handlers might be available. Check the output of phpinfo() at the Session section for "Registered save handlers".
File storage on RAM disk
Works out-of-the-box, but needs a file system mounted as RAM disk for obvious reasons.
Shared memory (mm)
It is available when PHP is compiled with mm enabled. This is built in on Windows.
Memcache(d)
PHP comes with a dedicated session save handler for this. Requires installed memcache server and PHP client. Depending on which of the two memcache extensions is installed, the save handler is either called memcache or memcached.
In theory, you ought to be able to write a custom session handler which uses APC to do this transparently for you. However, I haven't actually been able to find anything really promising in a quick five-minute search; most people seem to be using APC for the bytecode cache and putting their sessions in memcached.
Simply putting your /tmp disk (or wherever PHP session files are stored) onto a RAM disk such as tmpfs or ramfs would also give serious performance gains, and would be a much more transparent switch, with zero code changes.
The performance gain may be somewhat smaller, but it will still be significantly faster than on-disk sessions.
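As a sketch, the switch is just a mount plus pointing session.save_path at it; the mount point, size and permissions below are examples, not prescriptions:

```
# /etc/fstab entry: mount a RAM-backed filesystem for session files
tmpfs  /var/lib/php/sessions  tmpfs  size=256m,mode=1733  0  0

# php.ini (the handler stays "files"; only the path moves to the RAM disk)
session.save_handler = files
session.save_path = "/var/lib/php/sessions"
```

One caveat worth noting: unlike on-disk sessions, everything in tmpfs is lost on reboot, so users get logged out when the server restarts.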
Store it in cookies (encrypted) or MongoDB. APC isn't really intended for that purpose.
You can store your session data in PHP's internal shared memory:
session.save_handler = mm
But it needs to be available: http://php.net/manual/en/session.installation.php
Another good solution is to store PHP sessions in memcached
session.save_handler = memcache
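For reference, the matching php.ini fragments look like this (the server address is a placeholder); note that the two memcache extensions expect slightly different save_path formats:

```
; pecl "memcache" extension
session.save_handler = memcache
session.save_path = "tcp://127.0.0.1:11211"

; pecl "memcached" extension
session.save_handler = memcached
session.save_path = "127.0.0.1:11211"
```

Both can also be set per-script with ini_set() before session_start().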
Explicitly closing the session immediately after starting, opening and writing should solve the locking problem in Unirgy's answer (where session access is always cyclic: start/open, write, close). I also imagine a second class (APC_journaling or something similar) used in conjunction with sessions would ultimately be better: a session starts and is written with a unique external id assigned to it, that session is closed, and a journal (an array in the APC cache via _store and _add) is opened/created for any other writes intended for the session. Those writes can then be read, validated and written to the session (identified by that unique id!) in APC at the next convenient opportunity.
I found a good blog post explaining that the locking havoc Sven refers to comes from the session blocking until it's closed or script execution ends. Closing the session immediately doesn't prevent reading, only writing.
http://konrness.com/php5/how-to-prevent-blocking-php-requests - link to the blog post.
Hope this helps.
Caching external data in PHP
Tutorial Link - http://www.gayadesign.com/diy/caching-external-data-in-php/
How to Use APC Caching with PHP
Tutorial Link - http://www.script-tutorials.com/how-to-use-apc-caching-with-php/