redis vs native sessions - php

I am using sessions in PHP to track if a user is logged in. I do not use it to store any other data about the user; essentially it is like checking a hash table to see if the user has authenticated.
Would there be some advantage to using redis instead of native PHP sessions?
I'm curious about performance, scalability, and security (not really concerned with code complexity).

Using something like Redis for storing sessions is a great way to get more performance out of load balanced servers.
For example on Amazon Web Services, the load balancers have what's called 'sticky sessions'. What this means is that when a user first connects to your web app, e.g. when logging in to it, the load balancer will choose one of your app servers and this user will continue to be served from this server until they exit your application. This is because the sessions used by PHP, for example, will be stored on the app server that they first start using.
Now, if you use Redis on a separate server, and then configure the PHP on each of your app servers to store its sessions in Redis, you can turn 'sticky sessions' off. This means any of your servers can access the sessions, so the user can be served by a different server on every request to your app. This ultimately makes for more efficient use of your load-balancing set-up.
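For illustration, a minimal sketch of that setup, assuming the phpredis extension is installed and a shared Redis host is reachable (redis.internal:6379 is a placeholder); the same two settings can also go in php.ini on every app server:
<?php
// Point PHP's session handler at a shared Redis instance (placeholder host).
ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://redis.internal:6379');
session_start();
Once every app server points at the same Redis instance, any server can serve any request and sticky sessions can be switched off.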

You want the session save handler to be fast. This is due to the fact that a PHP session will block all other concurrent requests from the same user until the first request is finished.
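Whatever handler you pick, one common mitigation for that blocking is to release the session lock as soon as you have read what you need; a minimal sketch:
<?php
session_start();
$userId = $_SESSION['user_id'] ?? null; // read what this request needs
session_write_close();                  // release the lock so parallel requests from the same user aren't blocked
// ...continue with slow work (API calls, queries) without holding the session lock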
There are a variety of handlers you could use for PHP sessions across multiple servers: File w/ NFS, MySQL Database, Memcache, and Redis.
The database method (using InnoDB) was the slowest in my experience, followed by File w/ NFS. Locking and write contention are the main factors. Memcache and Redis provide similar performance and are by far the better alternatives since all operations are in RAM. Redis is my choice because you can enable disk persistence, whereas Memcache is memory-only.
I explain Redis sessions in PHP with Kohana in more detail if you want to dig deeper.

I don't really think you need to worry much about sessions unless you get massive amounts of traffic. PHP handles sessions nicely, and if you store only that little data it should be fine even with a lot of requests. Performance should also be close either way, since Redis is not native to PHP.
With 10k users, if each user stores about 1 KB of session data, that comes to roughly 10,000 KB, or about 10 MB, which is not much; PHP is smart enough to use a good enough data structure to hold, write, and read those values quickly. Problems only appear if the session data gets too big, or if for some reason the server spends too many resources reading it, which again usually means the data is too big.

Related

How to get all the session values stored at the server using PHP

I want to use session values in my script which are stored on the server using PHP. Can anyone kindly explain the process to achieve this?
I want to build a chat app, and for this I am planning to use those session values.
Assume userA and userB are logged in and their user IDs are stored in the session; based on this scenario I want to build the chat app.
Now I have the app working, but I used JavaScript's setInterval function to fetch the chats, and I want to avoid the database hits on every 3 ms poll.
Kindly Help me out
Thanks In Advance
You're basically attempting to use PHP session files as a file cache.
Instead, you should use an object caching system such as Memcached or Redis. If memory caching isn't an option (shared hosting, etc), then you could implement your own file cache (or you could use something like PHPFastCache, which supports file caching).
Note: File caching for a chat app may or may not speed up your application. It depends on how you implement it and a number of other factors.
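As a hedged sketch of the object-cache idea with the Memcached extension (the host, the cache key and the fetchMessagesFromDb() helper are placeholders, not from the question):
<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// Serve the chat feed from cache; only hit the database when the entry expires.
$messages = $mc->get('chat:latest');
if ($messages === false) {
    $messages = fetchMessagesFromDb();      // hypothetical query helper
    $mc->set('chat:latest', $messages, 5);  // cache for 5 seconds
}
echo json_encode($messages);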
Hi, put the session value in a hidden input box:
<input type="hidden" id="session_value" value="<?php echo htmlspecialchars($_SESSION['value'], ENT_QUOTES); ?>">
Using the id, fetch the session value in your script:
<script>
var session_value = document.getElementById("session_value").value;
</script>
3 ms is an insanely short delay for a polling chat system. I suggest increasing it to at least 200 ms, but preferably around 1000 ms.
$_SESSION values are per user and not recommended for viewing a chat stream for a number of reasons. Instead it sounds like you are looking more to just update the chat feed.
Unless the database is hosted on another server, it and $_SESSION are roughly equivalent, since the database is effectively backed by files as well. The database will actually generally be faster than reading raw file storage, since queries are normally cached and indexing helps look up records quicker. In addition, you won't have to worry about concurrent access to the files either.
If anything enable OPCache and install APCu for your PHP installation, to help aid the serving of requests. OPCache will cache your compiled OP code into memory so that subsequent requests to the file won't need to be recompiled.
APCu will act as your file cache, again storing your rendered data in memory.
Additionally many Database Frameworks such as Doctrine can also utilize APC caching for query and result caching.
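To illustrate the APCu piece (the key name and the runExpensiveQuery() helper are placeholders; apcu_fetch/apcu_store are the extension's actual functions):
<?php
// Store an expensive query result in APCu (shared memory) for 10 seconds.
$rows = apcu_fetch('recent_messages', $hit);
if (!$hit) {
    $rows = runExpensiveQuery();              // hypothetical database call
    apcu_store('recent_messages', $rows, 10); // TTL in seconds
}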
Instead of using the InnoDB or MyISAM storage engines for your chat messages, I suggest trying the MEMORY storage engine.
So instead of going through filesystem I/O, your database would be working in memory. The general pattern is few writes, many reads: one person writes to the database, and everyone then reads that data. Just keep in mind that the MEMORY storage engine is not persistent; its contents are lost if the server restarts or power is lost.
For more information see: https://dev.mysql.com/doc/refman/5.6/en/memory-storage-engine.html
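A hedged sketch of that idea (the table and column names are invented for illustration; note that the MEMORY engine does not support TEXT/BLOB columns, hence the VARCHAR):
<?php
// Create the chat table on the MEMORY engine so reads and writes stay in RAM.
$pdo = new PDO('mysql:host=localhost;dbname=chat', 'user', 'pass'); // placeholder credentials
$pdo->exec("
    CREATE TABLE IF NOT EXISTS chat_messages (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        user_id INT UNSIGNED NOT NULL,
        body VARCHAR(500) NOT NULL,
        created_at DATETIME NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=MEMORY
");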
Overall, if you are able, I would suggest looking at Socket.IO-style WebSockets instead of either database or file-based caching. This takes the polling load off the server, and everything occurs in real time as updates are pushed instead of clients polling for changes.
For some examples see:
Ratchet http://socketo.me/
React http://reactphp.org/
Node.js http://tutorialzine.com/2014/03/nodejs-private-webchat/

How to manage sessions on common database for multiple servers in PHP? [duplicate]

Hi, I have to retrieve data from several web servers. First I log in as a user to my web site. After a successful login I have to fetch data from different web servers and display it. How can I share a single session with multiple servers?
When I first log in, a session is created and the session ID is saved in the temp folder of that server. When I then access another server, how can I use the session that was already created when I logged in? Can anybody suggest a solution?
You'll have to use another session handler.
You can:
build your own (see session_set_save_handler) or
use extensions that provide their own session handler, like memcached
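For the memcached option, a minimal sketch of the per-server setup (sessions.internal:11211 is a placeholder; the same values can instead be set in php.ini as session.save_handler and session.save_path):
<?php
// Every web server points its sessions at the same shared memcached instance.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', 'sessions.internal:11211');
session_start();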
In complement to all these answers:
If you store sessions in a database, check that garbage collection of sessions is actually enabled in PHP (it is not on Debian-like distributions: they decided to garbage-collect sessions with their own cron job and altered php.ini so that PHP never launches any gc, so check session.gc_probability and session.gc_divisor). The main problem with session storage in a database is that it means a lot of write queries and a lot of conflicting access to the database. This is a great way of stressing a database server like MySQL, so IMHO using another solution is better; it keeps your read/write ratio closer to what a web database expects.
You could also keep the file storage system and simply share the session directory between servers with NFS; alter the session.save_path setting to use something other than /tmp. But NFS is by definition not the fastest way of using a disk. Prefer memcached or MongoDB for fast access.
If the only thing you need to share between the servers is authentication, then instead of sharing the real session storage you could share authentication credentials, like the OpenID system on SO; this is what we call SSO. For the web part you have several solutions, from OpenID to CAS and others. If the data is merged on the client side (Ajax, an ESI gateway) then you do not really need a common session data store on the server side. This avoids having three of your five impacted web applications writing to the shared session at the same time. The other session-sharing techniques (database, NFS, even memcached) are mostly used because load-balancing tools can send your sequential HTTP requests from one server to another, but if you really mean parallel gathering of data you should seriously study SSO.
Another option would be to use memcached to store the sessions.
The important thing is that you must have a shared resource - be it a SQL database, memcached, a NoSQL database, etc. that all servers can access. You then use session_set_save_handler to access the shared resource.
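To make the second part concrete, here is a minimal sketch of a database-backed handler registered with session_set_save_handler (the DSN, the sessions table and its columns are assumptions; production code needs row locking and proper error handling):
<?php
// Minimal sketch of a database-backed session handler.
class DbSessionHandler implements SessionHandlerInterface
{
    public function __construct(private PDO $pdo) {}

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string|false
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        return $stmt->fetchColumn() ?: '';
    }

    public function write(string $id, string $data): bool
    {
        $stmt = $this->pdo->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())'
        );
        return $stmt->execute([$id, $data]);
    }

    public function destroy(string $id): bool
    {
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE id = ?');
        return $stmt->execute([$id]);
    }

    public function gc(int $max_lifetime): int|false
    {
        $stmt = $this->pdo->prepare(
            'DELETE FROM sessions WHERE updated_at < DATE_SUB(NOW(), INTERVAL ? SECOND)'
        );
        $stmt->execute([$max_lifetime]);
        return $stmt->rowCount();
    }
}

$pdo = new PDO('mysql:host=db.internal;dbname=app', 'user', 'pass'); // shared database all servers can reach
session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();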
Store sessions in a database which is accessible from the whole server pool.
Store it in a database - get all servers to connect to that same database. First result for "php store session in database"

How do I share objects between multiple get requests in PHP?

I created a small and very simple REST-based webservice with PHP.
This service gets data from a different server and returns the result. It's more like a proxy rather than a full service.
Client --(REST call)--> PHP Webservice --(Relay call)--> Remote server
<-- Return data ---
In order to keep costs as low as possible I want to implement a caching table on the PHP webservice system by maintaining data for a period of time in server memory and only re-request the data after a timeout (let's say after 30 mins).
In pseudo-code I basically want to do this:
$id = $_GET["id"];
$result = null;

if (isInCache($id) && !cacheExpired($id, 30)) {
    $result = getFromCache($id);
} else {
    $result = getDataFromRemoteServer($id);
    saveToCache($result);
}

printData($result);
The code above should get data from a remote server which is identified by an id. If it is in the cache and 30 mins have not passed yet the data should be read from the cache and returned as a result of the webservice call. If not, the remote server should be queried.
While thinking on how to do this I realized 2 important aspects:
I don't want to use filesystem I/O operations because of performance concerns. Instead, I want to keep the cache in memory. So, no MySQL or local file operations.
I can't use sessions because the cached data must be shared across different users, browsers and internet connections worldwide.
So, if I could somehow share objects in memory between multiple GET requests, I would be able to implement this caching system pretty easily I think.
But how could I do that?
Edit: I forgot to mention that I cannot install any modules on that PHP server. It's a pure "webhosting-only" service.
I would not implement the cache on the (PHP) application level. REST is HTTP, therefore you should use a caching HTTP proxy between the internet and the web server. Both servers, the web server and the proxy could live on the same machine as long as the application grows (if you worry about costs).
I see two fundamental problems when it comes to application or server level caching:
using memcached would lead to a situation where a user session has to be bound to the physical server where the memcached instance lives. This makes horizontal scaling a lot more complicated (and expensive)
software should be developed in layers. Caching should not be part of the application layer (and/or business logic); it is a different layer using specialized components. And as there are well-known solutions for this (an HTTP caching proxy), they should be used in favour of self-crafted solutions.
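On the PHP side, the service then mainly needs to emit the right caching headers so the proxy can do its job; a rough sketch (the 30-minute max-age comes from the question, the rest is illustrative):
<?php
// Tell an upstream HTTP cache (reverse proxy, CDN) that this GET response
// may be reused for 30 minutes.
$body = getDataFromRemoteServer($_GET['id']);   // the question's hypothetical fetch
header('Cache-Control: public, max-age=1800');
header('ETag: "' . md5($body) . '"');
echo $body;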
Well, if you do have to use PHP, and you cannot modify the server, and you do want in-memory caching for performance reasons (without first measuring whether any other solution has good enough performance), then the only option left is to change the webhosting.
Otherwise, you won't be able to do it. PHP does not really have any memory-sharing facilities available. The usual approach is to use Memcached or Redis or something else that runs separately.
And for a starter and proof-of-concept, I'd really go with a file-based cache. Accessing a file instead of requesting a remote resource is WAY faster. In fact, you'd probably not notice the difference between file cache and memory cache.
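A minimal sketch of such a file-based cache, reusing the question's 30-minute window (the temp-dir location is a placeholder, and getDataFromRemoteServer() is the question's hypothetical helper):
<?php
// Cache remote responses on disk, keyed by id, and refresh entries older than $ttl seconds.
function cachedFetch(string $id, int $ttl = 1800): string
{
    $file = sys_get_temp_dir() . '/cache_' . md5($id);

    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return file_get_contents($file);       // fresh enough: serve from disk
    }

    $data = getDataFromRemoteServer($id);      // hypothetical remote call
    file_put_contents($file, $data, LOCK_EX);  // refresh the cache entry
    return $data;
}

echo cachedFetch($_GET['id']);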

Memcache clustering for php sessions?

Here's a little background. Currently I have:
3 web servers
one DB server which also hosts memcache for PHP sessions for the 3 web servers.
I have the PHP configs on the 3 servers pointing to the memcache server for sessions. It was working fine until a lot of connections were being produced for reads etc., which then caused connection timeouts.
So I'm currently looking at clustering the memcache on each web server for sessions, my only concern is how to go about making sure that memcache on all the servers have the same information for sessions.
Someone pointed me to http://github.com/trs21219/Memcached-Library because I am using CodeIgniter, but how do I converge my PHP sessions onto this, since memcache is just a key-value store? Thanks in advance.
Has anyone checked out http://repcached.sourceforge.net/ and does it work?
I'm not sure you have the same expectations of memcache that its designers had.
First, however, memcache distribution works differently than you expect: there is no mechanism to replicate stored information. Each memcache instance is a simple key-value store, as you've noticed. The distribution is done by the client code which has a list of all configured memcache instances and does a hash of the key to direct it to one of the instances. It is possible for the client to store it everywhere and retrieve it locally, or for it to hash it multiple times for redundancy, but these are not straightforward exercises.
But the other issue is that memcache is designed for reasonably short-lived data that memcache is allowed to throw away at any time. This makes it really good for caching frequently accessed data that can be a little stale (say up to a few minutes old) but might be expensive to retrieve (such as almost a minute to generate from a query).
PHP sessions don't really qualify for this, in my experience. A database can easily support many thousands of PHP sessions with barely visible traffic, but you need a lot of memcache storage to support the same number: 50k per session and 5000 sessions means close to 256Mb, and then there is all the other data you want to put in there. Not enough storage and you get lots of unexplained logouts (as memcache discards session data when under memory pressure) and thus lots of annoyed users who have to keep logging in again.
We've found GREAT advantage applying MongoDB instead of MySQL for most things, including session handling. It's far faster, far smaller, far easier. We keep MySQL around for transactional needs, but everything else goes into Mongo now. We've relegated memcache to simply caching pages and other data that isn't critical if it's there or not, something like smarty does.
There is no need to use third-party libraries to organize a memcached "cluster".
http://ru.php.net/manual/en/memcached.addserver.php
Just use this function to add several servers to the pool; after that, data will be stored and distributed across those servers. The server used for storing/retrieving the data for a specific key is selected according to the consistent key distribution option.
So in this case you don't need to worry about "how to go about making sure that memcache on all the servers have the same information for sessions"
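A minimal sketch of that pool setup with the Memcached extension (the hostnames are placeholders):
<?php
$m = new Memcached();
// Consistent (ketama-style) hashing keeps the key-to-server mapping stable as the pool changes.
$m->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
// Add each web server's memcached instance to one pool.
$m->addServers([
    ['web1.internal', 11211],
    ['web2.internal', 11211],
    ['web3.internal', 11211],
]);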

Are there problems using PHP sessions in a server cluster?

We are developing a web site in PHP, and we have to use sessions. The site will be published in a server cluster. How can we make that work?
Thanks.
Yes this is possible, you need to store your sessions in a central location like a database though. This is pretty simple and just requires you to make some changes to session_set_save_handler - there's a good example of the process you need to follow here
I would use memcache to store your sessions. It will be much faster than storing them in a database or disk.
Database storage is good but you will need more databases when your site becomes very high traffic. Sessions on disk will also cause a lot of IO issues when your site gets a lot of traffic. Memcache on the other hand scales much better than a DB and files.
I personally use memcache and the sites I work on get millions of hits a day. I have never had any issues with storing sessions in memcache.
If you've got multiple PHP boxes, you'll want a central session store.
Your best choices are probably database (that link from seengee's answer is a good explanation) or a dedicated memcache box.
A shared NFS mount for the session directory would be an option, though I've always found NFS performance a bit slow. Alternatives are to write your own session handler using memcache or a database for the sessions.
An alternative option is to load balance your web servers using sticky sessions, which will ensure that requests from the same client always go to the same server during the course of the session.
