PHP Mongo Session Handler and Sharding - php

I am using
https://github.com/symfony/symfony/blob/master/src/Symfony/Component/HttpFoundation/Session/Storage/Handler/MongoDbSessionHandler.php
as my session handler for my PHP app.
Our app is expecting high-volume traffic, so we are experimenting with sharding the MongoDB database that stores the sessions.
I have set up my shard according to the documentation, and things were all fine before I sharded the collection (I could see session documents being created in the session collection).
As soon as I enabled sharding on the session collection using the shard key sess_id, no session documents get created any more (i.e. the count never changes), and I see these lines in the mongos log whenever I visit my PHP page:
resetting shard version of mydb.session on my.shard.ip.address:port, version is zero
I have tried sharding another of my collections and it works fine, so that tells me my sharding setup is correct.
Anybody have a clue of what might be wrong? I'm using Mongo 2.2.3.

Found the cause.
It was because the write function in the MongoDbSessionHandler.php that ships with the Symfony version we are using does not work with sharding.
The MongoDbSessionHandler.php we are using implements write() essentially like this (pseudocode):
$collection->update(array('shardkey' => $id), array('shardkey' => $id, 'data' => $data), array('upsert' => true));
which is not allowed, because you cannot change the shard key.
The solution is simply to use the latest version of MongoDbSessionHandler.php from GitHub.
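For reference, the newer handler avoids the problem by updating only the non-key fields with $set, so the shard key is never part of the replacement document. Here is a minimal sketch of that idea, assuming the legacy mongo extension and the field names sess_id / sess_data / sess_time (these names are assumptions for the example, not the exact Symfony code):

<?php
// Shard-friendly session write: the criteria contains the shard key, and
// $set only touches the data/timestamp fields, so the shard key itself is
// never rewritten by the update.
$mongo      = new MongoClient('mongodb://my.mongos.host:27017');
$collection = $mongo->selectCollection('mydb', 'session');

function writeSession(MongoCollection $collection, $sessionId, $data)
{
    $collection->update(
        array('sess_id' => $sessionId),                  // shard key appears in the criteria only
        array('$set' => array(
            'sess_data' => new MongoBinData($data, MongoBinData::BYTE_ARRAY),
            'sess_time' => new MongoDate(),
        )),
        array('upsert' => true)                          // upsert still creates the document if missing
    );
}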

Related

Getting Inconsistent data from multi-node memcached cluster AWS

I am using Moodle 2.7.2 for our application in a load-balanced environment, with an AWS ElastiCache memcached cluster with multiple nodes.
Whenever I make configuration changes or a database update, the front end sometimes reflects the new changes and sometimes displays old data.
I researched this issue and found that I should set
memcached.sess_consistent_hash=On
I changed this and restarted the server, but I am still getting inconsistent data.
I think the problem you need to solve is keeping the cache and the permanent storage in sync when you have dirty data.
The consistent_hash parameter only controls how the data is distributed across the cluster.
For your problem there are various strategies, such as write-back, write-through, and write-around. Typically, if consistency and durability are important, you would choose write-through; it is also a good fit for workloads with many reads and few writes.
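As a rough illustration (the table and function names here are made up, not Moodle code), write-through means every write goes to the permanent store first and the cache entry is refreshed in the same operation, so readers never see a stale value after a successful write:

<?php
// Write-through sketch: 1) persist to the source of truth, 2) update the cache.
function writeThrough(PDO $db, Memcached $cache, $key, $value)
{
    $stmt = $db->prepare('REPLACE INTO settings (name, value) VALUES (?, ?)');
    $stmt->execute(array($key, $value));   // database is updated first
    $cache->set($key, $value, 3600);       // cache entry refreshed immediately
}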
Hope it helps!

Session Storage with Redis Sentinel in PHP

I'm currently working through the process of scaling out our server setup and we're to the point where we need to reconfigure the sessions to be stored in a high availability solution (having people logged out if a server goes down is not an option). We're trying to use Redis because we're already using it for other parts of the site. The trouble I run into is that there doesn't appear to be any support for this. Before I create my own session handler class, I thought I would ask if anyone else knows if there is a project for this use case.
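If you do end up writing your own handler, a minimal skeleton might look like the sketch below. It assumes the phpredis extension and PHP 5.4+ (SessionHandlerInterface); the Sentinel part - resolving the current master before connecting - is deliberately left out, since that is the piece your own Sentinel logic would have to supply.

<?php
// Minimal Redis-backed session handler sketch (phpredis assumed).
class RedisSessionHandler implements SessionHandlerInterface
{
    private $redis;
    private $ttl;

    public function __construct(Redis $redis, $ttl = 1440)
    {
        $this->redis = $redis;
        $this->ttl   = $ttl;
    }

    public function open($savePath, $sessionName) { return true; }
    public function close()                       { return true; }

    public function read($sessionId)
    {
        $data = $this->redis->get('sess:' . $sessionId);
        return $data === false ? '' : $data;      // read() must return a string
    }

    public function write($sessionId, $data)
    {
        // SETEX stores the payload with a TTL so abandoned sessions expire on their own.
        return $this->redis->setex('sess:' . $sessionId, $this->ttl, $data);
    }

    public function destroy($sessionId)
    {
        $this->redis->del('sess:' . $sessionId);
        return true;
    }

    public function gc($maxLifetime) { return true; }  // expiry is handled by Redis TTLs
}

// Usage: resolve the current master (e.g. by querying Sentinel), then register the handler.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
session_set_save_handler(new RedisSessionHandler($redis), true);
session_start();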

Symfony session is empty after login when using a custom DynamoDB session handler

I'm pulling my hair out over this one.
I have implemented a Symfony session handler using DynamoDB and the AWS PHP SDK: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/feature-dynamodb-session-handler.html
The session handler appears to be working completely fine on my local machine: I can see the session is created correctly in Dynamo, and when I log in a new session is created and the data is migrated to it as expected. All good here.
The problem is that when I push this up to my staging or production servers on AWS, something goes wrong when the session is migrated. I go to my login page and I can see the session has been created as expected, but when I log in, a new session is created and the data is NOT migrated to it, causing it to dump me back at the login page.
I've spent the last two days digging around trying to work out where it's going wrong, but I can't figure it out.
I've tried every suggestion in this bug thread but none of them worked, so I'm assuming I may be dealing with a separate issue: https://github.com/symfony/symfony/issues/6417
I've also tried using the pessimistic locking_strategy which does not seem to make any difference.
The staging and production servers have the exact same config as my local setup, minus xDebug.
I've put the staging server into dev mode with debugging enabled to try to find the issue in the profiler, but I can't see anything of interest in there. The requests are as follows:
GET domain.com/login (session a)
POST domain.com/login_check (session a)
GET domain.com (session b)
GET domain.com/login (session b)
The pattern above keeps on repeating.
Any direction on how to debug this would be appreciated, I'm not even sure where to look, especially seeing as I can't reproduce on my local machine with xDebug.
This turned out to be a problem with the PHP jsonc extension, where json_decode was breaking if the input contained null bytes (a serialized protected property contains null bytes). It has been fixed since version 1.3.3.
http://pecl.php.net/package-changelog.php?package=jsonc&release=1.3.5
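For anyone wondering where the null bytes come from: PHP's native serializer marks protected properties by wrapping an asterisk in null bytes, so any session payload containing a serialized object with protected members will hit this. A quick illustration:

<?php
// Protected properties are serialized with a "\0*\0" prefix on the key.
class Token { protected $secret = 'abc'; }

echo str_replace("\0", '\0', serialize(new Token()));
// O:5:"Token":1:{s:9:"\0*\0secret";s:3:"abc";}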

Is there a PHP/MySQL database admin that doesn't use sessions?

I am doing development work on a site with a strange server setup where sessions basically don't work. It's kind of a long story, but the main crux is that it's a cluster of servers that are synchronized from an FTP server every few minutes, and, for example, anything written to the filesystem in PHP gets deleted within 5 minutes.
So this means sessions don't work and I get some strange problems in phpMyAdmin, like it forgetting which page of a table I was on - I click 'next page' and end up back at the start again.
I've also tried SQL Buddy and am getting similar problems.
Are there any equivalents that don't use sessions? Doesn't need to be as full-featured as PMA, it's mainly for adding/editing some stuff.
There's always the MySQL GUI Tools.
You can make phpmyadmin use other authentication methods:
http://www.phpmyadmin.net/documentation/#authentication_modes
It depends on how much security you need and how restricted you are, but 'config' authentication mode with a custom .htaccess sounds like it might work for you.
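For reference, 'config' auth mode comes down to a few lines in phpMyAdmin's config.inc.php (the credentials below are placeholders); no login form is shown, which is why protecting the directory with .htaccess/.htpasswd as suggested above matters:

<?php
// config.inc.php excerpt - credentials live in the file, so no login form is used.
$i = 0;
$i++;
$cfg['Servers'][$i]['auth_type'] = 'config';
$cfg['Servers'][$i]['host']      = 'localhost';
$cfg['Servers'][$i]['user']      = 'dbuser';      // placeholder
$cfg['Servers'][$i]['password']  = 'dbpassword';  // placeholder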
I don't know how hard it would be to plug this into phpMyAdmin, but PHP has functionality that allows sessions to be stored in another way than using files.
In your case you already have a database server, obviously, so maybe you could create a "technical" database and use it to store sessions? This way you would still be able to use phpMyAdmin (which is quite a good tool), but your problem should be solved.
The PHP function you need to know to do that is session_set_save_handler():
session_set_save_handler() sets the user-level session storage functions which are used for storing and retrieving data associated with a session. This is most useful when a storage method other than those supplied by PHP sessions is preferred, i.e. storing the session data in a local database.
There are a couple of examples on that page (take a look at the comments at the bottom of the page: some might be helpful).
For instance, Drupal uses this solution to store sessions into DB instead of files, by default.
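To make that concrete, here is a minimal sketch of a MySQL-backed handler registered with session_set_save_handler() (the sessions table and column names are made up for the example):

<?php
// Assumed schema: CREATE TABLE sessions (id VARCHAR(128) PRIMARY KEY, data BLOB, updated_at INT);
$pdo = new PDO('mysql:host=localhost;dbname=technical', 'user', 'pass');

session_set_save_handler(
    function ($savePath, $name) { return true; },          // open
    function () { return true; },                          // close
    function ($id) use ($pdo) {                            // read
        $stmt = $pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : '';
    },
    function ($id, $data) use ($pdo) {                      // write
        $stmt = $pdo->prepare('REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, ?)');
        return $stmt->execute(array($id, $data, time()));
    },
    function ($id) use ($pdo) {                             // destroy
        return $pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute(array($id));
    },
    function ($maxLifetime) use ($pdo) {                    // garbage collection
        return $pdo->prepare('DELETE FROM sessions WHERE updated_at < ?')->execute(array(time() - $maxLifetime));
    }
);
session_start();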
Another solution would be to use memcached to store your sessions -- of course, if you don't have a memcached server at your disposal, this might be a bit harder than storing them in DB ^^
Or, of course, if you have access to your DB server via the network, you could install phpMyAdmin on your local computer, or use a tool like MySQL GUI Tools and its MySQL Query Browser.
I found a neat solution! SQL Buddy has a feature where you can put the password in the config file and it will use it automatically without any need to log in.
Obviously this is insecure by default, but coupled with .htaccess and .htpasswd (which do work on the server) I've now got a secure login.

PHP sessions in a load balancing cluster - how?

OK, so I've got this totally rare and unique scenario of a load-balanced PHP website. The bummer is - it didn't use to be load balanced. Now we're starting to get issues...
Currently the only issue is with PHP sessions. Naturally nobody thought of this issue at first so the PHP session configuration was left at its defaults. Thus both servers have their own little stash of session files, and woe is the user who gets the next request thrown to the other server, because that doesn't have the session he created on the first one.
Now, I've been reading the PHP manual on how to solve this situation. There I found the nice function session_set_save_handler(). (And, coincidentally, this topic on SO.) Neat. Except I'll have to call this function on all the pages of the website, and developers of future pages would have to remember to call it all the time as well. Feels kinda clumsy, not to mention it probably violates a dozen best coding practices. It would be much nicer if I could just flip some global configuration option and voilà - the sessions would all magically get stored in a DB or a memory cache or something.
Any ideas on how to do this?
Added: To clarify - I expect this to be a standard situation with a standard solution. FYI - I have a MySQL DB available. Surely there must be some ready-to-use code out there that solves this? I can, of course, write my own session-saving stuff, and the auto_prepend option pointed out by Greg seems promising - but that would feel like reinventing the wheel. :P
Added 2: The load balancing is DNS based. I'm not sure how this works, but I guess it should be something like this.
Added 3: OK, I see that one solution is to use the auto_prepend option to insert a call to session_set_save_handler() in every script and write my own DB persister, perhaps throwing in calls to memcached for better performance. Fair enough.
Is there also some way that I could avoid coding all this myself? Like some famous and well-tested PHP plugin?
Added much, much later: This is the way I went in the end: How to properly implement a custom session persister in PHP + MySQL?
Also, I simply included the session handler manually in all pages.
You could set PHP to handle the sessions in the database, so that all your servers share the same session information, since they all use the same database for that.
A good tutorial for that can be found here.
The way we handle this is through memcached. All it takes is changing the php.ini similar to the following:
session.save_handler = memcache
session.save_path = "tcp://path.to.memcached.server:11211"
We use AWS ElastiCache, so the server path is a domain, but I'm sure it'd be similar for local memcached as well.
This method doesn't require any application code changes.
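If you run more than one memcached node, the memcache extension also accepts a comma-separated server list in save_path (the hostnames below are placeholders), e.g.:
session.save_path = "tcp://node1.example.com:11211, tcp://node2.example.com:11211"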
You didn't mention what technology you are using for load balancing (software, hardware, etc.), but in any case the solution to your problem is to employ "sticky sessions" on the load balancer.
In summary, this means that when the first request from a "new" visitor comes in, they are assigned a specific server from the cluster: all future requests for the lifetime of their session are then directed to that server. In practice this means that applications written to work on a single server can be up-scaled to a balanced environment with zero/few code changes.
If you are using a hardware balancer, such as a Radware device, then the sticky sessions is configured as part of the cluster setup. Hardware devices usually give you more fine-grained control: such as which server a new user is assigned to (they can check for health status etc. and pick the most healthy / least utilised server), and more control of what happens when a server fails and drops out of the cluster. The drawback of hardware balancers is the cost - but they are worth it imho.
As for software balancers, it comes down to what you are using. For Apache there is the stickysession property on mod_proxy - and plenty of articles via Google on getting this working with the PHP session (for example).
Edit:
From other comments posted after the original question, it sounds like your "balancing" is done via Round Robin DNS, so the above probably won't apply. I'll refrain from commenting further and starting a flame against round robin dns.
The easiest thing to do is configure your load balancer to always send the same session to the same server.
If you still want to use session_set_save_handler then maybe take a look at auto_prepend.
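If you go the auto_prepend route, it is a single php.ini directive pointing at the bootstrap script that calls session_set_save_handler() (the path below is hypothetical):
auto_prepend_file = /var/www/shared/session_bootstrap.php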
If you have time and you still want to check more solutions, take a look at
http://redis4you.com/articles.php?id=01..
Using redis you are fault tolerant. From my point of view, it could be better than memcache solutions because of this robustness.
If you are using PHP sessions you could share the /tmp directory (where I think the sessions are stored) over NFS between all the servers in the cluster. That way you don't need a database.
Edited: You can also use an external service like memcachedb (persistent and fast) and store the session info in the memcachedb index, identified by a hash of the content or even the session ID.
When we had this situation we implemented some code that lives in a common header.
Essentially, for each page we check whether we know the session ID. If we don't, we check whether we're in the situation you describe by checking whether we have stored session data in the DB. Otherwise we just start a new session.
Obviously this requires all relevant data to be copied to the DB, but if you encapsulate your session data in a separate class then it works OK.
You could also try using memcache as the session handler.
Might be too late, but check this out: http://www.pureftpd.org/project/sharedance
Sharedance is a high-performance server to centralize ephemeral key/data pairs on remote hosts, without the overhead and the complexity of an SQL database. It was mainly designed to share caches and sessions between a pool of web servers. Access to a Sharedance server is trivial through a simple PHP API and it is compatible with the expectations of PHP 4 and PHP 5 session handlers.
When it comes to PHP session handling in a load-balancing cluster, it's best to have sticky sessions. For that, ask the datacenter network team that maintains the load balancer to enable sticky sessions. Once that is enabled, you won't need to worry about sessions on the PHP end.
