Is there any way to save PHP sessions in RAM? - php

I have PHP with nginx. I want PHP to save its sessions in RAM, for security reasons. Is there any way to do this?
If that's impossible, is there any advice on making PHP sessions unrecoverable from the hard disk after the server is shut down?
After a lot of searching I've found PHP's shared memory module, which can be used as a persistent in-memory cache for sessions. Is it shared with other applications too, and how secure is it?

I would use memcached to store session data in RAM. If you are already using a database, you might simply use a memory storage engine. However, I don't get what security reasons you have in mind. If you are concerned that somebody is able to access your session data, then make sure they are not able to do so, regardless of where the sessions are stored; otherwise your security is completely broken anyway.
Update
You mentioned that the client and the server run on the same physical machine. I can imagine a kiosk application.
As general advice, the client needs to run as a different user; this is possible on Windows too. Then make sure the client has limited system access and is not able to read the secret data. That's it.
You might also consider separating server and client using virtual machines.
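If you go the memcached route suggested above, the switch is just two php.ini settings. A minimal sketch, assuming the pecl/memcached extension and a memcached daemon on localhost:11211 (adjust host and port to your setup):
session.save_handler = memcached
session.save_path = "127.0.0.1:11211"
Since memcached keeps everything in RAM and never touches disk, the session data is also gone once the daemon (or the machine) stops, which covers the second part of the question.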

Related

How to manage sessions on common database for multiple servers in PHP? [duplicate]

Hi, I have to retrieve data from several web servers. First I log in as a user to my web site. After a successful login I have to fetch data from different web servers and display it. How can I share a single session with multiple servers? How can I achieve this?
When I first log in, a session is created and the session ID is saved in the temp folder of that server. When I access another server, how can I use the session that was already created when I logged in? Can anybody suggest a solution?
You'll have to use another session handler.
You can:
build your own (see session_set_save_handler; a bare-bones skeleton is sketched after this list) or
use extensions that provide their own session handler, like memcached
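For the "build your own" option, here is a minimal skeleton of the callback-based API; the my_store_* functions are placeholders for whatever shared backend you pick (database, memcached, ...):
// Hypothetical storage calls (my_store_*) stand in for your shared backend.
function sess_open($savePath, $sessionName) { return true; }
function sess_close() { return true; }
function sess_read($id) { return (string) my_store_read($id); }   // must return '' for unknown IDs
function sess_write($id, $data) { return my_store_write($id, $data); }
function sess_destroy($id) { return my_store_delete($id); }
function sess_gc($maxLifetime) { return my_store_purge($maxLifetime); }

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
session_start();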
In complement to all these answers:
If you store sessions in a database, check that garbage collection of sessions is really activated in PHP (it's not the case on Debian-like distributions: they decided to garbage-collect sessions with their own cron job and altered php.ini so that it never launches any GC, so check session.gc_probability and session.gc_divisor). The main problem of session storage in a database is that it means a lot of write queries and a lot of conflicting access in the database. This is a great way of stressing a database server like MySQL, so IMHO using another solution is better; it keeps your read/write ratio in a shape better suited to a web database.
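For reference, stock php.ini ships with roughly these values; Debian-like distributions set session.gc_probability to 0, which is what disables PHP's own GC:
session.gc_probability = 1
session.gc_divisor     = 100    ; GC is triggered on about 1 in 100 session starts
session.gc_maxlifetime = 1440   ; seconds of inactivity before a session is considered garbage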
You could also keep the file storage system and simply share the session directory between servers over NFS. Alter the session.save_path setting to use something other than /tmp. But NFS is by definition not the fastest way of using a disk; prefer memcached or MongoDB for fast access.
If the only thing you need to share between the servers is authentication, then instead of sharing the real session storage you could share authentication credentials, like the OpenID system on SO; this is what we call SSO (single sign-on). For the web part you have several solutions, from OpenID to CAS, and others. If the data is merged on the client side (Ajax, an ESI gateway), then you do not really need common session data storage on the server side. This avoids having three of your five impacted web applications writing data to the shared session at the same time. Other session-sharing techniques (database, NFS, even memcached) are mostly used to share your data between several servers because load-balancing tools can send your sequential HTTP requests from one server to another, but if you really mean parallel gathering of data you should really study SSO.
Another option would be to use memcached to store the sessions.
The important thing is that you must have a shared resource - be it a SQL database, memcached, a NoSQL database, etc. that all servers can access. You then use session_set_save_handler to access the shared resource.
Store sessions in a database which is accessible from the whole server pool.
Store it in a database - get all servers to connect to that same database. First result for "php store session in database"

PHP Sessions to handle Multiple Servers

All,
I have a PHP5 web application written with Zend Framework and MVC. This application is installed on two servers with the same setup. Server X has PHP5/MySQL/Apache, and Server Y has the same. We don't have a common DB server between the two.
My application works when accessed individually via HTTPS on Server X and Server Y. But when we turn on load balancing and have both servers up, the sessions get lost.
How can I make sure my sessions persist across servers? Should I maintain my DB on a third server and write sessions to it? If so, what's the easiest and most secure way to do it?
Thanks
memcached is a popular way to solve this problem. You just need to get it up and running (easy) and update your php.ini file to tell it to use memcached as the session storage.
In php.ini you would modify:
session.save_handler = memcache
session.save_path = "tcp://localhost:11211"
(The handler name is memcache for the pecl/memcache extension; with pecl/memcached the handler is memcached and the save path drops the tcp:// prefix.)
For the general idea: PHP Sessions in Memcached.
There are any number of tutorials on setting up the Zend session handler to work with memcached. Take your pick.
Should I maintain my db on a third server and write sessions to it?
Yes, one way to handle it is to have a third machine running the database that both web servers use for the application. I've done that for several projects in the past and it's worked well. The question with that approach is whether the bottleneck is at the web servers or at the database. If it's at the database, you won't see much improvement by throwing load balancing of the web servers into the mix; you may instead need to think about mirroring schemes for the database.
Another option is to use the sticky sessions feature on your load balancer. What this will do is keep users on certain servers. So when user 1 comes to the site, they will be directed to server X. Every subsequent request will also be directed to server X. This allows you to not worry about persisting sessions between servers, as each user will continue to be directed to the server they have their session on.
The one downside of this is that when you take a web server out of the pool, half the users with a session will be logged out. So the effectiveness of this solution depends on how often you take servers out of the pool.

Are there problems using PHP sessions in a server cluster?

We are developing a web site in PHP, and we have to use sessions. The site will be published in a server cluster. How can we make that work?
Thanks.
Yes, this is possible; you need to store your sessions in a central location like a database, though. This is pretty simple and just requires you to register your own callbacks with session_set_save_handler - there's a good example of the process you need to follow here
I would use memcache to store your sessions. It will be much faster than storing them in a database or disk.
Database storage is good, but you will need more database capacity when your site gets very high traffic. Sessions on disk will also cause a lot of IO issues when your site gets a lot of traffic. Memcache, on the other hand, scales much better than a DB or files.
I personally use memcache, and the sites I work on get millions of hits a day. I have never had any issues with storing sessions in memcache.
If you've got multiple PHP boxes, you'll want a central session store.
Your best choices are probably database (that link from seengee's answer is a good explanation) or a dedicated memcache box.
A shared NFS mount for the session directory would be an option, though I've always found NFS performance a bit slow. Alternatives are to write your own session handler using memcache or a database for the sessions.
An alternative option is to load balance your web servers using sticky sessions, which will ensure that requests from the same client always go to the same server during the course of the session.

Methods for caching PHP objects to file?

In ASP.NET, I grew to love the Application and Cache stores. They're awesome. For the uninitiated, you can just throw your data-logic objects into them, and hey presto, you only need to query the database once for a bit of data.
By far one of the best ASP.NET features, IMO.
I've since ditched Windows for Linux, and therefore PHP, Python and Ruby for webdev. I use PHP most because I dev several open source projects, all using PHP.
Needless to say, I've explored what PHP has to offer in terms of caching data-objects. So far I've played with:
Serializing to file (a pretty slow/expensive process)
Writing the data to file as JSON/XML/plaintext/etc (even slower for read ops)
Writing the data to file as pure PHP (the fastest read, but quite a convoluted write op)
I should stress now that I'm looking for a solution that doesn't rely on a third party app (eg memcached) as the apps are installed in all sorts of scenarios, most of which don't have install rights (eg: a cheap shared hosting account).
So, back to what I'm doing now: is persisting to file secure? Rule 1 of production-server security has always been to disable file writing, but I really don't see any way PHP could cache if it couldn't write. Are there any tips and/or tricks to boost the security?
Is there another persist-to-file method that I'm forgetting?
Are there any better methods of caching in "limited" environments?
Serializing is quite safe and commonly used. There is an alternative however, and that is to cache to memory. Check out memcached and APC, they're both free and highly performant. This article on different caching techniques in PHP might also be of interest.
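For APC specifically, usage is a couple of function calls. A minimal sketch, assuming the APC extension is loaded; load_user_from_db() is a made-up expensive lookup:
// Try the in-memory cache first, fall back to the real query on a miss.
$user = apc_fetch('user_' . $userId, $hit);
if (!$hit) {
    $user = load_user_from_db($userId);        // hypothetical expensive call
    apc_store('user_' . $userId, $user, 300);  // keep it for 5 minutes
}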
Re: Is there another persist-to-file method that I'm forgetting?
It's of limited utility but if you have a particularly beefy database query you could write the serialized object back out to an indexed database table. You'd still have the overhead of a database query, but it would be a simple select as opposed to the beefy query.
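Something along these lines, assuming PDO against MySQL and a hypothetical query_cache table with an indexed cache_key column (the table and helper names are made up):
$key  = md5($beefySql);
$stmt = $pdo->prepare('SELECT payload FROM query_cache WHERE cache_key = ?');
$stmt->execute(array($key));

if ($payload = $stmt->fetchColumn()) {
    $result = unserialize($payload);             // cheap indexed lookup
} else {
    $result = run_beefy_query($pdo, $beefySql);  // the expensive query (hypothetical helper)
    $ins = $pdo->prepare('REPLACE INTO query_cache (cache_key, payload) VALUES (?, ?)');
    $ins->execute(array($key, serialize($result)));
}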
Re: Is persisting to file secure? and cheap shared hosting account)
The sad fact is that cheap shared hosting isn't secure. How much do you trust the 100, 500, or 1000 other people who have access to your server? For historic and (ironically) security reasons, shared hosting environments have PHP/Apache running as an unprivileged user (with PHP running as an Apache module). The security rationale here is that if the world-facing Apache process gets compromised, the exploiters only have access to an unprivileged account that can't screw with important system files.
The bad part is, that means whenever you write a file using PHP, the owner of that file is the same unprivileged Apache user. This is true for every site on the system, which means any other user's PHP script has read and write access to those files. The theoretical hackers in the above scenario would also have access to them.
There's also a persistent bad practice in PHP of giving directories and files permissions of 777 so the unprivileged Apache user can write files out, and then leaving the directory or file in that state. That gives anyone on the system read/write access.
Finally, you may think obscurity saves you. "There's no way they can know where my secret cache files are", but you'd be wrong. Shared hosting sets up users in the same group, and most default file masks will give your group users read permission on files you create. SSH into your shared hosting account sometime, navigate up a directory, and you can usually start browsing through other users files on the system. This can be used to sniff out writable files.
The solutions aren't pretty. Some hosts will offer a CGI Wrapper that lets you run PHP as a CGI. The benefit here is PHP will run as the owner of the script, which means it will run as you instead of the unprivileged user. Problem averted! New Problem! Traditional CGI is slow as molasses in February.
There is FastCGI, but FastCGI is finicky and requires constant tuning. Not many shared hosts offer it. If you find one that does, chances are they'll have APC enabled, and may even be able to provide a mechanism for memcached.
I had a similar problem, and thus wrote a solution: a memory cache written in PHP. It only requires the PHP build to support sockets. Other than that, it is a pure PHP solution and should run just fine on shared hosting.
http://code.google.com/p/php-object-cache/
What I always do if I have to be able to write is to ensure I'm not writing anywhere I have PHP code. Typically my directory structure looks something like this (it's varied between projects, but this is the general idea):
project/
    app/
    html/
        index.php
    data/
    cache/
app is not writable by the web server (neither is index.php, preferably). cache is writable and used for caching things such as parsed templates and objects. data is possibly writable, depending on need. That is, if the users upload data, it goes into data.
The web server gets pointed to project/html and whatever method is convenient is used to set up index.php as the script to run for every page in the project. You can use mod_rewrite in Apache, or content negotiation (my preference but often not possible), or whatever other method you like.
All your real code lives in app, which is not directly accessible by the web server, but should be added to the PHP path.
This has worked quite well for me for several projects. I've even been able to get, for instance, Wikimedia to work with a modified version of this structure.
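As an illustration, html/index.php in that layout can be a tiny front controller that puts app/ on the include path; the paths and the bootstrap.php name are placeholders:
// html/index.php - everything interesting lives outside the web root, in app/.
define('PROJECT_ROOT', dirname(dirname(__FILE__)));
set_include_path(PROJECT_ROOT . '/app' . PATH_SEPARATOR . get_include_path());

require 'bootstrap.php';   // resolved from app/ via the include path (hypothetical file)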
Oh... and I'd use serialize()/unserialize() to do the caching, although generating PHP code has a certain appeal. All the templating engines I know of generate PHP code to execute, making post-parse very fast.
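A bare-bones version of that approach, with serialize() to a file in the cache/ directory and the file's mtime as the expiry check; the directory and the five-minute TTL are just illustrative:
function cache_get($key, $dir = '/path/to/project/cache', $ttl = 300) {
    $file = $dir . '/' . md5($key) . '.cache';
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file));
    }
    return false;                        // miss or expired
}

function cache_set($key, $value, $dir = '/path/to/project/cache') {
    $file = $dir . '/' . md5($key) . '.cache';
    $tmp  = $file . '.' . uniqid('', true);
    file_put_contents($tmp, serialize($value));
    rename($tmp, $file);                 // atomic swap so readers never see a partial file
}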
If you have access to the Database Query Cache (ie. MySQL) you could go with serializing your objects and storing them in the DB. The database will take care of holding the query results in memory so that should be pretty fast.
You don't spell out -why- you're trying to cache objects. Are you trying to speed up a slow database query, work around expensive object instantiation, avoid repeated generation of complex page, maintain application state or are you just compulsively storing away objects in case of a long winter?
The best solution, given the atrocious limitations of most low-cost shared hosting, is going to depend on what you're trying to accomplish. Going for bottom of the barrel shared-hosting means you have to accept that you won't be working with the best tools. The numbers are hard to quantify, but there's a trade off between hosting costs, site performance & developer time (ie - fast, cheap or easy).
In theory it's possible to store objects in sessions. That might get you past the disabled-file-writing problem. Additionally you could store the session in a MySQL MEMORY-backed table to speed up the query.
Some hosting places may have APC compiled in. That would allow you to store the objects in memory.

Securing DB and session-data on a PHP shared host

I wrote a PHP web-application using SQLite and sessions stored on filesystem.
This is functionally fine and attractively low-maintenance. But now it needs to run on a shared host.
All web-applications on the shared host run as the same user, so my users' session data is vulnerable, as is the database, code, etc.
Many recommend storing sessions in a DBMS such as MySQL in this situation. So at first I thought I would just do that, and move the SQLite data into MySQL too. But then I realized the MySQL credentials need to be readable by the web application user, so I'm back to square one.
I think the best solution is to use PHP as a CGI so it runs as a different user for each web application. This sounds great, but my host does not do this; it uses mod_php. Are there any drawbacks from an admin's point of view to enabling this (performance, backward compatibility, etc.)? If not, I will ask them to enable it.
Otherwise, is there anything I can do to secure my database and session data in this situation?
As long as your code is running as the shared web user, anything stored on the server is going to be vulnerable. Any other user could write a PHP script to examine any readable file on the server, including your data and PHP code.
If your hosting provider will allow it, running PHP as a CGI under a different user will help, but I expect there will be a significant performance hit, as each request will require a new process to be created. (You could look at FastCGI as a better-performing alternative.)
The other approach would be to set a cookie based on something the user provides, and use that to encrypt session data. For instance, when the user logs in, take a hash of their username, password (as just supplied by them) and the current time, encrypt the session data with the hash, set a cookie containing the hash. On the next request, you'll get the cookie back, which you can then use to decrypt the session data. Note however that this will only protect the current session data; your user table, other data, and code will still be vulnerable.
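A rough sketch of that idea, assuming ext/openssl; the scheme and helpers below are illustrative only, not a reviewed crypto design:
// At login: derive a key the server never stores and hand it back in a cookie.
$key = hash('sha256', $username . $password . microtime(true), true);
setcookie('sess_key', bin2hex($key), 0, '/', '', true, true);

// Encrypt the session payload before persisting it on the shared host.
function seal_payload($plaintext, $key) {
    $iv = openssl_random_pseudo_bytes(16);
    $ct = openssl_encrypt($plaintext, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
    return base64_encode($iv . $ct);
}

// On the next request, rebuild the key from the cookie and decrypt.
function open_payload($blob, $key) {
    $raw = base64_decode($blob);
    return openssl_decrypt(substr($raw, 16), 'aes-256-cbc', $key,
                           OPENSSL_RAW_DATA, substr($raw, 0, 16));
}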
In this situation, you need to decide whether the tradeoff of the low cost of shared hosting is acceptable considering the reduced security it provides. This will depend on your application, and it may be that rather than trying to come up with a complex (and possibly not even very effective) way to add security, you're better off just accepting the risk.
I don't view security as all or nothing. There are steps you can take. Give the web db user only the permissions it needs. Store passwords as hashes. Use openid login so users provide their credentials over SSL.
PHP on cgi can be slower and some hosts may simply not want to support more than one environment.
You may need to stick with your host for some reason, but generally there are so many available that it is a good reminder for people to compare functionality and security as well as cost. I have noticed many companies starting to offer virtual machine hosting -- nearly dedicated server level security in terms of isolating your code from other users -- at what is to me reasonable cost.
A shared host is no way to run a web site if you are conscious about privacy and security of your data from the sites that you share the server with. Anything accessible to your web application is fair game for the others; it'll only be a matter of time before they can access it (assuming they do have incentive to do that to you).
"you can place your DB connection variables in a file below the web root. this will at least protect it from web access. if you're going to use file based sessions as well, you can set the session path in your user's directory and again outside the web root."
I don't have an account so I can't downvote that... but seriously, it is not even relevant to the question.
Duh, you store stuff outside the webroot. That goes for any hosting scenario and is not specific to shared hosting. We're not talking about protecting from outsiders here; we're talking about protecting from other applications on the same machine.
To the OP I think PHP as CGI is the most secure solution, as you already suggested yourself. But as someone else said there is a performance hit with this.
Something you might look at is moving your sessions and db to MySQL and using safe_mode and/or open_basedir.
I would solve the problem with an infrastructure change instead of a code change.
Consider upgrading to a VPS. Nowadays you can get them very inexpensively; I've seen VPSes starting at $10/mo.
