I'm currently storing a fair amount of data in the $_SESSION variable. I'm doing this so I don't need to keep accessing the database.
Should I be worried about memory issues on a shared server?
Can servers cope with large amounts of data stored in the $_SESSION variable?
Should I be worried about memory issues on a shared server?
Yes - session data is loaded into the script's memory on every request. Hence, you are at risk of breaking the individual per-script memory limit. Even if you don't hit the limit, this is really inefficient.
Accessing data from the database on demand is much better.
In addition to what @Pekka wrote:
PHP sessions are not an alternative to a caching solution!
You should investigate whether your server has APC available, and if so use it on top of the layer that accesses information from the database (assuming you actually have OO code).
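A minimal cache-aside sketch, assuming the APC extension is installed; the getUserFromDb() helper and the 5-minute TTL are illustrative assumptions, not part of any real API:

```php
<?php
// Cache-aside read: try APC first, fall back to the database.
function getUser(PDO $pdo, $userId)
{
    $key  = 'user_' . $userId;
    $user = apc_fetch($key, $success);
    if (!$success) {
        $user = getUserFromDb($pdo, $userId); // hypothetical DB-access helper
        apc_store($key, $user, 300);          // cache for 5 minutes
    }
    return $user;
}
```

Unlike $_SESSION, the cached record is shared between all visitors, so one database read can serve many sessions.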
Related
What is a good method to retain a small piece of data across multiple calls of a PHP script, in such a way that any call of the script can access and modify its value - no matter who calls the script?
I mean something similar to $_SESSION variable, except session is strictly per-client; one client can't access another client's session. This one would be the same, no matter who accesses it.
Losing the value or having it corrupted (e.g. through a race condition between two scripts launched at once) is not a big problem - if it reads correctly 90% of the time, that's satisfactory. OTOH, while I know I could just use a simple file on disk, I'd still prefer a RAM-based solution - not just for speed, but because this runs from flash memory that is not very wear-proof, and endless writes would be bad.
Take a look at shared memory functions. There are two libraries that can be used to access shared memory:
Semaphores
Shared Memory
For storing binary data or one huge string, the Shared Memory library is better, whereas the Semaphores library provides convenient functions to store multiple variables of different types (at the cost of some overhead, which can be quite significant, especially for a lot of small variables such as booleans).
If that is too complex, and/or you don't worry about performance, you could just store the data in files (after all, PHP's internal session management uses files, too).
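A sketch of the second approach using the Semaphore extension's shared-memory functions (requires the sysvshm extension, typically not available on Windows; the key 0xA1 and segment size are arbitrary choices):

```php
<?php
// Attach a small System V shared-memory segment shared by all
// PHP processes on this machine, then store and read one variable.
$shm = shm_attach(0xA1, 1024, 0666); // key, size in bytes, permissions

shm_put_var($shm, 1, array('hits' => 42)); // store under variable key 1
$data = shm_get_var($shm, 1);              // any process can read it back

shm_detach($shm);
```

Because the segment outlives individual requests, any script run by any client sees the same value - exactly the cross-client behavior the question asks for.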
A good alternative to using a database would be memcache!
I am developing a website, and after the login authentication I am using the $_SESSION superglobal array to pass my data to other pages and display it when required. This is how I am doing it in my own little MVC framework:
// simplified; auth_login() verifies the credentials and returns the user's record
$received_data = $this->registry->auth_login($username, $password);
// $received_data holds fields like (fname, lname, email, username, password)
$_SESSION['user'] = $received_data; // use a key; assigning to $_SESSION directly replaces all session data
// Or should I choose a PHP cache instead at this point?
My website will have huge traffic after some time. In this particular case, should I choose a PHP cache or keep using $_SESSION?
I know I cannot ignore the use of sessions completely, but what are the right options in my case?
Today, I was surprised: I had set $_SESSION entries with different index names in each of my projects, and when I used print_r($_SESSION) to check the available session data in any one project,
it showed me all active session values belonging to the different project folders. Is it fine that $_SESSION is globally available in all the other projects, or is it my fault somewhere?
I am using XAMPP 1.8.3 with PHP 5.5.3 and NetBeans 7.4 (release candidate) for writing code. I would be thankful for expert guidance.
Basic rule: Don't abuse the session as a cache!
Here's why: The session data is read every time a request is made. All of it. And it is written back every time a request ends.
So if you write some data into it as a cache, that isn't used in every request, you are constantly reading and writing data that is not needed in this request.
A small amount of data will not affect performance significantly, but serializing, unserializing and disk or network I/O of huge amounts of data will. And you miss the opportunity to share that data between multiple sessions.
On the other hand, a cache is no session storage, for obvious reasons: It is shared between all sessions and cannot really contain private data.
Regarding optimization for more traffic: You cannot optimize right now. Whatever usage pattern will evolve, you will only then see where performance is really needed. And you probably will want to scale - with the easiest way being to scale with some sort of cloud service instead of hosting it on your own hardware.
There is certain userdata read from the (MySQL) database that will be needed in subsequent page-requests, say the name of the user or some preferences.
Is it beneficial to store this data in the $_SESSION variable to save on database lookups?
We're talking (potentially) lots of users. I'd imagine storing in $_SESSION contributes to RAM usage (very-small-amount times very-many-users) while accessing the database on every page request for the same data again and again should increase disk activity.
The irony of your question is that, for most systems, once you get a large number of users, you need to find a way to get your sessions out of the default on-disk storage and into a separate persistence layer (i.e. database, in-memory cache, etc.). This is because at some point you need multiple application servers, and it is usually a lot easier not to have to maintain state on the application servers themselves.
A number of large systems utilize in-memory caching (memcached or similar) for session persistence, as it can provide a common persistence layer available to multiple front-end servers and doesn't require long time persistence (on-disk storage) of the data.
Well-designed database tables or other disk-based key-value stores can also be successfully used, though they might not be as performant as in-memory storage. However, they may be cheaper to operate depending on how much data you are expecting to store with each session key (holding large quantities of data in RAM is typically more expensive than storing on disk).
Understanding the size of session data (average size and maximum size), the number of concurrent sessions you expect to support, and the frequency with which the session data will need to be accessed will be important in helping you decide what solution is best for your situation.
You can use multiple storage backends for session data in PHP. By default it is saved to files - one file per session. You can also use a database as the session backend, or whatever you want, by implementing your own session save handler.
If you want your application to be as scalable as possible, I would not store sessions on the file system. Imagine you have a setup with multiple web servers all serving your site as a farm. When sessions are on the filesystem, a user has to be redirected to the same server on each request, because the session data is only available on that server's filesystem. If sessions are not on the filesystem, it does not matter which server handles a request. This makes load balancing much easier.
Instead of using sessions on the filesystem, I would suggest:
use cookies
use request vars across multiple requests
or (if data is security critical)
use sessions with a database save handler. So data would be available to each webserver that reads from the database (or cluster).
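A minimal sketch of such a database save handler via SessionHandlerInterface (available since PHP 5.4). The `sessions` table schema (columns id, data, updated_at), the pre-built $pdo connection, and the MySQL-specific REPLACE INTO syntax are all assumptions:

```php
<?php
// Sketch: PDO-backed session handler so every web server in the
// farm reads session data from the same database.
class DbSessionHandler implements SessionHandlerInterface
{
    private $pdo;

    public function __construct(PDO $pdo) { $this->pdo = $pdo; }

    public function open($savePath, $name) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        return (string) $stmt->fetchColumn(); // '' when the session is new
    }

    public function write($id, $data)
    {
        $stmt = $this->pdo->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())'
        );
        return $stmt->execute(array($id, $data));
    }

    public function destroy($id)
    {
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE id = ?');
        return $stmt->execute(array($id));
    }

    public function gc($maxLifetime)
    {
        $stmt = $this->pdo->prepare(
            'DELETE FROM sessions WHERE updated_at < NOW() - INTERVAL ? SECOND'
        );
        return $stmt->execute(array($maxLifetime));
    }
}

session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();
```

After registration, $_SESSION is used exactly as before; only the storage location changes.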
Using sessions has one major drawback: You cannot serve concurrent requests to the user if they all try to start the session to access data. This is because PHP locks the session once it is started by a script to prevent data from getting overwritten by another script. The usual thinking when using session data is that after your call to session_start(), the data is available in $_SESSION and will get written back to the storage after the script ends. Before this happens, you can happily read and write to the session array as you like. PHP ensures this will not destroy or overwrite data by locking it.
Locking the session will kill performance if you want to do a Web2.0 site with plenty of Ajax calls to the server, because every request that needs the session will be executed serially. If you can avoid using the session, it will be beneficial to user's perceived performance.
There are some tricks that might work around the problem:
You can try to release the lock as soon as possible with a call to session_write_close(), but you then have to deal with not being able to write to the session after this call.
If you know some script calls will only read from the session, you might try to implement code that only reads the session data without calling session_start(), and avoid the lock at all.
If I/O is a problem, using a Memcache server for storage might get you some more performance, but does not help you with the locking issue.
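The session_write_close() trick from the first point can be sketched like this (do_expensive_work() is a hypothetical helper; any write to $_SESSION after the close is silently lost):

```php
<?php
session_start();
$userId = $_SESSION['user_id']; // read what you need while the lock is held

session_write_close();          // persist and release the session lock now

// The long-running work below no longer blocks the user's other
// (Ajax) requests, but $_SESSION must be treated as read-only here.
$result = do_expensive_work($userId); // hypothetical slow operation
echo json_encode($result);
```

The trade-off is explicit: you exchange late writability of the session for parallelism across the user's concurrent requests.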
Note that the database also has this locking issue with all data it stores in any table. If your DB storage engine is not wisely chosen (like MyISAM instead of InnoDB), you'll lose more performance than you might win with avoiding sessions.
All these discussions are moot if you do not have any performance issues at all right now. Do whatever serves your intentions best. Whatever performance issues you'll run into later we cannot know today, and trying to avoid them now would be premature optimization (which is the root of all evil).
Always obey the first rule of optimization, though: Measure it, and see if a change improved it.
I am developing a website. Currently, I'm on cheapo shared hosting. But a boy can dream and I'm already thinking about what happens with larger numbers of users on my site.
The visitors will require occasional database writes, as their progress in the game on the site is logged.
I thought of minimizing queries by writing progress and other info live into the $_SESSION variable, and only writing the contents of $_SESSION to the database when the session is destroyed (logout, browser close, or timeout).
Questions:
Is that possible? Is there a way to execute a function when the session is destroyed by timeout or closing of the browser?
Is that sensical? Are a couple of hundred concurrent SQL queries going to be a problem for a shared server, and is the idea of using $_SESSION as a buffer going to alleviate some of this?
Is there a way to execute a function when the session is destroyed by timeout or closing of the browser?
Yes, but it might not work the way you imagine. You can define your own custom session handler using session_set_save_handler, and part of the definition is supplying the destroy and gc callback functions. These two are invoked when a session is destroyed explicitly and when it is destroyed due to having expired, so they do exactly what you ask.
However, session expiration due to timeout does not occur with clockwork precision; it might be a whole lot of time before an expired session is actually "garbage-collected". In addition, garbage collection triggers probabilistically so in theory there is the chance that expired sessions will never be garbage collected.
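With the old callback form of session_set_save_handler, the destroy and gc hooks might be wired up like this (a sketch over the default file storage; persistUserProgress() is a hypothetical helper, and as noted above gc may fire very late or never):

```php
<?php
// File-based handler that additionally flushes progress on destroy/gc.
$path = session_save_path() ?: sys_get_temp_dir();

session_set_save_handler(
    function ($savePath, $name) { return true; },       // open
    function () { return true; },                       // close
    function ($id) use ($path) {                        // read
        return (string) @file_get_contents("$path/sess_$id");
    },
    function ($id, $data) use ($path) {                 // write
        return file_put_contents("$path/sess_$id", $data) !== false;
    },
    function ($id) use ($path) {                        // destroy (explicit)
        persistUserProgress($id);   // hypothetical: flush progress to the DB
        @unlink("$path/sess_$id");
        return true;
    },
    function ($maxLifetime) use ($path) {               // gc (probabilistic!)
        foreach (glob("$path/sess_*") as $file) {
            if (filemtime($file) + $maxLifetime < time()) {
                @unlink($file);     // expired; its progress may be lost
            }
        }
        return true;
    }
);
session_start();
```

The gc callback illustrates the risk: it only runs when PHP happens to trigger garbage collection, so it cannot be relied on as the sole point where progress is saved.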
Is that sensical? Are a couple of hundred concurrent SQL queries going to be a problem for a shared server, and is the idea of using $_SESSION as a buffer going to alleviate some of this?
I really wouldn't do this for several reasons:
Premature optimization (before you measure, don't just assume that it will be "better").
Sessions might never be garbage collected; even if that doesn't happen, you don't control when they are collected. This could be a problem.
There is a possibility of losing everything a session contains (e.g. server reboots), which includes player progress. Players do not like losing progress.
Concurrent sessions for the same user would be impossible (whose "saved data" wins and remains persisted to the database?).
What about alternatives?
Well, since we're talking about el cheapo shared hosting, you are definitely not going to be in control of the server, so anything that involves PHP extensions (e.g. memcached) is conditional. Database-side caching is also not going to fly. Moreover, the load on your server is going to be affected by variables outside your control, so you can't really do any capacity planning.
In any case, I'd start by making sure that the database itself is structured optimally and that the code is written in a way that minimizes load on the database (free performance just by typing stuff in an editor).
After that, you could introduce read-only caching: usually there is a lot of stuff that you need to display but don't intend to modify. For data that "almost never" gets updated, a session cache that you invalidate whenever you need to could be an easy and very effective improvement (you can even have false positives as regards the invalidation, as long as they are not too many in the grand scheme of things).
Finally, you can add per-request caching (in variables) if you are worried about pulling the same data from the database twice during a single request.
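Per-request caching can be as simple as static memoization inside an accessor function (a sketch; fetchQuestion() is a hypothetical database call):

```php
<?php
// Per-request memoization: each id is fetched from the database at
// most once during a single request; the static array is discarded
// when the request ends, so nothing is ever stale across requests.
function getQuestion($id)
{
    static $cache = array();
    if (!array_key_exists($id, $cache)) {
        $cache[$id] = fetchQuestion($id); // hypothetical DB call
    }
    return $cache[$id];
}
```

This avoids the session-expiry and concurrency pitfalls above entirely, because the cache never outlives the request.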
It's not a good idea to write data only when the session is destroyed. Since the session data can be destroyed by a garbage collector configured by your host, you have no idea when the session is really closed until the user's cookie is out of date.
So... I suggest you use either a shared-memory (RAM) cache system like memcache (if your host offers it) or a disk-based cache system.
By the way, if your queries are optimized and your columns correctly indexed, your shared hosting could handle tons of queries at the "same time".
Is that sensical? Are a couple of hundred concurrent SQL queries going to be a problem for a shared server and is the idea of using $_SESSION as a buffer going to alleviate some of this.
No. First and foremost, you never know what happens to a session (logout is obvious, whereas a timeout is nearly undetectable), so it's not a trustworthy caching mechanism at any rate. If there are results that you query multiple times over the span of a few requests, and which don't change all too often, save the results of those queries to a dedicated caching mechanism, such as APC or memcached.
Now, I understand your web host will not provide these caching systems, but then you can probably do different things to optimise your site. For starters, my most complex software products query the database about six times per page on average. If a result is reusable, I tend to use caching, so that lowers the number of queries.
On top of that, writing decent queries is more important; the quality of your design and queries is more important than the quantity. If you get your schema, indexes and queries right, 10 queries are faster than one query that's not optimised. I'd invest my time investigating how to write efficient queries, and read up on indexing, rather than trying to overcome the problem with a "workaround", such as caching in a session.
Good luck, I hope your site becomes that big of a success you will actually need the advice above ;)
Actually, you could use $_SESSION as a buffer to avoid duplicate reads - that seems a good idea to me (memcached is even better) - but surely not for delaying writes (that is much more complex and should be handled by the database);
you could keep a simple hash in $_SESSION:

if (!isset($_SESSION['cache'])) {
    $_SESSION['cache'] = array();
}

then when you have to make a query:

if (isset($_SESSION['cache'][$id])) {
    // cache hit
    $question = $_SESSION['cache'][$id];
} else {
    // cache miss: retrieve $question, then save it in the cache
    $question = load_question($id); // however you fetch it from the DB
    $_SESSION['cache'][$id] = $question;
}
I am working on a quick survey for a company that will be getting about 200k visitors hourly (at peak) for about 2 days straight. I was just wondering if using $_SESSION variables would tie up the server. All that we are storing in those variables is at most a 6-character string or a single-digit integer. I'm new to the PHP world, so I'm not sure how reliable $_SESSION variables are or how much they will tie up the servers. The servers we are using will be cloud servers.
One final note: the sessions will only last maybe 6-10 minutes tops for each visitor before I close them out.
Any help will be greatly appreciated!
By default, data in $_SESSION will be written to disk upon each call to session_write_close(), or upon script termination. There is no way to know for sure how this will perform without testing the final application on the server hardware you will be using. Since the volume of data is small, the real worry is disk latency. An easy workaround for this would be to set PHP's session_save_path to an in-memory filesystem.
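For example, on a Linux host you might point the save path at a tmpfs mount (the /dev/shm path is an assumption; availability and write permission depend on the host):

```php
<?php
// Store session files on a RAM-backed tmpfs instead of disk to
// avoid disk latency. /dev/shm is a common tmpfs mount on Linux.
$dir = '/dev/shm/php_sessions';
if (!is_dir($dir)) {
    mkdir($dir, 0700, true); // private to the web server user
}
ini_set('session.save_path', $dir);
session_start();
```

The file-based handler is unchanged; only its backing storage moves to RAM, so the data does not survive a reboot.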
Tie up how? Disk space? Storing a simple 6-character string using the default file-based session handler will take up roughly the length of the variable name plus ~6 characters of space on disk. There'll be some overhead to load/unserialize the data in the session file, but it'll be much less than the initial overhead of loading/compiling the script that's using the session data.
Remember, PHP's default sessions use the disk as their storage media - they're not persisted in memory after the script exits.
I think you don't want to store data in sessions, because it writes to disk. If someone hits the app with multiple requests, can you guarantee that they hit the same machine in the cloud? That's rather complicated to set up. I would cookie the user instead.
http://php.about.com/od/learnphp/qt/session_cookie.htm
http://www.quora.com/Does-PHP-handle-sessions-by-writing-session-variable-data-to-disc-or-does-this-information-persist-only-in-RAM-Will-accessing-session-data-cause-a-disc-read-in-PHP
Like the others said, I'd use Memcached if you want to scale, but to answer your question directly, I think your server should be able to handle the usage you describe.
In PHP you can change the session handler. The default handler writes data to temp files, one file per session. It works okay, but has limitations for high-traffic apps (although at 200k/hour you shouldn't have problems with the default handler).
An easy solution is to use the session handler from the PECL/memcache extension (not to be confused with the PECL/memcached extension):
http://www.php.net/manual/en/memcache.examples-overview.php (see example #2)
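Switching to that handler is mostly configuration; the same settings can go in php.ini (the memcached server address is an assumption):

```php
<?php
// Route session storage through the PECL/memcache extension's
// built-in session handler instead of the default file handler.
ini_set('session.save_handler', 'memcache');
ini_set('session.save_path', 'tcp://127.0.0.1:11211');
session_start();

// $_SESSION works exactly as before; only the storage backend changed.
$_SESSION['visits'] = isset($_SESSION['visits']) ? $_SESSION['visits'] + 1 : 1;
```

Because the data now lives in a shared memcached server, any web server in the cluster can serve any request, which suits the cloud setup you describe.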