PHP concurrent changes on session variable for same user?

Suppose we have a session variable $_SESSION['variable'] that may or may not be modified as the user accesses a page.
Suppose the same user has several browser windows open and somehow makes simultaneous requests to the server that result in changes to the session variable.
Questions:
How does the server "queue" these changes, since they are targeted at the same variable? Is there a potential for server error here?
Is there a way to "lock" the session variable for reading/writing in order to implement some kind of status check before changing its value?
EDIT
(thanks Unheilig for the cleanup)
Regarding the "queueing", I am interested in what happens if two requests arrive at the same time:
Change X to 1
Change X to 2
I know this doesn't seem like a real-world scenario, but it just came to mind while designing something. It could become a problem if the system allows too many concurrent requests from the same user.

Each individual PHP Session is 'locked' between the call to session_start() and either the call to session_write_close() or the end of the request, whichever is sooner.
Most of the time you would never notice this behaviour.
However, if you have a site which does make many concurrent requests* then these requests would appear to queue in first-come-first-served order.
To be clear: in a typical Apache/PHP setup your requests will come into the server and start their PHP executions concurrently. It is the session_start() call that will block/pause/wait/queue, because it is waiting to gain the file lock on the session file (or similar, depending on your session handler).
To increase request throughput or reduce waiting requests therefore:
Open and close the session (session_start(), do_work(), session_write_close()) as rapidly as possible, to reduce the time the session is locked for writing.
Make sure you're not leaving the session open on requests that are doing long work (accessing 3rd-party APIs, generating or manipulating documents, running long database queries, etc.) unless absolutely necessary.
Avoid, where possible, touching the session at all. Be as RESTful as possible.
Manage the queuing and debouncing of requests as elegantly as possible on the client side of the application.
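A minimal sketch of the open-work-close pattern from the first point (the slow-work step here is a placeholder, not a real API):

```php
<?php
// Read what we need from the session and do any writes early, then
// release the lock so parallel requests from the same user stop
// queueing behind this one.
session_start();
$userId = $_SESSION['user_id'] ?? null;
$_SESSION['last_seen'] = time();           // session writes happen while locked
session_write_close();                      // lock released here

// Slow work runs with the session already closed (placeholder stand-in).
$report = ['id' => uniqid('report_')];

// If the session is needed again, reopen it briefly.
session_start();
$_SESSION['last_report_id'] = $report['id'];
session_write_close();
```

Between session_write_close() and the second session_start(), other requests from the same user can acquire the lock and proceed.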
Hope that helps.
J.
*Ajax & Frame/iFrame heavy applications often exhibit this problem.

Related

Session regenerate causes expired session with fast AJAX calls

My application is a full-AJAX web page using the CodeIgniter framework and the memcached session handler.
Sometimes it sends a lot of asynchronous calls, and if the session has to regenerate its ID (to avoid the session fixation security issue), the session cookie is not renewed fast enough and some AJAX calls fail because their session ID has expired.
Here is a schematic picture I made to show the problem clearly:
I walked through similar threads (for example this one), but the answers don't really solve my problem; I can't disable the security measure, as there are only AJAX calls in my application.
Nevertheless, I have an idea and I would like an opinion before hacking into the CodeIgniter session handler classes:
The idea is to manage 2 simultaneous session IDs for a while, for example 30 seconds, which would be a maximum request execution time. After session regeneration, the server would still accept the previous session ID and switch the session to the new one.
Using the same picture, that would give something like this:
First of all, your proposed solution is quite reasonable. In fact, the people at OWASP advise just that:
The web application can implement an additional renewal timeout after which the session ID is automatically renewed. (...) The previous session ID value would still be valid for some time, accommodating a safety interval, before the client is aware of the new ID and starts using it. At that time, when the client switches to the new ID inside the current session, the application invalidates the previous ID.
Unfortunately this cannot be implemented with PHP's standard session management (or I don't know how to do it). Nevertheless, implementing this behaviour in a custom session driver [1] should not pose any serious problem.
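One way to sketch the grace-period idea, kept deliberately storage-agnostic: the lookup table of recently regenerated IDs (old ID => new ID plus the time of regeneration) is an assumption here; it would live in shared storage (APCu, memcached, a DB table) and be written by whatever code calls session_regenerate_id().

```php
<?php
// Decide which session ID a request should actually use, given the ID
// from its cookie and the table of recently retired IDs. The map shape
// ['old' => ['new' => ..., 'at' => unix time]] is illustrative only.
function resolve_session_id(string $cookieId, array $map, int $now, int $grace = 30): string
{
    if (isset($map[$cookieId]) && $now - $map[$cookieId]['at'] <= $grace) {
        return $map[$cookieId]['new'];   // retired ID, still inside its window
    }
    return $cookieId;                     // current or unknown ID: use as-is
}

// At bootstrap, before session_start(), something like:
// session_id(resolve_session_id($_COOKIE[session_name()] ?? '', $map, time()));
```

After the grace window passes, the old ID simply stops resolving, which matches the OWASP-style invalidation described above.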
I am now going to make a bold statement: the whole idea of regenerating the session ID periodically is broken. Now don't get me wrong: regenerating the session ID on login (or, more accurately, as OWASP put it, on "privilege level change") is indeed a very good defense against session fixation.
But regenerating session IDs regularly poses more problems than it solves: during the interval when the two sessions co-exist, they must be synchronised, or else one runs the risk of losing information from the expiring session.
There are better (and easier) defenses against simple session theft: use SSL (HTTPS). Periodic session renewal should be regarded as the poor man's workaround to this attack vector.
[1] link to the standard PHP way
Your problem seems to be less about the actual speed of the requests (though it is a contributing factor) and more about concurrency.
If I understand correctly, your JavaScript application makes many (async) AJAX calls quickly (presumably in bursts), and sometimes some of them fail due to session invalidation, which you think is a request-speed issue.
Well, I think the problem is that you actually have several concurrent requests to the server: while the first one has its session renewed, the others essentially cannot see it, because those requests have already been made and are waiting to be processed by the server.
This problem will of course manifest itself only when doing several requests for the same user simultaneously.
Now the real question here: what in your application's business logic demands this?
It looks to me like you are trying to find a technical solution to a 'business' problem. What I mean is that either you've misinterpreted your requirements, or the requirements are just not that well thought out/specified.
I would advise you to try some of the following:
ask yourself if these multiple simultaneous requests can be combined into one
look deeply into the requirements and try to find the real reason why you do what you do; maybe there is no real business reason for this
every time before you fire the series of requests, fire a 'refresh' AJAX request to get the new session, and only on success proceed with all the other requests
Hope some of what I've written helps guide you to a solution.
Good luck

PHP - enforce user wait before using server resources

I have a PHP function that I want to make available publicly on the web, but it uses a lot of server resources each time it is called.
What I'd like to happen is that a user who calls this function is forced to wait for some time, before the function is called (or, at the least, before they can call it a second time).
I'd greatly prefer this 'wait' to be enforced on the server-side, so that it can't be overridden by dubious clients.
I plan to insist that users log into an online account.
Is there an efficient way I can make the user wait, without using server resources?
Would 'sleep()' be an appropriate way to do this?
Are there any suggested problems with using sleep()?
Is there a better solution to this?
Excuse my ignorance, and thanks!
sleep() would be fine if you were using PHP as a command-line tool, for example. For a website, though, your sleep will hold the connection open. Your webserver will only have a finite number of concurrent connections, so this could be used to DoS your site.
A better - but more involved - way would be to use a job queue. Add the task to a queue which is processed by a scheduled script and update the web page using AJAX or a meta-refresh.
sleep() is a bad idea in almost all possible situations. In your case, it's bad because it keeps the connection to the client open, and most webservers have a limit of open connections.
sleep() will not help you at all. The user could just load the page twice at the same time, and the command would be executed twice right after each other.
Instead, you could save a timestamp in your database for when your function was last invoked. Then, before invoking it, you should check the database to see if a suitable amount of time has passed. If it has, invoke the function and update the timestamp in the database.
If you're planning on enforcing a user login, then the problem just got a whole lot simpler.
Have a record in the database listing users and the last time they used your resource-consuming service, and measure the time difference between then and now. If the time difference is too low, deny access and display an error message.
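A minimal sketch of that per-user cooldown. The storage (a last-called timestamp per user, e.g. a column looked up by user ID) and the 60-second window are assumptions; the time comparison is the whole trick.

```php
<?php
// Return true if enough time has passed since the user's last call.
// $lastCalled is null when the user has never called the function.
function may_call(?int $lastCalled, int $now, int $cooldown = 60): bool
{
    return $lastCalled === null || ($now - $lastCalled) >= $cooldown;
}

// Hypothetical wiring against a table `usage_log(user_id, last_called)`:
// $last = $pdo->query("SELECT last_called FROM usage_log WHERE user_id = 1")
//             ->fetchColumn();
// if (!may_call($last !== false ? (int)$last : null, time())) {
//     http_response_code(429);          // Too Many Requests
//     exit('Please wait before calling this again.');
// }
```

On success, update the stored timestamp in the same transaction that grants access, so two simultaneous requests cannot both pass the check.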
This is best handled at the server level. No reason to even invoke PHP for repeat requests.
Like many sites, I use Nginx, and you can use its rate limiting to block repeat requests over a certain number. So, for example, three requests per IP, per hour.
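A sketch of what that can look like with nginx's limit_req module. Note that limit_req rates are expressed per second or per minute, not per hour, so a strict "three per hour" budget needs a very low rate plus a burst allowance (or a different mechanism); the zone name, endpoint path, and PHP-FPM socket path below are all assumptions.

```nginx
# In the http {} block: one shared zone keyed by client IP.
limit_req_zone $binary_remote_addr zone=expensive:10m rate=1r/m;

server {
    location /expensive-endpoint {
        # Allow a short burst of 3, reject the rest with 429.
        limit_req zone=expensive burst=3 nodelay;
        limit_req_status 429;
        fastcgi_pass unix:/run/php/php-fpm.sock;  # assumed FPM socket path
        include fastcgi_params;
    }
}
```

The rejected requests never reach PHP at all, which is the point of handling this at the server level.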

implementation of ajax status check

I am battling with race condition protection in PHP.
My application is written in symfony 1.4 and PHP locks session data until a page completes processing. I have a long running (~10 second) login script and I want to display a progress bar showing the user what is being done while they wait. (I want to actually display what is being done and not use a standard [faux] loading bar.)
Whenever a script calls session_start(), PHP locks that user's session data until that script completes. This prevents my status check ajax calls from returning anything until the longer running script completes. (My question on why my ajax calls were not asynchronous is here.)
I have devised a way to do this but I want to make sure this way is secure enough for general purposes (i.e.- this is not a banking application).
My idea is:
On authentication of username & password (before the long login script starts), a cookie is set on the client computer with a unique identifier.
This same unique identifier is written to a file on the server along with the client IP address.
While the long login script runs, it will update that file with the status of the login process.
The ajax status check will ping the server on a special page that does not use session_start(). This page will get the cookie value and the client IP and check the server side file for any status updates.
Are there any glaringly obvious problems with this solution?
Again, from the security angle, even if someone hacked this all they would get is a number representing the state of the login progress.
I don't see anything inherently wrong with the approach that you are proposing.
But if your machine has APC installed you can use apc_store and apc_fetch to store your status in a sort of shared memory instead of writing to disk. Use something like apc_store(SID, 'login not started') to initialize and update the request state in memory, then apc_fetch(SID) to retrieve it on subsequent requests.
There are other shared memory systems, including Apache, or even a database connection might be simpler.
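A sketch of the shared-memory status idea. Modern PHP ships APCu rather than the original APC, so the function names are apcu_store/apcu_fetch; the key scheme, the placeholder identifier, and the 300-second TTL are assumptions for illustration.

```php
<?php
// $uniqueId stands in for the identifier set in the client cookie.
$uniqueId = 'abc123';
$key = 'login_status_' . $uniqueId;

if (function_exists('apcu_store')) {           // only if APCu is loaded
    // The long-running login script updates the status as it goes:
    apcu_store($key, 'checking credentials', 300);
    // ... slow work ...
    apcu_store($key, 'loading profile', 300);

    // The status-check endpoint (no session_start(), so no lock wait):
    $status = apcu_fetch($key);
    echo json_encode(['status' => $status === false ? 'unknown' : $status]);
}
```

Because the status page never opens the session, it is immune to the session-file lock held by the login script.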
I had the same problem, and I think the trick is session_write_close(), which frees the session file.
Please see my https://github.com/jlaso/MySession repository and check whether this can be applied to your particular question.

Should PHP session be created before login or after successful login

If the PHP session is created before login, there will be one session file created for each request to the login page.
The problem is that if a user makes multiple requests to the server through a script, then that many session files will be created.
If a user wants to attack the server, he can send an abnormally huge number of requests, creating so many session files that they eat up all the temporary space and make the service unavailable.
I am not sure if this kind of attack is really possible/feasible.
Please share your comments on this, and the implications of creating the PHP session before/after a successful login.
I think you are misunderstanding session_start().
What happens with session_start() is, yes, it will create a file for the individual user. But the next time you call session_start(), it is going to use that same file for that same user, because the user has a cookie on their system that tells it which ID to use. In order to have the $_SESSION array available, you must call session_start() on every page.
It is very possible that someone could whip up a scenario like you just described.
In reality, yes, a hacker could always have a robot that clears its cookies after every attempt, make 10,000 requests, and possibly create a disk-writing problem, but I really wouldn't worry about it too much, because the files are tiny, much smaller than the script you are writing. You'd have to write a lot more files (on the order of millions or billions) to actually create a problem.
If you really want to see what the effects would be on your server, write a script that creates files in a directory with the equivalent of 2 paragraphs of text, and put it in a loop for 10,000 files.
If you are then worried about the effects it would have, I suggest implementing a tracker that can spot a large number of hits coming to the site from a single IP address and then either temporarily ban the IP address, or do what Google does and just serve them a static captcha page that doesn't take many resources.
So, going back to the actual 'question':
I set a session for every single user that ever visits my site, because I use sessions for not only User Authentication, but for tracking other variables on my site. So, I believe that you should set it even if they aren't logged in.
If you're worried about a session fixation attack, think about using the session_regenerate_id() function.
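The typical fixation defense is to regenerate the ID exactly when privileges change, i.e. right after the credentials check out. In this sketch, credentials_ok() is a placeholder for your own check, not a built-in:

```php
<?php
// Placeholder credential check: replace with a real user lookup
// plus password_verify() against a stored hash.
function credentials_ok(string $user, string $pass): bool
{
    return false;
}

session_start();
if (credentials_ok($_POST['user'] ?? '', $_POST['pass'] ?? '')) {
    session_regenerate_id(true);    // true: delete the old session data
    $_SESSION['authenticated'] = true;
}
```

An attacker who planted a known session ID before login is left holding a dead ID, because the authenticated session lives under the freshly generated one.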
Once you call session_start(), it creates the file.
What you could do as an attack is create a robot that sends a different session ID in its cookie on each request; the server only has to be handed an ID to create a session file for it.
It doesn't matter, really. Your server, as cheap as it could be, will have enough space to store millions of (almost empty) session files.
The worst it can do is slow down file access in the folder where session files are stored, but your server's disks should be monitored to begin with, and a quickly filling /tmp partition should raise an alert at some point.

Dealing with long server-side operations using ajax?

I've a particularly long operation that is going to get run when a user presses a button on an interface, and I'm wondering what would be the best way to indicate this back to the client.
The operation is populating a fact table for a number of years' worth of data, which will take roughly 20 minutes, so I'm not intending the interface to be synchronous. Even though it is generating large quantities of data server-side, I'd still like everything to remain responsive, since the data for the month the user is currently viewing will be updated fairly quickly, which isn't a problem.
I thought about setting a session variable after the operation has completed and polling for that session variable. Is this a feasible way to do such a thing? However, I'm particularly concerned about the user navigating away/closing their browser, and then all status about the long-running job being lost.
Would it be better to perhaps insert a record somewhere lodging the processing record when it has started and finished. Then create some other sort of interface so the user (or users) can monitor the jobs that are currently executing/finished/failed?
Has anyone any resources I could look at?
How'd you do it?
The server side portion of code should spawn or communicate with a process that lives outside the web server. Using web page code to run tasks that should be handled by a daemon is just sloppy work.
You can't expect them to hang around for 20 minutes. Even the most cooperative users in the world are bound to go off and do something else, forget, and close the window. Allowing such long connection times screws up any chance of a sensible HTTP timeout and leaves you open to trivial DOS too.
As Spencer suggests, use the first request to start a process which is independent of the http request, pass an id back in the AJAX response, store the id in the session or in a DB against that user, or whatever you want. The user can then do whatever they want and it won't interrupt the task. The id can be used to poll for status. If you save it to a DB, the user can log off, clear their cookies, and when they log back in you will still be able to retrieve the status of the task.
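The start-then-poll pattern can be sketched like this; the worker script name (run_job.php), the status-file location, and the file-based status store are all illustrative, not an established API:

```php
<?php
// Generate an ID the client can poll with (16 hex chars, URL-safe).
function new_job_id(): string
{
    return bin2hex(random_bytes(8));
}

// "Start" endpoint: launch the worker outside the request and reply at once.
$jobId = new_job_id();
$cmd = sprintf('php run_job.php %s > /dev/null 2>&1 &', escapeshellarg($jobId));
// exec($cmd);                        // uncomment on a real server
echo json_encode(['job' => $jobId]);

// "Status" endpoint: read whatever the worker last wrote for this ID.
// The worker is assumed to write progress into this file as it runs.
$statusFile = sys_get_temp_dir() . "/job_$jobId.status";
$status = is_file($statusFile) ? file_get_contents($statusFile) : 'pending';
```

Persisting the job ID against the user in a database, as the answer suggests, means the status survives logouts and cleared cookies, which a session never would.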
Sessions are not that reliable; I would probably design some sort of task list, so I can keep records of tasks per user. With this design I will be able to show "done" tasks, to keep the user aware.
Also, I will move the long operation out of the worker process. This is required because web servers can be restarted.
And, yes, I will request the status from the server every dozen seconds or so with AJAX calls.
You can have a JS timer that periodically pings your server to see if any jobs are done. If the user goes away and comes back, you restart the timer. When a job is done, you indicate that to the user so they can click on the link and open the report (I would not recommend forcefully loading something, though it can be done).
From my experience the best way to do this is saving on the server side which reports are running for each users, and their statuses. The client would then poll this status periodically.
Basically, instead of checkStatusOf(int session), have the client ask the server for getRunningJobsFor(int userId), returning all running jobs and their statuses.
