I am trying to work out the best and most efficient way of storing an authentication token so that I get instant access the next time I make a request.
Usually my initial request to obtain the authentication token takes longer (around 1.5 seconds), which means I can save a lot of time if I do it only once. All requests are done via the cURL library (in my case it's Guzzle, but that does not matter).
My application is written in PHP and my options are:
Storing the token in Redis and accessing it quickly - it should be fast, as Redis works in memory
MySQL - it is installed already and I may just need to create an additional table
Any other ideas?
Of course, whenever I store the authentication token I am going to encrypt it, and I will request a new token once a day (I'm not sure what the current TTL for the token is, but it doesn't matter).
I do care about access time (the quicker the read, the better).
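For reference, the Redis option I have in mind would look roughly like this (a minimal sketch assuming the phpredis extension; the key name, the 24-hour TTL and the fetchTokenFromApi() helper are placeholders):

    <?php
    // Minimal sketch of the Redis option (phpredis extension assumed;
    // key name, TTL and helper function are placeholders).
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);

    $token = $redis->get('api:auth_token');
    if ($token === false) {
        $token = fetchTokenFromApi();                   // the slow ~1.5s request
        $redis->setex('api:auth_token', 86400, $token); // cache for one day
    }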
I have to develop a RESTful API for an Android app and have decided to go with PHP using the Slim framework.
Some background: currently, every time the client app makes a request, the server does some DB operations and creates a payload. This was causing high load on the server during peak hours of app use, so I'm looking for a way to cache this payload and have it available whenever a request comes in. This cache will have to be updated rarely compared to the number of reads (only on a DB change by an admin).
To test this I tried the following code,
In index.php
$app->flag = 1;
And the endpoint
$app->get("/getContent", function () use ($app) {
    if ($app->flag == 1) {
        echo 'Changing value';
        $app->flag = 0;
        return;
    }
    echo $app->flag;
});
The ideal case would be if it printed "Changing value" the first time and 0 thereafter. But the value of $app->flag is always 1 at the start of the endpoint. How can I persist data between successive calls to the endpoint?
Also, would it be better if I stored the payload in a file and did file I/O to handle the endpoint request? (Will this throw an I/O exception if the admin tries to update the file while the endpoint is reading it for the client?)
I'm fairly new to PHP, so I'd really appreciate your insight, or any other ideas for achieving this.
First, note that PHP is shared-nothing: every request re-executes index.php from scratch, so $app->flag is reset to 1 on every call and in-process variables can never persist between requests. With that in mind, here are my suggestions:
Can you use memcache for storing the data? You do the DB operations and processing once, then build the JSON and store it in memcache (which keeps data directly in memory and, as a plus, avoids I/O operations). A sketch of this approach follows below.
Do all the DB processing, build the JSON and save it in another table (used for caching results). The main concern with this approach is the number of users hitting your DB at once. Although these will be simple reads, and the query result will likely be cached natively by the DB engine, you must still consider the number of DB connections used at a time.
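Here is a minimal sketch of the memcache suggestion, assuming the Memcached extension and a local memcached instance; the key name and the fetchContentFromDb() helper are made up:

    <?php
    // Cache-aside sketch for the /getContent payload (Memcached
    // extension assumed; key name and DB helper are placeholders).
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);

    $payload = $mc->get('content_payload');
    if ($payload === false) {
        // Cache miss: do the expensive DB work once, then cache the JSON.
        $payload = json_encode(fetchContentFromDb());
        $mc->set('content_payload', $payload, 0); // 0 = no expiry; delete the key on admin updates
    }
    echo $payload;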
My application is a full-AJAX web page using the CodeIgniter framework and the memcached session handler.
Sometimes it sends a lot of asynchronous calls, and if the session has to regenerate its ID (to avoid the session fixation security issue), the session cookie is not renewed fast enough and some AJAX calls fail because the session ID has expired.
Here is a schematic picture I made to show the problem clearly:
I came across similar threads (for example this one), but the answers don't really solve my problem: I can't disable the security measure, as there are only AJAX calls in my application.
Nevertheless, I have an idea and I would like an opinion before hacking into the CodeIgniter session handler classes:
The idea is to manage 2 simultaneous session IDs for a while, for example 30 seconds, which would be a maximum request execution time. After session regeneration, the server would still accept the previous session ID for that window, then switch the session to the new one.
Using the same picture, that would give something like this:
First of all, your proposed solution is quite reasonable. In fact, the people at OWASP advise just that:
The web application can implement an additional renewal timeout after which the session ID is automatically renewed. (...) The previous session ID value would still be valid for some time, accommodating a safety interval, before the client is aware of the new ID and starts using it. At that time, when the client switches to the new ID inside the current session, the application invalidates the previous ID.
Unfortunately this cannot be implemented with PHP's standard session management (or I don't know how to do it). Nevertheless, implementing this behaviour in a custom session driver¹ should not pose any serious problem.
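As a rough illustration, here is a minimal, hypothetical sketch of the dual-ID idea on top of PHP's SessionHandlerInterface (PHP 8 syntax); the file layout, the 30-second grace window and the alias mechanism are all assumptions, not a drop-in implementation:

    <?php
    // Hypothetical dual-ID session handler: when the application
    // regenerates the session ID, it writes an alias file pointing the
    // old ID at the new one; reads with the old ID are honoured for a
    // 30-second grace window. File layout and window are assumptions.
    class GraceFileHandler implements SessionHandlerInterface
    {
        public function __construct(private string $dir = '/tmp/sess') {}

        private function path(string $id): string { return "$this->dir/sess_$id"; }

        public function open(string $savePath, string $name): bool
        {
            if (!is_dir($this->dir)) { mkdir($this->dir, 0700, true); }
            return true;
        }

        public function close(): bool { return true; }

        public function read(string $id): string|false
        {
            $alias = "$this->dir/alias_$id";
            // Old ID still within the grace window? Follow it to the new ID.
            if (is_file($alias) && time() - filemtime($alias) < 30) {
                $id = trim((string) file_get_contents($alias));
            }
            return is_file($this->path($id)) ? (string) file_get_contents($this->path($id)) : '';
        }

        public function write(string $id, string $data): bool
        {
            return file_put_contents($this->path($id), $data) !== false;
        }

        public function destroy(string $id): bool
        {
            @unlink($this->path($id));
            return true;
        }

        public function gc(int $max_lifetime): int|false { return 0; }
    }

    session_set_save_handler(new GraceFileHandler(), true);
    // On regeneration the application would record the alias, e.g.:
    //   file_put_contents("/tmp/sess/alias_$oldId", $newId);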
I am now going to make a bold statement: the whole idea of regenerating the session ID periodically is broken. Now don't get me wrong: regenerating the session ID on login (or, more accurately, as OWASP puts it, on "privilege level change") is indeed a very good defense against session fixation.
But regenerating session IDs regularly poses more problems than it solves: during the interval when the two sessions co-exist, they must be synchronised, or else one runs the risk of losing information from the expiring session.
There are better (and easier) defenses against simple session theft: use SSL (HTTPS). Periodic session renewal should be regarded as the poor man's workaround to this attack vector.
¹ For the standard PHP way of registering a custom session driver, see session_set_save_handler() and SessionHandlerInterface.
Your problem seems to be less about the actual speed of the requests (though it is a contributing factor) and more about concurrency.
If I understand correctly, your JavaScript application makes many (async) AJAX calls fast (presumably in bursts), and sometimes some of them fail due to session invalidation, which you attribute to request speed.
Well, I think the problem is that you actually have several concurrent requests to the server: while the first one has its session renewed, the others essentially cannot see it, because their requests were already made and are waiting to be processed by the server.
This problem will of course manifest itself only when doing several requests for the same user simultaneously.
Now the real question here is: what in your application's business logic demands this?
It looks to me like you are trying to find a technical solution to a 'business' problem. What I mean is that either you've misinterpreted your requirements, or the requirements are just not that well thought out/specified.
I would advise you to try some of the following:
ask yourself if these multiple simultaneous requests can be combined into one
look deeply into the requirements and try to find the real reason why you do what you do; maybe there is no real business reason for this
every time before you fire the series of requests, fire a 'refresh' AJAX request to get the new session, and only on success proceed with all the other requests
I hope some of what I've written helps guide you to a solution.
Good luck
I'm using Laravel 4 to build my one-page app, and I need to implement a session timeout so the user is redirected as soon as it is detected. I've been trying to check the $_SESSION/Session::exists() array through some polling requests, but every time I hit a route the session is refreshed.
How can I implement polling for session info in Laravel effectively? Do I need to do something more complicated, like keeping an open connection (WebSockets/long polling)?
I feel like this should be an out-of-the-box feature, but strangely no one seems to implement it. Is it because most implementations are page-to-page instead of one-page + AJAX?
That's a funny problem, and you should use a middleware for that. If you're on Laravel 4.1 or above, Laravel uses StackPHP.
Check this link from fideloper, it might be useful.
Just set/update a session variable (defined by you) in the middleware, and create a route that doesn't use the middleware in your API to query that variable. A sketch of this follows below.
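Since Laravel 4 exposes this most easily through route filters, here is a hypothetical sketch of the idea; the filter name, route names and the 15-minute window are all made up:

    // Normal routes refresh a 'last_activity' marker; the polling route
    // sits outside the filter so reading it does not refresh the session.
    Route::filter('touch.activity', function () {
        Session::put('last_activity', time());
    });

    Route::group(array('before' => 'touch.activity'), function () {
        // ... your normal application routes ...
    });

    // Polling route: reads the marker without refreshing it.
    Route::get('/session-check', function () {
        $last = Session::get('last_activity', 0);
        return Response::json(array('timedOut' => time() - $last > 15 * 60));
    });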
As far as I know it is not there in Laravel out of the box, but it's actually easy to implement. Just as an example: you could store the time the user logged in in a session variable with Session::put('logintime', time()); and then check whether there has been a timeout.
Example (with a 15 min timeout):
function isTimeout() {
    return !Session::has('logintime') || Session::get('logintime') + (15 * 60) <= time();
}
Then you can use it in a response to an AJAX request like you need to.
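For instance, the one-page app could poll a route like this (the route name and response shape are made up):

    // Hypothetical heartbeat route; returns 401 once the session has
    // timed out so the front end knows to redirect to the login page.
    Route::get('/heartbeat', function () {
        if (isTimeout()) {
            return Response::json(array('error' => 'session timeout'), 401);
        }
        return Response::json(array('ok' => true));
    });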
It might be a long shot due to knowledge boundaries for some, but I do the following for real-time data displays in my applications, and it's worth the effort of getting started with Node.js (it's easier than people think, as full-stack PHP developers are already familiar with JS; I highly recommend looking into the MEAN stack).
I write the core functionality in a PHP framework. For anything that I need to display to or interact with the user in real time, instead of polling or using PHP with WebSockets, I introduce an extra Node.js/nginx server and serve the data using socket.io. This is good because it keeps connections to your DB to a minimum (hence avoiding any max-connection problems in MySQL) and is super scalable: instead of polling, it uses the observer pattern, keeping all client connections in an array and pushing new data whenever the observer sees changes in your data persistence layer, rather than keeping your server busy with gazillions of naggy clients polling your DB all the time.
If you haven't done it already, I also recommend dropping Apache for your PHP application servers and looking into nginx with PHP-FPM.
I am creating an HTML5/WebGL-based game and am getting a little stuck thinking about how to save game data to the server.
I need to save the data without a page load, so the obvious choice is to use an AJAX call to my server's RESTful API.
Obviously this presents a few issues, mainly spoofed requests. Using AJAX calls means the request is made client-side, allowing "bad" users to send their own requests to the server, altering the data to benefit themselves.
I first thought to secure the server using sessions: on the initial page load, store a session allowing access to the API. Though I am sure sessions can be spoofed too.
How could I best achieve saving game data to a server safely?
You could expand on this simple example:
<?php
$secret = "test123";            // shared secret known to client and server
$time   = time();               // current Unix timestamp
$code   = md5($secret . $time); // hash to send along with the timestamp
The secret and the time are combined and hashed together. The resulting code and the time are then sent to the server:
http://www.domain.com?TIME=1362668370&CODE=cdade9f3df6a8cb9b17c736b96c64133
The server can then hash the secret and received time value and compare that with the received CODE.
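The server-side check might look something like this sketch; the 300-second freshness window is an assumption, and hash_equals() is used to avoid timing attacks:

    <?php
    // Recompute the hash from the shared secret and the submitted TIME,
    // and reject stale or mismatched requests. The 300s window is an
    // assumption; tighten it to taste.
    $secret = "test123";
    $time   = (int) $_GET['TIME'];
    $code   = (string) $_GET['CODE'];

    if (abs(time() - $time) > 300 || !hash_equals(md5($secret . $time), $code)) {
        http_response_code(403);
        exit('Invalid or expired request');
    }
    // ...request is plausible; proceed to save the game data...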
I am trying to find a solution to prevent race conditions in my application logic (specifically when renewing an OAuth access token), and my back-end database happens to be MongoDB.
Coming from a MySQL background, I'm used to using GET_LOCK and its related functions to handle blocking in PHP. Does Mongo have any analog to MySQL's GET_LOCK function, or will I have to use PHP's file locking or something similar?
Is flock() a good (or proper) alternative for this situation, or is that meant only for use when reading and writing to files?
Edit:
The race condition I am trying to prevent is the following:
1. Instance A notices the OAuth access token nearing expiration
2. Instance B notices the OAuth access token nearing expiration
3. Instance A requests a refreshed OAuth access token from the remote server and obtains one
4. Instance B requests a refreshed OAuth access token from the same server and is rejected (the server potentially invalidates the access token from step 3 as a security precaution)
5. Instance A saves its result back to the database
6. Instance B saves its result back to the database
If you want to simulate a named mutex or lock using MongoDB, I would suggest using findAndModify, creating a special collection for it with a single document in it; you can even call it db.my_lock.
db.my_lock.save({"IMREFRESHINGAUTHTOKEN":false});
Now, between steps 2 and 3 add a findAndModify to grab the "lock":
db.my_lock.findAndModify({
    query:  {"IMREFRESHINGAUTHTOKEN": false},
    update: {$set: {"IMREFRESHINGAUTHTOKEN": true}, ...}
});
If you get to the "lock" first, you will get this object back (and you will have set the flag to true). I recommend also setting a second field with a timestamp, connection number, process ID or some other identifier, which will allow cleaning up after a crashed process so it won't hold the lock forever.
If you "lose" the race you will get back nothing that matches "IMREFRESHINGAUTHTOKEN":false and you'll know you lost the race and give up (or check the timestamp of the lock and maybe try to see if it's old and stale).
This describes a stand-alone single lock on the whole "collection" - of course, you can instead implement this as an extra field on each stored OAuth token and have as many tokens being refreshed at a time as there are threads noticing them expiring.
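From PHP, the same acquire/release dance might look roughly like this, using the mongodb/mongodb library; the database name, the extra lock fields and the refreshOAuthToken() helper are placeholders:

    <?php
    // Sketch: grab the lock document atomically; findOneAndUpdate
    // returns the pre-update document, or null if nothing matched
    // (i.e. we lost the race).
    require 'vendor/autoload.php';

    $lockCol = (new MongoDB\Client)->mydb->my_lock;

    $lock = $lockCol->findOneAndUpdate(
        ['IMREFRESHINGAUTHTOKEN' => false],
        ['$set' => ['IMREFRESHINGAUTHTOKEN' => true, 'owner' => getmypid(), 'ts' => time()]]
    );

    if ($lock !== null) {
        try {
            refreshOAuthToken(); // hypothetical: renew and save the token while holding the lock
        } finally {
            // Release the lock so other instances can acquire it later.
            $lockCol->updateOne(
                ['IMREFRESHINGAUTHTOKEN' => true],
                ['$set' => ['IMREFRESHINGAUTHTOKEN' => false]]
            );
        }
    } else {
        // Lost the race: another instance is already refreshing the token.
    }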
Hope this helps.