In the case of an authentication token, the client sends its credentials, receives a token, and uses it in all subsequent calls. The server needs to save the token in order to validate those requests.
With PHP sessions, for example, the server returns a session ID which the client sends with every request. The server needs to save the session.
In both cases the server needs to save state, so why is the former considered stateless?
Semantics. A session is generally implemented by assigning each user a unique ID and storing that ID in a client-side cookie. The auth token would be EXACTLY the same thing - a unique per-user ID of some sort. Cookies are sent back on every request automatically by the browser, and the auth token CAN be sent back on every request, but generally should be sent only on the requests that actually require authentication.
The difference has to do with how those tokens are treated on the server. The session ID is used to load up a corresponding session from storage (e.g. a file, a db record, whatever), and that session data will be persisted between requests.
An auth token doesn't have any associated session file. It's more just a "I'm allowed to be here, and here's the proof".
There's no reason a session ID can't be used to implement the authentication system. Even a simple $_SESSION['logged_in'] = true would turn the session into an authentication system.
If the expectation is that the client will always send an authentication token for every request, the server actually doesn't need to save state. It has everything it needs in the message to determine how to evaluate the request.
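As a minimal illustration of that, here is a rough PHP sketch of a self-contained token carrying a user ID, an expiry and an HMAC signature, so the server can validate it without looking anything up. The function names and the user|expiry|signature layout are just assumptions for the example, not a standard.

// Hypothetical sketch: issue and verify a self-contained token with no server-side storage
$serverSecret = 'replace-with-a-long-random-secret';

function issueStatelessToken($userId, $serverSecret) {
    $payload = $userId . '|' . (time() + 1800);              // user ID plus a 30-minute expiry
    return $payload . '|' . hash_hmac('sha256', $payload, $serverSecret);
}

function verifyStatelessToken($token, $serverSecret) {
    $parts = explode('|', $token);
    if (count($parts) !== 3) { return false; }
    list($userId, $expires, $sig) = $parts;
    $expected = hash_hmac('sha256', $userId . '|' . $expires, $serverSecret);
    // hash_equals() avoids timing leaks; also reject anything past its expiry
    return hash_equals($expected, $sig) && time() < (int)$expires;
}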
Certain server architectures (I'm thinking of Java servlets, specifically) are required to return a session cookie, but they aren't required to use it when it's passed back to them on the next request. In my stateless servlet application, a different JSESSIONID cookie is returned with every response, so in this case it's just background noise.
Most sessions are stateful for two reasons:
the identifier passed by the client can't be parsed into a meaningful set of credentials without prior server interaction (it is usually just a random value assigned by the server)
the identifier has an implicit lifespan that isn't discoverable within the identifier
First of all, what you described in this question is session management, not token management.
Session IDs are generated by the business system itself. The workflow is the same as in your question.
Tokens, on the other hand, are usually generated and managed by an independent system, not the business system. When the client sends subsequent calls to the business server after it has obtained a token, the business server has to validate the token against the token system. So when we talk about the business system, we say it is stateless.
Additionally, in my view, tokens were not created primarily to handle authentication; they are designed to keep important information secure.
References:
PCI DSS tokenization
Red Hat token management system
I'm starting to develop a simple PHP RESTful API. After reading some tutorials, one of the key characteristics of REST is that
"... statelessness is key. Essentially, what this means is that the
necessary state to handle the request is contained within the request
itself, whether as part of the URI, query-string parameters, body, or
headers"
Therefore, does it mean that my PHP server won't need to use $_SESSION? What kind of approach do you recommend? Doesn't using a token (valid for a short period of time) in the URL seem a bit insecure?
For example www.myapi.com/1233asdd123/get_user/12.
Many thanks.
If you're a web developer of any sort, you'll have heard this sentence probably 1,000 times: "HTTP is a stateless protocol". This means that every session works with a token exchanged between the server and the client.
When you use PHP's built-in sessions, the server is actually doing exactly that, even if you don't realize it: it generates a session_id and passes it to the client. The client normally passes the session_id token back in a cookie; PHP also allows passing the session token in the URL, as a GET parameter, but I personally recommend disabling that feature (it is disabled by default since PHP 5.3).
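If you do keep PHP's native sessions around elsewhere in the application, the URL fallback can be switched off explicitly; a minimal sketch using the standard session INI directives:

// Force cookie-only sessions and disable the URL-based session ID fallback
ini_set('session.use_only_cookies', '1');
ini_set('session.use_trans_sid', '0');
// Keep the cookie away from JavaScript and send it only over HTTPS
ini_set('session.cookie_httponly', '1');
ini_set('session.cookie_secure', '1');
session_start();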
In your case, yes, you won't be using PHP's sessions.
You create a table in your database storing all session tokens and the associated session data.
Tokens should have a short lifespan (for example, 30 minutes) and should be refreshed frequently. Refreshes are important not only to extend the life of the session (every refresh gives you an extra 30 minutes or so), but also to help defend against theft of the session token. In some REST servers we created, the session token lives for 30 minutes and users are given a new token on the first request made more than 10 minutes after the session started. When clients are sent a new token, the old one is invalidated immediately.
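As a rough sketch of that token table and refresh policy (the api_tokens table and its columns are just an assumed schema for the example):

// Assumed table: api_tokens (token CHAR(64), user_id INT, expires_at INT)
function createSessionToken(PDO $db, $userId) {
    $token   = bin2hex(random_bytes(32));   // 64 hex characters, unguessable (PHP 7+)
    $expires = time() + 1800;               // 30-minute lifespan
    $stmt = $db->prepare('INSERT INTO api_tokens (token, user_id, expires_at) VALUES (?, ?, ?)');
    $stmt->execute([$token, $userId, $expires]);
    return $token;
}

function refreshSessionToken(PDO $db, $oldToken, $userId) {
    // Invalidate the old token immediately and hand back a fresh one
    $db->prepare('DELETE FROM api_tokens WHERE token = ?')->execute([$oldToken]);
    return createSessionToken($db, $userId);
}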
You could pass the token to the server in any way, but adding it as a GET parameter is not an ideal solution, for two reasons: 1. GET parameters are often written to server access logs, and 2. users often copy/paste URLs and share them, which can expose their session token.
For API servers, the best approach is to include the session token in one of the headers of the HTTP request. For example, you could set your Authorization header: Authorization: SessionToken 123123123 where 123123123 is your token and SessionToken is a string to tell the server to use your authorization method (you're free to choose your own name, as long as it's not one of the default methods like Basic; be consistent, though!).
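On the client side that could look like the following with cURL; the SessionToken scheme name and the token value are just the placeholders from above, and how the Authorization header reaches PHP depends on your server configuration:

// Client: attach the session token to the Authorization header
$ch = curl_init('https://www.myapi.com/get_user/12');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Authorization: SessionToken 123123123']);
$response = curl_exec($ch);
curl_close($ch);

// Server: read the header back (Apache may need a rewrite rule to expose HTTP_AUTHORIZATION)
$auth = isset($_SERVER['HTTP_AUTHORIZATION']) ? $_SERVER['HTTP_AUTHORIZATION'] : '';
if (preg_match('/^SessionToken\s+(\S+)$/', $auth, $m)) {
    $sessionToken = $m[1];   // look this up in your token table
}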
Security on API servers is normally obtained by using SSL. Basically, if you have an API server, you must protect it with HTTPS (SSL).
There are methods to achieve security without using SSL, but they require signing each request and are really complicated to implement and use, and the overhead they add is probably bigger than that of SSL.
A very common practice is to use a key value inside the URL:
www.myapi.com?key=ABCD1234
It is neither less nor more secure than POSTing the string. SSL encryption ensures that the whole payload cannot be intercepted by a man-in-the-middle.
You also mentioned temporary access (a session token). It is common in systems to log in using credentials (like the solution above) and obtain a temporary session token in the response. The session token is then used instead of the login details to query the service. It reduces the exposure of credentials in case of interception, and if someone manages to steal the session token, it will only work for a few minutes. Good to have, although not a necessity.
I've been making a standardized JSON API for my company's website; I'd like to have a way of authenticating users to use the API while keeping the API as stateless as possible. My idea was the following:
1) The user logs in, the web service authenticates the user and generates a random string that gets passed to the client, along with an expiration date. This is stored in a cookie, as well as in an entry in the database.
2) For every API request, the web service checks the cookie string against the database entry. If it authenticates, the web service generates a new string, replaces the old string with the new one in both the database and the cookie, then sends back the requested information.
3) If the client sends a request and the cookie does not match the database entry, the string in the database is set to NULL and the client has to log in again and start the process from step 1.
4) If a request is sent after the expiration date, the string in the database is set to NULL and the user must log in again.
I want to cause as few disruptions as possible with my company's current setup as we slowly transition to newer technology. Is this method something that is commonly done? What kind of security issues would I be running into if I do it this way? Should I be using a different method?
Yes, this is a common scenario. What you are describing is a session cookie, and it is widely used.
You might want to read into Session Fixation techniques and ways to mitigate those.
But using a session is not really stateless. If you can provide keys (shared secrets) to your API consumers, you could also consider message signing to authenticate requests. Make sure you are using an (H)MAC. Also make sure that you guard yourself against replay attacks.
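A minimal sketch of that kind of HMAC request signing, assuming a per-consumer shared secret and made-up header names (X-Timestamp / X-Signature):

// Client side: sign the method, path, body and a timestamp with the shared secret
$secret    = 'per-consumer-shared-secret';
$body      = json_encode(['item' => 42, 'qty' => 1]);
$timestamp = time();
$payload   = "POST\n/orders\n" . $body . "\n" . $timestamp;
$signature = hash_hmac('sha256', $payload, $secret);
// Send $timestamp and $signature along, e.g. in X-Timestamp / X-Signature headers

// Server side: rebuild the payload from the incoming request and compare signatures,
// rejecting stale timestamps as a basic replay guard (pair with a nonce for more)
$expected = hash_hmac('sha256', $payload, $secret);
if (!hash_equals($expected, $signature) || abs(time() - $timestamp) > 300) {
    http_response_code(401);
    exit('invalid signature');
}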
I am writing a REST API and would like to implement an authentication system similar to AWS.
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
Basically, on AWS the client signs some request data with a secret key that is shared between the client and the server, and sends the result in the Authorization header. (Authorization: AWS user: )
The server recomputes the signature with the shared key and compares it to what was sent. If they match, this means the client is legit (or at least is in possession of a legitimate key).
The next step can be to execute the request or, preferably, send the client a unique, time-limited token (e.g. valid for 30 minutes) that will be used on the actual requests (added to a Token header, for example). This token cannot be decrypted by the client (it uses a server-only key).
On subsequent requests, the server checks the token (not the Authorization header anymore) and authorizes the request to be executed.
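To make that token step concrete, something like the following is what I have in mind (just a sketch; the cipher choice, helper names and token layout are placeholders):

// Server issues an opaque, time-limited token that the client cannot read or forge
$serverKey = 'server-only-secret-key-never-shared';

function issueOpaqueToken($user, $serverKey) {
    $plain = $user . '|' . (time() + 1800);   // identity plus a 30-minute expiry
    $iv    = random_bytes(16);
    $blob  = openssl_encrypt($plain, 'aes-256-cbc', $serverKey, OPENSSL_RAW_DATA, $iv);
    return base64_encode($iv . $blob);        // this goes into the Token header
}

function checkOpaqueToken($token, $serverKey) {
    $raw   = base64_decode($token);
    $iv    = substr($raw, 0, 16);
    $plain = openssl_decrypt(substr($raw, 16), 'aes-256-cbc', $serverKey, OPENSSL_RAW_DATA, $iv);
    if ($plain === false) { return false; }
    $parts = explode('|', $plain);
    if (count($parts) !== 2) { return false; }
    // In real use the ciphertext should also be authenticated (HMAC or an AEAD mode)
    return time() < (int)$parts[1] ? $parts[0] : false;
}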
However, is it possible for a man-in-the-middle, even on SSL-encrypted connections, to replay these token-authenticated requests? Even if the MITM does not know what's inside the message, he or she could cause damage, for example by ordering a product many times. If the server receives a replayed message and the token is still within its validity period, the server will assume this is a valid request and execute it.
AWS tries to solve this with a timestamp requirement:
A valid time stamp (using either the HTTP Date header or an x-amz-date
alternative) is mandatory for authenticated requests. Furthermore, the
client timestamp included with an authenticated request must be within
15 minutes of the Amazon S3 system time when the request is received.
If not, the request will fail with the RequestTimeTooSkewed error
code. The intention of these restrictions is to limit the possibility
that intercepted requests could be replayed by an adversary. For
stronger protection against eavesdropping, use the HTTPS transport for
authenticated requests.
However, 15 minutes is still enough for requests to be replayed, isn't it? What can be done to prevent replay attacks in this scenario? Or am I overthinking and a certain degree of uncertainty is acceptable if you provide enough mechanisms?
I am thinking about requiring the client to add a unique string on each request body. This string will be transport-encrypted and unavailable to MITM for modification. On first receipt, the server will record this string and reject any new requests that contain the same string in the same context (example: two POSTS are rejected, but a POST and a DELETE are OK).
EDIT
Thanks for the info. It seems the cnonce is what I need. In the Wikipedia diagram the cnonce appears to be sent only once, after which a token is generated, leaving it open to reuse; I guess it is necessary to send a new cnonce on every call that uses the same token. The cnonce could either be included in the body (transport-protected) or encrypted with the shared key and included in a header (most likely prepended to the temporary token). Body protection (over SSL, obviously) seems best, since it avoids some extra processing on both sides; with the header approach the server would have to decrypt the cnonce instead of reading it directly from the body.
A cryptographic nonce, the unique string you mention, is indeed a good security practice. It will prevent requests from being reused. It should be unique for each request, regardless of the request's nature.
Including a timestamp and discarding all requests made past a certain expiration date is also good practice. It keeps the used-nonce registry short and helps prevent collisions.
The nonce registry should be associated with a user, also to prevent collisions, and consumers should use cryptographically secure pseudorandom number generators.
If a predictable seed for the pseudorandom number generator is used, such as microtime, two nasty things can happen.
1. The nonces may become predictable. If the communication is encrypted this is less of an issue, though, since an attacker will not be able to modify the request and therefore will not be able to tamper with the nonce.
2. Legitimate requests by the same user might be discarded. For instance, if two servers sharing the authentication key try to do two different POST actions concurrently, their nonces may collide.
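Putting that together, a minimal sketch of nonce generation and a per-user registry check (the used_nonces table is an assumed schema for the example):

// Consumer side: a cryptographically secure nonce per request (never microtime())
$nonce = bin2hex(random_bytes(16));

// Server side: accept each nonce only once per user, and only inside the timestamp window
function nonceIsFresh(PDO $db, $userId, $nonce, $timestamp) {
    if (abs(time() - $timestamp) > 900) { return false; }   // outside the 15-minute window
    $stmt = $db->prepare('SELECT 1 FROM used_nonces WHERE user_id = ? AND nonce = ?');
    $stmt->execute([$userId, $nonce]);
    if ($stmt->fetch()) { return false; }                   // replay: nonce already seen
    $db->prepare('INSERT INTO used_nonces (user_id, nonce, seen_at) VALUES (?, ?, ?)')
       ->execute([$userId, $nonce, time()]);                // rows older than the window can be purged
    return true;
}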
I've been reading up on API communication securities and trying to figure out the best way to build a secure API. I know that OAuth and such exist, but I'm also trying to educate myself in the process and not rely on libraries.
Basically I have a web service, and in that web service users can register for API access. They will be provided with a Profile ID and a secret key, which they have to use to build API requests from another web system.
The API request is built similarly to the way banks do it: all input data sent to the API is sorted, a hash is calculated over it, and the hash is sent to the server along with the request, like this:
// Profile data
$apiProfile='api123';
$apiSecret='this-is-a-good-day-to-be-a-secret-key';
// Input
$input=array();
$input['name']='Thomas Moore';
$input['profession']='Baker';
// The profile ID is sent along too, so the server knows which secret key to look up
$input['profile']=$apiProfile;
// To ensure that the order of variables checked and received is the same on both ends:
ksort($input);
// Using serialize() for simplifying things
// http_build_query() is another option, or just placing values in order
$input['hash']=sha1(serialize($input).$apiSecret);
// Making a request to URL:
// Using file_get_contents() as an example, would use cURL otherwise
$result=file_get_contents('http://www.example.com/api.php?'.http_build_query($input));
// SERVER CALCULATES COMPARISON HASH BASED ON KNOWN SECRET KEY AND INPUT DATA
This is really good and works. But! My problem is the potential replay attack. If someone snatches this request URL, they can send it to the server again, even though they cannot change the data itself.
Now I've read that you should also either check the time or add a one-time-use token to the request, but I am unsure how exactly I should do that. Is sending a timestamp with the request really secure enough? (The receiving server would make sure that the request originated within a few seconds of the time it was received, assuming the clocks are roughly in sync.)
I could also add IP validation to the mix, but IPs can change, can be spoofed to some extent, and are more of a hassle for the user.
I would love this one-time-token type of system, but I am unsure how to do it without exposing the token generation itself to the exact same replay-attack problem. (The last thing I need is to hand out secure tokens to men in the middle.)
Opinions and articles would be really welcome; I've been unable to find material that answers my specific concerns. I want to be able to say that my API is secure, without it being just marketing speak.
Thank you!
You need to allow token exchange only via a secure channel (HTTPS), and you should have a unique hash per message. Include things like a timestamp and the IP of the client. If you don't use HTTPS, you are vulnerable to a Firesheep-style attack.
Other than that, you are doing the token generation and exchange correctly.
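Applied to the example in the question, that could look roughly like this (same sha1-over-serialize scheme, just with extra fields folded in; hash_hmac('sha256', ...) would be a stronger choice if you can change both ends):

// Make each request's hash unique: fold a timestamp and a nonce into the signed data
$input['timestamp']=time();
$input['nonce']=bin2hex(random_bytes(8));
ksort($input);
$input['hash']=sha1(serialize($input).$apiSecret);
// Server side: recompute the hash, accept only a small timestamp window (e.g. +/- 60 seconds),
// reject any nonce already seen for this profile, and optionally restrict the profile
// to a whitelist of client IPs ($_SERVER['REMOTE_ADDR'])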
Sending the time (and including it in the hash) is indeed an option.
The other option would be a two-phase algorithm where you first request a session token or session key, then use it for the rest of the session; its TTL is stored on the server (and can be a time limit or a number of allowed requests).
As for the session keys idea look at schemes like http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange
Example of a one-time-token algorithm:
1) The client composes a request for the one-time token, signs this request with the secret key and sends it to the server.
2) The server generates the token, signs it with the same secret key and sends it to the client (together with the signature).
3) The client verifies the token using the secret key.
4) The client composes the actual request, including the token, signs the whole request body with the secret key, and sends it to the server.
5) The server checks the integrity of the whole body and the validity of the token, then sends the response (which again can be signed with the secret key for integrity and authorship verification).
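A compressed sketch of those five steps in one place (sign() and the field names are invented for the illustration; client and server would of course run on separate machines):

// Shared secret known to both sides; sign() is just HMAC-SHA256 in this sketch
$secret = 'shared-client-server-secret';
function sign($data, $secret) { return hash_hmac('sha256', $data, $secret); }

// 1-2) Client requests a one-time token; server generates and signs it
$token    = bin2hex(random_bytes(16));
$tokenSig = sign($token, $secret);

// 3) Client verifies the token signature before trusting it
$tokenOk = hash_equals(sign($token, $secret), $tokenSig);

// 4) Client builds the real request, embeds the token, and signs the whole body
$body    = json_encode(['token' => $token, 'action' => 'create_order', 'qty' => 1]);
$bodySig = sign($body, $secret);

// 5) Server re-signs the received body, checks integrity and that the token is still
//    valid and unused, marks it consumed, then responds (optionally signed as well)
$requestOk = hash_equals(sign($body, $secret), $bodySig);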
I have a PHP application that relies extensively on sessions. We are now considering building an API for our users. Our initial thought is that users will need to authenticate against the API with their email address, password and an API key (unique for each user).
However, as the current application (including the models) relies on user sessions extensively, I am not sure on the best approach.
Assuming that an API request is correctly authenticated, would it be acceptable to:
Start the session for the API call once user is authenticated
Run the models and return json/xml to the user
Kill the session
This means that the session gets instantiated for each API call, and then immediately flushed. Is this OK? Or should we be considering other alternatives?
In my experience of creating APIs, I have found it best for sessions to last only for one request and to recreate the session information in each execution cycle.
This obviously introduces an overhead if your session instantiation is significant; however, if you're just checking credentials against a database it should be OK. Plus, you should be able to cache any of the heavy lifting in something like APC or memcached, keyed on a user identifier rather than a session, reducing the work required to recreate the session state while ensuring authentication is verified on each request.
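For instance, a minimal sketch of that per-request cycle (authenticateRequest(), loadUserContext() and handleApiCall() are placeholders for your own credential check, model setup and controller logic, and the apcu_* calls assume the APCu extension is available):

// One execution cycle of an API request: build the state, serve the call, throw the state away
$userId = authenticateRequest($_SERVER, $_GET);        // placeholder: email/password/API-key check
if ($userId === false) {
    http_response_code(401);
    exit(json_encode(['error' => 'unauthorized']));
}

// Cache the expensive parts of session construction per user, not per PHP session
$context = apcu_fetch('user_context_' . $userId);
if ($context === false) {
    $context = loadUserContext($userId);               // placeholder: heavy lifting (permissions, settings)
    apcu_store('user_context_' . $userId, $context, 300);
}

echo json_encode(handleApiCall($context, $_GET));      // run the models, return JSON
// Nothing is persisted between calls: no session_start(), nothing to kill afterwards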