How do I protect against replay attacks when using request signatures in secure API communication? - php

I've been reading up on API communication security and trying to figure out the best way to build a secure API. I know that OAuth and the like exist, but I'm also trying to educate myself in the process rather than rely on libraries.
Basically, I have a web service, and users can register for API access. They are given a Profile ID and a secret key, which they have to use to build API requests from another web system.
The API request is built similarly to the way banks do it: all input data sent to the API is sorted, a hash is calculated over it, and the hash is sent to the server along with the request, like this:
// Profile data
$apiProfile='api123';
$apiSecret='this-is-a-good-day-to-be-a-secret-key';
// Input
$input=array();
$input['name']='Thomas Moore';
$input['profession']='Baker';
// To ensure that the order of variables checked and received is the same on both ends:
ksort($input);
// Using serialize() for simplifying things
// http_build_query() is another option, or just placing values in order
$input['hash']=sha1(serialize($input).$apiSecret);
// Making a request to URL:
// Using file_get_contents() as an example, would use cURL otherwise
$result=file_get_contents('http://www.example.com/api.php?'.http_build_query($input));
// SERVER CALCULATES COMPARISON HASH BASED ON KNOWN SECRET KEY AND INPUT DATA
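For completeness, here is a simplified sketch of what that comparison in api.php might look like on the server side; lookupSecretForProfile() stands in for the real database lookup, and the Profile ID would be sent along as an extra parameter:
$input=$_GET;
$receivedHash=$input['hash'];
$apiSecret=lookupSecretForProfile($input['profile']); // placeholder for the real lookup
unset($input['hash'],$input['profile']); // keep only the fields the client signed
ksort($input); // same ordering the client used
$expectedHash=sha1(serialize($input).$apiSecret);
if(!hash_equals($expectedHash,$receivedHash)){
    http_response_code(403);
    exit('Invalid signature');
}
// Signature is valid, process the request here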
This is really good and works. But! My problem is the potential replay attack. If someone snatches this request URL, they can send it to the server again, even though they cannot change the data itself.
Now I've read that you should also either check the time or add a one-time-use token to the request, but I am unsure exactly how I should do that. Is sending a timestamp with the request really secure enough? (The receiving server would make sure that the request originated within a few seconds of when it was made, assuming the clocks are roughly in sync.)
I could also add IP validation to the mix, but IP addresses can change, can be spoofed to some extent, and are more of a hassle for the user.
I would love a one-time-token type of system, but I am unsure how to do it without exposing the token generation to the exact same replay-attack problem. (The last thing I need is to hand out secure tokens to middlemen.)
Opinions and articles would be really welcome; I've been unable to find material that answers these specific concerns. I want to be able to say that my API is secure without it being just marketing speak.
Thank you!

You need to allow token exchange only via a secure channel (HTTPS), and you should have a unique hash per message. Include things like a timestamp and the IP of the client. If you don't use HTTPS, you are vulnerable to a Firesheep-style attack.
Other than that, you are doing the token generation and exchange correctly.
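For example, a rough sketch of such a per-message hash, building on the code from the question (the timestamp field and the five-minute window are illustrative choices, not requirements):
// Client side: add a timestamp before calculating the hash
// (the client IP could be mixed in the same way)
$input['timestamp']=time();
ksort($input);
$input['hash']=sha1(serialize($input).$apiSecret);
// Server side: reject stale requests before even comparing hashes
if(abs(time()-(int)$_GET['timestamp'])>300){
    http_response_code(403);
    exit('Request expired');
}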

Sending the time (and including it in the cache) is indeed an option.
The other option would be a two-phase algorithm where you first request a session token or session key, then use it for the rest of the session, with its TTL stored on the server (the TTL can be a time limit or a number of allowed requests).
As for the session-key idea, look at schemes like http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange
Example of a one-time-token algorithm (a rough client-side PHP sketch follows the steps below):
1) The client composes a request for the one-time token, signs this request with the secret key and sends it to the server.
2) The server generates the token, signs it with the same secret key and sends it to the client (together with the signature).
3) The client verifies the token using the secret key.
4) The client composes the actual request, including the token, signs the whole request body with the secret key, then sends it to the server.
5) The server checks the integrity of the whole body and the validity of the token, then sends the response (which again can be signed with the secret key for integrity and authorship verification).
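A rough client-side PHP sketch of those steps, reusing $apiProfile and $apiSecret from the question (token.php, the field names and the JSON response format are assumptions made for the example):
// Step 1: ask for a one-time token, signing the request with the shared secret
$req=array('profile'=>$apiProfile,'action'=>'get_token');
$req['signature']=hash_hmac('sha256',serialize($req),$apiSecret);
$response=json_decode(file_get_contents('http://www.example.com/token.php?'.http_build_query($req)),true);
// Step 3: verify that the token really came from the holder of the secret
if(!hash_equals(hash_hmac('sha256',$response['token'],$apiSecret),$response['signature'])){
    exit('Token response could not be verified');
}
// Step 4: include the token in the real request data before hashing, then sign as in the question
$input=array('name'=>'Thomas Moore','profession'=>'Baker','token'=>$response['token']);
ksort($input);
$input['hash']=sha1(serialize($input).$apiSecret);
// Step 5 happens on the server: it checks the signature, marks the token as used,
// and rejects any later request that presents the same token.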

Related

REST API authentication: how to prevent man-in-the-middle replays?

I am writing a REST API and would like to implement an authentication system similar to AWS.
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
Basically, on AWS the client encrypts some request data into the Authorization header using a secret key that is shared between client and server. (Authorization: AWS user: )
The server uses the shared key to decrypt the header and compare it against the request data. If they match, the client is legit (or at least in possession of a legitimate key).
The next step can be to execute the request or, preferably, to send the client a unique, time-limited token (e.g. 30 minutes) that will be used on the actual requests (added to a Token header, for example). This token cannot be decrypted by the client (it uses a server-only key).
On subsequent requests, the server checks the token (not Authorization anymore) and authorizes the request to be executed.
However, is it possible to have a man-in-the-middle, even on SSL-encrypted connections, that replays these token-authenticated requests? Even if the MITM does not know what's inside the message, he/she could cause damage for example by ordering a product many times. If the server receives a replayed message and the token is still within the valid timestamp, the server will assume this is a valid request and execute it.
AWS tries to solve this with a timestamp requirement:
A valid time stamp (using either the HTTP Date header or an x-amz-date alternative) is mandatory for authenticated requests. Furthermore, the client timestamp included with an authenticated request must be within 15 minutes of the Amazon S3 system time when the request is received. If not, the request will fail with the RequestTimeTooSkewed error code. The intention of these restrictions is to limit the possibility that intercepted requests could be replayed by an adversary. For stronger protection against eavesdropping, use the HTTPS transport for authenticated requests.
However, 15 minutes is still enough for requests to be replayed, isn't it? What can be done to prevent replay attacks in this scenario? Or am I overthinking and a certain degree of uncertainty is acceptable if you provide enough mechanisms?
I am thinking about requiring the client to add a unique string on each request body. This string will be transport-encrypted and unavailable to MITM for modification. On first receipt, the server will record this string and reject any new requests that contain the same string in the same context (example: two POSTS are rejected, but a POST and a DELETE are OK).
EDIT
Thanks for the info. It seems the cnonce is what I need. On the wikipedia diagram it seems the cnonce is only sent once, and then a token is generated, leaving it open to reuse. I guess it is necessary to send a new cnonce on every call with the same token. The cnonce should be included on the body (transport-protected) or shared-key-protected and included on a header. Body-protection seems the best (with obvious SSL) since it avoids some extra processing on both sides, but it could be shared-key-encrypted and included on a header (most likely prepended to the temp token). The server would be able to read it directly on the body or decrypt it from the header (extra processing).
A cryptographic nonce, the unique string you mention, is indeed a good security practice. It will prevent requests from being reused. It should be unique for each request, regardless of its nature.
Including a timestamp and discarding all requests older than a certain expiration window is also good practice. It keeps the registry of used nonces short and helps prevent collisions.
The nonce registry should be associated with a user, also to prevent collisions. And consumers should use cryptographically secure pseudorandom number generators.
If a predictable seed, such as microtime, is used for the pseudorandom number generator, two nasty things can happen.
1) The nonces may become predictable. Though if the communication is encrypted this is less of an issue, as an attacker will not be able to modify the request and thus not be able to tamper with the nonce.
2) Legitimate requests by the same user might be discarded. For instance, if two servers sharing the authentication key try to do two different "post" actions concurrently, the nonces may collide.
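For instance, a minimal server-side sketch of such a per-user nonce registry (the nonces table and the 15-minute expiry are illustrative; $db is an existing PDO connection):
// Drop nonces that are older than the expiration window
$db->prepare('DELETE FROM nonces WHERE created_at < ?')->execute(array(time()-900));
// Reject the request if this user has already used this nonce
$check=$db->prepare('SELECT 1 FROM nonces WHERE user_id=? AND nonce=?');
$check->execute(array($userId,$nonce));
if($check->fetch()){
    http_response_code(403);
    exit('Replay detected');
}
// Otherwise remember it for future checks
$db->prepare('INSERT INTO nonces (user_id, nonce, created_at) VALUES (?,?,?)')->execute(array($userId,$nonce,time()));
// On the client, generate nonces with a CSPRNG, e.g. bin2hex(random_bytes(16))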

HMAC Implementation for Web Service Authentication in PHP

I am trying to implement a web service and need some (very) simple authentication to restrict access to the service.
I found out about HMAC and I think I understand how to implement it. But I have a couple of questions in mind.
Let's say I have an HTML form on the consumer side that makes a GET/POST request to my server.
Is it enough to create a hash of just the public_key using the secret_key?
OR, do I need to create a hash of the entire POST variables/array?
I'm thinking it would be enough to send the hash of the public_key only but just wanted to make sure and ask you guys.
I am planning to do this:
Create a hash of the public_key
Put the hash in a hidden field or in the URL as a param together with the public_key (or client_id) and other POST/GET variables.
Receive on my server and verify the hash against the database by recreating the hash of the public_key using the secret_key.
If the hash matches, I accept the POST/GET requests.
Your thoughts?
Clarification: public_key is like the client unique id where I can use to identify what secret key to use to generate the hash on the server.
The public key is just used as an alternative way to recognize the user. It could be the user's email as well; but since you probably don't want to expose your user data to their programmer (or to potential sniffers), you create a unique identifier for each user instead. That's all it means. Then you need a private key to sign your hash.
Of course, to make it worthwhile you have to sign all of the unique request data, otherwise someone could alter your request body and you wouldn't be able to detect it (a MITM attack).
You should also take care to create a timestamp, include it in the HMAC itself, and pass it along with the request. This way you can make the signature expire, so you are not exposed to replay attacks (someone steals the request and, without modifying it, replays it against the server, performing the same action multiple times... think what a problem that would be if it's a request to pay for your service; your user would be very, very angry with you).
Also remember (almost nobody does) to include the Request-URI in the HMAC itself, and the HTTP method (aka verb) too if you're building a RESTful web service; otherwise malicious users will be able to send the request to other URIs or (with RESTful services) change the meaning of your request, so a valid GET can become a potential DELETE.
An example: a user wants to see all their data and makes a GET request; a man in the middle reads the request and changes GET to DELETE. You have no way to detect that something has been changed if it isn't covered by the HMAC you verify, so you receive a DELETE request and boom! You destroy all the user's data.
So always remember: everything that is essential to your request must be verifiable, and if you rely on an HMAC then you must include in the HMAC everything you need in order to trust the request.
Also, always start designing your system by denying all requests, and only perform the requested actions if you can validate the request. This way you always fall back to denied requests. It's better to get an email from a user telling you they cannot do something than to have your users' data propagated all over the net.
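As a sketch of what such an HMAC could cover (the field names and the newline-separated string-to-sign format are just one possible convention, not a standard you must follow):
// Client side
$method='POST';
$uri='/orders/42'; // example Request-URI
$timestamp=time();
$body=http_build_query($postFields); // $postFields = whatever data you are sending
$stringToSign=$method."\n".$uri."\n".$timestamp."\n".$body;
$signature=hash_hmac('sha256',$stringToSign,$secretKey);
// Send $signature, $timestamp and the user's public identifier with the request.
// Server side: rebuild the string to sign from $_SERVER['REQUEST_METHOD'],
// $_SERVER['REQUEST_URI'], the received timestamp and the raw body
// (file_get_contents('php://input')), recompute the HMAC with the user's secret,
// compare with hash_equals(), and also reject the request if the timestamp
// falls outside your allowed window.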
Use TLS. It fixes this and a host of problems you haven't even thought of yet.

Using an API Key System

I would like to implement an API key system to secure API calls to my app.
The way I think it will work is by having a private key/secret per account. Each request contains the time, the account id and a hash(time + secret).
The server can then do the same thing with the user's secret from the database and check that against the hash the client sent.
Is this a reasonable way to do it? It is open to a brute-force attack, but I'm thinking that as long as the secret is long (i.e. a UUID) it shouldn't be too much of a problem...
A Thought
Anyone could submit another request with the same time and hash and have it accepted; after all, it's valid, right?
The problem is that the nonce + hash can be replayed. A real authentication protocol requires at least two messages:
Server                      Client
  ------- challenge ------->
  <------- response --------
For example, the challenge could be a nonce supplied by the server, and the client's response would be the hash of the password with the nonce.
Unfortunately, this requires state, and the whole problem with RESTful protocols is that they do not want the hassle of keeping state. And yet they want to authenticate...
So you really have three options:
Option 1: Pretend the problem does not exist, and use the stateless "authentication" protocol. This is no different from using a cookie. The nonce + password-hash is no more secure than a cookie. Cookies can be stolen, etc, and replayed. The entire web is now plagued by these replay attacks.
Option 2: Try to bolt an authentication protocol onto a stateless communication method. Here, you would have the client send you a UTC time-stamp instead of a nonce. The use of the time-stamp provides limited defense against replay. Obviously your clock is not going to be synched with that of the client, so your server will allow any timestamp within some error margin, and that error margin will be the replay margin of the authentication protocol. Note that this violates REST, because the authentication message is not idempotent. Idempotent implies "can be successfully replayed by an attacker".
Option 3: Do not try to bolt an authentication protocol onto a stateless protocol. Use SSL. Use client certificates. Instead of having the client download a string, let them generate a certificate, or you can supply them with a key-pair. They authenticate via SSL and do not authenticate in your REST layer. SSL has lots of "overhead". It is not lightweight, precisely because it does address these replay issues.
So at the end of the day, it depends on how much you value access to your APIs.
For APIs that only retrieve data (other than private data), rather than create, modify, or delete data, option 1 in this answer may be adequate. See, for example, the Bing Maps REST API and the Google Maps Premier web services (where Google Maps also signs the URL with a digital signature and a special key known only to the API user, which, while providing protection against modification of the URL, apparently still doesn't provide replay-attack protection).
In fact, some APIs that retrieve data do not use an API key, but rather limit access in other ways (for example, the YouTube API allows retrieving publicly available data on videos and users' channels without requiring authentication, but limits the number of recent requests).
Options 2 and/or 3 are required for APIs that do more than just retrieve publicly available data, for instance if they modify user profiles, post content, or access private information: see, for example, the YouTube Data API authentication page, where OAuth is mentioned as one possible authentication scheme.
Especially for option 1, the API key here is used in order to track access by users to your API, and most importantly, limit access by those users. Option 1 may not be appropriate for APIs that allow unlimited data access.
(This is an answer since it's too long to be a comment.)
Server contains:
username
password hash
Client sends:
username
random string
hash of (password hash + random string)
When the client calls the server, the server creates a hash of the password hash (which it knows itself) + the random string (passed in the GET request by the calling client) and checks whether that matches the hash sent by the client.
Even better would be to create one function that generates a secret hash from (password hash + nonce), where the "nonce" (something random) is also stored on the server. Then make it possible to call the server once with username + password, which returns the secret hash; subsequent calls then depend solely on username + random string + hash of (secret hash + random string), using the same methodology as described above, with the secret hash taking the place of the password.
This way, even if your secret were intercepted and reversed, your password would still be safe.
And obviously, use good hashing algorithms: no ROT13, and even plain MD5 is questionable.
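A small sketch of the first scheme on the server side (getStoredPasswordHash() stands in for your own database lookup; the parameters arrive via GET as described above):
$username=$_GET['username'];
$random=$_GET['random'];
$clientHash=$_GET['hash'];
$passwordHash=getStoredPasswordHash($username); // placeholder for the real lookup
$expected=hash('sha256',$passwordHash.$random);
if(!hash_equals($expected,$clientHash)){
    http_response_code(403);
    exit('Authentication failed');
}
// The client computed the same value: hash('sha256', password hash . random string),
// where the random string must be freshly generated for every call.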

Securing a javascript client with hmac

I am researching ways to secure a javascript application I am working on. The application is a chat client which uses APE (Ajax Push Engine) as the backend.
Currently, anyone can access the page and make a GET/POST request to the APE server. I only want to serve the chat client to registered users, and I want to make sure only their requests will be accepted. I can use username/password authentication with PHP to serve a user the page. But once they have the page, what's to stop them from modifying the javascript or letting it fall into the wrong hands?
This method for securing a client/server application looks promising: http://abhinavsingh.com/blog/2009/12/how-to-add-content-verification-using-hmac-in-php/
I have another source that says this is ideal for a javascript client since it doesn't depend on sending the private key. But how can this be? According to the tutorial above, the client needs to provide the private key. This doesn't seem very safe, since anyone who has the javascript now has that user's private key. From what I understand it would work something like this:
User logs in with a username and password
PHP validates the username and password, looks up the user's private key and inserts it into the javascript
Javascript supplies a signature (using the private key), and the public key with all APE requests
APE compares the computed signature to the received signature and decides whether to handle the requests.
How is this secure if the javascript application needs to be aware of the private key?
Thanks for the help!
The answer: You technically cannot prevent the user from modifying the JavaScript. So don't worry about that because you can do nothing about it.
However, the attack you do need to prevent is Cross-Site Request Forgery (CSRF). Malicious scripts on different domains are capable of automatically submitting forms to your domain with the cookies stored by the browser. To deal with that, you need to include an authentication token (which should be sufficiently random, not related to the username or password, and sent in the HTML page in which the chat client resides) in the actual data sent by the AJAX request (which is not automatically filled in by the browser).
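A minimal sketch of such a token in PHP (how the token reaches the JavaScript and which field name is used are up to you; these are just example choices):
// When serving the chat page to an authenticated user
session_start();
if(empty($_SESSION['csrf_token'])){
    $_SESSION['csrf_token']=bin2hex(random_bytes(32));
}
// Embed $_SESSION['csrf_token'] in the page so the JavaScript sends it with every AJAX request

// When handling an AJAX request
session_start();
if(!isset($_POST['csrf_token']) || !hash_equals($_SESSION['csrf_token'],$_POST['csrf_token'])){
    http_response_code(403);
    exit('Invalid or missing token');
}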
How is this secure if the javascript application needs to be aware of the private key?
Why not? It's the user's own private key, so if he is willing to give it out to someone else, it's his problem. It's no different from giving out your password and then saying someone else has access to your account.
If you think about this a bit, you'll realize that you don't need to implement public-key encryption, HMAC or anything like that. Your normal session-based authentication will do, provided the communication channel itself is secure (say using HTTPS).
HMAC authentication is better served for an API that third parties are going to connect to. It seems like your app would be better served by writing a cookie to the client's browser indicating that they've been authenticated. Then with each ajax request you can check for that cookie.
Edit: I take back a bit of what I said about HMAC being better served for third-party APIs. Traditionally with HMAC each user gets their own private key. I don't think this is necessary for your application. You can probably get away with keeping just one master private key and giving each user a unique "public" key (I call it a public key, but in actuality the user would never know about it). When a user logs in, I would write two cookies: one which is the combination of the user's public key + timestamp, encrypted, and another stating what the timestamp is. Then on the server side you can validate the encrypted cookie and check that the timestamp is within a given threshold (say 10-30 minutes, in case they're sitting around idle in your app). If they validate, update the encrypted cookie and timestamp, rinse and repeat.
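A loose sketch of that cookie idea, using an HMAC with a single server-side master key in place of reversible encryption (cookie names, the 30-minute threshold and $userPublicId are example choices, not part of the answer):
// On login: issue the two cookies
$timestamp=time();
$signature=hash_hmac('sha256',$userPublicId.'|'.$timestamp,$masterKey);
setcookie('auth_sig',$signature,0,'/');
setcookie('auth_ts',(string)$timestamp,0,'/');
// On each AJAX request: validate and refresh
$ts=isset($_COOKIE['auth_ts'])?(int)$_COOKIE['auth_ts']:0;
$sig=isset($_COOKIE['auth_sig'])?$_COOKIE['auth_sig']:'';
$expected=hash_hmac('sha256',$userPublicId.'|'.$ts,$masterKey); // $userPublicId identified e.g. from the session
if(!hash_equals($expected,$sig) || time()-$ts>1800){
    http_response_code(403);
    exit('Not authenticated');
}
// If valid, re-issue both cookies with a fresh timestamp ("rinse and repeat")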

Protect HTTP request from being called by others

I have an Android application from which I want to upload some data to a database on my web server. As the MySql java library has a size of about 5 mb, I don't want to include it with the application.
So I'll make an HTTP request to a php script and send the data as URL parameters. How do I make sure that only I can call this? I don't want people to sniff out the URL and call it outside my application.
Thanks
Use a simple static token to identify that the client is you, or, in a more advanced way, first authenticate with a username/password, generate a token and use this token for further transactions. This token can expire after some time.
Option 1: http://[your request url]&key=xyz
where xyz is known only to you.
Option 2: first ping the server with username/password and, upon successful validation, get a dynamic token [dKey] and store it locally.
Then, for further requests:
http://[your request url]&key=dKey
Option 2 is the one normally followed.
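A bare-bones sketch of option 2 on the server side (verifyCredentials(), storeToken() and isTokenValid() are placeholders for your own database code; the file names are just examples):
// login.php: exchange username/password for a dynamic token
if(verifyCredentials($_POST['username'],$_POST['password'])){
    $dKey=bin2hex(random_bytes(32));
    storeToken($_POST['username'],$dKey,time()+3600); // token expires after an hour
    echo $dKey; // the app stores this locally
}else{
    http_response_code(401);
}

// upload.php: every further request must carry the token
if(!isTokenValid($_GET['key'])){ // also checks that it has not expired
    http_response_code(403);
    exit('Invalid or expired token');
}
// ...store the uploaded data...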
The short answer: you cannot prevent sniffing.
But you can make a sniffer's life harder by implementing some sort of internal authentication, exchanging predefined GET/POST parameters (or dynamic ones, calculated by an algorithm only you know), hidden header fields, etc.
But all of this could also be sniffed/reverse engineered.
A possible overkill way would be to use some sort of asymmetric private/public key encryption or signing, such as RSA. Your app would include only the public key and use it to encrypt the request data, while your server side holds the secret private key and uses it to check the validity of client requests.
I know very little about android - but it's not really relevant to the question.
If you want to prevent someone from sniffing the URL (and authentication details?) then the only option is to use SSL. On the other hand, if you merely want to prevent other people from accessing the URL, it's simply a question of authentication. If you're not using SSL, that means you need to use sessions and challenge-based authentication to avoid people sniffing the traffic. You could do this via digest authentication or roll your own code.
C.
