Securing a PHP webservice for application access only - php

First of all, a better question might be: is this even possible? My gut instinct is that it isn't entirely, but there may be some clever techniques. Even if they only act as a deterrent, make it slightly harder for someone to hack, or make it easier for me to detect suspicious activity, they would be worth it.
Basically, I'm building a web service in PHP for my C#.NET program to connect to. Among other things, one of the most important purposes the web service serves is verifying license data. The program sends the licence key entered by the user to be checked, and if it is valid the web service returns the name of the person who purchased the licence key so that the program knows to activate itself.
I am fully aware that there is no perfect anti-piracy scheme and that my software will be cracked if people want it badly enough. However, that doesn't mean there is nothing I can do to make it very hard for people to crack my software.
I do have an SSL certificate, so the program will communicate with the web service over HTTPS; however, that's the only security I have at the moment. I have thought about:
Using long and obscure names so that the functions are hard to guess
Using MD5 to disguise the functions
Adding a username and password
Checking the User-Agent
etc.
However, I have read that there are applications available that simply extract strings from programs, which would render those measures completely ineffective. Still, I don't know how technical users have to be to use those applications. Is it still worth adding some of these measures to stop casual piracy? Which measures are the better ones, and which will be the most effective?
Thanks in advance

You can distribute your C# application with a bundled certificate and sign your requests with it. The server can then verify that the request was signed by your application and reject any other request.
Edit: Whoops, I only now understood that you want to secure your application even when it is in the hands of a malicious user. I don't think this is possible. A hacker can decompile the binary, scan memory, read and decode files, etc., and your certificate will be available in there if you distribute it with the application. An alternative would be to distribute an external security token (a hardware device or flash storage) which must be plugged into the client computer. The token holds the certificate, keys or ciphers used to sign/encrypt your requests, so it doesn't stay with the application.
Your server-side SSL certificate only guarantees that the communication channel is secure and that the server is not lying about its identity. It doesn't guarantee anything about the client connecting. To also be sure that the client is identified, you need to use a form of client certificate that your server recognises.
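For what it's worth, here is a minimal sketch of what the server-side check could look like in PHP, assuming the client sends a base64-encoded RSA-SHA256 signature of the raw request body in a hypothetical X-Signature header and that the matching public key was deployed to the server separately (the header name and file paths are invented for the example):

<?php
// Minimal sketch: verify that the request was signed by the client's private key.
// Assumes a hypothetical "X-Signature" header carrying a base64-encoded
// RSA-SHA256 signature over the raw request body, and a public key file
// (public.pem) that you deployed to the server yourself.

$body      = file_get_contents('php://input');
$signature = base64_decode($_SERVER['HTTP_X_SIGNATURE'] ?? '', true);
$publicKey = openssl_pkey_get_public('file://' . __DIR__ . '/public.pem');

if ($signature === false || $publicKey === false) {
    http_response_code(400);
    exit('Bad request');
}

// openssl_verify() returns 1 for a valid signature, 0 for an invalid one, -1 on error.
if (openssl_verify($body, $signature, $publicKey, OPENSSL_ALGO_SHA256) !== 1) {
    http_response_code(403);
    exit('Signature check failed');
}

// The request was signed by the holder of the private key; handle it normally.

Keep in mind the caveat from the edit above: the private key ships inside the client, so a determined attacker can extract it; this mainly raises the bar for casual abuse.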

Related

Secure connection between 2 servers

I want to have some user-facing server, say server A. For certain tasks server A should submit a job request to server B. When server B is done, it should write the results to the database, or maybe make a call to server A and let server A handle the result (not quite sure about this part right now).
The question is:
how to securely connect server A and server B?
For the endpoints on server B, I only want to allow calls from server A.
On server A I want certain endpoints to be available only to server B (and maybe, at some point, other worker machines C, D, etc.).
I just can't find the right words to google for this. I suppose there are some really obvious buzzwords for it, but googling "Secure connection between 2 servers" does not yield what I am looking for.
I'm interested in the general name of this, but for the sake of completeness:
The user-facing server will be using php
The job-machines will be some node-apps
My own idea (though I don't know exactly how to implement it) would be something similar to connecting to a server via ssh, where I just add my public key to the server. So it would be something along the lines of: store a public key for server A on server B and vice versa, and then ensure that only authenticated calls can be made to server B.
A full-fledged answer would be nice, but actually I am already happy to get a pointer into the right direction about what I should be googling for or maybe another question on SO.
The technical word you're looking for is "authentication" which is the process of establishing that a party is who they say they are. Closely related is "authorization" which is the process of determining if a party is permitted to do something.
For this problem, standard authentication techniques apply, very similar to how they would for a user. Generate a long, random authentication token, store it on server B, and give server A a copy (or a hash of it) so it can verify incoming requests. Connect from server B to server A using HTTPS, and pass the authentication token. A popular way to do this is with Bearer Authentication. Add the header:
Authorization: Bearer {access_token}
On server A, verify the token and execute the request if authorized.
This is just a specific way of implementing an API key. You could just as well put the access token (API key) directly into the request. It would be identical.
Typically these keys are stored in environment variables if you have a containerized or virtualized setup. I don't recommend storing them directly in the code (there are too many small mistakes that can lead to the code being visible).
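As a concrete illustration, here is a minimal sketch of the check on server A in PHP, assuming the shared token lives in a hypothetical API_TOKEN environment variable (everything except the standard Authorization header is made up for the example):

<?php
// Minimal sketch of bearer-token verification on server A.
// Assumes the shared secret is provided via a hypothetical API_TOKEN
// environment variable rather than being hard-coded.
// Note: depending on the SAPI, the Authorization header may need to be
// explicitly passed through to PHP.

$expected = getenv('API_TOKEN');
$header   = $_SERVER['HTTP_AUTHORIZATION'] ?? '';

// Expect "Authorization: Bearer <token>".
if ($expected === false || !preg_match('/^Bearer\s+(\S+)$/', $header, $m)) {
    http_response_code(401);
    exit('Unauthorized');
}

// hash_equals() compares in constant time, which avoids leaking information
// about the token through timing differences.
if (!hash_equals($expected, $m[1])) {
    http_response_code(403);
    exit('Forbidden');
}

// Token is valid: handle the job request from server B.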
There are more complex ways to do the same thing, and if you already have a good system for authenticating users, you can also just create a "service account" for server B and use your normal authentication scheme. But bearer tokens are a nice, simple way to handle machine-to-machine authentication without creating fake accounts.
Note that all approaches require that you be able to secure server B. Anyone who can access server B will be able to steal the token and impersonate the server. Mitigating this generally requires specialized hardware (such as an HSM), and is beyond the needs of most systems.
Though a bit more complex, another common technique is signing your requests. The advantage of signing is that you never have to send your authentication token over the network (even over HTTPS), and you only have to store the private key in one place (server B). This means that not even server A can masquerade as server B, as it could with bearer authentication. I've often used JWS (JSON Web Signature) for this, but it is fairly complicated if you don't have a good library that handles it for you. Signing without JWS is tricky to get right, since you have to be very careful to sign everything that matters, and it's easy to overlook parts of the request (headers, for example). If you go down that road, you'll probably want a security expert to look it over.
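To make the "sign everything that matters" point concrete, here is a rough sketch of what the sending side on server B might look like without JWS, signing a canonical string that covers the method, path, a timestamp and a hash of the body. The canonical format, header names and key paths are all invented for the example, and a real implementation should still be reviewed carefully:

<?php
// Rough sketch of request signing on server B (no JWS).
// Anything not included in the canonical string can be tampered with freely,
// so it covers the method, path, timestamp and a hash of the body.

$method    = 'POST';
$path      = '/jobs';
$timestamp = (string) time();
$body      = json_encode(['task' => 'resize', 'id' => 42]);

$canonical = implode("\n", [$method, $path, $timestamp, hash('sha256', $body)]);

$privateKey = openssl_pkey_get_private('file://' . __DIR__ . '/serverB_private.pem');
openssl_sign($canonical, $signature, $privateKey, OPENSSL_ALGO_SHA256);

$ch = curl_init('https://server-a.example.com' . $path);
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $body,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'X-Timestamp: ' . $timestamp,
        'X-Signature: ' . base64_encode($signature),
    ],
]);
$response = curl_exec($ch);

// Server A rebuilds the same canonical string from the request it received,
// verifies the signature with server B's public key, and rejects stale
// timestamps to limit replay.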
Moving up from that, as Mikah notes, is OAuth2, but I generally don't recommend that for point-to-point authentication where you control everything and there aren't many entities. It's just a lot of complexity for little benefit. The point of OAuth2 is allowing centralized management of complex authentication environments. If you don't have those problems, complexity tends to reduce security, not improve it.

How to protect a rest api for android app [duplicate]

Is there any way to restrict post requests to my REST API only to requests coming from my own mobile app binary? This app will be distributed on Google Play and the Apple App Store so it should be implied that someone will have access to its binary and try to reverse engineer it.
I was thinking of something involving the app signatures, since every published app must be signed somehow, but I can't figure out how to do it in a secure way. Maybe a combination of getting the app signature, plus time-based hashes, plus app-generated key pairs, and good old security through obscurity?
I'm looking for something as foolproof as possible. The reason is that I need to deliver data to the app based on data gathered by the phone's sensors, and if people can pose as my own app and send data to my API that wasn't processed by my own algorithms, it defeats its purpose.
I'm open to any effective solution, no matter how complicated. Tin foil hat solutions are greatly appreciated.
Any credentials that are stored in the app can be exposed by the user. In the case of Android, they can completely decompile your app and easily retrieve them.
If the connection to the server does not utilize SSL, they can be easily sniffed off the network.
Seriously, anybody who wants the credentials will get them, so don't worry about concealing them. In essence, you have a public API.
There are some pitfalls and it takes extra time to manage a public API.
Many public APIs still track by IP address and implement tarpits to simply slow down requests from any IP address that seems to be abusing the system. This way, legitimate users from the same IP address can still carry on, albeit slower.
You have to be willing to shut off an IP address or IP address range despite the fact that you may be blocking innocent and upstanding users at the same time as the abusers. If your application is free, it may give you more freedom since there is no expected level of service and no contract, but you may want to guard yourself with a legal agreement.
In general, if your service is popular enough that someone wants to attack it, that's usually a good sign, so don't worry about it too much early on, but do stay ahead of it. You don't want the reason for your app's failure to be because users got tired of waiting on a slow server.
Your other option is to have the users register, so you can block by credentials rather than IP address when you spot abuse.
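As an illustration of the tarpit idea above, here is a minimal sketch of per-IP throttling in PHP using APCu as the counter store. The window size, thresholds and storage choice are arbitrary example values; in production this is more often done at the load balancer or in a shared store like Redis:

<?php
// Minimal per-IP tarpit sketch using APCu as the counter store.
// Window size and thresholds are arbitrary example values.

$ip     = $_SERVER['REMOTE_ADDR'] ?? 'unknown';
$window = 60;                   // seconds
$key    = 'hits:' . $ip . ':' . intdiv(time(), $window);

apcu_add($key, 0, $window * 2); // create the counter if it doesn't exist yet
$hits = apcu_inc($key);

if ($hits > 600) {
    // Hard limit: refuse outright.
    http_response_code(429);
    exit('Too many requests');
} elseif ($hits > 120) {
    // Tarpit: keep serving, but slow the abuser down.
    sleep(min(10, intdiv($hits - 120, 30) + 1));
}

// ...continue handling the request as normal.

Note that sleeping inside PHP ties up a worker process, so for serious abuse you would still want to block at the proxy or firewall level rather than rely on the tarpit alone.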
Yes, It's public
This app will be distributed on Google Play and the Apple App Store so it should be implied that someone will have access to its binary and try to reverse engineer it.
From the moment it's on the stores it's public; therefore anything sensitive in the app binary must be considered potentially compromised.
The Difference Between WHO and WHAT is Accessing the API Server
Before I dive into your problem I would like to first clear up a misconception about who and what is accessing an API server. I wrote a series of articles on API and mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail about the difference between who and what is accessing your API server, but I will extract the main takeaways from it here:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
Think about the who as the user your API server will be able to authenticate and authorize to access the data, and think about the what as the software making that request on behalf of the user.
So if you are not using user authentication in the app, then you are left with trying to attest what is making the request.
Mobile apps should be as dumb as possible
The reason is that I need to deliver data to the app based on data gathered by the phone's sensors, and if people can pose as my own app and send data to my API that wasn't processed by my own algorithms, it defeats its purpose.
It sounds to me like you are saying that you have algorithms running on the phone to process data from the device sensors before sending it to the API server. If so, you should reconsider this approach: instead, just collect the sensor values, send them to the API server, and have it run the algorithms.
As I said, anything inside your app binary is public because, as you said yourself, it can be reverse engineered:
should be implied that someone will have access to its binary and try to reverse engineer it.
Keeping the algorithms in the backend means you do not reveal your business logic, and at the same time you can reject requests with sensor readings that do not make sense (if that is possible to determine). It also brings the benefit of not having to release a new version of the app each time you tweak the algorithm or fix a bug in it.
Runtime attacks
I was thinking of something involving the app signatures, since every published app must be signed somehow, but I can't figure out how to do it in a secure way.
Anything you do at runtime to protect the request you are about to send to your API can be reverse engineered with tools like Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
Your Suggested Solutions
Security is all about layers of defense, so you should add as many as you can afford and as are required by law (e.g. GDPR in Europe). Each of your proposed solutions is one more layer the attacker needs to bypass, and depending on their skill set and the time they are willing to spend on your mobile app, it may stop them from going any further; but in the end all of them can be bypassed.
Maybe a combination of getting the app signature, plus time-based hashes, plus app-generated key pairs, and good old security through obscurity?
Even when you use key pairs stored in the hardware-backed trusted execution environment, all an attacker needs to do is use an instrumentation framework to hook into the function of your code that uses the keys, in order to extract or manipulate its parameters and return values.
Android Hardware-backed Keystore
The availability of a trusted execution environment in a system on a chip (SoC) offers an opportunity for Android devices to provide hardware-backed, strong security services to the Android OS, to platform services, and even to third-party apps.
While it can be defeated, I still recommend using it, because not all attackers have the skill set or are willing to spend the time on it. I would also recommend reading this series of articles about Mobile API Security Techniques to learn about some techniques that are complementary or similar to the ones you described. These articles will teach you how API keys, user access tokens, HMAC and TLS pinning can be used to protect the API, and how they can be bypassed.
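For the HMAC technique mentioned above, a minimal server-side sketch in PHP could look like the following; the X-HMAC header name and the idea of keying the HMAC with a secret shipped inside the app are assumptions for the example, and (as this whole answer stresses) that secret can be extracted from the binary, so it only raises the bar:

<?php
// Minimal sketch of HMAC verification on the API server.
// Assumes the app sends a hypothetical X-HMAC header computed as
// HMAC-SHA256 over the raw request body with a shared secret. That secret
// ships inside the app binary, so treat this as a speed bump, not as proof
// of *what* is calling.

$secret   = getenv('APP_HMAC_SECRET');   // same value compiled into the app
$body     = file_get_contents('php://input');
$received = $_SERVER['HTTP_X_HMAC'] ?? '';

$expected = hash_hmac('sha256', $body, (string) $secret);

if (!hash_equals($expected, $received)) {
    http_response_code(403);
    exit('Invalid HMAC');
}

// The request body was produced by something that knows the shared secret.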
Possible Better Solutions
Nowadays I see developers using Android SafetyNet to attest what is making the request to the API server, but they fail to understand that it is not intended to attest that the mobile app is what is making the request; instead it is intended to attest to the integrity of the device, and I go into more detail in my answer to the question Android equivalent of ios devicecheck. So should you use it? Yes, you should, because it is one more layer of defense that, in this case, tells you your mobile app is not installed on a rooted device, unless SafetyNet has been bypassed.
Is there any way to restrict post requests to my REST API only to requests coming from my own mobile app binary?
You can give the API server a high degree of confidence that it is indeed accepting requests only from your genuine app binary by implementing the Mobile App Attestation concept, which I describe in more detail in this answer I gave to the question How to secure an API REST for mobile app?, especially the sections Securing the API Server and A Possible Better Solution.
Do you want to go the Extra Mile?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For APIS
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
No. You're publishing a service with a public interface and your app will presumably only communicate via this REST API. Anything that your app can send, anyone else can send also. This means that the only way to secure access would be to authenticate in some way, i.e. keep a secret. However, you are also publishing your apps. This means that any secret in your app is essentially being given out also. You can't have it both ways; you can't expect to both give out your secret and keep it secret.
Though this is an old post, I thought I should share the updates from Google in this regard.
You can use the SafetyNet mobile attestation APIs to help ensure that it is actually your Android application calling the API. This adds a little overhead to the network calls and prevents your application from running on a rooted device.
I found nothing similar to SafetyNet for iOS. Hence, in my case, I checked the device configuration first in my login API and took different measures for Android and iOS. In the case of iOS, I decided to keep a shared secret key between the server and the application. As iOS applications are a little more difficult to reverse engineer, I think this extra key check adds some protection.
Of course, in both cases, you need to communicate over HTTPS.
As the other answers and comments imply, you can't truly restrict API access to only your app, but you can take various measures to reduce the attempts. I believe the best solution is to make requests to your API (from native code, of course) with a custom header like "App-Version-Key" (this key is decided at compile time) and have your server check for this key to decide whether to accept or reject the request. Also, when using this method you SHOULD use HTTPS/SSL, as it reduces the risk of people seeing your key by viewing the request on the network.
Regarding Cordova/PhoneGap apps, I will be creating a plugin that implements the above-mentioned method. I will update this comment when it's complete.
There is not much you can do, because once you let someone in they can call your APIs. The most you can do is the following:
Since you want only your application (with a specific package name and signature) to call your APIs, you can obtain the signing certificate of your APK programmatically and send it to the server with every API call; if it checks out, you respond to the request. (Or you can have a token API that the app calls on every startup and then use that token for the other APIs, though the token must be invalidated after some hours of inactivity.)
Then you need to run your code through ProGuard so no one can see what you are sending and how you encrypt it. If you obfuscate and encrypt well, decompiling will be much harder.
Even the APK signature can be spoofed with enough effort, but this is about the best you can do.
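As a rough server-side companion to the token idea above, a PHP sketch could look like this: the app posts the SHA-256 digest of its signing certificate, the server compares it against a known value and hands back a short-lived token for the other endpoints. The field names, storage and lifetime are invented for the example, and a tampered client can of course report a fake digest:

<?php
// Rough sketch of a token endpoint: the app posts the SHA-256 digest of its
// signing certificate; if it matches the expected value, we issue a
// short-lived opaque token for the other API calls.
// Field names, storage and lifetime are example choices only.

$expectedDigest = getenv('EXPECTED_APK_CERT_SHA256');   // set at deploy time
$reported       = $_POST['cert_digest'] ?? '';

if ($expectedDigest === false || !hash_equals($expectedDigest, $reported)) {
    http_response_code(403);
    exit(json_encode(['error' => 'unrecognized client']));
}

$token = bin2hex(random_bytes(32));
apcu_store('apitoken:' . $token, ['issued' => time()], 6 * 3600);   // valid for 6 hours

header('Content-Type: application/json');
echo json_encode(['token' => $token, 'expires_in' => 6 * 3600]);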
Has anyone looked at Firebase App Check?
https://firebase.google.com/docs/app-check
Is there any way to restrict post requests to my REST API only to requests coming from my own mobile app binary?
I'm not sure if there is an absolute solution.
But, you can reduce unwanted requests.
Use an App Check:
The "Firebase App Check" can be used cross-platform (https://firebase.google.com/docs/app-check) - credit to #Xande-Rasta-Moura
iOS: https://developer.apple.com/documentation/devicecheck
Android: https://android-developers.googleblog.com/2013/01/verifying-back-end-calls-from-android.html
Use BasicAuth (for API requests)
Allow a user-agent header for mobile devices only (for API requests)
Use a robots.txt file to reduce bots
User-agent: *
Disallow: /

How to uniquely identify a client in a web (PHP) application

We've been developing a web application (PHP, using the Yii PHP framework) that is going to be used for data entry. The clients will be users on both the LAN and the WAN (many of the remote clients will be behind a proxy, reaching our network through one IP address with NAT). What we basically want is to guarantee the validity of the data, in the sense that no malicious user can alter it.
Is there a way to programmatically identify each client in a unique way, so that I can guarantee (at least to a good degree) that no malicious remote user will connect? We were thinking of gathering the MAC addresses of all remote users and using a (non-web) client that generates a hash string which the user enters into the web application, proceeding only if this authentication scheme passes. As I said, using other non-web applications on the remote client is an option.
Is such a solution as the one I describe above viable? Should we see other solutions, like maybe a VPN?
A VPN is a typical solution to the problem of locking out everyone except those you've explicitly given access to: basically you're rejecting all connections to the site that don't come from your local network or VPN. That way you don't have to write any funky logic in your actual web application.
I think this is an ideal solution because it allows the application to be maintainable in the future when other developers step in... furthermore it will require less of your developers and will ultimately keep costs down.
Normal user authentication is generally OK, but if you have higher security needs you can provide clients with X.509 certificates to install in their browsers. A VPN is of course an option, but then you just move the authentication problem from the website to the network VPN.
What you are looking for is something like SSH key pairs:
https://help.ubuntu.com/community/SSH/OpenSSH/Keys
There are many more resources on this; the theory in brief:
Each client creates a unique key pair consisting of a private and a public key. The public key goes onto your server, the private key stays with the client. To authenticate, the client proves possession of the private key (by signing a challenge from the server), and the server verifies that signature against the public keys it has on file. If it matches a known key, authentication succeeds. (I have never used this for web authentication so far.)
Additionally you could use OTP (one-time password) technology. Since the secret is normally bound to an individual account, it adds a strong extra factor:
https://github.com/lelag/otphp
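For reference, a time-based OTP (RFC 6238) boils down to an HMAC over a time counter; the linked otphp library wraps this for you, but a minimal hand-rolled sketch of the server-side check could look like the following (a raw binary secret is used instead of base32 handling to keep the example short):

<?php
// Minimal TOTP (RFC 6238) sketch: 6 digits, 30-second steps, HMAC-SHA1.
// Real implementations (like the linked otphp library) also handle base32
// secrets, clock-drift windows and replay protection.

function totp(string $secret, int $timeStep = 30, int $digits = 6): string
{
    $counter    = intdiv(time(), $timeStep);
    $binCounter = pack('N2', 0, $counter);                  // 8-byte big-endian counter
    $hash       = hash_hmac('sha1', $binCounter, $secret, true);

    $offset = ord($hash[19]) & 0x0F;                         // dynamic truncation (RFC 4226)
    $value  = unpack('N', substr($hash, $offset, 4))[1] & 0x7FFFFFFF;

    return str_pad((string) ($value % (10 ** $digits)), $digits, '0', STR_PAD_LEFT);
}

// Check the code the user submitted; a real check would also try the
// adjacent time steps to tolerate clock drift.
$secret = getenv('OTP_SECRET') ?: 'change-me';
$valid  = hash_equals(totp($secret), $_POST['otp'] ?? '');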

Is storing API key in iPhone app insecure?

I'm creating an iPhone app which needs to connect to a PHP-based website. The iPhone app will retrieve and add records. I'm guessing the communication between the website and the iPhone app should be controlled by an API key: the iPhone app provides it and the website checks for it.
I'm guessing I'd have to store that API key in the iPhone application itself, right? So my question: is storing the API key in the iPhone application risky? Can't someone somehow gain access to my API key and impersonate the iPhone app, thereby gaining access to the website? Or is this pretty difficult to do? If I'm thinking about it the wrong way, please tell me if there are better approaches.
Whether you need to keep it really secret or not, you will have to encrypt and obfuscate it anyway, just to protect yourself from casual hackers.
On the other hand, I don't believe you can stop a determined hacker. A combination of jailbreak, gdb and a traffic sniffer will defeat nearly any protection you can think of. Investing heavily in such protection rarely makes sense, so you will have to find a compromise between wasting a lot of time and effort and having your API key hackable.
Personally, I like the idea of having the API key in an obfuscated form inside the application binary, because the binary you get from the App Store is encrypted. A little ptrace() hackery with PT_DENY_ATTACH can further complicate (but will never completely prevent) getting to your app through gdb. Chances are, this will be enough to avoid having your app floating all around the Internet in torrents in decrypted form. You will still have to use HTTPS, simply because sniffing traffic is ridiculously easy and doesn't even require jailbreaking.
One more important consideration. If HTTPS is out of the question and you have to send the API key in HTTP requests, forget about all of the above. It doesn't make sense protecting the key in the application bundle if it's sent in plain text over network.
You could use Keychain Services to store the key the way the Mac stores passwords. I'm not 100% sure, but I think it also encrypts the values and keeps them in a sandbox away from other prying hands (of course the sandboxing is meaningless on a jailbroken iPhone with the right tools, as Costique mentioned).
Either way it's worth looking into:
http://developer.apple.com/library/ios/#documentation/Security/Conceptual/keychainServConcepts/iPhoneTasks/iPhoneTasks.html
But definitely use HTTPS; otherwise anyone can sniff the traffic without much effort.

Encrypt request from iPhone to web app?

We have the following:
iPhone native app, with login form that posts to:
A PHP script on a remote web server which checks against a MySQL user table.
For security, would it be best practice to use some two-way encryption for every request, including this initial login? Otherwise the username and password will simply be passed to the web app in the clear.
I suppose https would take care of it automatically...
It would be very wise to use SSL or TLS (the protocols that HTTPS uses) to communicate with the server. You could likely get this set up rather easily on a *nix or Windows server using OpenSSL. If you're on a shared host, they likely have an option to purchase an SSL certificate that's valid for a given period of time. This is a fairly trivial process and usually requires about a week (average) with most hosts to get set up.
It should also be noted that while it is never a bad idea to encrypt the login process, it will not make your system more secure overall if you have a login from the web that is not secured. For instance, if you secure communication with mobile devices, but not with desktops or laptops, your security may be for naught. The security of your application is only as strong as its weakest link, so securing your entire application (for all platforms) is very important.
Also, keep in mind that a user's login credentials are only as valuable as the data or resources that they protect: if you encrypt the login information, it is also a good idea to encrypt the rest of the application as well. Wireless sniffing technology could easily steal session data, private user information, or other sensitive data. Securing the entire user session--rather than just the login procedure--is in your users' best interest.
Hope this helps!
Using https is probably the way to go. It's what it was designed for.
