I've recently started looking into authenticating with Azure Active Directory using the client credentials grant with public/private certificates, as detailed here: https://learn.microsoft.com/en-gb/azure/active-directory/develop/v1-oauth2-client-creds-grant-flow
I'm running an external PHP/LEMP server outside of the Azure hosting platform.
I've managed to get the connection to Azure AD working successfully.
The question is more about how to store the credentials so I can actually perform the authorisation. The irony is that I need to store these credentials so I can access the ones stored securely within the key vault! So I'm just wondering, does anyone have any recommendations for storing the:
Tenant ID
Client ID
Scope URI (I guess I should include this too, as it's effectively an ID/GUID)
Is it safe to store these values as plain text within a site's database?
Would you recommend environment variables? Just wondering what people's approach to this is.
Many thanks!
I would suggest you store these variables in Key Vault; that's the most secure option. However, then you obviously need keys to access Key Vault.
If you were running your web app in Azure then this would be easy: you could use a managed identity to give your web app an identity, grant it access to Key Vault, done. As you're running outside of Azure, the next best method is to create a service principal that authenticates with a certificate, grant this service principal access to Key Vault, then store the private key on your web server and use it to authenticate. See here for details on creating a service principal.
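To make the stored values concrete, here is a minimal Node.js sketch (environment-variable names are hypothetical, and the certificate-based client assertion is elided) of how the tenant ID, client ID and scope would be used to build the client credentials token request:

```javascript
// Read the three values from environment variables rather than source control.
// The fallback GUIDs are placeholders for illustration only.
const tenantId = process.env.AZURE_TENANT_ID || '00000000-0000-0000-0000-000000000000';
const clientId = process.env.AZURE_CLIENT_ID || '11111111-1111-1111-1111-111111111111';
const scope = process.env.AZURE_SCOPE || 'https://vault.azure.net/.default';

// v2.0 token endpoint for the client credentials grant.
const tokenEndpoint = `https://login.microsoftonline.com/${tenantId}/oauth2/v2.0/token`;

// Form body for the grant. With certificate-based auth you would also add
// client_assertion_type and a signed JWT client_assertion here.
const body = new URLSearchParams({
  grant_type: 'client_credentials',
  client_id: clientId,
  scope: scope,
}).toString();
```

The request itself would then be a POST of `body` to `tokenEndpoint` with Content-Type `application/x-www-form-urlencoded`.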
Related
I have an existing third-party PHP Web Application (ELGG) that I would like to extend with a Node.js Application. Users are authenticated in the PHP app by checking their provided credentials against a MySQL database.
How can I secure access to the Node.js app without having to rewrite authentication code in Node? Is there some way to allow users to access the Node.js app only if they're logged in to the PHP app?
You could use a database or some other shared repository to store the user's session ID, which Node can check to ensure the user is logged in.
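As a rough sketch of that idea (a Node.js in-memory Map stands in for the shared MySQL table; the function names are illustrative):

```javascript
// Stand-in for the table the PHP app writes sessions into.
// In practice this would be a MySQL or Redis lookup shared by both apps.
const sharedSessions = new Map();

// Conceptually done by the PHP app when a user logs in.
function registerSession(sessionId, userId) {
  sharedSessions.set(sessionId, { userId, createdAt: Date.now() });
}

// Done by Node on each incoming request to check the user is logged in.
function isLoggedIn(sessionId) {
  return sharedSessions.has(sessionId);
}

registerSession('abc123', 42);
```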
I think the best way to approach it would be to have the PHP and the Node applications operate as subdomains of the same root domain, then have the Node application check for the PHP app's auth cookie. This would avoid the extra database call in Irwin's answer.
Once the user logs in to the PHP app, a cookie with an authentication token is created for phpapp.mydomain.com with its domain set to .mydomain.com. The Node application, hosted at nodeapp.mydomain.com, can then access the auth-token cookie created at phpapp.mydomain.com.
In general, you would make the Node.js app a web service, make it available locally and not publicly, then write PHP code which performs auth, then calls the API provided by Node.js, then constructs a response for the user using that data.
I wrote an Elgg plugin which provides functionality for accessing a Node.js server over WebSocket. You can check the code here: elgg-nodejs
I just parse the cookie to get the session user:
getElggSession = function(socket) {
    // [^;\s]* stops at the cookie delimiter, unlike \S*, which would
    // swallow the trailing semicolon; match is null if there is no Elgg cookie.
    var match = socket.handshake.headers.cookie.match(/Elgg=([^;\s]*)/);
    return match ? match[1] : null;
};
Maybe it's not the best method for security...
I am looking to build an API that I can deploy on my servers to monitor system load.
It will report to a central manager server that runs a client to display the information.
The issue I am struggling with is how best to secure the API.
What I want is for the client to be the only software that can access the server and retrieve this information, but I am unsure how to achieve this using PHP.
I also want the possibility of distributing the API and client for others to use on their servers, so I don't want people to be able to access other people's data if they are also using the API.
The client is also written in PHP using MySQL and has a secure login.
This sounds like you're trying to solve the wrong problem.
I also want the possibility of distributing the API and client for others to use on their servers, so I don't want people to be able to access other people's data if they are also using the API.
The only right answer to this is authentication. You need to protect your API by giving each user access credentials known only to them.
Your API must never reveal any data that the client isn't allowed to see as per their authentication credentials. Trying to work around this by somehow protecting the client from prying eyes is not safe: somebody who has access to the client and can observe it running will be able to reverse engineer any traffic between it and the server, given enough effort.
If the API is properly secured, it won't matter to you which client tool is used to access it. The requirement to limit API access to a certain program will go away.
If you use SSL along with authentication (I use 3rd-party auth: Google, FB, etc.), create the data/reports on the fly, and save the data in a subdirectory OUTSIDE your web folder (e.g. /var/myStorage/currentSessionId/ instead of /var/www), then you basically get the security that you want.
Your PHP will only access a subdirectory that is named for the session it is running under.
I'm building an app that accesses a MySQL database on my server; I send data from the app and receive the PHP response from the server. I'm trying to create a login system for this app using that database.
What's the process? What's the best practice for building this?
You will have to store some kind of session value in your app and send it with your requests. You may be able to use PHP sessions for this, but what I usually prefer to do is create database entries for API keys. On a successful login, an API key is generated for that user and stored on the device. Then on each request you pass the username/API-key combination for authentication on the server side. This method will easily port over if you wish to expand your codebase to Android/BlackBerry/toaster ovens; it makes for a very portable authentication system. It also keeps you from having to store the password on the device, which is a security concern.
This is how one programmer chooses to do it.
We've been developing a web application (PHP, using the Yii PHP framework) that is going to be used for data entry. The clients will be users from both the LAN and WAN (many of the remote clients will be behind a proxy, reaching our network using one IP address with NAT). What we basically want is to guarantee the validity of data in the way that no malicious user alters it.
Is there a way to programmatically identify each client in a unique way, so that I can guarantee (at least to a good degree) that no malicious remote user will connect? We were thinking of gathering the MAC addresses of all remote users and using a (non-web) client that generates a hash string that the user will input into the web application, proceeding only if this authentication scheme passes. As I said, using other non-web applications on the remote client is an option.
Is such a solution as the one I describe above viable? Should we see other solutions, like maybe a VPN?
A VPN is a typical solution to the problem of locking out everyone except those you've explicitly given access: basically, you reject all connections to the site that aren't authenticated on your local network or VPN. That way you don't have to write any funky logic in your actual web application.
I think this is an ideal solution because it allows the application to be maintainable in the future when other developers step in... furthermore it will require less of your developers and will ultimately keep costs down.
Normal user authentication is generally OK, but if you have higher security needs you can provide clients with X.509 certificates to install in their browsers. A VPN is of course an option, but it just moves the authentication problem from the website to the network VPN.
What you are looking for are SSH key pairs:
https://help.ubuntu.com/community/SSH/OpenSSH/Keys
There are many more resources on this; the theory in brief:
Each client creates a pair of unique keys, a private one and a public one. The public key goes onto your server; the private key stays with the client. During authentication the client proves possession of the private key by signing a challenge, and the server verifies that signature against the stored public key. If the verification succeeds, authentication was successful. (I have never used this for web authentication so far.)
Additionally, you could use OTP (One-Time Password) technology. Since it is normally bound on a per-account basis, it is very secure:
https://github.com/lelag/otphp
I have an existing Java EE web application running on GlassFish 3.1. Sign in works fine through the jdbcRealm configured in GlassFish 3.1.
Someone on another team is developing a separate web application in PHP, and the boss doesn't want users of the apps to have to sign in twice. That is, when they are signed in to the Java web app and they click a link that takes them to the PHP app, they should already be signed in to that app as well. (And vice versa.)
Not sure how to implement this. I was thinking that I could generate a long random key (a token) that gets generated on log in of either app, and passed around in every web request for either app to identify a logged in user, but that doesn't seem safe.
I need pointers in the right direction.
You said
I was thinking that I could generate a long random key (a token) that gets generated on log in of either app, and passed around in every web request for either app to identify a logged in user, but that doesn't seem safe.
But that's essentially how sessions work.
Your best bet is to generate a unique login identifier (as you said), store it in a database or memory cache accessible by both apps, and find a way to save it on the client such that both web apps can retrieve it.
If both apps are on the same root domain, you can use a cookie with its domain set to the root domain (e.g. .example.com) and its path set to /, so both apps can access it.
If both apps are going to be on different root domain then it will be a little more tricky.
As for the security of the identifier token being passed around, you could regenerate the identifier on each request, which guards against cookie hijacking.