We are developing a very simple first stage GUI for a company database.
At the moment our time to deliver is rather limited.
So we thought about using a simple SQL stored procedure to retrieve all the data.
The data the users are allowed to see depends on security levels defined in the database and in our Active Directory.
So after fetching all the data, the GUI displays only what the user has access to view / edit.
My question is: are there any notable security issues with this approach? It should also be noted that both the web interface and the database are located on our intranet.
Our backend uses W2K3, IIS, PHP 5, SQL 2005
Any feedback would be greatly appreciated.
Jonas
Considering the time to deliver (about one month), this approach should be acceptable.
First: since the site is intranet-only, it is already somewhat protected, because the outside world cannot reach it.
Second: XSS and cross-site request forgery must be prevented no matter what.
Third: guard against SQL injection (see the sketch below).
With these few things covered, the application should be reasonably secure.
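For the SQL injection and XSS points, a minimal sketch of what that could look like in PHP 5 (the DSN, the dbo.GetEmployeeData procedure, and the use of IIS's LOGON_USER variable are assumptions for illustration, not part of your actual setup):

    // Hedged sketch: parameterised call to a hypothetical stored procedure
    // via PDO's ODBC driver, assuming IIS Windows authentication is in place.
    $pdo = new PDO('odbc:Driver={SQL Server};Server=dbhost;Database=Company',
                   $dbUser, $dbPass);
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $stmt = $pdo->prepare('{CALL dbo.GetEmployeeData(?)}');
    $stmt->execute(array($_SERVER['LOGON_USER'])); // value is bound, never concatenated

    // For the XSS point: escape everything echoed back into HTML.
    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
        echo htmlspecialchars($row['Name'], ENT_QUOTES, 'UTF-8');
    }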
Don't put an outward facing web server on your internal network. Seriously. Put it in a DMZ.
As far as your data is concerned, will you be filtering based on user access before or after the data hits the web front end? I'd suggest doing it in the proc.
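For illustration only, such a procedure could look roughly like this (the table, column, and procedure names are made up; the real mapping to your security levels and AD accounts will differ):

    -- Hypothetical sketch: return only the rows the given user is cleared for.
    CREATE PROCEDURE dbo.GetCompanyData
        @UserLogin NVARCHAR(128)
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT d.*
        FROM dbo.CompanyData AS d
        INNER JOIN dbo.UserSecurity AS u
            ON u.SecurityLevel >= d.RequiredLevel
        WHERE u.UserLogin = @UserLogin;
    END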
Also, if you can, I'd suggest putting your DB on a separate box as well, for added security.
It is a sound enough approach. This way the data the user is not allowed to see remains in the database.
"So after fetching all the data, the GUI displays only what the user has access to view / edit."
A frequent mistake when dealing with access control on websites is implementing them for the data fetching scenario but not the data writing scenario. This is often the result of the assumption "the user will only send us editing requests on resources that we told her she could edit". Unfortunately...
As I couldn't spot this in your question, I'd just recommend making sure you have effectively dealt with access control not only when building the GUI but also when receiving data modification requests.
If we consider the following scenario:
The user fetches data she has legitimate access to.
The user requests to edit that data. Let's imagine an edit form is now displayed.
The user submits the form with the changes.
Before it leaves her machine, the user intercepts the HTTP request and replaces the identifier of the edited resource with another identifier, one she shouldn't have access to.
Does your model ensure that the access control rules are also applied when the editing request is received? In SQL terms, this boils down to asking whether you are using a request template like the first one below or the second:
1) "UPDATE ... WHERE ID = x"
2) "UPDATE ... WHERE ID = x AND (SELECT ... FROM ... WHERE userID = y)"
If your model is closer to the first, then you might have an authorization issue. Otherwise, it should be okay.
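As a concrete illustration of the second template from PHP (assuming an existing PDO connection in $pdo; the table and column names are only placeholders):

    // Hedged sketch: the row is only updated if it belongs to the current user.
    $stmt = $pdo->prepare(
        'UPDATE documents
            SET title = ?
          WHERE id = ?
            AND EXISTS (SELECT 1 FROM permissions p
                         WHERE p.document_id = documents.id
                           AND p.user_id = ?)'
    );
    $stmt->execute(array($newTitle, $documentId, $currentUserId));
    if ($stmt->rowCount() === 0) {
        // Either the row does not exist or the user may not edit it.
    }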
Hope it helps.
sb.
I'm new to Android programming and I'm trying to create an app which needs a persistent remote database. Now, coming from Java and local databases, I've always connected application and database directly, without an intermediary.
I'm not seeing the point of this workaround; can someone please make it clear? I've tried searching on Google, but it seems everybody takes this as a given (or maybe I need to look for better keywords).
The most important argument that I can think of right now is SECURITY/QUERY VERIFICATION.
You most likely want to use an online database (perhaps MySQL) because you want to store shared information between ALL users of your application in it. The major difference between a local and an online database is that many many users have access to it - both writing and reading access.
So imagine you have your android application and now want to save some user generated data from it in your online database. Assume there is no PHP intermediary: The app directly sends the finished MySQL request to the database.
But what happens if someone looks into the source code of your app or uses any other way to manipulate that request? Let's say he changes a query from
SELECT * FROM user WHERE ID=9434896
to
SELECT * FROM user
Exactly - he gets all the information from your user table, including sensitive data such as passwords or e-mail addresses.
What evaluates these queries and prevents them from happening?
Your app surely doesn't, because the user can easily manipulate/change the app. Your MySQL database doesn't check them either, because it always assumes the query is what the developer actually wanted. As long as the syntax is correct, it will execute it.
And that's what you need the PHP intermediary for:
You send values to a PHP file (e.g. check_login.php receives the values 267432 (user id) and hie8774h7dch37 (password)); the PHP file then checks whether those values can actually be a user id (e.g. "Are they numeric values only?") and only then builds a MySQL query out of them.
This way the user has no way to manipulate the query as he wishes. (He can still send wrong values, but depending on the situation it is also possible for the PHP script to check whether the values are legitimate.)
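A rough sketch of such an intermediary (the table and column names are invented, and a real application would use proper password hashing rather than the placeholder below):

    <?php
    // check_login.php - hedged sketch: validate the incoming values before
    // they are ever used to build a query.
    $userId   = isset($_POST['userid'])   ? $_POST['userid']   : '';
    $password = isset($_POST['password']) ? $_POST['password'] : '';

    // Reject anything that is not a purely numeric user id.
    if (!ctype_digit($userId)) {
        header('HTTP/1.1 400 Bad Request');
        exit;
    }

    $pdo  = new PDO('mysql:host=localhost;dbname=app', 'appuser', 'secret');
    $stmt = $pdo->prepare('SELECT id FROM user WHERE id = ? AND password_hash = ?');
    $stmt->execute(array($userId, sha1($password)));   // placeholder hashing only
    echo $stmt->fetch() ? 'OK' : 'DENIED';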
Perhaps this will give you some context. I built a game on Android and iPhone, and I wanted high scores stored in a remote database.
Security is the main reason you would do this. You should always do data validation on the server side, not the client side. By doing it this way, my PHP script can validate input before making changes to the database. In addition, it is not safe to store database credentials in your APK file; that opens up a range of security vulnerabilities. It is safer to keep them on the server side.
Secondly, by using a single PHP script, I only need to debug/maintain the code that validates data and interacts with my database in one place: the PHP file. This saves me plenty of time compared with updating all of the queries and validation criteria in both the iPhone and Android versions.
I am sure there are other benefits to this approach, but these are the reasons why I do it this way.
It's an abstraction layer. You don't want to code your app to MySQL and then discover your backend is moving to MS-SQL. Also, you control how you present information to the user. If they have access, they can read everything. If you have an abstraction layer, then they can only get information by going through the proper channels.
I've built a MVC 'framework' for learning purposes, and I'm struggling with this problem:
I am working on a CRUD application and I don't know how I should delete records from my database. Right now I'm doing it through the URL.
example.com/controller/delete/id is how I delete a record from the database. I don't really like this approach, because anyone could unintentionally or intentionally delete database records.
So my question is: How should I implement this feature?
You've got a number of issues here:
First of all, you need to know who is performing the operation, then you need to decide if they're allowed to do it.
For the first, you need a login system which issues a session id to the client (usually via a cookie). You then use the session id on the server to look up who the user is and check whether they're allowed to do the delete. This is usually handled by granting roles to users and then allowing roles to perform certain actions.
Incidentally, GET requests are used for requests that do not modify the server state and can be repeated with no side-effects. POST, (or PUT/DELETE) should be used for any action that makes changes. Browsers will not send a POST twice without prompting the user explicitly.
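Putting those pieces together, a delete action might look roughly like this (the session keys, the role name, and the table are assumptions for the sketch, and $pdo stands for an existing database connection):

    <?php
    // Hedged sketch of a delete handler: POST only, logged-in user, role check.
    session_start();

    if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
        header('HTTP/1.1 405 Method Not Allowed');
        exit;
    }
    if (empty($_SESSION['user_id']) || $_SESSION['role'] !== 'editor') {
        header('HTTP/1.1 403 Forbidden');
        exit;
    }

    $id   = (int) $_POST['id'];
    $stmt = $pdo->prepare('DELETE FROM records WHERE id = ?');
    $stmt->execute(array($id));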
You need to send the data with a POST request.
You can also use GET with a CSRF token (see the sketch below).
I think either way is fine.
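The token idea in a nutshell: generate a random value into the session, embed it in the form or link, and compare it on the server before deleting. A sketch, not a complete implementation:

    // When rendering the page:
    session_start();
    if (empty($_SESSION['csrf_token'])) {
        $_SESSION['csrf_token'] = bin2hex(openssl_random_pseudo_bytes(16));
    }
    // ...embed $_SESSION['csrf_token'] as a hidden field or URL parameter.

    // When handling the delete request:
    if (!isset($_REQUEST['token'])
        || $_REQUEST['token'] !== $_SESSION['csrf_token']) {
        header('HTTP/1.1 403 Forbidden');
        exit;
    }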
You need to add a security layer to your MVC in order to define who can access specific resources of your API.
The simplest way is to require a key parameter in the URL that must match a key predefined on the server side. Be aware that although this will stop random users from updating your data, it might not be suitable depending on the level of security you want for your application.
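For example (the key itself would live in server-side configuration, never in client code shipped to the user):

    // Hedged sketch of the key-parameter check described above.
    $expectedKey = $config['api_key'];   // assumption: loaded from server-side config
    if (!isset($_GET['key']) || $_GET['key'] !== $expectedKey) {
        header('HTTP/1.1 403 Forbidden');
        exit;
    }
    // ...proceed with the requested action.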
I am working on a system which displays the live status/stages of the system creation.
Example: if I fill in a hosting form, then my form should display the status of the system, like domain created, files hosted, etc., in a progress bar. I want to achieve this without using a database.
Note: all the operations are performed on one system, while my hosting form is on a different system.
Hurdles: multiple forms can be filled in at the same time.
What I have tried:
Writing the steps to a database and reading them from there.
Doing a cURL POST to a specific function, but in that case I still have to use a DB.
I am looking for a way that requires no DB interaction, where I can see the status dynamically after filling in the form.
There is one solution for this:
Send and receive information using AJAX to and from a server which has the software installed and exposes an API.
I think this is what you want to do.
Feel free to correct my understanding of the issue, but here's how I see this at the moment.
You have a web site with multi-stage forms. So the user fills in the first one, sends it, and gets the next one to fill in.
You also have a web server, probably running PHP, that handles user interaction. So whenever the user submits a form, your server application processes it and gives the user the next one.
Furthermore, there are multiple external servers and services that your PHP application gives orders to based on the information given by the user.
You want to show progress information from those external services as things proceed.
Finally, you don't want to use a heavyweight database solution if a lighter one exists.
If I have gotten the facts roughly right so far, there may be a suitable solution to help you out.
To begin with, it's worth mentioning that PHP has its own session mechanism. Its data storage defaults to flat files, which may or may not be suitable for your use, yet it requires almost no configuration or setup and offers persistent storage, so it's by far the easiest option, in my opinion.
Note that if the amount of information to be stored is very small, you can bypass server-side storage altogether and stick to cookies: read the cookie on form submit, update it during the PHP processing, and send the updated cookie back as part of the response. You can encrypt the data to make it harder for the user to alter.
Lastly, there's the option of a cache. There are multiple technologies for this in PHP, for instance XCache and APC. These store information in RAM, which obviously has its downside, since the data can basically vanish at any given time - you can control this, though.
No matter the choice of data storage, the general idea is as follows:
When the user first interacts with your service, create a session identifier and an appropriate cookie to identify the user later on.
When the user has filled in the first form and sent it, read the information and store it either in the cache or in the cookie. When storing and reading information to and from the cache, prefix or namespace it with the user's session identifier; this way there can be multiple users using the service at any given time. When done, send the second form to be filled in.
When the user eventually sends the second form, read from the cache or the cookie the information given in the first one. Should the information be missing, there has been an error in the filling process (or the cache has been invalidated after too long a period, or the cookie has expired - you will want to take these things into account, too).
As long as things are going nicely, build up the information gathered from the forms. Whenever you have enough information to do so, make a request to the external service to really make things happen.
Now, lastly: you can make periodic AJAX requests from the client. That way you get not only the form submissions but also occasional "how is the process going?" queries. Whenever you receive such a request from the browser, you can identify the user by the session identifier and make a call from your PHP application to your external service, asking for the current status. You then simply forward that information to the browser that has been waiting for the answer all this time.
Note that you may have to store service-specific information in your cache to do this.
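As a rough sketch of that polling endpoint using APC as the short-term store (the key prefix and the shape of the status array are assumptions):

    <?php
    // status.php - hedged sketch: the browser polls this via AJAX and the
    // current progress is read from APC, keyed by the PHP session id.
    session_start();
    $status = apc_fetch('progress_' . session_id(), $found);

    header('Content-Type: application/json');
    echo json_encode($found ? $status : array('stage' => 'unknown'));

    // Elsewhere, after each external call completes, the application would
    // update the same entry, for example:
    //   apc_store('progress_' . session_id(), array('stage' => 'files hosted'), 3600);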
This setup, however, effectively gives you the ability to control data flow in your PHP application without revealing the services behind it. It's also lightweight enough to develop as it requires no additional external software for short term data storage.
I have a theoretical question about how to approach a current project. It is a fairly simple matching quiz using JS + PHP. I am simply taking care of business logic on the server (answer checking, score updating) such as to roughly follow MVC conventions.
My current setup:
HTML + JS page to allow the user to drag and drop answers onto questions. On a successful drop, the question + answer combo is sent to the following:
A server-side PHP page to check the answer's correctness based on an XML file. I return a few pieces of data in some XML to the client, such as true/false and the number of attempts at a certain question. In addition, if the answer is correct, I increment a session variable on the server to keep track of the user's score.
My question revolves around best practices for setting the above mentioned session variable for tracking the score. I understand that a more persistent setup is most likely preferable, in case of computer shut-off, accidental browser closing, etc...but strictly based on this setup -
Is this a secure method for storing a score for a final insertion into the database?
I eventually will have to pull the score down from the server at the end of the game (or even mid game, for that matter), as well. Should I create a simple 'getter' PHP page to pull the score down, and just access the session variable and send it to the client?
Currently, the user actually has access to the server-side PHP page because it resides in the same folder as the actual quiz. This is moooost likely a no-no - but what is the common practice for hiding this server-only file from the user's prying eyes (without having to use authentication)?
Is this a secure method for storing a score for a final insertion into the database?
It is secure. But I don't see why you wouldn't update the scores in the database as they change; that way they would be persistent.
I eventually will have to pull the score down from the server at the end of the game (or even mid game, for that matter), as well. Should I create a simple 'getter' PHP page to pull the score down, and just access the session variable and send it to the client?
Sounds like a plan.
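Such a getter can stay very small; something along these lines, assuming the score lives in $_SESSION['score'] as in your setup:

    <?php
    // get_score.php - hedged sketch of a getter the quiz can call via AJAX.
    session_start();
    $score = isset($_SESSION['score']) ? (int) $_SESSION['score'] : 0;
    header('Content-Type: application/json');
    echo json_encode(array('score' => $score));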
Currently, the user actually has access to the server-side PHP page because it resides in the same folder as the actual quiz. This is moooost likely a no-no - but what is the common practice for hiding this server-only file from the user's prying eyes (without having to use authentication)?
As long as the files are .php files and are parsed by the webserver, the user can only make requests to them and that's it (if I understood the question correctly, that is).
Check this out - I hope it will help you:
http://www.linuxforu.com/2009/01/server-side-sessions/
[I hope that this question is not too broad; I think the subject is very interesting, but I encourage you to tell me if it's off-policy.]
My scenario is this:
I have a LAMP website that also stores sensitive data and documents
Only registered users are allowed to operate on the site, and only on certain data and documents. Users are stored in $_SESSION variables
Most of the pages implement a sort of rudimental permission control, but some important DB operations are called via AJAX
AJAX security is implemented very poorly, as anyone that is that smart can tamper with the request sending whatever id they like and delete records with brutal simplicity
Asking for a complete book on security is obviously a bit too much (and I'm already reading and trying a lot on the subject); let's say that my main concern is whether AJAX pages should be treated with special regard, as I need to secure the whole application to prevent hacks and other problems.
let's say that my main concern is whether AJAX pages should be treated with special regard
Not really. They should be treated almost exactly the same as any other request. All HTTP requests come from outside your system and are under the control of the client (so can consist of, more or less, anything the user can imagine).
You might be returning JSON, you might be returning a complete HTML document, you might be returning XML — but the format doesn't matter, the data does.
If the request is for sensitive data, then you need (on the server) to authenticate the user and then make sure they are authorised to view / edit that data.
The only difference is how you present a "You are not authorised" message. You can't simply return an HTML document with a login form when you expect the browser to load data into XHR. The response needs to be appropriately formatted and the JavaScript needs to be able to handle it.
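In practice that just means the AJAX endpoint answers with a status code and a machine-readable body instead of a login page. A minimal sketch (the session key and the hypothetical userMayDelete() check are assumptions):

    <?php
    // Hedged sketch of an AJAX endpoint that authenticates and authorises
    // on the server before touching any data, and fails in a JSON-friendly way.
    session_start();
    header('Content-Type: application/json');

    if (empty($_SESSION['user_id'])) {
        header('HTTP/1.1 401 Unauthorized');
        echo json_encode(array('error' => 'not logged in'));
        exit;
    }
    if (!userMayDelete($_SESSION['user_id'], (int) $_POST['id'])) { // hypothetical check
        header('HTTP/1.1 403 Forbidden');
        echo json_encode(array('error' => 'not authorised'));
        exit;
    }
    // ...perform the operation and return a success payload.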
I have a LAMP website that also stores sensitive data and documents
You should store as little sensitive data as possible, especially when you are not sure how to keep it secure/private. Use OpenID or something similar for your authentication, for example. I really like LightOpenID for its simplicity; I created a little example project/library to show LightOpenID in use, and it simplifies using OpenID by using openID-selector. When you use OpenID you also rely on established OpenID providers, and passwords are not transmitted over the wire in plain text but protected by HTTPS/SSL.
Only registered users are allowed to operate on the site, and only on
certain data and documents. Users are stored in $_SESSION variables
Yup that's what sessions are for.
Most of the pages implement a sort of rudimental permission control,
but some important DB operations are called via AJAX
You should read up on the OWASP Top 10, at least. (Don't stop at 10.)
AJAX security is implemented very poorly, as anyone that is that smart
can tamper with the request sending whatever id they like and delete
records with brutal simplicity
See the previous section, and read up on at least the OWASP Top 10. Something a lot of people overlook, for example, is CSRF.