Is it secure to send a PHP value in a link? - php

I have a dynamic page that should take its data from a db. So the approach I thought of was to create the dynamic page with this PHP code at the top:
<?php $pid = $_GET["pid"]; ?>
Then later in the file it connects to the database and shows the correct content according to the page ID ($pid). So on the home page, I want to add the links to display the correct pages. For example, the data for the "Advertise" page is saved in the database in the row where the pid is 100. So I added the link to the "Advertise" page on the homepage like this:
<li><a href="page.php?pid=100">Advertise</a></li>
So my question is, anyone can see the value that's sent in the link and play around by changing the pid. Is there an easy way to mask this value, or a safer method to send the value to page.php?

The general concept you're looking for is Access Control. You have a resource (in this case, a page and its content), and you want to control who can access it (users, groups, etc), and probably how they can access it as well (for example, read-only, read-and-write, write-but-only-on-the-first-Monday-of-the-month, etc).
Defining the problem
The first thing you need to decide is which resources you need access control for, and which you don't. It sounds to me like some of these pages are supposed to be "public access" (thus they are listed on some kind of index page), while others are supposed to be restricted in some way.
Secondly, you need to come up with an access policy - this can be informally described for a small project, but larger projects usually have some structured system for defining this policy. For each resource, your policy should answer questions like:
Do you have some kind of user account system, and you only want account holders (or certain types of account holders) to access it? Or, are you going to send links to email addresses, and want to limit access to just those people who have the link?
What kind of access should each user have? Read-only? Should they be able to change the content as well (if your system supports that)?
Are there any other types of restrictions on a user's access? Group membership? Do they need to pay before they get access? Are they only allowed access at specific times?
Implementing your policy
Once you've answered these questions, you can start to think about implementation. As it stands, I think you are mixing up access control with identification. Your pid identifies a page (page 100, for example), but it doesn't do anything to limit access. If your pages are identified with a predictable numbering scheme, anyone can easily modify the number in the request (this is true for both GET requests, such as when you type a URL into an address bar, and POST requests, such as when you submit a form).
To securely control access there needs to be a key, usually a string that is very difficult to guess, which is required before access is granted. In very simple systems, it is perfectly fine for this key to be directly inserted in the URL, provided you can still keep the key secret from unauthorized users. This is exactly how Google Drive's "get a link to share" feature works. More complex systems will use either a server-side session or an API key to control access - but in the end, it's still a secret, difficult-to-guess string that the client (user or user's browser) sends to the server along with their request for the resource.
You can think of identification like your street address, which uniquely identifies your house but is not, and is not meant to be, secret. Access control is the key to your house. Only you and the people you've given a key to can actually get inside your house. If your lock is high quality, it will be difficult to pick.
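To make the idea concrete, here is a minimal sketch of a capability-style check in PHP. The pages table, its access_key column, and the connection details are assumptions for illustration, not something from your question:
<?php
// page.php?pid=100&key=...  -- serve the page only if the secret key matches
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');

$pid = isset($_GET['pid']) ? (int) $_GET['pid'] : 0;
$key = isset($_GET['key']) ? (string) $_GET['key'] : '';

$stmt = $pdo->prepare('SELECT title, body, access_key FROM pages WHERE pid = ?');
$stmt->execute([$pid]);
$page = $stmt->fetch(PDO::FETCH_ASSOC);

// hash_equals() compares the secret in constant time (PHP 5.6+)
if (!$page || !hash_equals($page['access_key'], $key)) {
    http_response_code(403);
    exit('Not authorized');
}

echo $page['body'];
Note that the pid still only identifies the page; the key is what actually grants access.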
Bringing it together
Writing code is easy, designing software is hard. Before you can determine the solution best for you, you need to think ahead about the ramifications of what you decide. For example, do you anticipate needing to "change the keys" to these pages in the future? If so, you'll have to give your authorized users (the ones that are still supposed to have access) the new key when that happens. A user-account system decouples page access control from page identification, so you can remove one user's access without affecting everyone else.
On the other hand, you also need to think about the nature of your audience. Maybe your users don't want to have to make accounts? This is something that is going to be very specific to your audience.
I get the sense that you're still fairly new to web development, and that you're learning on your own. The hardest part of learning on one's own is "learning what to learn" - Stack Overflow is too specific, and textbooks are too general. So, I'm going to leave you with a short glossary of concepts that seem most relevant to your current problem:
Access control. This is the name of the general problem that you're trying to solve with this question.
Secrecy vs obscurity. When it comes to security, secrecy == good, obscurity == bad.
Web content management system. You've probably heard of Wordpress, but there are tons of others. I'm not sure what your system is supposed to do, but a content management system might solve these problems for you.
Reinventing the wheel. Good in the classroom, bad in the real world.
How does HTTP work. Short but to the point. A lot of questions I see on SO stem from a fundamental misunderstanding of how websites actually work. A website isn't so much a single piece of software, as a conversation between two players - the client (e.g. the user and their browser), and the server. The client can only say something to the server via a request, and the server can only say something to the client via a response. Usually, this conversation consists of the client asking for some resource (an HTML web page, a Javascript file, etc), to which the server responds. The server can either say "here you go, I got it for you", or respond with some kind of error ("I can't find it", "you're not allowed to see that", "I'm too busy right now", "I'm not working properly right now", etc).
PHP The Right Way. Something I wish I had found when I first started learning web development and PHP, not seven years later ;-)

It is always safer to use $_POST when you can, but if you have to put something in the query string, it is safer to use a hash or GUID rather than something that is so obviously an auto-incremented value. It makes it harder to guess what the IDs would be. There are other ways values can be passed between pages ($_SESSIONs, cookies, etc), but it is really about what you want to achieve.

Sending it to PHP is not an issue; that part is fine.
What PHP does with it afterwards... that's where you secure it.
First thing I'd do is make sure it's an integer.
// Note: $_GET values are always strings, so is_int() would never match here; ctype_digit() checks that the string contains only digits.
$pid = (isset($_GET['pid']) && ctype_digit($_GET['pid'])) ? (int) $_GET['pid'] : 1; // 1 is the default pid, change this to whatever you want.
Now that you know you're dealing with an integer, use $pid after that and you should be good to go.
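For example, here is a minimal sketch of then using the validated $pid in a parameterized query; the PDO connection and the pages table are assumptions, not something from the question:
<?php
$pdo  = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');
$stmt = $pdo->prepare('SELECT title, body FROM pages WHERE pid = ?'); // bound parameter, never interpolated into the SQL string
$stmt->execute([$pid]);
$page = $stmt->fetch(PDO::FETCH_ASSOC); // false if no row matches that pid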

Related

Share login between PHP and ASP Classic (VBScript)

I'm trying to update/maintain an older web site that was initially written in Classic ASP/VBScript, and later had PHP pages added. I'd like to set it up so that PHP handles the login, but then that logged in state can be shared between PHP and ASP/VBScript. Note that the pages and languages are fairly intermingled -- somebody spending time on the site might come across several different pages in each language, in no particular order.
(Eventually I expect it to be completely rewritten in PHP, but I have to eat this elephant one bite at a time; and for now I'm simply trying to improve security.)
Let's assume I've successfully logged in and validated the user in PHP using something like phpPass. How do I tell the ASP/VBScript page they just pulled up that they're logged in? How can I best do this securely?
(And thank you for any help!)
You cannot share sessions across Classic ASP/VBScript and PHP as they create/use them differently. My solution isn't that secure but would work:
Log the user in via 1 of the languages (say PHP)
Pass the initial session value in a URL, have ASP look at the query string, and then create a corresponding session on the ASP side.
That would deal with it...although not that secure!
The best answer I've been able to find for this issue was the following. Specific to sharing a login between Classic ASP and ASP.net, but the methodology is exactly the same:
As you probably already know, classic asp and asp.net cannot share the same session state, so you do need to have a mechanism to log from one into the other.
What I would do is: when someone logs in, create a unique GUID that you save in the database for that user. When you jump from one site to the other, pass that GUID into the query string. When you try to auto-log them into the other site, look up that GUID and see if it's attached to anyone. If it is, log them in.
This way you aren't passing anything that a user could guess or decrypt.
Additionally, it's smart to add a timestamp to the database, and the GUID should only be valid for a second or two. Log in on the PHP end, then flip over to ASP and check the GUID.
Not totally secure, but appears to be about as secure as I'm going to find.
source: https://stackoverflow.com/a/921575/339440
Edit to add: per comments, also record the user's IP address to the database and compare it on the ASP side. No teleporting allowed!
CORRECTION: In this case "GUID" is a misnomer. What you need here is a random string of characters, not a GUID. A GUID is a semi-random construct with one of a handful of specific formats, and is not applicable here.
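For what it's worth, a minimal way to produce such a random string in PHP (random_bytes() is PHP 7+; this is an illustration, not part of the quoted answer):
<?php
$token = bin2hex(random_bytes(32)); // 64 hex characters, cryptographically random and not guessable
// Store $token in the shared table along with the user id, a timestamp, and the client IP address.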

PHP CodeIgniter Authorization of Features via URI

I am designing a web application that is heavily reliant on database tables/records and have already designed the login system. As it stands, the login system creates an element in the session to verify that the user is logged on. This works fine.
However, as I've been coding my application, I have found a constant need to check that my users are authorized to perform certain actions.
For example--I have a feature which allows users to edit their profile at www.mywebsite/account/edit/1 -> 1 being the Id. In terms of future scalability, is it practical to perform a database query to check that the current logged in user has access to edit their information after arriving at that URL?
My concern, of course, is that someone would just put in a random Id to edit another account.
I have also thought about creating a form between every transition to post this data, yet that comes with a load of limitations itself.
I was wondering if anyone has hit the same problem and found an overall solution?
This is a concern that everyone addresses at some point or another. The way I see it, you're really asking a couple of questions:
How do I make sure a user is authorized to access something? and
Is checking the database every single time really the best way to do it?
With respect to the first question: the approach you're taking is probably the only realistic one. It boils down to this: whenever a user needs to do something, your application needs to check something to see if they're allowed to do it. What is that something? It's called an Access Control List (ACL).
You could hard code the ACL in your application, but that's a really bad idea. So that means you have to store the details of an ACL somewhere. And when we start talking about storing something in our applications, the obvious answer is (almost) always in the database.
Which leads to the second question... a quick check of the database to see if a user has access is generally not going to be a huge bottleneck, provided your database design is sensible. You're going to be doing something like SELECT key FROM acl WHERE key='something' AND user_id='current user ID'; and checking to make sure you get at least one result. It's going to add a little overhead to your application, but what's the alternative? Some sort of hard coded ACL? Loading the full ACL for your application and searching it for the key and user ID in your PHP code?
If you're really concerned about the overhead involved with your ACL stored in MySQL, you could look at some of the other databases like MongoDB or CouchDB which should be faster for simple key/value pair lookups (note that I've looked at both MongoDB & CouchDB, but not used either in applications), but I think you'll find that, for most applications, doing it in MySQL should work just fine.
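As a rough illustration, here is a minimal sketch of that lookup in PHP; the acl table, its columns, and the helper name are assumptions (CodeIgniter's query builder would work just as well):
<?php
// Returns true if the logged-in user holds the given permission key.
function userCan(PDO $pdo, $userId, $key)
{
    $stmt = $pdo->prepare('SELECT 1 FROM acl WHERE user_id = ? AND `key` = ? LIMIT 1');
    $stmt->execute([$userId, $key]);
    return (bool) $stmt->fetchColumn();
}

// e.g. before allowing /account/edit/1:
// if (!userCan($pdo, $_SESSION['user_id'], 'account.edit.1')) { show_error('Not authorized', 403); }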

What are the best security guidelines in this AJAX scenario, with special regards to authentication?

[I hope that this question is not too broad; I think the subject is very interesting, but I encourage you to tell me if it's off-policy.]
My scenario is this:
I have a LAMP website that also stores sensitive data and documents
Only registered users are allowed to operate on the site, and only on certain data and documents. Users are stored in $_SESSION variables
Most of the pages implement a sort of rudimental permission control, but some important DB operations are called via AJAX
AJAX security is implemented very poorly: anyone reasonably savvy can tamper with the request, send whatever ID they like, and delete records with brutal simplicity
Asking for a complete book on security is obviously a bit too much (and I'm already reading and experimenting a lot on the subject), so let's say that my main concern is whether AJAX pages should be treated with special regard, as I need to secure the whole application against hacks and other problems.
let's say that my main concern is whether AJAX pages should be treated with special regard
Not really. They should be treated almost exactly the same as any other request. All HTTP requests come from outside your system and are under the control of the client (so can consist of, more or less, anything the user can imagine).
You might be returning JSON, you might be returning a complete HTML document, you might be returning XML — but the format doesn't matter, the data does.
If the request is for sensitive data, then you need (on the server) to authenticate the user and then make sure they are authorised to view / edit that data.
The only difference is how you present a "You are not authorised" message. You can't simply return an HTML document with a login form when you expect the browser to load data into XHR. The response needs to be appropriately formatted and the JavaScript needs to be able to handle it.
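For instance, here is a minimal sketch of how an AJAX endpoint might refuse a request that isn't authenticated; the $_SESSION key name is an assumption:
<?php
session_start();

if (empty($_SESSION['user_id'])) {
    http_response_code(401);                      // not logged in
    header('Content-Type: application/json');
    echo json_encode(['error' => 'Not authenticated']);
    exit;
}

// ...then check that this user is actually allowed to act on the requested record,
// and only after that perform the operation and return the JSON response...
The JavaScript on the client can then inspect the status code and show a login prompt instead of trying to parse the usual payload.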
I have a LAMP website that also stores sensitive data and documents
You should store as little sensitive data as possible, especially when you are not sure how to keep that information secure/private. Use OpenID or something similar for your authentication, for example. I really like LightOpenID for its simplicity. I created a little example project/library to show LightOpenID in use; it simplifies using OpenID by using openID-selector. When you use OpenID, you also rely on secure OpenID providers, and the passwords are not transmitted over the wire in plain text but are protected by HTTPS/SSL.
Only registered users are allowed to operate on the site, and only on
certain data and documents. Users are stored in $_SESSION variables
Yup that's what sessions are for.
Most of the pages implement a sort of rudimental permission control,
but some important DB operations are called via AJAX
You should read up on the OWASP Top 10, at least. (Don't stop at 10.)
AJAX security is implemented very poorly: anyone reasonably savvy can tamper with the request, send whatever ID they like, and delete records with brutal simplicity
See the previous section. Read up on the OWASP Top 10, at least. Something a lot of people overlook, for example, is CSRF.
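As a quick illustration of the CSRF point, here is a minimal sketch of a per-session token check in PHP (simplified; most frameworks provide this out of the box, and random_bytes() needs PHP 7+):
<?php
session_start();

// When rendering the page that fires the AJAX call:
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
// Embed $_SESSION['csrf_token'] in the page and send it back with every state-changing request.

// When handling the AJAX request:
$sent = isset($_POST['csrf_token']) ? $_POST['csrf_token'] : '';
if (!hash_equals($_SESSION['csrf_token'], $sent)) {
    http_response_code(403);
    exit('Invalid CSRF token');
}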

Ways other than an iframe to pass parameter from php to asp.net

I have a PHP application. Once a user has logged in to that application, I am trying to pass a parameter as a query string through an iframe to the ASP.NET page.
Is there any other way to implement this, other than using an iframe?
Instead of having the PHP application submit data to your ASP application, it would be better if they could natively and securely share some of the data.
How?
Well, your goal is having one script tell the other that the user has been logged in, right? In PHP, this is usually done by putting something in the $_SESSION. Your ASP application can't read $_SESSION, though. You'll need to use something else.
When the user logs in, create a unique value. Maybe the result of hash_hmac over some interesting data? Whatever it is, it should be unique every time it's created and unguessable. Don't throw in things like the user's IP address or the current time.
Save the unique value to a database table that both applications can read. Also store other information that will help identify the user, such as her identifier (user_id or whatever you have on hand).
So, the PHP code that logs the user in has created this unique value and stuck it in a shared database table. Now, the PHP application should forward the user to your ASP application. Include the unique value in the request.
When the ASP application receives the request, it will look for the unique value. If it's found, it can look in the shared table. If the value is found in the table, it can then take whatever measures it needs to in order to mark the user as logged in.
Once the ASP application has logged the user in, then it should delete the unique value from the shared table. The user can be forwarded to wherever she was going in the first place.
By making the key usable only one time, and only after a successful login in the PHP application, you'll reduce the possibilities of abuse by malicious or curious users. All of the important information will be hidden in the shared database table.
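Here is a minimal sketch of the PHP (issuing) side of that flow; the sso_tokens table, the existing $pdo connection, the session key, and the ASP.NET URL are all assumptions for illustration (random_bytes() needs PHP 7+):
<?php
// After a successful PHP login:
$token = bin2hex(random_bytes(32)); // unique, unguessable, single-use value

$stmt = $pdo->prepare('INSERT INTO sso_tokens (token, user_id, created_at) VALUES (?, ?, NOW())');
$stmt->execute([$token, $_SESSION['user_id']]);

// Hand the user over to the ASP.NET application with the token; that side looks the
// token up, logs the user in, and deletes the row so the value cannot be reused.
header('Location: https://www.example.com/aspnet/login.aspx?token=' . urlencode($token));
exit;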
Be warned that this is an overly simplistic implementation of "single sign on" and is full of caveats and edge cases. While it might work for you, it might not be the best solution. Given your question history, it looks like you've been struggling with similar issues for quite some time. You might want to give some thought into using a slightly more "industry standard" SSO mechanism. SAML is the 800 pound gorilla of SSO standards. I normally wouldn't wish it on my worst enemy, but maybe it's the thing you're really looking for here.
Also, don't use iframes, they're cookie eating disasters in some browsers.

Top techniques to avoid 'data scraping' from a website database

I am setting up a site using PHP and MySQL that is essentially just a web front-end to an existing database. Understandably, my client is very keen to prevent anyone from being able to make a copy of the data in the database, yet at the same time wants everything publicly available and even a "view all" link to display every record in the db.
Whilst I have put everything in place to prevent attacks such as SQL injection, there is nothing to prevent anyone from viewing all the records as HTML and running some sort of script to parse this data back into another database. Even if I were to remove the "view all" link, someone could still, in theory, use an automated process to go through each record one by one and compile these into a new database, essentially pinching all the information.
Does anyone have any good tactics for preventing, or even just deterring, this that they could share?
While there's nothing to stop a determined person from scraping publicly available content, you can do a few basic things to mitigate the client's concerns:
Rate limit by user account, IP address, user agent, etc... - this means you restrict the amount of data a particular user group can download in a certain period of time. If you detect a large amount of data being transferred, you shut down the account or IP address (a rough sketch of this follows the list below).
Require JavaScript - to ensure the client has some semblance of an interactive browser, rather than a barebones spider...
RIA - make your data available through a Rich Internet Application interface. JavaScript-based grids include ExtJs, YUI, Dojo, etc. Richer environments include Flash and Silverlight as 1kevgriff mentions.
Encode data as images. This is pretty intrusive to regular users, but you could encode some of your data tables or values as images instead of text, which would defeat most text parsers, but isn't foolproof of course.
robots.txt - to deny obvious web spiders, known robot user agents.
User-agent: *
Disallow: /
Use robot metatags. This would stop conforming spiders. This will prevent Google from indexing you for instance:
<meta name="robots" content="noindex,follow,noarchive">
There are different levels of deterrence and the first option is probably the least intrusive.
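Here is a minimal sketch of the rate-limiting idea from the first point, using a per-IP counter; the page_hits table, the existing $pdo connection, and the 60-requests-per-minute threshold are arbitrary assumptions:
<?php
// Count this IP's requests in the last minute and block it if it looks automated.
$ip = $_SERVER['REMOTE_ADDR'];

$stmt = $pdo->prepare('SELECT COUNT(*) FROM page_hits WHERE ip = ? AND hit_at > NOW() - INTERVAL 1 MINUTE');
$stmt->execute([$ip]);

if ($stmt->fetchColumn() > 60) {
    http_response_code(429); // Too Many Requests
    exit('Too many requests');
}

$pdo->prepare('INSERT INTO page_hits (ip, hit_at) VALUES (?, NOW())')->execute([$ip]);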
If the data is published, it's visible and accessible to everyone on the Internet. This includes the people you want to see it and the people you don't.
You can't have it both ways. You can make it so that the data is only visible with an account, and people will make accounts to slurp the data. You can make it so that the data is only visible from approved IP addresses, and people will go through the steps to acquire approval before slurping it.
Yes, you can make it hard to get, but if you want it to be convenient for typical users you need to make it convenient for malicious ones as well.
There are a few ways you can do it, although none are ideal.
Present the data as an image instead of HTML. This requires extra processing on the server side, but wouldn't be hard with the graphics libs in PHP. Alternatively, you could do this just for requests over a certain size (i.e. all).
Load a page shell, then retrieve the data through an AJAX call and insert it into the DOM. Use sessions to set a hash that must be passed back with the AJAX call as verification. The hash would only be valid for a certain length of time (i.e. 10 seconds). This is really just adding an extra step someone would have to jump through to get the data, but would prevent simple page scraping.
Try using Flash or Silverlight for your frontend.
While this can't stop someone if they're really determined, it would be more difficult. If you're loading your data through services, you can always use a secure connection to prevent middleman scraping.
Force a reCAPTCHA every 10 page loads for each unique IP.
There is really nothing you can do. You can try to look for an automated process going through your site, but they will win in the end.
Rule of thumb: If you want to keep something to yourself, keep it off the Internet.
Take your hands away from the keyboard and ask your client why he wants the data to be visible but not able to be copied.
He's asking for two incongruent things, and maybe having a discussion about his reasoning will yield some fruit.
It may be that he really doesn't want it publicly accessible and you need to add authentication / authorization. Or he may decide that there is value in actually opening up an API. But you won't know until you ask.
I don't know why you'd deter this. The customer's offering the data.
Presumably they create value in some unique way that's not trivially reflected in the data.
Anyway.
You can check the browser, screen resolution and IP address to see if it's likely some kind of automated scraper.
Most things like cURL and wget -- unless carefully configured -- are pretty obviously not browsers.
Using something like Adobe Flex - a Flash application front end - would fix this.
Other than that, if you want it to be easy for users to access, it's easy for users to copy.
There's no easy solution for this. If the data is available publicly, then it can be scraped. The only thing you can do is make life more difficult for the scraper by making each entry slightly unique by adding/changing the HTML without affecting the layout. This would possibly make it more difficult for someone to harvest the data using regular expressions but it's still not a real solution and I would say that anyone determined enough would find a way to deal with it.
I would suggest telling your client that this is an unachievable task and getting on with the important parts of your work.
What about creating something akin to a bulletin board's troll protection... If a scrape is detected (perhaps a certain number of accesses per minute from one IP, or a directed crawl that looks like a sitemap crawl), you can then start to present garbage data, like changing a couple of digits of a phone number or adding silly names to name fields.
Turn this off for Google IPs!
Normally, to screen-scrape a decent amount of data, one has to make hundreds or thousands (or more) of requests to your server. I suggest you read this related Stack Overflow question:
How do you stop scripters from slamming your website hundreds of times a second?
Use the fact that scrapers tend to load many pages in quick succession to detect scraping behaviour. Display a CAPTCHA for every n page loads over x seconds, and/or include an exponentially growing delay for each page load that becomes quite long when, say, tens of pages are being loaded each minute.
This way normal users will probably never see your CAPTCHA but scrapers will quickly hit the limit that forces them to solve CAPTCHAs.
My suggestion would be that this is illegal anyway, so at least you have legal recourse if someone does scrape the website. So maybe the best thing to do would just be to include a link to the original site and let people scrape away. The more they scrape, the more of your links will appear around the Internet, building up your PageRank more and more.
People who scrape usually aren't opposed to including a link to the original site since it builds a sort of rapport with the original author.
So my advice is to ask your boss whether this could actually be the best thing possible for the website's health.
