I’ve read that you should not use GET requests if you are modifying the database. How would you record analytics about your website then?
For example, I want to record page views whenever someone visits a page. I would need to update views = views + 1 in the database. Is this OK, despite using a GET request, or is there another technique? Surely, not every request should be a POST request.
The general advice about how to use POST vs. GET goes back to RFC 1945 (HTTP/1.0, published in 1996):
The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI.
POST is designed to allow a uniform method to cover the following functions:
Annotation of existing resources;
Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
Providing a block of data, such as the result of submitting a form [3], to a data-handling process;
Extending a database through an append operation.
These guidelines remain in effect to this day, but they cover the primary purpose of the user's page request.
The act of incrementing a view counter is incidental to the primary purpose of the request, which is to view the page content. Indeed, the user is likely unaware that this database update is occurring.
(Of course, you must expect that you will receive duplicate requests as users move through browser history, caches are populated, or spiders crawl your pages. This wouldn't be the case if a POST request was made.)
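To make the idea concrete, here is a minimal sketch of such an incidental counter update, assuming an existing PDO connection in `$pdo` and a hypothetical `pages` table with `slug` and `views` columns (none of these names come from the question):

```php
<?php
// Minimal sketch: increment a page-view counter as a side effect of serving a GET request.
// Assumes an existing PDO connection in $pdo and a hypothetical `pages` table
// with `slug` and `views` columns.

function record_page_view(PDO $pdo, string $slug): void
{
    // A single atomic UPDATE avoids a read-modify-write race between concurrent requests.
    $stmt = $pdo->prepare('UPDATE pages SET views = views + 1 WHERE slug = :slug');
    $stmt->execute(['slug' => $slug]);
}

// Called while rendering the page the user asked for; the update is incidental
// to the response, so the request stays an ordinary GET.
record_page_view($pdo, 'about-us');
```

Since the same URL will also be hit by reloads, caches and crawlers, you may want to filter those out (for example by User-Agent, or by de-duplicating per session) before counting.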
It's OK.
When you handle a POST request, you wait for the POST parameters to arrive and build your database insert from the values the browser supplied.
On a GET request, any write is part of your own server-side logic, so the user never even knows it is happening.
And finally: sometimes you can do something that goes against the rules. Rules are good, but we are able not to follow them; that's what makes us human. If we strictly followed every rule, it would be cumbersome.
I came across a situation where someone wanted me to implement sorting, search, records-per-page and pagination through POST requests rather than GET.
I tried to tell him why POST is not feasible, for example:
The user will not be able to bookmark the page.
With POST we cannot maintain the paging parameters when a search returns more records than fit on one page.
The sort order will not be maintained when the user navigates to the next page by clicking a page number.
He then suggested keeping the search, sorting and paging values in cookies for that instance, clearing the cookies once the user moves to another page, or keeping them in the session.
Please help me decide whether this is the right way of doing things.
So, I don't want to jump into the middle of your company dispute, but I understand the situation you're in and realize that sometimes you need someone on your side.
1). First, POST is NOT for getting, so by definition he is wrong. If you are not creating anything, you simply do not POST.
2). Your point about not being able to bookmark the page for later access is completely valid.
3). No. No, no, no. Do not store that stuff in the session or cookies. While it won't harm anything, it's completely unnecessary. It isn't sensitive data, and technically speaking it could work. However, you would only need to do this if you had already broken the first point and used some verb other than GET.
If you are paginating, sorting, etc., it is because you have received data. You cannot receive information unless you first GET it, right?
First of all you should make him understand where to use GET and where to use POST.
I'll give the short version here; for detailed information you can ask Google.
GET: Usually used to submit a search request, or any request where the user wants to pull information from the server.
Advantages of GET:
1. The page can be bookmarked.
2. The page can be reloaded safely.
POST: Used for requests where data may be altered or added in the database, or for pages you don't want anyone to bookmark.
Advantages of POST:
Name-value pairs are not shown in the URL, which is a small plus for privacy.
An effectively unlimited number of name-value pairs can be passed.
Basically, as I mentioned, POST is used for state-changing actions such as creation, editing or deletion, and for pulling data we mainly use GET.
And what is the need to put the search params into cookies? As far as I can tell you are doing all the sorting and searching on the server side, so you will have to pass those values in the URL every time anyway (or in the POST body, if you follow the path suggested by your Einstein senior :) ), so there is no need to fill up cookie space.
I hope this helps and that he will understand.
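To illustrate, here is a minimal sketch of keeping that state in the query string, so every combination of search, sort and page is a plain, bookmarkable GET URL (parameter names and the /products path are made up for the example):

```php
<?php
// Minimal sketch: keep search, sorting and paging state in the query string
// so every result page is a bookmarkable GET URL. Parameter names are illustrative.

$search  = $_GET['q']    ?? '';
$sort    = $_GET['sort'] ?? 'name';
$page    = max(1, (int)($_GET['page'] ?? 1));
$perPage = max(1, min(100, (int)($_GET['per_page'] ?? 10)));

// ... run the query on the server side using these values ...

// Building the "next page" link just means re-emitting the same parameters,
// so the search term and sort order survive navigation automatically.
$nextUrl = '/products?' . http_build_query([
    'q'        => $search,
    'sort'     => $sort,
    'page'     => $page + 1,
    'per_page' => $perPage,
]);
```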
I am using Laravel 5.2.15.
There is a list of records on a web page, with an Edit and a Delete button for each record. I have two approaches for deleting a record:
Use jQuery and send an Ajax request to the server.
Place a form tag with a delete button in each row.
I have the following questions:
If I use approach 1, can it cause any issues when the site is viewed from Android or iPhone? I also have the option of doing server-side validation using the Request class.
In the case of approach 2, will it make the page heavy? I am using pagination, so 10 records will be displayed per page.
Please guide me on which approach I should go with, or suggest an alternative if both approaches are incorrect.
The questions you have don't really focus on the main reasons to choose one over the other. The approaches differ mostly in how the request is sent to the server and how the page is refreshed to show the results.
Using Ajax is a very common approach and relies on JavaScript, a technology that has been available in all browsers for a very long time. Compatibility will not be a problem, as most of the internet wouldn't function without it anyway (and you can even use your second approach as a fallback mechanism). The request you send is typically an HTTP DELETE request to a REST endpoint, so that the server knows to delete the record [1]. Upon receiving the success response from the server, the page is responsible for updating itself by removing the row corresponding to the just-deleted record, and possibly fetching new records to still have 10 rows on that page. No page refreshes required, but some JavaScript required.
Your second approach is kind of old school, in that the form you submit contains some kind of identifier so that the server knows what to do. This is a full page load and should be an HTTP POST request if you want to do it properly [2]. Following the Post/Redirect/Get idiom, the server then sends a redirect response so that the browser triggers yet another normal page load (a GET request) to show the user the updated list of records. You do not have to update the page yourself, at the cost of annoying page reloads (which aren't really expected anymore in this day and age).
My advice would be to go with the first approach. It is the modern way of doing things and allows for non-reloading pages. It does, however, require some additional work on the client side (in JavaScript) to update the page accordingly.
As a side note, CSRF must be taken care of in both instances really. Always include a CSRF token with every 'update' action you perform on the server.
[1] You have to program this yourself, of course :)
[2] HTML forms don't generally support anything other than GET and POST, although the HTTP specification allows for many more request methods.
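For what it's worth, a rough sketch of the first approach on the Laravel side; the route, controller and model names below are assumptions, not anything from the question:

```php
<?php
// app/Http/routes.php (Laravel 5.2) - hypothetical route for the delete endpoint.
Route::delete('records/{id}', 'RecordController@destroy');
```

```php
<?php
// app/Http/Controllers/RecordController.php - sketch of the destroy action.
class RecordController extends Controller
{
    public function destroy(\Illuminate\Http\Request $request, $id)
    {
        $record = \App\Record::findOrFail($id);
        $record->delete();

        if ($request->ajax()) {
            // The JavaScript on the page removes the row from the table itself.
            return response()->json(['deleted' => true]);
        }

        // Fallback for the form-based approach: Post/Redirect/Get.
        return redirect()->back();
    }
}
```

On the client you would issue a jQuery $.ajax call with type 'DELETE' and the CSRF token in the X-CSRF-TOKEN header, and remove the row in the success callback.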
It depends upon your requirements, but you should go with the first approach. If you use the second approach you will have to refresh the page, since you cannot handle the response; so if you delete 5 items, the page needs to be refreshed 5 times, and you cannot send more than one delete request at a time. With the first approach, since it's Ajax and JavaScript, you can display an appropriate message depending on the result, with no unnecessary page refresh. Plus, as you mentioned, you can do validation using the Request class, so you can handle bad or malicious requests. And CSRF won't be that much of a problem, since you can check whether the request is Ajax or not using Request::ajax(). So the first approach is better, mostly because of the lack of page refreshes.
Both approaches are fine ;)
But the second approach would be better than the first; using it you can prevent CSRF attacks too.
I would suggest you use method 1 with certain modifications:
Use a GET request to delete the record.
Send a CSRF token, and don't forget to encrypt the ID of the record.
Add your delete URL to the href.
Then, when you do the Ajax request, use the URL from the href. You could send some additional parameter like is_ajax=1, but Laravel already checks for the header that jQuery sets, so the Request::ajax() method will let you know whether the request was an Ajax request or a normal request.
Now all you need to do is send a different response for Ajax and normal requests, as sketched below.
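A rough sketch of what those steps could look like; the route, the Crypt-encrypted id and the parameter names are illustrative assumptions:

```php
<?php
// View side (sketch): the delete link carries an encrypted record id and a CSRF token.
// In a Blade view this would use {{ ... }} echoes instead.
$href = url('records/delete') . '?' . http_build_query([
    'id'     => Crypt::encrypt($record->id),
    '_token' => csrf_token(),
]);
echo '<a class="delete-record" href="' . $href . '">Delete</a>';
```

```php
<?php
// Controller side (sketch): verify the token, decrypt the id, then answer
// differently for Ajax and normal requests.
public function delete(\Illuminate\Http\Request $request)
{
    if (!hash_equals(csrf_token(), (string) $request->input('_token'))) {
        abort(403);
    }

    $id = \Crypt::decrypt($request->input('id'));
    \App\Record::findOrFail($id)->delete();

    return $request->ajax()
        ? response()->json(['status' => 'deleted'])
        : redirect()->back();
}
```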
HOPE THIS HELPS :D
Another drawback of your second approach which hasn't been mentioned is displaying validation errors, specifically from your edit and even your delete actions.
If you have a separate form for each row of data, showing validation errors would be a pain. But if you follow approach number 1, then just by keeping a reference to the row element that was submitted you can easily append an alert div whenever a validation error occurs.
As for the delete action, somebody else might have already deleted some shared data, so you might also want to tell the user that somebody already threw this record out.
I have a (hopefully) quick question regarding sessions. Whilst I have used sessions extensively, I have not used them in a situation where the values change depending on a user's actions.
After logging in to my application, a user can select a company area, which has many levels of pages and folders. All of these pages will need this 'company_id'. At the moment I send the company_id via GET, but as I get deeper into the application this is becoming increasingly hard to maintain, with various other data being stored in the URL.
Therefore, when a user selects their company, I could set their company_id in $_SESSION array. However, when a user changes company, I would then need to change $_SESSION['company_id'] to the new value.
Is this a good use of sessions? I could potentially clean up my urls by using session data rather than always using GET, but I am unsure if this is a recommended way of using sessions.
Thanks in advance
This is a bad fit for the HTTP design philosophy. All HTTP requests should be self-contained, RESTful: all information needed to get a specific page should be present in the request itself (URL, headers and body), not dependent on hidden state.
Super trivial example: you can't copy a URL to someplace or someone else and have them see the same page. The content of the page is dependent on session state, which has been laboriously set through the visit history of several previous pages. To return to this same page, you need to retrace the same steps, recreating some hidden server-side state to arrive at the same page.
This gets even more complex and messier if you take into account that a visitor may want to open pages requiring different states in two or more simultaneous tabs/windows.
All this isn't to say that it can't work, only that it's hideously complex and will break the usual expected behaviour of browsers, unless you really bend over backwards to somehow prevent that.
If the many levels of pages and folders are per-company, you can put the company_id in a specific include file, with that part of the site being dedicated to the given company.
However, if they're shared by multiple companies, and this is probably what you want, that approach is potentially misleading, or even dangerous depending on the user's actions, since the user may jump to a given page (via a link, say) and end up on a page with unexpected data linked to a company whose ID is provided by the session or cookie.
You could dynamically build the links on a page, based on IDs, to ensure consistency during the navigation from that page. Any direct "jump" to another part of the site will not carry the ID with it (and the page may offer to select a company).
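A small sketch of that link-building approach, where the company id travels explicitly with each generated link instead of hiding in the session (paths, parameter and function names are made up for the example):

```php
<?php
// Sketch: build navigation links that carry the company id explicitly, so each
// request stays self-contained. Paths, parameter and function names are illustrative.

$companyId = (int)($_GET['company_id'] ?? 0);

// Before rendering anything, verify the logged-in user may act for this company,
// e.g. against a memberships table (check not shown here).

function company_url(string $path, int $companyId, array $extra = []): string
{
    return $path . '?' . http_build_query(['company_id' => $companyId] + $extra);
}

// Every link generated from this page carries the id along, so the target page
// can be bookmarked, shared, or opened in a second tab without relying on $_SESSION.
echo '<a href="' . htmlspecialchars(company_url('/invoicing/listprices.php', $companyId)) . '">Price lists</a>';
```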
Depending on your web server, and if you have control over it, you could build the URL with the company ID as an element of the URL path rather than as a GET parameter,
e.g.
http://example.com/invoicing/company382/listprices.php
using a rewrite (web server configuration) to change the URL actually used to
http://example.com/invoicing/listprices.php?compid=company382
(a URL not visible to the user) that passes the company ID via the GET parameters.
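As a sketch of what the rewritten request looks like on the PHP side (the rewrite rule itself lives in the web server configuration; the Apache rule shown as a comment is only one illustrative possibility):

```php
<?php
// listprices.php - sketch of consuming the rewritten URL.
// A possible Apache rule (illustrative assumption) could be:
//   RewriteRule ^invoicing/(company\d+)/(.+)$ /invoicing/$2?compid=$1 [L,QSA]
// so /invoicing/company382/listprices.php arrives here as ?compid=company382.

if (!isset($_GET['compid']) || !preg_match('/^company(\d+)$/', $_GET['compid'], $m)) {
    http_response_code(404);
    exit('Unknown company');
}
$companyId = (int) $m[1];

// ... load and display the price list for $companyId ...
```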
I'm implementing a message system (private messaging, if you will) and I'd like to display a user's list of messages via a text link, so I don't need a button to open it. The message_id (a unique value in the database) would be passed through the URL (something like www.example.com/message/view/16). Assuming I check that the user ID in the session matches the user ID the message was sent to, is this OK? To make it safer I could append a random number, store it in the session, and then check for it when the message is viewed.
Should I forget this idea and just stick with a submit button to view the message?
A POST request would not provide any more safety than a GET request: any half-decent web debugging tool can forge POST requests. You should simply never trust user-input data. Always double-check authorizations for safety!
That said, GET request semantics match what you're trying to do here.
The HTTP standard says that a GET request should be repeatable without any non-trivial consequence. For instance, it's fine to view data with a GET request (and possibly do small things like incrementing a counter, since these are pretty trivial consequences). In fact, GET and HEAD are the two request methods that are considered "safe".
On the other hand, POST requests are expected to have non-trivial consequences, like sending a message or placing an order. Stuff that you don't want to perform twice accidentally. Most browsers these days also respect this by warning users when reloading a page would cause a POST request to be performed again.
Using GET values for viewing messages is the much better idea because, assuming a user stays logged in, it allows them to bookmark messages, etc.
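A minimal sketch of that ownership check, assuming an existing PDO connection in `$pdo`, the message id arriving via the URL, and a hypothetical `messages` table with a `recipient_id` column:

```php
<?php
// Sketch: view a message identified in the URL (e.g. /message/view/16) only if it
// belongs to the logged-in user. Table and column names are assumptions.

session_start();

$messageId = (int)($_GET['id'] ?? 0);   // or the last URL segment, depending on your routing
$userId    = $_SESSION['user_id'] ?? null;

if ($userId === null) {
    http_response_code(403);
    exit('Please log in.');
}

$stmt = $pdo->prepare('SELECT * FROM messages WHERE id = :id AND recipient_id = :uid');
$stmt->execute(['id' => $messageId, 'uid' => $userId]);
$message = $stmt->fetch(PDO::FETCH_ASSOC);

if ($message === false) {
    // Either the message doesn't exist or it isn't addressed to this user;
    // don't reveal which, just refuse.
    http_response_code(404);
    exit('Message not found.');
}

// ... render $message ...
```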
Just wondering if people think it is safe for a website to use an HTML link to allow users to mark their documents for deletion from their secure account page?
I have a website where users can create documents once they have registered and logged in. To delete a document, I include a link on their account page for each document to be marked for deletion, as follows:
http://www.examplewebsitename.com/delete_document.php?docid=5
The delete_document script makes sure the docid parameter is numeric, then checks whether this person actually created the document: it looks up the user ID of the document's creator and compares it with a session variable holding the user ID that was set when they logged in. If they were the creator, it marks the document for deletion; otherwise, if the currently logged-in person wasn't the creator, it doesn't mark the document for deletion and returns an error page.
Do you think this is a valid and safe way to mark documents for deletion, or should I be using a form and Post to do this more securely?
Three main concerns I can think of about using GET as a delete operation for your app.
Semantic reason: GET, according to http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html, should be an idempotent method, where
the side-effects of N > 0 identical requests is the same as for a single request.
More prone to CSRF: someone could link to http://www.examplewebsitename.com/delete_document.php?docid=5 and wrap it in a harmless-looking anchor:
<a href="http://www.examplewebsitename.com/delete_document.php?docid=5">Click here for free puppy!</a>
If by any chance the user is logged in and clicks that link on a site he trusts, it would inadvertently delete the document.
A browser add-on or plugin that crawls web pages and caches links might also accidentally crawl the link, open it, and again delete the document without your user knowing.
Generally I advise against using GET requests to manipulate data, because that's not what GET is designed for if you stick to the HTTP spec. If you want to go completely RESTful you should be using a DELETE request, but in most cases I use a confirmation page with a form that performs a POST request to delete the record.
Read Why should you delete using an HTTP POST or DELETE, rather than GET? for the reasoning behind this. It's been asked before in some other contexts.
The main reason is because GET is meant to be a safe method that is used for retrieval only:
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
User agents expect this method to have no side-effects:
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
This means GET should not cause any server-side state change.
Another reason, albeit a minor one, is that GET is easier to exploit than POST, as there are more ways to trigger a GET request than a POST request. But no matter which method you use, both are vulnerable to CSRF attacks.
So if you make sure you’re protected against CSRF, you could even use GET for state changing requests.
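As a sketch of that kind of CSRF protection: a per-session token is embedded in the link (or a hidden form field) and verified before anything is changed. The parameter names and the exit behaviour below are assumptions, not part of the original question:

```php
<?php
// Minimal CSRF-protection sketch for a state-changing request.
// Parameter names are illustrative.

session_start();

if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// When rendering the account page, embed the token in the action URL
// (or in a hidden form field if you switch to POST):
$deleteUrl = 'delete_document.php?' . http_build_query([
    'docid' => 5,
    'token' => $_SESSION['csrf_token'],
]);

// In delete_document.php, verify the token before touching the database:
if (!isset($_GET['token']) || !hash_equals($_SESSION['csrf_token'], $_GET['token'])) {
    http_response_code(403);
    exit('Invalid request.');
}
// ... ownership check and the actual mark-for-deletion go here ...
```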