On a website, I enter some parameters into a form, click Search, and get a page with the message "retrieving your results". After the search completes, I get another page with my results displayed.
I am trying to recreate this programmatically, and I used Live HTTP Headers to get a peek at what is going on behind the scenes, i.e. the URL, form variables, etc. However, I'm only getting information about what goes on up to the page which shows "retrieving your results". Live HTTP Headers is not giving me anything up to the page which contains the final results.
What can I do to get this final bit of information (i.e. the URL, form variables, etc.)?
I use Charles HTTP Proxy for all my HTTP troubleshooting needs. It has a ton of options and works with any browser.
"Web Developer" does this:
https://addons.mozilla.org/en-US/firefox/addon/60
@Mark Harrison
I have Web Developer installed. Initially, I used it to turn off meta-redirects and referrers to get a clearer picture of the HTTP interaction. But when I do this, the website does not work (i.e. it is not able to complete the process of retrieving my search results), so I turned it back on.
I'm wondering if anyone has had to capture HTTP information for a site that has a processing page in between the user input page and the results page.
That sounds weird; I'm pretty sure LiveHttpHeaders should show this. Can you double-check that you aren't missing something? Otherwise, try Firebug. It has a "network" tab which shows all requests made.
I'm using Fiddler2, a free (as in beer), highly configurable proxy. It works with all browsers and allows header inspection, editing, and automatic modification on both request and response.
Disclaimer: I'm in no way affiliated with Fiddler, just a (very happy) user.
For problems like this I always fire up Ethereal (now Wireshark) or a similar network sniffing tool, to see exactly what is going on.
The document creates a browser component called XMLHttpRequest. On the submit event the object's send() method is called; while waiting for the server response, an HTML element is replaced with a "waiting" message, and on a successful response a callback is invoked with the new HTML, which is then inserted into the selected element. (That's called Ajax.)
If you want to follow that process you can use the Firefox Live HTTP Headers extension, or Wireshark, to view the full HTTP headers and actions (GET/POST).
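For illustration, here is a minimal sketch of that Ajax flow in plain JavaScript; the URL and element IDs are made up, not taken from any particular site:

// Sketch of the Ajax flow described above (URL and IDs are assumptions).
document.getElementById('search-form').addEventListener('submit', function (event) {
    event.preventDefault(); // stop the normal form submission

    var results = document.getElementById('results');
    results.innerHTML = 'Retrieving your results...'; // the "waiting" message

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/search');
    xhr.onload = function () {
        // callback: insert the returned HTML into the selected element
        results.innerHTML = xhr.responseText;
    };
    xhr.send(new FormData(event.target)); // send the form fields
});

A request made this way never causes a full page load, but it still shows up in Live HTTP Headers, Firebug's Net panel, or Wireshark.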
I use Firefox, and there is a built-in tool under Developer -> Web Console. It shows information about GET/POST requests, which pages were fetched with XMLHttpRequest, request headers, and the like. I'm sure there is a similar feature in Chrome too.
There is an AJAX call on a page and I have to see what it returns to the website. It is just a GET request and returns JSON. When I manually request that page (e.g. ajax/view/my_purchases.php) it just shows a blank page, but when the website requests it with AJAX, how can I see what content it returned?
Basically, what kind of tools can I use to see it?
It can be a standalone application, or a Chrome or Firefox extension; I'm okay with any.
Firebug is the most widely used option. Its Net panel has an "XHR" tab you can use; Chrome's tools were actually based on it.
In Chrome you have the "Network" tab; all requests and responses are there. You can also put a breakpoint in the "success" function and watch the response.
P.S. Chrome has this console built in, without any extensions.
Use Firebug on Firefox -> Console tab (you may need to activate "Show XMLHttpRequests").
Edit: nicer than in Chrome, you can open the request directly in a new tab.
Using the 'Net' tab, you can view every external request the page makes when it loads; your AJAX request will be in there. One thing to look out for is any GET or POST data being sent to the external script.
Many scripts will only expose their main algorithm or data logic if they receive the parameters they expect. A very basic check might be:
<?php
if (!isset($_POST['submit'])) {
    die;
}
// Rest of functionality...
This makes it harder for people to get at sensitive data. If your AJAX call is sending POST data, you probably won't gain access by requesting the script directly. And if you do, you'll be committing CSRF.
There might be cases where a request takes a long time because of problems with the client's internet connection or with the server connection. Since the client doesn't want to wait, he clicks the Ajax link again, which sends the request to the server a second time and messes up the following:
1) Rendering of our website in the browser, because we are putting extra load on the browser.
2) The second request may be processed correctly and the page shown to the user, and then along comes the error message from the first request (saying the request timed out), which loads on top of the correct content and disturbs the user reading it.
I want to stop the 1st Ajax response if the Ajax function is called twice. How do I do this?
so I want to stop the 1st Ajax response if the Ajax function is called twice
What you actually want is to prevent a second request when a first request is in progress.
For example, you may change the Save button to Saving..., disable it, and add a little progress wheel to give live feedback to the user. (Facebook does this.)
The key is live feedback to the user. If the user is clueless about what is going on, they are going to think nothing is happening.
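As a rough sketch of that idea in plain JavaScript (the element IDs, URL, and label text are assumptions):

// Disable the button while a request is in flight so it can't be clicked twice.
// 'save' is assumed to be a plain button (type="button") next to the form.
var saveButton = document.getElementById('save');

saveButton.addEventListener('click', function () {
    saveButton.disabled = true;           // block a second click
    saveButton.textContent = 'Saving...'; // live feedback for the user

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/save');
    xhr.onloadend = function () {         // runs on success, error, or timeout
        saveButton.disabled = false;
        saveButton.textContent = 'Save';
    };
    xhr.send(new FormData(document.getElementById('save-form')));
});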
You might also want to check why the operation is taking so long:
If this is a complex or time-consuming operation, say report generation or a file upload, a progress bar should do.
If this is because of the client's internet connection, say so up front, the way Gmail does: You have a slow Internet connection and this site may be slow. Better still, provide a fallback option with less (or no) Ajax.
You say "cause we are giving extra load to the browser": this is kind of fishy. You will not be putting extra load on the browser unless you are giving it tons of HTML to render. Use Ajax only for small updates in the browser; if you expect a large change, reload the page.
Since you're using some form of JavaScript to begin with, how about keeping that link hidden or disabled, in a manner of speaking, until the page's request has been followed through? For example, you could have the Ajax response return a flag that re-enables the link so the user can click it. Until that flag is received from the original AJAX request, the link stays disabled; if it is then clicked, it disables itself again and waits for the same flag to come back.
Think of how some developers disable a form's submit button so a user can't double-submit the form; it's the same concept here.
I analyzed a page I'm working on with the Live HTTP Headers add-on for Firefox, and I'm seeing that it makes requests to the same page 2-3 times while it loads. I checked for empty img tags that could be the reason, but there are none in the HTML code. Is there any add-on or other tool that could let me track which resource is firing these requests, or any other recommendation of what I should look at apart from empty img tags?
The page has several JavaScript libraries and MooTools with Ajax calls, but none of them is causing it: they load their resources, and the Ajax calls are GET requests passing several parameters to the page, whereas these unexpected requests are clean ones, without parameters.
SOLUTION
Monitoring the requests from the server side and commenting out code, I realized it was making three requests:
1) A popup-blocker check that in theory opens about:blank, but was actually requesting the same page.
2) A class used to check whether Flash was enabled.
3) The YSlow plugin for Firefox was making the last one.
I'm worried about this last one, as it screws up a feature in my app, so I will need some alternative here. :)
If you have access to the server logs, you can find the answers there.
Finally, you can use sniffer software to see all the HTTP requests the browser sends.
I have a site that uses frames. Is it still possible for someone to craft POST data for one of the frames from the browser's address bar? Two of the frames are static, and the other frame has PHP pages that communicate using POST. It doesn't appear to be possible, but I wanted to be sure.
No, it is not possible to POST data from the address bar. You can only initiate GET requests from there by adding params to the URL; a POST body cannot be attached this way.
Regardless of this, it is very much possible to send POST requests to your webserver for the pages in a frame. HTTP is just the protocol with which your browser and webserver talk to each other; HTTP knows nothing about frames or HTML. The page in the frame has a URI, just like any other page. When you click a link, your browser asks the server if it has something for that URI, and the server will check and respond accordingly. It does not know beforehand what it will return, though.
With tools like TamperData for Firefox or Fiddler for IE, anyone can easily tinker with the HTTP requests sent to your server.
Any data in the $_REQUEST array should be considered equally armed and dangerous regardless of the source and/or environment. This includes $_GET, $_POST, and $_COOKIE.
POST data cannot be added in the address bar.
You should always check and sanitize all the data you receive in your PHP code, because anyone could post data to any of your pages.
Don't trust data from outside your page. Clean it and check it.
Maybe not from the browser, but they can still intercept the request, tinker with it, and forward it to the intended destination with a tool like Burp Proxy.
To answer your question: no, it is not possible to send POST data using the address bar.
BUT it is possible to send POST data to any URL in a snap, for example using cURL or a Firefox extension. So be sure to verify and sanitize all the data you receive, no matter whether it comes in via POST, GET, or anything else.
This is not iframe- or PHP-specific; it should be considered in every web application. Never ever rely on data sent by anyone being correct, valid, or secure, especially when sent by users.
Yes, they absolutely can, with tools like Firebug, and apparently more specialized tools like the ones listed by Gordon. Additionally, even if they couldn't do it in the browser from your site, they could always create their own form, or submit the POST data through scripting or command-line tools.
You absolutely cannot rely on the client for security.
I have an application that supplies a long list of parameters to a web page, so I have to use POST instead of GET. The problem is that when the page is displayed and the user clicks the Back button, Firefox shows a warning:
To display this page, Firefox must send information that will repeat any action (such as a search or order confirmation) that was performed earlier.
Since the application is built in such a way that going Back is a quite common operation, this is really annoying to end users.
Basically, I would like to do it the way this page does:
http://www.pikanya.net/testcache/
Enter something, submit, and click the Back button. No warning; it just goes back.
Googling I found out that this might be a bug in Firefox 3, but I'd like to somehow get this behavior even after they "fix" it.
I guess it could be doable with some HTTP headers, but which exactly?
See my golden rule of web programming here:
Stop data inserting into a database twice
It says: “Never ever respond with a body to a POST-request. Always do the work, and then respond with a Location: header to redirect to the updated page so that browser requests it with GET”
If browser ever asks user about re-POST, your web app is broken. User should not ever see this question.
One way around it is to redirect the POST to a page which then redirects to a GET; see Post/Redirect/Get on Wikipedia.
Say your POST is 4K of form data. Presumably your server does something with that data rather than just displaying it once and throwing it away, such as saving it in a database. Keep doing that, or, if it's a huge search form, create a temporary copy of it in a database that gets purged after a few days or on an LRU basis when a space limit is reached. Now create a representation of the data which can be accessed using GET. If it's temporary, generate an ID for it and use that as the URL; if it's a permanent set of data, it probably already has an ID or something else that can be used for the URL. In the worst case, an algorithm like the one TinyURL uses can collapse a big URL into a much smaller one. Redirect the POST to GET the representation of the data.
As a historical note, this technique was established practice in 1995.
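A minimal sketch of that Post/Redirect/Get flow, written against Node's built-in http module purely for illustration (the answer doesn't prescribe a server stack, and the routes and in-memory store here are made up):

// POST does the work and answers with a redirect; the result page is a plain GET.
const http = require('http');
const crypto = require('crypto');

const saved = new Map(); // stand-in for the database / temporary store

http.createServer((req, res) => {
    if (req.method === 'POST' && req.url === '/search') {
        let body = '';
        req.on('data', (chunk) => { body += chunk; });
        req.on('end', () => {
            const id = crypto.randomUUID();                      // key for the stored form data
            saved.set(id, body);                                 // "do the work" (save the search)
            res.writeHead(303, { Location: '/results/' + id });  // then redirect so the browser GETs
            res.end();
        });
    } else if (req.method === 'GET' && req.url.startsWith('/results/')) {
        const id = req.url.slice('/results/'.length);
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end(saved.get(id) || 'This result set has expired.');
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(8080);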
One way to avoid that warning/behavior is to do the POST via AJAX, then send the user to another page (or not) separately.
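For example, a sketch of that approach (the URL, element ID, and the shape of the server's response are assumptions; the same can be done with XMLHttpRequest):

// Submit via Ajax, then navigate with a plain GET, so Back never has a POST to re-send.
document.getElementById('search-form').addEventListener('submit', function (event) {
    event.preventDefault();

    fetch('/search', { method: 'POST', body: new FormData(event.target) })
        .then(function (response) { return response.json(); })
        .then(function (result) {
            // Assumes the server answers with an id under which it stored the submitted data
            window.location.assign('/results?id=' + encodeURIComponent(result.id));
        });
});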
I have been using the Session variable to help in this situation. Here's the method I use that has been working great for me for years:
// If there's something in the POST, move it into the session and then redirect
// right back to where we are. session_start() is assumed to have been called
// already; redirect() is not a PHP built-in, but a helper that presumably sends
// a Location: header and exits.
if ($_POST) {
    $_SESSION['POST'] = $_POST;
    redirect($_SERVER["REQUEST_URI"]);
}

// If there's a saved POST in the session, move it back into $_POST and clear
// it from the session.
if (!empty($_SESSION['POST'])) {
    $_POST = $_SESSION['POST'];
    unset($_SESSION['POST']);
}
Technically you don't even need to put it back into a variable called $_POST. But it helps me in keeping track of what data has come from where.
I have an application that supplies a long list of parameters to a web page, so I have to use POST instead of GET. The problem is that when the page is displayed and the user clicks the Back button, Firefox shows a warning:
Your reasoning is wrong. If the request is without side effects, it should be GET. If it has side effects, it should be POST. The choice should not be based on the number of parameters you need to pass.
As another solution, you can stop using redirects altogether.
You can process and render the result immediately, with no POST confirmation alert; you just have to manipulate the browser history object:
history.replaceState("", "", "/the/result/page")
See full or short answers
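Putting it together, a sketch of that approach (the URLs and element IDs are assumptions): process the POST with Ajax, render the result in place, then rewrite the current history entry so Back and Reload use a plain GET URL.

// Render the result of a POST in place, then point the history entry at a GET-able URL.
document.getElementById('search-form').addEventListener('submit', function (event) {
    event.preventDefault();

    fetch('/search', { method: 'POST', body: new FormData(event.target) })
        .then(function (response) { return response.text(); })
        .then(function (html) {
            document.getElementById('results').innerHTML = html;
            history.replaceState({}, '', '/the/result/page'); // no POST left to repeat
        });
});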