I noticed some strange behaviour on one of the websites I work on: when making a single insert into the database, I saw that more than one row was being inserted (I was expecting exactly one). After several attempts at identifying the problem, I made a test by creating a session variable (an array) in index.php and pushing one value onto it. On the first load of the page the session array printed one value, but on each subsequent reload it printed 6 values at a time.
I should mention that I had a .htaccess file, which I suspected, but it is now empty, so it shouldn't be the problem.
Has anyone run into this strange behavior? How can I fix it?
Thx!
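For reference, here is a minimal sketch of the kind of test described above (the key name is illustrative, not from the original code). If the script really runs once per reload, the array should grow by exactly one element each time:

<?php
// index.php - minimal reproduction of the session test described above.
session_start();

// Create the session array on the first load ('test' is an illustrative key).
if (!isset($_SESSION['test'])) {
    $_SESSION['test'] = array();
}

// Push exactly one value per request...
$_SESSION['test'][] = 'value';

// ...so the printed count shows how many times the script actually
// ran for this page load.
print_r($_SESSION['test']);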
You can always have full control of any page reload.
Just keep an eye on the HTTP interchange while querying your server.
Use the Fiddler HTTP proxy, Firebug, or any other addon with HTTP-sniffing capability.
That way you will see every HTTP request your browser makes, as well as every HTTP response your server returns.
I've seen a lot of code in my life, but nothing like this yet.
Does someone have an idea what practice/tool this is, or keywords relating to it?
My first guess was something related to Kerberos/KDC, but that doesn't seem to be the case.
I have a webserver serving HTTP whose source code is unknown/lost.
Behaviors:
When the pages are loaded for the first time, the webserver responds with a 301 redirect to a certain kdc server (see URL below) and sets two permanent cookies with identical values (PRXY_ID, PRXY_SN), plus one session-temporary cookie when visiting the login page (LOGIN)
Every time the login page is loaded, the form element IDs (such as username/password) change with no visible pattern. My guess was that either the server assigns these randomly and keeps a session database mapping them to the actual internal element IDs, or they are somehow encrypted with the server-assigned login cookie. But since that cookie stays the same across reloads within a session while the IDs do not, the IDs would have to be salted before being encrypted.
On every page load, the server attaches certain query parameters to its URL:
sample.com/kdcs/s56/HOME?SH=410488;SPP=353555;R=322450
The server keeps track of the transaction order. For example, you can't send a logout POST request without first loading the profile page with its logout button visible.
I'm grateful for any help on this.
I've got some PHP which handles a GET request via a query string. Once processing on that query string is done, it generates a page with the results.
So far so good. But the URL in the browser keeps the query string, so e.g. if you hit reload, it tries to process the GET again.
So I'd like to generate the page, but return without the query string. I've tried setting header() to the URL minus the query, but that redirects (i.e. reloads) the page rather than returning directly.
I'd think this is a common and easy task, but I can't find a solution...!
In case anyone runs across this post...
For the general case, given the difficulty of changing the URL server-side, it's easiest to go to the effort of making it a POST, or to use AJAX.
Update: But in my case, it's a page where the user can change account information, which then needs to be reflected on that page. The answer in this case is simple: do the database updates first, then just redirect to the same page: parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH)
Regular PHP processing handles the rest.
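A minimal sketch of that approach (the file name and update_account() are hypothetical; only the parse_url() redirect comes from the answer above):

<?php
// account.php - apply the changes, then redirect to the same page
// without the query string, so a reload cannot repeat the action.
if (!empty($_GET)) {
    update_account($_GET); // hypothetical: do the database updates first

    // Strip the query string by redirecting to the bare path.
    header('Location: ' . parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));
    exit;
}

// Regular PHP processing renders the (now updated) page from here on.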
The core problem is that you absolutely MUST NOT use GET for processing anything. GET should be safe, and refreshing should have no side effects. Generally, people solve this by:
Doing the processing with a POST request
Redirecting to a 'results' page
The most correct redirect status code for this is 303 See Other, but most frameworks will use 302 Found.
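A sketch of that Post/Redirect/Get flow in PHP (the file name, URL, and process_form() are placeholders):

<?php
// process.php - handle the POST, then answer with a redirect so that
// refreshing the results page re-issues a harmless GET.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    process_form($_POST); // hypothetical processing step

    // 303 See Other is the most correct status; 302 also works in practice.
    header('Location: /results', true, 303);
    exit;
}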
I have a PHP application that I have been having some problems with; some pages take a very long time to load.
After a couple of hours I have figured out the problem, but I have no idea how to fix it.
The problem seems to be with the header Connection: keep-alive. I used a Firefox plugin called "Tamper Data" which allows you to "tamper" with the headers and stuff. Once I used that tool to change the connection header to Connection: close the delay on some pages stopped.
How, in PHP, can I make sure that the Connection: close header is used?
I tried putting header("Connection: close"); at the top of a PHP file, and reloaded the page. It still sends the Connection: keep-alive header, not the one I am trying to send.
How can I achieve what I am trying to do?
EDIT: I have just realized that on this subdomain the Content-Length header is not sent at all for most pages. It is only sent right after a form submission followed by a redirect.
EDIT 2:
This is the page: http://volunteer.essentialtransit.com/job/13/just-a-test-at-eta/
Click the "Apply now" link and fill out some random txt, you don't need to attach a file. Notice when you are redirected back to the "job" detail page that it will take a very long time to load.
Your problem has nothing to do with connection states. It might seem related to connections because Apache automatically spawns a new child thread for each request from a different source; with keep-alive, it will attempt to reuse the previous thread, which is still busy with a PHP script (from your application). It's actually a little more complicated, but that's the basic idea. Just note that "Connection: close" is being sent, but it's supposed to close the connection only after the script has finished (sent all buffers out).
Now I'm going to tell you how to debug your script. I'm doing this because if you don't fix your problem and you gain more traffic, your host will kick you out for excessive resource usage.
So:
Add set_time_limit(5) or higher to confirm there's a background script problem (see the sketch after this list)
Check for requests to local resources, i.e. requests that would only work on your staging server (you can use Wireshark for this)
Check for external requests: cURL, file_get_contents() calls, anything with a timeout
Benchmark and optimize lengthy scripts (you can try Xdebug for this)
Log all PHP notices, warnings, and errors to a file; the goal is zero errors
Finally, it's good practice to triple-check your entire application: once for data entry, a second time for data operations, and a third for module interconnections. But you should focus on AJAX background scripts that can't return output
Of course, skip anything that doesn't apply.
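As a rough illustration of the time-limit and logging points above, a minimal sketch (the limit value and log path are assumptions):

<?php
// Abort the script after 5 seconds so a hanging background call fails
// loudly instead of stalling the page indefinitely.
set_time_limit(5);

// Log every notice, warning, and error to a file instead of the output.
error_reporting(E_ALL);
ini_set('display_errors', '0');
ini_set('log_errors', '1');
ini_set('error_log', '/var/log/php/app-errors.log'); // assumed path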
So, I determined what the problem was, and found a work-around to this issue.
The script that processed the form just handled the input and redirected to another page; it didn't actually output anything. On most pages on the site the Content-Length header is either not sent or is set to the correct value. But for some reason, when posting to a page and then redirecting without the processing script outputting anything to the browser, Content-Length was being set to 0.
I tried setting the Content-Length myself, but didn't have much luck, as it didn't seem to make a difference.
So all I did was make the processing script produce some output. Now, when the form is submitted, the processing script outputs a page with a redirect script (and a 'click to continue' message, just in case) that leads to the correct page. While this adds a very brief delay between the form submission and the correct page being shown, it causes Content-Length to be set correctly, and the problem is solved.
While this is not an ideal solution it is manageable and makes the script work.
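A sketch of what that intermediate page might look like (the file name and target URL are placeholders; the redirect could equally be a short JavaScript snippet):

<?php
// apply.php - after processing the submission, emit a small page with
// a redirect instead of a bare Location header, so the response has a
// real body and a correct Content-Length.
$target = '/job/13/just-a-test-at-eta/'; // placeholder target URL
?>
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="refresh" content="0;url=<?php echo htmlspecialchars($target); ?>">
</head>
<body>
<p><a href="<?php echo htmlspecialchars($target); ?>">Click to continue</a></p>
</body>
</html>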
I have an application that supplies a long list of parameters to a web page, so I have to use POST instead of GET. The problem is that when the page is displayed and the user clicks the Back button, Firefox shows a warning:
To display this page, Firefox must send information that will repeat any action (such as a search or order confirmation) that was performed earlier.
Since the application is built in such a way that going Back is quite a common operation, this is really annoying to end users.
Basically, I would like to do it the way this page does:
http://www.pikanya.net/testcache/
Enter something, submit, and click Back button. No warning, it just goes back.
Googling, I found that this might be a bug in Firefox 3, but I'd like to somehow get this behavior even after they "fix" it.
I guess it could be doable with some HTTP headers, but which exactly?
See my golden rule of web programming here:
Stop data inserting into a database twice
It says: “Never ever respond with a body to a POST-request. Always do the work, and then respond with a Location: header to redirect to the updated page so that browser requests it with GET”
If the browser ever asks the user about re-POSTing, your web app is broken. The user should never see this question.
One way round it is to redirect the POST to a page which redirects to a GET - see Post/Redirect/Get on Wikipedia.
Say your POST is 4K of form data. Presumably your server does something with that data rather than just displaying it once and throwing it away, such as saving it in a database. Keep doing that. Or, if it's a huge search form, create a temporary copy of it in a database that gets purged after a few days, or on an LRU basis when a space limit is reached. Now create a representation of the data which can be accessed using GET. If it's temporary, generate an ID for it and use that as the URL; if it's a permanent set of data, it probably has an ID or something else that can be used for the URL. In the worst case, an algorithm like the one TinyURL uses can collapse a big URL into a much smaller one. Then redirect the POST to a GET of that representation of the data.
As a historical note, this technique was established practice in 1995.
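A rough sketch of the temporary-copy variant (assumes PHP 7+; save_search()/load_search() and the URL scheme are hypothetical):

<?php
// search.php - store the posted search, then redirect to a GET URL that
// can be reloaded, bookmarked, and navigated back to without warnings.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $id = bin2hex(random_bytes(8)); // short opaque ID for the stored copy
    save_search($id, $_POST);       // hypothetical: persist with a TTL/LRU policy

    header('Location: /search?id=' . $id, true, 303);
    exit;
}

// On GET, load the saved data and render the results page.
$data = load_search($_GET['id']); // hypothetical counterpart to save_search()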
One way to avoid that warning/behavior is to do the POST via AJAX, then send the user to another page (or not) separately.
I have been using the Session variable to help in this situation. Here's the method I use that has been working great for me for years:
//If there's something in the POST, stash it in the session and then
//redirect right back to where we are
session_start();
if ($_POST) {
    $_SESSION['POST'] = $_POST;
    //redirect() in the original was a custom helper; header() + exit does the same
    header('Location: ' . $_SERVER['REQUEST_URI']);
    exit;
}
//If there's something in the session copy, move it back to $_POST and
//clear the session copy
if (!empty($_SESSION['POST'])) {
    $_POST = $_SESSION['POST'];
    unset($_SESSION['POST']);
}
Technically, you don't even need to put it back into a variable called $_POST, but it helps me keep track of which data came from where.
I have an application that supplies a long list of parameters to a web page, so I have to use POST instead of GET. The problem is that when the page is displayed and the user clicks the Back button, Firefox shows a warning:
Your reasoning is wrong. If the request has no side effects, it should be GET. If it has side effects, it should be POST. The choice should not be based on the number of parameters you need to pass.
As another solution, you may stop using redirects altogether.
You may process and render the result in one step, with no POST confirmation alert; you just have to manipulate the browser history object:
history.replaceState("", "", "/the/result/page")
On a website, I enter some parameters in a form, click on search, and then get a page with the message "retrieving your results". After the search is complete, I get another page with my results displayed.
I am trying to recreate this programmatically, and I used Live HTTP Headers to get a peek at what is going on behind the scenes, i.e. the URL, form variables, etc. However, I'm only getting information on what happens up to the page which shows "retrieving your results". Live HTTP Headers is not giving me information up to the page which contains the final results.
What can I do to get this final bit of information (i.e. the URL, form variables, etc.)?
I use Charles HTTP Proxy for all my HTTP troubleshooting needs. It has a ton of options and works with any browser.
"Web Developer" does this:
https://addons.mozilla.org/en-US/firefox/addon/60
@Mark Harrison
I have Web Developer installed. Initially, I used it to turn off meta-redirects and referrers to get a clearer picture of the HTTP interaction. But when I do this, the website does not work (i.e. it is not able to complete the process of retrieving my search results), so I turned it back on.
I'm wondering if anyone has had to capture HTTP information for a site that has a processing page in between the user input page and the results page.
That sounds weird; I'm pretty sure that Live HTTP Headers should show this. Can you double-check that you aren't missing something? Otherwise, try Firebug. It has a tab for network traffic, which shows all requests made.
I'm using Fiddler2, which is a free (as in beer), highly configurable proxy; it works with all browsers and allows header inspection/editing/auto-modification on requests and responses.
Disclaimer: I'm in no way affiliated with Fiddler, just a (very happy) user.
For problems like this, I always fire up Ethereal or a similar network-sniffing tool to see exactly what is going on.
The document creates a browser component called XMLHttpRequest. On the submit event, the object's send() method is called; while waiting for the server response, an HTML element is replaced with a "waiting" message, and on a successful response a callback is invoked with the new HTML, which is then inserted into the selected element. (That's called AJAX.)
If you want to follow that process, you can use the Firefox Live HTTP Headers extension, or Wireshark, to view the full HTTP headers and actions (GET/POST).