I have a php script that I invoke via an ajax call with jQuery:
$.ajax({
    contentType: "application/x-www-form-urlencoded; charset=UTF-8",
    type: "POST",
    url: "getFares.php",
    data: someObjectHere,
    success: function(data) {
        handleSuccess(data);
    },
    dataType: "json"
});
Most of the time this request works just fine and the server sends the values that I would expect.
Sometimes, however, the server just responds with a 303 See Other response. Nowhere in my PHP script is there anything that could produce this redirect.
Unfortunately I have not been able to find any kind of pattern for when the redirect happens. It seems to happen only when I load the page, wait for a bit, and then invoke the AJAX request, but this might be a coincidence.
I know this is not a very helpful description, but unfortunately I'm stuck here, so I'm hoping that someone happens to know how to fix it.
Here are a couple of screenshots of the dev tools that should illustrate the problem more clearly:
In this image you can see that I've made a couple of requests to the script (getFares.php) and I've highlighted one that worked. You can see the status code is 200 and everything is fine.
Here I've highlighted a failed request. The response is 303 See Other. As you can see, none of the other values of the request have changed.
The only difference in the requests that I have been able to find is seen here. Requests that return the correct result (status 200) are of type "xhr", while requests that result in the redirect are of type "x-www-form-urlencoded; charset=UTF-8". I don't know why this happens or where it comes from.
I assume the problem could be related to faulty server/php settings but it's difficult to search for this kind of error.
It is not an AJAX issue; the server itself is sending that response for some reason.
getFares.php is probably checking for some POST parameters; if they are present it sends a 200 response, otherwise some other response such as a 303.
You can check the getFares.php code to get a better idea - a sketch of what such a check might look like is below.
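Purely as an illustration (the parameter name and redirect target are assumptions, not taken from the real getFares.php), the script might contain something along these lines:
<?php
// Hypothetical sketch: answer with 303 See Other when an expected POST parameter is missing.
if (!isset($_POST['origin'])) {
    header('Location: index.php', true, 303);    // this would produce the 303 the browser reports
    exit;
}
header('Content-Type: application/json');
echo json_encode(array('fares' => array()));     // placeholder for the real fare data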
Let's assume this is being executed in jQuery:
$.ajax({
    url : 'ajaxcall.php',
    data : { 'data' : { ary : [1,2,3,3,4,5], txt : "ima let u finish" } },
    dataType : 'json',
    type : 'post',
    timeout : 10000
});
And ajaxcall.php contains:
$return_obj = array();
$return_obj['ary'] = array(9,8,7,6,5);
$return_obj['txt'] = "ok then";
echo json_encode($return_obj);
die();
I'm expecting the following situations to occur (due to packet loss, connection problems, etc):
Ajaxcall.php executes, but the $_POST variable is empty.
The promises of the $.ajax() call are executed, but the data returned to them is empty.
However, what I'm really worried about are situations like these:
Ajaxcall.php executes and $_POST['data']['txt'] has expected values but $_POST['data']['ary'] is missing some values.
The promises of the $.ajax() call are executed and data.ary has the expected values, but data.txt is only half a string (e.g., "ok t").
Are these situations possible?
In short: no, that's not possible. HTTP is based on TCP, which guarantees delivery of data. Both the server and client would be aware of an issue that caused some data to be missed. The TCP layer would retransmit the data as needed until it is complete.
Packet loss and out-of-order delivery are not uncommon on the internet, since there is no rule that says routers must forward all packets, but TCP automatically corrects for those issues.
None of these situations should happen.
Packet loss is managed at a lower level of the protocol stack.
Over the internet, TCP takes care of the integrity of each packet and ensures that all the packets arrive and in the right order.
At a higher level of the protocol stack, HTTP has a response header called Content-Length that tells the browser the size of the returned content; it is used by the browser to make sure the response is complete.
However, some HTTP responses can be sent with a Transfer-Encoding: chunked header, which makes the Content-Length useless. This is mainly used on persistent connections when the length of the content is not known beforehand.
Do you have any example of cases where the data is not complete upon arrival?
I'm not sure if I really understand your question; however, if the response is somehow truncated (but returned - i.e., the server still responds with a 200 OK status) it won't be valid JSON.
It's unclear from your question how data.txt could be 'only half a string' and yet remain valid.
I have noticed a strange phenomenon in my LAMP environment.
Over the frontend I execute an AJAX post request with jQuery like this:
$.post('save.php', {data1: d1, data2: d2, [...], dataN: dN})
The variables d1 to dN are collected from the website (e.g. from text inputs, textareas, checkboxes, etc.) with jQuery beforehand.
The file save.php takes the post parameters data1 to dataN and saves them in the database in one query.
The request takes about 500ms and works without problems unless I change pages (e.g. by clicking a link) during the request.
Normally, I would expect the request to be aborted and ignored (which would be fine) but (and this is the strange behaviour) the request seems to be completed but only with part of the data transmitted and thus saved.
That means for example, that the php script saves only data1 to data5 and sets data6 to dataN to empty.
The problem seems to be caused by the AJAX request already (not the php script) since fields $_POST['data6'] to $_POST['dataN'] are not set in php in this scenario.
So my questions:
Why does this happen (is this expected behaviour)?
How can I avoid it?
Update
The problem is neither jQuery nor php solely. jQuery collects the values correctly and tries to post them to php. I just validated it - it works.
The php script on the other hand handles everything it gets as expected - it just does not receive the whole request.
So the problem must be the interrupted request itself. Contrary to what I'd expect, it does not abort or fail; it still transmits all the data up until the cut-off.
Then php gets this post data and starts handling it - obviously missing some information.
Update 2
I fixed the problem by adding a parameter eof after dataN and checking if it was set in php. This way I can be sure the whole request was transmitted.
Nevertheless this does not fix the source of the problem which I still don't understand.
Any help anyone?
Try the following actions to debug the problem:
Check post_max_size in your php settings and compare it with the data size you are posting.
Use an HTTP request builder, e.g. Fiddler, to make an HTTP request and check what it returns.
Use print_r($_POST); at the top of save.php to check what you are getting in it.
Use a tool like Firebug to check what jQuery has posted.
You should also verify the JSON object you are posting on the client side, e.g. JSON.stringify(some_object);
Try posting some basic sample data { "data1":1, "data2":2, "data3":3, "data4":4, "data5":5 , "data6":6 }
Most probably you are sending too much data, or the data is invalid!
Edits:
A rather crude approach, but let's say you posted a count as well; then you can directly check isset($_POST['data' . $_POST['count']]).
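A minimal debugging sketch along those lines, assuming a count field is posted with the data (the field name and the error_log target are assumptions):
<?php
// Hypothetical snippet for the top of save.php: compare the declared request size
// with post_max_size and log what actually arrived.
$declared = isset($_SERVER['CONTENT_LENGTH']) ? $_SERVER['CONTENT_LENGTH'] : 'n/a';
error_log('Content-Length: ' . $declared . ', post_max_size: ' . ini_get('post_max_size'));
error_log('POST vars received: ' . count($_POST));
if (isset($_POST['count']) && !isset($_POST['data' . $_POST['count']])) {
    error_log('Last expected field data' . $_POST['count'] . ' is missing - the request looks truncated');
}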
I think we can rule out problems at the server site (unless it's some exotic or self-crafted server daemon), because nobody ever sends "end-of-data"-parameters with a HTTP POST request to make sure all data is really sent. This is handled by HTTP itself (see e.g. Detect end of HTTP request body). Moreover, I don't think that you have to check the Content-Length header when POSTing data to your server, simply because of the fact that nobody does this, ever. At least not in totally common circumstances like you describe them (sending Ajax POST through jQuery).
So I suppose that jQuery sends a syntactically correct POST, but it's cut off. My guess is that if you interrupt this data collecting by navigating to another page, jQuery builds an Ajax request out of the data which it was able to gather and sends a syntactically correct POST to your server, but with cut off data.
Since you're using Firebug, please go to its net tab and activate persist, so traffic data is not lost when navigating to another page. Then trigger your Ajax POST, navigate to another page (and thereby "interrupt" the Ajax call) and check in Firebug's net tab what data has actually been sent to the server by opening ALL the POST requests and checking the Headers tab (and inside this, the Request Headers tab).
My guess is that one of two things might happen:
You will see that the data sent to the server is cut off already in the headers being presented to you in Firebug's net tab and the Content-Length is calculated correctly according to the actual (cut off) length of the POST data. Otherwise, I'm sure the server would reject the request as Bad Request as a whole.
You will see that there are multiple POST requests, some of them (perhaps with the full, non-cut-off data) actually interrupted and therefore never reaching the server, but at least one POST request (again, with the cut-off data) that is triggered by some other mechanism in your Javascript, i.e. not the trigger you thought; by navigating to another page, more and other Ajax requests might be triggered (just a guess since I don't know your source code).
In either case, I think you'll find out that this problem is client-related and the server just processes the (incomplete, but (in terms of HTTP) syntactically valid) data the client sent to it.
From that point on, you could debug your Javascript and implement some mechanism that prevents sending incomplete data to your server. Again, it's hard to tell what to do exactly since I don't know the rest of your source code, but maybe there's some heavy action going on in collecting the data, and you could possibly make sure that the POST only happens if all the data is really collected. Or, perhaps you could prevent navigation until the Ajax request is completed or such things.
What might be interesting, if all of this doesn't make sense, would be to have a look at more of your source code, especially how the Ajax POST is triggered and if there are any other events and such if you navigate to another page. Sample data you're sending could also be interesting.
EDIT: I'd also like to point out that outputting data with console.log() might be misleading, since it's in no way guaranteed that this is the data actually being sent, it's just a logline which evaluates to the given output at the exact time when console.log() is called. That's why I suggested sniffing the network traffic, because then (and only then) you can be sure what is really being sent (and received).
Nonetheless, this is a little tricky if you're not used to it (and impossible if you use encrypted traffic e.g. by using HTTPS), so the Firebug net tab might be a good compromise.
You can verify the value of the Content-Length header being received by the PHP.
This value ought to have been calculated client side when running the POST query. If it does not match, that's your error then and there. And that's all the diagnostics you need - if the Content-Length does not match the POST data, reject the POST as invalid; no need of extra parameters (computing the POST data length might be a hassle, though). Also, you might want to investigate why does PHP, while decoding the POST and therefore being able to verify its length, nonetheless seems to accept a wrong length (maybe the information needed to detect the error is somewhere among the $_SERVER variables?).
If it does match though, and still data isn't arriving (i.e., the Content-Length is smaller, and correctly describes the cut-off POST), then it is proof that the POST was inspected after the cut-off, and therefore either the error is in the browser (or, unlikely, in jQuery) or there is something between the browser and the server (a proxy?) that is receiving an incomplete query (with Content-Length > Actual length) and is incorrectly rewriting it, making it appear "correct" to the server, instead of rejecting it out of hand.
Some testing of both the theory and the workaround
Executive summary: I got the former wrong, but the latter apparently right. See code below for a sample that works on my test system (Linux OpenSuSE 12.3, Apache).
I believed that a request with wrong Content-Length would be refused with a 400 Bad Request. I was wrong. It seems that at least my Apache is much more lenient.
I used this simple PHP code to access the key variables of interest to me
<?php
// Print the declared Content-Length next to the actual length of the raw request body
$f = file_get_contents("php://input");
print $_SERVER['CONTENT_LENGTH'];
print "\nLen: " . strlen($f) . "\n";
?>
and then I prepared a request with a wrong Content-Length sending it out using nc:
POST /p.php HTTP/1.0
Host: localhost
Content-Length: 666

answer=42
...and lo and behold, nc localhost 80 < request yields no 400 error:
HTTP/1.1 200 OK
Date: Fri, 14 Jun 2013 20:56:07 GMT
Server: Apache/2.2.22 (Linux/SUSE)
X-Powered-By: PHP/5.3.17
Vary: Accept-Encoding
Content-Length: 12
Content-Type: text/html
666
Len: 10
It occurred to me then that the content length might well be off by one or two in case the request ended with a line break - and which line break: LF? CRLF? However, when I added simple HTML to be able to POST it from a browser
<form method="post" action="?"><input type="text" name="key" /><input type="submit" value="go" /></form>
I was able to verify that in Firefox (latest), IE8, Chrome (latest), all running on XP Pro SP3, the value of the Content-Length is the same as the strlen of php://input.
Except when the request is cut off, that is.
The only problem is that php://input is not always available even for POST data.
This leaves us still in a quandary:
IF THE ERROR IS AT THE NETWORK LEVEL, i.e., the POST is prepared and supplied with a correct Content-Length, but the interruption cuts the data off, as this comment by Horen seems to indicate:
So only the first couple of post parameters arrived, sometimes the value of one parameter was even interrupted in the middle
then really checking Content-Length will prevent PHP from handling an incomplete request:
<?php
// Reject a POST whose body is shorter than the declared Content-Length
if ('POST' == $_SERVER['REQUEST_METHOD'])
{
    if (!isset($_SERVER['CONTENT_LENGTH']))
    {
        header($_SERVER['SERVER_PROTOCOL'] . ' 400 Bad Request', True, 400);
        die();
    }
    if (strlen(file_get_contents('php://input')) != (int)$_SERVER['CONTENT_LENGTH'])
    {
        header($_SERVER['SERVER_PROTOCOL'] . ' 400 Bad Request', True, 400);
        die();
    }
}
// ... go on
?>
ON THE OTHER HAND if the problem is in jQuery, i.e. somehow the interruption prevents jQuery from assembling the full POST, and yet the POST is made up, the Content-Length calculated of the incomplete data, and the packet sent off -- then my workaround can't possibly work, and the "telltale" extra field must be used, or... perhaps the .post function in jQuery might be extended to include a CRC field?
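As an illustration of that CRC idea (the field names payload and crc, and a matching client-side CRC32 routine, are all assumptions, not something jQuery provides out of the box):
<?php
// Hypothetical integrity check: the client sends a JSON payload plus its CRC32 checksum.
$payload = isset($_POST['payload']) ? $_POST['payload'] : '';
$crc     = isset($_POST['crc'])     ? $_POST['crc']     : '';
if (sprintf('%u', crc32($payload)) !== $crc) {
    header($_SERVER['SERVER_PROTOCOL'] . ' 400 Bad Request', true, 400);
    die('Checksum mismatch - the POST body looks incomplete');
}
$data = json_decode($payload, true);   // only now trust the data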
Post data looks just like GET:
Header1: somedata1\r\n
Header2: somedata2\r\n
...
HeaderN: somedataN\r\n
\r\n
data1=1&data2=2&...&dataN=N
When the request is aborted, in some cases the last line may be transmitted only partially. So, here are some possible solutions:
Compare Content-Length and strlen($HTTP_RAW_POST_DATA)
Validate input data
Pass not so much data at one time
I have tried to recreate this problem using triggers, manually, changing server settings, doing my very best to #$%& things up, using different data sizes, but I never ever got only half a request in PHP. Simply because Apache will not invoke PHP until the request is completely done. See this question about Reading "chunked" POST data in PHP
So the only thing that can go wrong is that jQuery only gathers part of the data and then makes a POST request. Using just $.post('save.php', data) as you mentioned, that won't happen. It's either still gathering the data or it's waiting for a response from the server.
If you switch sites during the gathering, there won't be a request. And if you switch after the request has been made and you move away quickly, before all data has been transmitted, the Apache server will see it as half a request and won't invoke PHP.
Some suggestions:
Is it possible that you are using separate pages for successful and partial requests? PHP only adds the first 1000 elements to $_POST by default, and perhaps the failed requests have more than 1000 dataN elements? Then there won't be an EOF param.
Is it possible that you are gathering the data in a global var in JavaScript and have an onbeforeunload method that sends data as well? Because then there might only be half the data in the POST.
Can you share some information on the data you are sending? Are there a lot of small elements (like data1 till data10000) or a few large ones?
Is it always the same element that you receive last? Like always data6 as you mention? Because if it is, the chances of a failed attempt always stopping at the exact same dataN field would be very slim.
My problem was there were too many variables in one of my post objects.
PHP has a max_input_vars variable which is set to 1000 by default.
I added this line to my .htaccess file (since I don't have access to the php.ini file):
php_value max_input_vars 5000
Problem solved!
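If raising the limit is not an option, a rough detection sketch like the following can at least warn when $_POST was probably truncated (the error_log target is an assumption, and PHP counts nested input variables slightly differently than count() does, so treat this as a heuristic):
<?php
// Rough sketch: if the number of received variables has hit max_input_vars,
// PHP has most likely dropped the rest of them silently.
$limit    = (int) ini_get('max_input_vars');
$received = count($_POST, COUNT_RECURSIVE);
if ($limit > 0 && $received >= $limit) {
    error_log("POST probably truncated: $received variables received with max_input_vars = $limit");
}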
Can you check your host log at
/var/log/messages
Last time I had "missing" POST variables in PHP I found that I was sending ASCII NUL chars and the server (CentOS) was considering it an attack, then dropping those specific variables... Took me a week to figure it out! This was the server log entry:
suhosin[1173]: ALERT - ASCII-NUL chars not allowed within request variables - dropped variable 'data3' (attacker '192.168.0.37', file '/var/www/upload_reader.php')
If that is your problem, try to compress your variables with JS and encode them with Base64. Post them with AJAX, then receive them in PHP, Base64-decode and uncompress! That solved it for me ;)
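A minimal sketch of the receiving side of that idea, assuming the client sends one Base64-encoded, zlib-compressed field named payload (the field name and the compression scheme are assumptions):
<?php
// Hypothetical receiving end: decode and decompress a single opaque field,
// then rebuild the original variables from it.
$raw     = base64_decode($_POST['payload']);
$decoded = gzuncompress($raw);   // matches a client that used zlib/deflate compression
parse_str($decoded, $vars);      // $vars now holds the original data1..dataN values
// NUL bytes never reach Suhosin's request-variable filter this way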
Solved the problem by increasing max_input_vars in my server's php.ini file
Since I had more than 1000 variables in the array, only part of them was received by the server!
Hope this helps someone!
In order to find out more about what is happening, why not code your AJAX request fully using jQuery's ajax function and use all the callback functions to track what happened to your call and what came back? The type option is set to POST and the data option carries whatever object structure { ... } you like.
$.ajax({
    url : "save.php",
    type : "POST",
    data : {
        "ajax_call" : "SOME_CUSTOM_AJAX_REQUEST_REFERENCE",
        "data1" : data1,
        "data2" : data2,
        // ...
        "dataN" : dataN
    },
    //dataType : "html", contentType: "text/html; charset=utf-8",
    // expect a JSON response; leave contentType at its default so PHP still populates $_POST
    dataType : "json",
    beforeSend: function () {
        //alert('before send...');
    },
    dataFilter: function (data, type) {
        //alert('data filter...');
        return data; // a dataFilter must return the (possibly filtered) response
    },
    success: function(data, textStatus, jqXHR) {
        //alert('success...');
        var response = data; // already parsed by jQuery because dataType is "json"
        if (undefined == response.data) {
            my_error_function();
            return;
        }
        my_response_function(response.data);
    },
    error: function(jqXHR, textStatus, errorThrown) {
        //alert('error...');
    },
    complete: function (xhr, status) {
        //alert('end of call...');
        my_continuation_function();
    }
});
Before sending the request, set a "beforeunload" handler on the window (to warn against the transition to another page), and unbind it after success.
For example:
$(window).on('beforeunload', function(e){
    // Returning a string asks the browser to show a confirmation dialog,
    // so the user can wait a bit before leaving
    return 'The request is still running - please wait a bit';
});
// ...and unbind it in the success callback: $(window).off('beforeunload');
A guess - the Suhosin extension on your Ubuntu/Debian server causes the field to be cut off?
I have the same problem. The POST size is about 650 KB, and it gets corrupted if the browser window is closed or the page is refreshed (in the case of AJAX) before the post is completed. ajax.success() is not fired, but partial data is posted to "save.php" with a 200 OK status. $_SERVER's Content-Length is of no use, as suggested elsewhere, since it matches the actual content length of the partial data.
I figured out 2 ways of overcoming this:
as you propose, append a post variable at the end. Seems to work, although it seems a bit risky.
make your "save.php" script save to a temporary db column, and then use the ajax.success() to call another php script, say savefinal.php, without passing any data, which transfers data from the temp column to the final column (or just flags this data as valid). That way if post is interrupted data will only reside on the temp column on the database (or will not be flagged as valid).
The .success() is not called if post is interrupted, so this should work.
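A rough sketch of that two-step idea, with purely hypothetical table, column, and connection details (none of this is code from the question):
<?php
// save.php (sketch): store the incoming data but mark the row as pending.
session_start();
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');   // placeholder credentials
$pdo->prepare('INSERT INTO saves (payload, is_complete) VALUES (:p, 0)')
    ->execute(array(':p' => json_encode($_POST)));
$_SESSION['pending_save_id'] = $pdo->lastInsertId();
echo 'success';

// savefinal.php (sketch): called from ajax.success() with no data, flags the row as valid.
// session_start();
// $pdo->prepare('UPDATE saves SET is_complete = 1 WHERE id = :id')
//     ->execute(array(':id' => (int) $_SESSION['pending_save_id']));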
I presume this is a jQuery bug sending a wrong (very small) Content-Length to Apache, and Apache is forced to assume that the post request has completed, but I'm not really sure.
Why does this happen (is this expected behaviour)?
I was also looking for the cause of this strange behaviour, where some variables from the POST were missing, and came across this question in which very similar behaviour is explained by the slow connection of a web client sending a POST request with multipart/form-data content.
Here goes the mentioned question:
I am facing a problem when a remote web client with slow connection
fails to send complete POST request with multipart/form-data content
but PHP still uses partially received data to populate $_POST array.
As a result one value in $_POST array can be incomplete and more
values can be missing.
See How to check for incomplete POST request in PHP
How can I avoid it?
There you can find the recommended solution, as well. Nevertheless, the same solution was already proposed by you.
You can add the field <input type="hidden" name="complete"> (for example)
as the last parameter. In PHP, first check whether this parameter was
sent from the client. If this parameter was sent, you can be sure that you
got the entire data.
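A minimal sketch of that check on the PHP side (the field name complete matches the quoted suggestion; responding with 400 is an assumption about how you would want to react):
<?php
// Sketch: treat the request as incomplete unless the last-sent field arrived.
if (!isset($_POST['complete'])) {
    header($_SERVER['SERVER_PROTOCOL'] . ' 400 Bad Request', true, 400);
    exit('Incomplete POST - the trailing "complete" field is missing');
}
// ...safe to process $_POST from here on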
How does AJAX know whether it failed or succeeded if the server side doesn't echo anything back?
$.ajax(error:..,success:..)
I met with this exception in my test:
uncaught exception: [Exception... "Component returned failure code: 0x80040111 (NS_ERROR_NOT_AVAILABLE) [nsIXMLHttpRequest.statusText]" nsresult: "0x80040111 (NS_ERROR_NOT_AVAILABLE)" location: "JS frame :: http://localhost/script/tab.js :: anonymous :: line 69" data: no]
The server side code is:
$id = process();
And for the purpose of testing, I have exit() in process();
Is that the reason for this exception? If so, why?
EDIT
I looked at the line that causes the exception; it's the error handling function of $.ajax():
error: function(XMLHttpRequest, textStatus, errorThrown){
    alert(XMLHttpRequest.statusText);
    alert(textStatus);
    alert(errorThrown);
}
Anything wrong here?
The HTTP request also returns a status such as 200 == OK, 404 == Not Found, 12152 == connection closed by server, and so on.
Just read up on what the status IDs mean so you can look for them. For debugging you can also just write out myhttprequest.status to the document and it shows what status was returned.
This depends on the status code the request returns. A successful request returns a status code in the range of 2xx; an error is in the range of 4xx or 5xx.
For more information see Wikipedia: List of HTTP status codes.
It would still get a response from the server, with the data part of it being empty. If it got no response at all, that would be an error.
http://docs.jquery.com/Ajax/jQuery.ajax#options
Give an option for success and error. These functions will be called after the call is made.
There are four possible scenarios that you could get:
the server isn't there or refuses the connection (this is identifiable by the sockets library that the browser uses, which will report the connection failure)
the connection works and the server returns a non-success error code - this comes back in the header. Indeed, the request can succeed (200 code) even with an empty body; that's perfectly valid
the connection comes up but the server fails to respond - I'm not clear on the details of this, but I'd expect the JS to eventually time out because no response was received and return a failure based on that.
the connection comes up but the server responds incorrectly (e.g. no headers) - the JS implementation should return this as an error.
In any case, all the error scenarios are handled by the Javascript core's XMLHttpRequest implementation - jQuery just wraps it up with a slightly tidier interface.
In your case (now you've provided more information) I would recommend that you use Firebug to see what the server response to your request is. That said, you shouldn't be getting an exception for anything inappropriate from the server; the JS should call the same error callback for all the above cases.
are you missing { } ?
$.ajax(error:..,success:..)
should be
$.ajax( { error: function( ){ } } );
if that's it, sorry dude, that would be annoying to have spent that much time on, haha
I fixed this by specifying a timeout in my ajax call. The ajax timeout was just giving up after X amount of time. Even though the page ended up returning data, the ajax object just gave up and bailed, but threw this error.
I am building a jQuery/JS/PHP/MySQL app with DB records management and need to provide reliable and complete feedback to the user on AJAX calls that modify back end DB records. The problem IMHO is that the $.ajax success: and error: functions indicate just AJAX transport layer success, not the whole process. What if the DB modification fails? How can one provide complete feedback to the user?
I ended up with
$.ajax({
    url: "/action/delete",
    data: "rowid=" + rowid,
    complete: function(xmlHttp) {
        if (xmlHttp.responseText) alert('Success - back end returned "success"');
        else alert('Failure - back end returned NULL');
    }
});
and PHP response:
$success = deleteRecord( $_GET['rowid'] );
if ($success) {
    print 'success';
} else {
    print NULL;
}
exit();
The idea is simple - if I manage to get positive feedback from the back end, then the whole operation succeeded; if not, the user doesn't care where the problem occurred.
Thank you in advance, your feedback is highly appreciated.
If you respond to the request with some JSON data instead of just some new HTML to insert into the DOM, you can place whatever kinds of error codes and messages you like with the data. For example, if your response was something like...
{
    "errorstate": 0,
    "errormsg": "All systems are go",
    "displaytext": "stuff I want to display when all goes well"
}
Your javascript code can examine this data and do whatever it feels it needs to. It also allows you to push more of the error handling into your server script which can often be simpler.
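For example, a PHP handler could produce that envelope along these lines (do_the_database_work() is a hypothetical placeholder for whatever the script really does):
<?php
// Sketch: wrap the outcome of the server-side work in a small JSON envelope.
$ok = do_the_database_work();   // hypothetical helper
header('Content-Type: application/json');
echo json_encode(array(
    'errorstate'  => $ok ? 0 : 1,
    'errormsg'    => $ok ? 'All systems are go' : 'The database update failed',
    'displaytext' => $ok ? 'stuff I want to display when all goes well' : '',
));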
Try http://docs.jquery.com/Ajax/jQuery.getJSON#urldatacallback
One possible solution would be to use the HTTP response code to signal a failure: 200 OK when everything is ok, and 500 Internal Server Error on error, which you can simply check when you reach readyState 4.
In PHP I believe this is done through header("HTTP/1.0 200 Ok") before any other data is sent. If you're afraid data will be sent by mistake before you can evaluate the correct header to set you can turn on output buffering.
How you wish to present the data is of course up to you; on a 500 you could, for example, just have document.getElementById("myerrorbox").innerHTML = xmlHttp.responseText, or similar, and render a partial HTML document in your PHP program.
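A minimal sketch of that approach (save_to_database() is a hypothetical placeholder for the real work):
<?php
// Sketch: signal failure through the HTTP status line instead of the body alone.
ob_start();                             // output buffering, so the status can still be changed
$ok = save_to_database($_POST);         // hypothetical helper doing the actual work
if (!$ok) {
    header($_SERVER['SERVER_PROTOCOL'] . ' 500 Internal Server Error', true, 500);
    echo 'The record could not be saved';   // partial HTML/text for the error box
} else {
    echo 'Saved';
}
ob_end_flush();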
I send status messages back to the client along with an error flag. My JavaScript code then displays the message it got from the server and colours it according to the error flag.
I find that to be quite efficient.