I'm attempting to stream chunked POST data using sockets in PHP to a local server for testing. This works fine if I don't chunk the request entity body and provide a Content-Length header.
However, when I chunk the transfer as follows the server doesn't recognize the end of the message. What is wrong with the raw message below that is preventing the server from correctly recognizing that the message is complete?
POST / HTTP/1.1
HOST: localhost
CONTENT-TYPE: text/plain
USER-AGENT: testing
ACCEPT-ENCODING: gzip,deflate,identity
TRANSFER-ENCODING: chunked
36
When in the chronicle of wasted time
0
After the last '0' there are two CRLFs, so the last 5 bytes are: 0x30, 0x0D, 0x0A, 0x0D, 0x0A.
I've tried sending this request to both a local Apache server and PHP 5.4's built-in web server. Neither can determine that the request is complete, and execution hangs until the socket times out.
The chunk size must be given in hexadecimal. The body "When in the chronicle of wasted time" is 36 bytes, so the size line should read 24 (hex), not 36: as written, the server parses 0x36 = 54 decimal and keeps waiting for 18 more bytes that never arrive.
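A minimal sketch of building a correctly framed chunked request (host, path, and body taken from the question; the actual socket write is left commented out):

```php
<?php
// Sketch: frame a request body as HTTP/1.1 chunks.
// The chunk-size line must be the byte length in *hexadecimal*.
function chunk(string $data): string
{
    return dechex(strlen($data)) . "\r\n" . $data . "\r\n";
}

$body = "When in the chronicle of wasted time";

$message = "POST / HTTP/1.1\r\n"
         . "Host: localhost\r\n"
         . "Content-Type: text/plain\r\n"
         . "Transfer-Encoding: chunked\r\n"
         . "\r\n"                 // blank line ends the headers
         . chunk($body)           // "24\r\n...data...\r\n" (0x24 = 36 bytes)
         . "0\r\n\r\n";           // terminating zero-size chunk

// $fp = fsockopen('localhost', 80);
// fwrite($fp, $message);
```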
After building an *.exe we upload that file as a base64-encoded JSON string to an API endpoint, which is written in PHP using the Laravel framework.
The JSON Payload looks like:
payload = {
'operating_system': 'windows',
'architecture': 'amd64',
'min_system_version': '10.0',
'file': {
'content': encode_base64_file(file),
'mime_type': 'application/exe',
'checksum': {
'type': 'sha256',
'sum': sha256_checksum(file)
}
}
}
When submitting a file of approximately 280 MB everything works like a charm. Since we uploaded a new version, now 680 MB, the server (Plesk Obsidian v18.0) closes the connection without a response.
In use:
Apache 2.4
no Nginx Proxy
PHP 7.3 (platform will be updated soon)
PHP Settings (temporary for debugging)
Memory Limit = 4GB
Post Max Size = 4GB
Execution Time = 120
Debug output:
> User-Agent: curl/7.79.1
> Accept: */*
> Authorization: Basic XXXXXXXX
> Cookie: XDEBUG_SESSION=start
> Content-Type: application/json
> Content-Length: 646833265
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
* Empty reply from server
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
curl: (52) Empty reply from server
Our last attempt was to simply dump the POST request in a single PHP file (example).
Same results.
We expected the file to upload normally (via the Laravel API), and while debugging, that the Plesk error log would show at least an error (it doesn't). Laravel doesn't run into a caught error either.
Since we don't have full access to the server itself, we are limited in debugging that issue.
Does somebody know if there is any possible "size limitation" of a POST request that could cause this?
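One thing worth checking (an assumption, not a confirmed cause): base64 inflates the payload by roughly a third, so a 680 MB binary becomes a request body of about 907 MB, and the whole JSON string must additionally fit in memory when PHP decodes it. A quick sketch of the arithmetic:

```php
<?php
// Sketch: estimate the on-the-wire size of a base64-encoded upload.
// base64 emits 4 output bytes for every 3 input bytes.
function base64Size(int $rawBytes): int
{
    return (int) (ceil($rawBytes / 3) * 4);
}

$raw     = 680 * 1024 * 1024;   // 680 MB binary
$encoded = base64Size($raw);    // about 907 MB of JSON payload

printf("raw: %d MB, encoded: ~%d MB\n",
    $raw / 1048576, round($encoded / 1048576));
```

Even with post_max_size and memory_limit at 4 GB, something in front of PHP (Apache's LimitRequestBody, mod_security, or a Plesk-level limit) could still cut the connection, which might explain why the ~373 MB encoded payload passed while the ~907 MB one fails.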
I'm using Apache 2.2 and PHP 7.0.1. I force chunked encoding with flush() like in this example:
<?php
header('HTTP/1.1 200 OK');
echo "hello";
flush();
echo "world";
die;
And I get unwanted characters at the beginning and end of the response:
HTTP/1.1 200 OK
Date: Fri, 09 Sep 2016 15:58:20 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/7.0.9
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
a
helloworld
0
The first one is the chunk size in hex ("helloworld" is 10 bytes, A in hexadecimal). I'm using Klein as a PHP router, and I have found that the problem only comes up when the HTTP status header is rewritten. I guess there is a problem with my Apache config, but I wasn't able to figure it out.
Edited: My problem had nothing to do with Apache but Nginx and chunked_transfer_encoding directive. Check the answer below.
This is how Transfer-Encoding: chunked works. The extra characters you're seeing are part of the encoding, rather than the body.
A client that understands the encoding will not include them in the result; a client that doesn't is not a conforming HTTP/1.1 client and should be considered broken.
As Joe pointed out, that is the normal behavior when chunked transfer encoding is enabled. My tests were not accurate because I was requesting Apache directly on the server. When I was experiencing the problem in Chrome, I was actually querying an Nginx service acting as a proxy for Apache.
By running tcpdump I realized that Nginx was re-chunking responses, but only when the HTTP status header was rewritten (header('HTTP/1.1 200 OK')) in PHP. The fix for Transfer-Encoding: chunked being applied twice is to set chunked_transfer_encoding off in the location context of my Nginx .php handler.
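For reference, a sketch of the relevant location block (the fastcgi/proxy details are placeholders for whatever the handler already contains):

```nginx
location ~ \.php$ {
    # Stop Nginx from re-chunking responses that the upstream
    # has already encoded with Transfer-Encoding: chunked.
    chunked_transfer_encoding off;

    # ... existing proxy_pass/fastcgi_pass configuration ...
}
```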
I have used http://www.webpagetest.org tool to check a web page, this indicates that keep-alive is not activated. I researched this and was led to How can I enable keep-alive? and Changing PHP $_SERVER['HTTP_CONNECTION'] value
Following from this I tried <ifModule mod_headers.c> Header set Connection keep-alive </ifModule> in the .htaccess file. The http://www.webpagetest.org tool still indicated keep-alive is not activated.
I contacted the hosting company and they stated that Keep-Alive is enabled.
I made a bare bones html test file (call this test.html) that sought to load two images, one from the server that did not keep-alive (call this notalive) and the other from a server (call this alive) that the http://www.webpagetest.org tool indicated is keeping alive.
Results:
When checking test.html hosted on server notalive with the webpagetest tool, the image on server notalive indicates that keep-alive is not activated; for the image on server alive, however, the tool indicates keep-alive is active.
I then swapped test.html over to server alive, and the tool reported exactly as in the test above: for the image on the notalive server it says keep-alive is not activated, and for the image on server alive it says keep-alive is activated.
This led me to believe that since the html files are identical that my issue might be due to configuration of the server notalive.
I ran phpinfo() from both servers and retained lines that seemed to do with alive (based on the above stackoverflow postings) and have reproduced those lines below.
For the notalive server:
PHP Version 5.3.29
Configuration: apache2handler: Max Requests Per Child: 500 - Keep Alive: on - Max Per Connection: 100
Configuration: apache2handler:Timeouts Connection: 300 - Keep-Alive: 1
Apache Environment: HTTP_CONNECTION close
HTTP Headers Information: HTTP Request Headers: connection close
Connection keep-alive
PHP Variables: _SERVER["HTTP_CONNECTION"] close
For the alive server:
PHP Version 5.2.12
apache: Max Requests Per Child: 1000 - Keep Alive: on - Max Per Connection: 500
Timeouts Connection: 300 - Keep-Alive: 5
Apache Environment: HTTP_CONNECTION keep-alive
HTTP Headers Information: HTTP Request Headers: Connection keep-alive
HTTP Headers Information: HTTP Response Headers:
Keep-Alive timeout=5, max=500
Connection Keep-Alive
PHP Variables: _SERVER["HTTP_CONNECTION"] keep-alive
I would be most obliged if someone would look at the above and perhaps offer some guidance on how to activate keep-alive.
Thank you for taking the time to read this.
Sorted or at least a workaround.
Based on my test results the hosting company have concluded that the issue is due to their use of Apache web server and Varnish Cache.
They have moved the site over to Litespeed Server and the connections are now kept alive.
Thanks Blowski for the assistance.
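For anyone with full access to their Apache config: keep-alive is governed by the directives below (a sketch with common values). These belong in the server or virtual host configuration, not .htaccess, which is presumably why the mod_headers attempt above had no effect; the Connection header is managed by the server itself.

```apache
# httpd.conf (server/vhost context; not valid in .htaccess)
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
```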
I have an app which connects to my web server and transfers data via XML.
The headers I connect with are:
POST /app/API/Data/Receiver.php HTTP/1.1
User-Agent: Custom User Agent 1.0.0
Accept: text/xml
Content-Type: text/xml
Content-Length: 1580
Host: servername.com
The app then handles the data and returns its own XML formatted reply. One of the header's I'm setting in the response is:
header("Connection: close");
When I connect and send my data from a simple app on my PC (C++), it works fine: I get the close header correctly and the connection is closed as soon as the data is available. When I send the exact same data using a GSM modem and an embedded app, the connection header comes back as:
header("Connection: keep-alive");
The GSM modem also sits and waits until the connection is closed before moving on and often just times out.
Is there someway to close the connection on the server so that the GSM side does not time out?
It is possible that your GSM service provider is transparently proxying connections. Try sending the data on a non-standard port (i.e. not 80, 8080, or 443).
Setting the Cache-Control header to private might also work:
Cache-Control: PRIVATE
Headers are just plain text but cannot be sent once data has been sent in PHP. Try this:
echo "\r\n\r\nConnection: close";
die();
and adjust to your needs
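Another approach worth trying (a sketch; whether it helps depends on the modem's HTTP client): send an explicit Content-Length alongside the close header, so the client can detect the end of the body on its own instead of waiting for the connection to drop.

```php
<?php
// Sketch: let the client detect end-of-body via Content-Length
// instead of waiting for the server to close the connection.
// The XML reply below is a placeholder for the real response.
$reply = '<response><status>ok</status></response>';

header('Connection: close');
header('Content-Type: text/xml');
header('Content-Length: ' . strlen($reply));

echo $reply;
flush();   // push the body out to the client immediately
```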
I am having some problems trying to get a post request to work from a payment provider (WorldPay) to my host server. Basically WorldPay does a callback to a script on my website if/when a transaction is successful. Problem is the post request isn’t getting to my script – we just get a 408 timeout.
This is the request sent from WorldPay below:
POST /index.php?route=payment/worldpay/callback HTTP/1.0
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Host: www.mysite.com
Content-Length: 711
User-Agent: WJHRO/1.0 (WorldPay Java HTTP Request Object)
authAmountString=%26%23163%3B3.49&_SP.charEnc=UTF-8&desc=testItem&authMode=A
And this is the response sent back from my hosts server:
HTTP/1.1 408 Request Timeout
Connection: Close
Pragma: no-cache
cache-control: no-cache
Content-Type: text/html; charset=iso-8859-1
I know this is a long shot, but can anyone see anything wrong with anything above? To simplify things I replaced the PHP script with a basic HTML page that returned a hello world message, and we still got a 408, so I'm pretty sure the script works. We have also had this error once or twice:
failed CAUSED BY invalid HTTP status line: >null<
Any help is greatly appreciated
Cheers
Paul
If the HTTP request you gave above is accurate, it seems as if the client is advertising a content length of 711 bytes, but the entity body does not seem to be 711 bytes long. That is why the server is timing out waiting for the rest of the data.
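When building a request by hand, the safe pattern is to derive Content-Length from the exact byte count of the body, so the server never waits for data that will not arrive. A sketch using the request shown above:

```php
<?php
// Sketch: compute Content-Length from the actual body bytes.
$body = 'authAmountString=%26%23163%3B3.49&_SP.charEnc=UTF-8'
      . '&desc=testItem&authMode=A';

$request = "POST /index.php?route=payment/worldpay/callback HTTP/1.0\r\n"
         . "Content-Type: application/x-www-form-urlencoded; charset=UTF-8\r\n"
         . "Host: www.mysite.com\r\n"
         . "Content-Length: " . strlen($body) . "\r\n"
         . "\r\n"
         . $body;
```

If WorldPay's advertised length (711) is larger than the bytes it actually sends, the receiving server will sit in exactly this kind of 408 timeout.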
HTTP/1.1 408 Request Timeout: pay attention to the server config. If your host server is nginx, you can check "client_body_timeout" in nginx.conf.
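For example (a config sketch; the value is illustrative):

```nginx
http {
    # Allow slow clients more time to transmit the request body
    # before nginx answers with 408 Request Timeout (default: 60s).
    client_body_timeout 120s;
}
```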