After building an *.exe, we upload that file as a base64-encoded string inside a JSON payload to an API endpoint written in PHP using the Laravel framework.
The JSON Payload looks like:
payload = {
    'operating_system': 'windows',
    'architecture': 'amd64',
    'min_system_version': '10.0',
    'file': {
        'content': encode_base64_file(file),
        'mime_type': 'application/exe',
        'checksum': {
            'type': 'sha256',
            'sum': sha256_checksum(file)
        }
    }
}
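The helper calls in the payload are pseudocode; a minimal sketch of what they might look like (in Python, which the snippet resembles — only `encode_base64_file` and `sha256_checksum` are named above, everything else here is illustrative):

```python
import base64
import hashlib
import json

def encode_base64_file(path):
    # Read the binary and return it as a base64 ASCII string.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def sha256_checksum(path):
    # Stream the file so a large build is not held in memory twice.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def build_payload(path):
    # Assemble the JSON body in the shape shown above.
    return json.dumps({
        "operating_system": "windows",
        "architecture": "amd64",
        "min_system_version": "10.0",
        "file": {
            "content": encode_base64_file(path),
            "mime_type": "application/exe",
            "checksum": {"type": "sha256", "sum": sha256_checksum(path)},
        },
    })
```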
Submitting a file of approximately 280 MB works like a charm. Since we started uploading a new version of roughly 680 MB, the server (Plesk Obsidian 18.0) closes the connection without a response.
In use:
Apache 2.4
no Nginx Proxy
PHP 7.3 (platform will be updated soon)
PHP Settings (temporary for debugging)
Memory Limit = 4GB
Post Max Size = 4GB
Execution Time = 120
Debug output:
> User-Agent: curl/7.79.1
> Accept: */*
> Authorization:Basic XXXXXXXX
> Cookie: XDEBUG_SESSION=start
> Content-Type: application/json
> Content-Length: 646833265
> Expect: 100-continue
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
* Empty reply from server
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
curl: (52) Empty reply from server
Our last attempt was simply to dump the POST request in a single PHP file (as a minimal example).
Same result.
We expected the file to upload via the Laravel API as normal, and that while debugging the Plesk error log would show at least an error (it doesn't). Laravel doesn't run into a caught error either.
Since we don't have full access to the server itself, we are limited in debugging that issue.
Does anybody know whether there is any possible "size limitation" on a POST request that could apply in such a situation?
Related
Morning all
Basically, I am unable to make successful cURL requests to internal and external servers from my Windows 7 development PC because of an issue involving a proxy server. I'm running cURL 7.21.2 through PHP 5.3.6 on Apache 2.4.
Here's a most basic request that fails:
<?php
$curl = curl_init('http://www.google.com');
$log_file = fopen(sys_get_temp_dir() . DIRECTORY_SEPARATOR . 'curl.log', 'w');
curl_setopt_array($curl, array(
CURLOPT_RETURNTRANSFER => TRUE,
CURLOPT_VERBOSE => TRUE,
CURLOPT_HEADER => TRUE,
CURLOPT_STDERR => $log_file,
));
$response = curl_exec($curl);
#fclose($log_file);
print "<pre>{$response}";
The following (complete) response is received.
HTTP/1.1 400 Bad Request
Date: Thu, 06 Sep 2012 17:12:58 GMT
Content-Length: 171
Content-Type: text/html
Server: IronPort httpd/1.1
Error response
Error code 400.
Message: Bad Request.
Reason: None.
The log file generated by cURL contains the following.
* About to connect() to proxy usushproxy01.unistudios.com port 7070 (#0)
* Trying 216.178.96.20... * connected
* Connected to usushproxy01.unistudios.com (216.178.96.20) port 7070 (#0)
> GET http://www.google.com HTTP/1.1
Host: www.google.com
Accept: */*
Proxy-Connection: Keep-Alive
< HTTP/1.1 400 Bad Request
< Date: Thu, 06 Sep 2012 17:12:58 GMT
< Content-Length: 171
< Content-Type: text/html
< Server: IronPort httpd/1.1
<
* Connection #0 to host usushproxy01.unistudios.com left intact
Explicitly stating the proxy and user credentials, as in the following, makes no difference: the response is always the same.
<?php
$curl = curl_init('http://www.google.com');
$log_file = fopen(sys_get_temp_dir() . DIRECTORY_SEPARATOR . 'curl.log', 'w');
curl_setopt_array($curl, array(
CURLOPT_RETURNTRANSFER => TRUE,
CURLOPT_VERBOSE => TRUE,
CURLOPT_HEADER => TRUE,
CURLOPT_STDERR => $log_file,
CURLOPT_PROXY => 'http://usushproxy01.unistudios.com:7070',
CURLOPT_PROXYUSERPWD => '<username>:<password>',
));
$response = curl_exec($curl);
#fclose($log_file);
print "<pre>{$response}";
I was surprised to see an absolute URL in the request line ('GET ...'), but I think that's fine when dealing with proxy servers - according to the HTTP spec.
I've tried all sorts of combinations of options - including sending a user-agent, following this and that, etc, etc - having been through Stack Overflow questions, and other sites, but all requests end in the same response.
The same problem occurs if I run the script on the command line, so it can't be an Apache issue, right?
If I make a request using cURL from a Linux box on the same network, I don't experience a problem.
It's the "Bad Request" thing that's puzzling me: what on earth is wrong with my request? Do you have any idea why I may be experiencing this problem? A Windows thing? A bug in the version of PHP/cURL I'm using?
Any help very gratefully received. Many thanks.
You might be looking at an issue between cURL (different versions between Windows and Linux) and your IronPort version. From the IronPort documentation:
Fixed: Web Proxy uses the Proxy-Connection header instead of the
Connection header, causing problems with some user agents
Previously, the Web Proxy used the Proxy-Connection header instead of the
Connection header when communicating with user agents with explicit
forward requests. Because of this, some user agents, such as Real
Player, did not work as expected. This no longer occurs. Now, the Web
Proxy replies to the client using the Connection header in addition to
the Proxy-Connection header. [Defect ID: 46515]
Try removing the Proxy-Connection header (or adding a Connection header) and see whether that solves the problem.
Also, you might want to compare the cURL logs between Windows and Linux hosts.
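To make the comparison concrete, here is a sketch (in Python rather than PHP, purely for illustration) of building the explicit-proxy request by hand with a Connection header instead of Proxy-Connection, so it can be diffed against what cURL actually sends:

```python
def build_proxy_request(url, host, extra_headers=None):
    # An explicit forward-proxy request puts the absolute URL on the
    # request line; we deliberately send Connection, not Proxy-Connection.
    headers = {"Host": host, "Accept": "*/*", "Connection": "Keep-Alive"}
    headers.update(extra_headers or {})
    lines = ["GET {} HTTP/1.1".format(url)]
    lines += ["{}: {}".format(k, v) for k, v in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

req = build_proxy_request("http://www.google.com/", "www.google.com")
```

The resulting string would be written verbatim to a socket opened to the proxy host and port.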
I have an app which connects to my web server and transfers data via XML.
The headers I connect with are:
POST /app/API/Data/Receiver.php HTTP/1.1
User-Agent: Custom User Agent 1.0.0
Accept: text/xml
Content-Type: text/xml
Content-Length: 1580
Host: servername.com
The app then handles the data and returns its own XML-formatted reply. One of the headers I'm setting in the response is:
header("Connection: close");
When I connect and send my data from a simple app on my PC (C++) it works fine: I get the close header correctly, and the connection is closed as soon as the data is available. When I send the exact same data using a GSM modem and an embedded app, the connection header comes back as:
header("Connection: keep-alive");
The GSM modem also sits and waits until the connection is closed before moving on and often just times out.
Is there someway to close the connection on the server so that the GSM side does not time out?
It is possible that your GSM service provider is transparently proxying connections. Try sending data on a non-standard port (i.e. not 80, 8080, or 443).
Setting the cache-control header to private might also work:
Cache-Control: private
Headers are just plain text, but they cannot be sent once data has already been sent in PHP. Try this:
echo "\r\n\r\nConnection: close";
die();
and adjust to your needs
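Whichever route is taken, the goal is a response that pairs Connection: close with an accurate Content-Length, so the modem can stop reading as soon as the last body byte arrives. A sketch of such a response (assembled in Python for illustration; in PHP both headers would have to go through header() before any output):

```python
def build_close_response(body: bytes) -> bytes:
    # Advertise the exact body length and the intent to close, so the
    # client need not wait for a timeout to know the reply is complete.
    head = [
        b"HTTP/1.1 200 OK",
        b"Content-Type: text/xml",
        b"Content-Length: " + str(len(body)).encode("ascii"),
        b"Connection: close",
    ]
    return b"\r\n".join(head) + b"\r\n\r\n" + body

resp = build_close_response(b"<reply>ok</reply>")
```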
I'm attempting to stream chunked POST data using sockets in PHP to a local server for testing. This works fine if I don't chunk the request entity body and provide a Content-Length header.
However, when I chunk the transfer as follows the server doesn't recognize the end of the message. What is wrong with the raw message below that is preventing the server from correctly recognizing that the message is complete?
POST / HTTP/1.1
HOST: localhost
CONTENT-TYPE: text/plain
USER-AGENT: testing
ACCEPT-ENCODING: gzip,deflate,identity
TRANSFER-ENCODING: chunked
36
When in the chronicle of wasted time
0
After last '0' there are 2xCRLF, so the last 5 bytes are: 0x30, 0x0D, 0x0A, 0x0D, 0x0A.
I've tried sending this request to both a local Apache server and PHP 5.4's built-in testing server. Neither can determine that the request is complete, and execution hangs until the socket times out.
The chunk size must be given in hexadecimal: the 36-byte chunk should be declared as 24 (hex), not 36.
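A minimal sketch of a correct chunked encoder (each chunk is prefixed by its size in hex followed by CRLF, and a zero-size chunk terminates the message):

```python
def chunked_encode(body: bytes, chunk_size: int = 64) -> bytes:
    out = b""
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        # The size prefix is hexadecimal: a 36-byte chunk is declared "24".
        out += format(len(chunk), "x").encode("ascii") + b"\r\n" + chunk + b"\r\n"
    # A zero-size chunk plus a final CRLF terminates the message.
    return out + b"0\r\n\r\n"

encoded = chunked_encode(b"When in the chronicle of wasted time")
```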
I want to set POST variables (name and value), send the request, and inspect the response.
This is also a security question: assume I want to POST form variables such as action="delete" and userid=100, and let's say I have found a file which accepts AJAX requests.
curl is your friend! :)
Say you've noticed an endpoint at example.org/process.php that a form is posting to. You can tailor your own custom request easily from the command line using curl.
$ curl -X POST --data "action=delete&userid=100" example.org/process.php
The --data or -d flag lets you pass arbitrary POST data just as an HTML form would. You can also set HTTP request headers with equal ease:
$ curl --header "User-Agent: Mosaic" example.org/process.php
You can see exactly what's happening with the -v (for verbose) flag. For the first example above the output is:
* About to connect() to example.org port 80 (#0)
* Trying 192.0.43.10... connected
* Connected to example.org (192.0.43.10) port 80 (#0)
> POST /process.php HTTP/1.1
> User-Agent: curl/7.21.6 (x86_64-apple-darwin10.5.0) libcurl/7.21.6 OpenSSL/1.0.0d zlib/1.2.5 libidn/1.22
> Host: example.org
> Accept: */*
> Content-Length: 24
> Content-Type: application/x-www-form-urlencoded
>
* HTTP 1.0, assume close after body
< HTTP/1.0 302 Found
< Location: http://www.iana.org/domains/example/
< Server: BigIP
* HTTP/1.0 connection set to keep alive!
< Connection: Keep-Alive
< Content-Length: 0
<
* Connection #0 to host example.org left intact
* Closing connection #0
If you're using a *NIX operating system including Mac OS X, you probably already have curl, just open a shell. If you work with Ruby at all, I highly recommend curb, a set of bindings for that language. Most PHP installations come with curl support, although the interface is pretty horrible. The docs are over at php.net.
You can use the cURL library for this; check the documentation for more info.
You can send data via POST or GET, upload files, and use SSL, cookies, FTP, and much more.
You might also want to take a look at Snoopy (http://sourceforge.net/projects/snoopy/).
It's a PHP class designed to act as a web browser, with lots of useful functionality like simulating HTTP requests, manipulating form data, etc.
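For comparison, the same tailored POST from the curl example can be assembled with a standard library alone (a Python sketch; the endpoint and form fields mirror the hypothetical example above):

```python
from urllib import parse, request

def make_delete_request(endpoint, userid):
    # Encode the form fields exactly as an HTML form (or curl --data) would.
    data = parse.urlencode({"action": "delete", "userid": userid}).encode("ascii")
    req = request.Request(endpoint, data=data, method="POST")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    return req

req = make_delete_request("http://example.org/process.php", 100)
# request.urlopen(req) would actually send it; here we only build it.
```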
I am having some problems trying to get a POST request to work from a payment provider (WorldPay) to my host server. Basically, WorldPay does a callback to a script on my website if/when a transaction is successful. The problem is that the POST request isn't getting to my script; we just get a 408 timeout.
This is the request sent from WorldPay below:
POST /index.php?route=payment/worldpay/callback HTTP/1.0
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Host: www.mysite.com
Content-Length: 711
User-Agent: WJHRO/1.0 (WorldPay Java HTTP Request Object)
authAmountString=%26%23163%3B3.49&_SP.charEnc=UTF-8&desc=testItem&authMode=A
And this is the response sent back from my hosts server:
HTTP/1.1 408 Request Timeout
Connection: Close
Pragma: no-cache
cache-control: no-cache
Content-Type: text/html; charset=iso-8859-1
I know this is a long shot, but can anyone see anything wrong with anything above? To simplify things, I replaced the PHP script with a basic HTML output which returned a hello-world message, and we still got a 408, so I'm pretty sure the script works. We have also had this error once or twice:
failed CAUSED BY invalid HTTP status line: >null<
Any help is greatly appreciated
Cheers
Paul
If the HTTP request you gave above is accurate, it seems as if the client is advertising a content length of 711 bytes, but the entity body does not seem to be 711 bytes long. That is why the server is timing out waiting for the rest of the data.
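The mismatch is easy to check mechanically. A sketch that compares a raw request's declared Content-Length against its actual body (the header value comes from the question; the body shown above is truncated, so the numbers here are illustrative only):

```python
def content_length_mismatch(raw_request: bytes):
    # Split headers from body at the first blank line.
    head, _, body = raw_request.partition(b"\r\n\r\n")
    declared = None
    for line in head.split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            declared = int(value.strip().decode("ascii"))
    # Returns (declared, actual); a server blocks waiting for the difference.
    return declared, len(body)

raw = b"POST /cb HTTP/1.0\r\nContent-Length: 711\r\n\r\nauthMode=A"
```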
For an HTTP/1.1 408 Request Timeout, pay attention to the server config: if your host server runs nginx, check "client_body_timeout" in nginx.conf.