The file is empty after updating - PHP

I'm trying to update an existing document using the Google Docs API with PHP + cURL; gdata.class.php is the base class. The update only partly completes: the old content is removed, but the new content is never written. According to the documentation, I send a PUT request (the initial request) to the address
http://docs.google.com/feeds/upload/create-session/default/private/full/document%123456
The request carries the document's ETag and an empty body. I receive a 200 OK status code and a unique upload URI like this:
http://docs.google.com/feeds/upload/create-session/default/private/full/document%123456?upload_id=EnB2Uob7DWcFVJTX3oF8sdVv9koZTHacngmM_
What should I do next?
Attempt 1: I send the file content to the unique upload URI with these headers:
[0] => Content-Length: 6
[1] => Content-Range: bytes 0-5/6
[2] => Content-Type: text/plain
The response received is:
HTTP/1.1 308 Resume Incomplete
The response headers do not contain a "Range" header, which is strange.
Result: the target file is not changed.
Attempt 2: I send the file content to the unique upload URI with these headers:
[0] => Content-Length: 6
[1] => Content-Type: text/plain
The response received is:
HTTP/1.1 200 OK
There is atom+xml in the body.
Result: the target file is now empty.
P.S. The curl_getinfo function returns "5"; this does not depend on the size of the file.

Well, if it asks you to use the PUT method, then tell cURL to use it:
curl_setopt($ch, CURLOPT_PUT, 1);
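For completeness, here is a minimal sketch (untested, only an illustration of the idea) of what the follow-up request to the unique upload URI could look like. $uploadUri, $filePath and $authHeader are placeholders for the URI returned by the initial request, the local file, and whatever Authorization header gdata.class.php already builds; it uses CURLOPT_CUSTOMREQUEST with a raw body instead of CURLOPT_PUT so no file handle is needed:
$data = file_get_contents($filePath);
$size = strlen($data);
$ch = curl_init($uploadUri);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');   // send the chunk as a real PUT
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);      // raw body, not urlencoded form fields
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    $authHeader,
    'Content-Type: text/plain',
    'Content-Length: ' . $size,
    'Content-Range: bytes 0-' . ($size - 1) . '/' . $size,
));
$response = curl_exec($ch);
$status   = curl_getinfo($ch, CURLINFO_HTTP_CODE);   // 2xx on success, 308 if the server still expects more bytes
curl_close($ch);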

Can I add a custom header to simplexml_load_file

I'd like to download a remote page only when it differs from the version I already have. There's no "Last-Modified" or "Expires" header (the server sends Cache-Control: max-age=0, private, must-revalidate), but there is an ETag header.
So I can send an If-None-Match header with the last ETag value, and on any error (including 304 Not Modified) retry after a delay.
Currently I'm using simplexml_load_file to grab the URL, and I wonder if I can call it in some way that adds the extra header, or do I need to roll out a more heavyweight solution (cURL, file_get_contents, etc.)?
You can use cURL to add the custom header, then pass the content returned from the cURL request to simplexml_load_string to get a SimpleXMLElement object.
curl_setopt($ch, CURLOPT_HTTPHEADER, array('If-None-Match: XXX'));
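A minimal sketch of the full round trip under those assumptions ($url and a previously stored $etag are your own values); it only parses the body on 200 and treats 304 as "unchanged":
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
if (!empty($etag)) {
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('If-None-Match: ' . $etag));
}
$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($status === 200) {
    $xml = simplexml_load_string($body);   // SimpleXMLElement on success, false on parse error
    // remember the new ETag from the response headers for the next poll
} elseif ($status === 304) {
    // not modified: keep using the cached copy and retry later
}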

php://input contains data for a GET request

I am running Apache2 and PHP 5 on Linux, and I'm getting some strange behavior with the php://input stream.
For some GET requests the stream is not empty like it should be. Instead, the php://input stream contains the entire GET request. I have worked around the issue but I would like to know if I should file a bug about this, or if it is "desired but undocumented" behavior.
Details
Early in the request processing, I call:
$in = file_get_contents('php://input');
if ( !empty($in) )
    $post_data = json_decode($in);
if ( !empty($in) && is_null($post_data) ) {
    // output some error info and exit
}
Usually when a request does not have a body then $in is empty and all is right with the world. But sometimes a GET request will have a body, and that body will be the entire request. Of course you can't json-decode that data, and the error condition gets hit.
This only happens with some requests. For example, this request does not exhibit the error:
GET /os/invitations/kkkkkk HTTP/1.1
Host: our.machine.com
Content-Type: application/json
Authorization: Basic aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa==
But this request, which is routed through some proxies and VPNs, does trigger the error.
GET http://some.proxy.at.some.big.company.com:7080/cvp-out/cmmproxy/os/invitations/d66065566dba541c8ba6a70329684645 HTTP/1.1
Content-Type: application/json
Authorization: Basic aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa==
Clientid: abc
User-Agent: Java/1.6.0
Host: some.proxy.at.some.big.company.com:7080
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive
X-Remote-Addr: 53.231.244.171
X-Remote-Host: 53.231.244.171
X-Server-Name: some.proxy.at.some.big.company.com
X-Server-Port: 7080
X-Scheme: http
I spent hours treating this like a routing/dispatch problem, but it turned out to be our code. The fix was, of course, to only read from the input stream when you are expecting data:
if ( in_array( $_SERVER['REQUEST_METHOD'], array('PUT', 'POST') )) {
    $in = file_get_contents('php://input');
    if ( !empty($in) )
        $post_data = json_decode($in);
}
Is this a known issue? Does it happen unpredictably? Should I file a bug?
As far as I know, that's not an error. We understand that a GET request shouldn't have a body, but the php:// documentation says nothing about which request methods will produce input, so it could be any method. It is certainly not limited to POST, since the docs mention at least PUT and PROPFIND.
So at any rate, your solution is a must.
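If you want to be a little stricter than the fix above, here is a sketch that also checks the Content-Type before decoding (the exact checks are an assumption about your handler, not something PHP requires):
$method      = isset($_SERVER['REQUEST_METHOD']) ? $_SERVER['REQUEST_METHOD'] : 'GET';
$contentType = isset($_SERVER['CONTENT_TYPE'])   ? $_SERVER['CONTENT_TYPE']   : '';
$post_data   = null;

if (in_array($method, array('PUT', 'POST'))
        && stripos($contentType, 'application/json') !== false) {
    $in = file_get_contents('php://input');
    if (!empty($in)) {
        $post_data = json_decode($in);
        if (is_null($post_data)) {
            // output some error info and exit, as in the original handler
        }
    }
}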

Is there any way to configure php to always set $_SERVER['CONTENT_LENGTH']?

I'm working on a CardDAV client. As the server I use DAViCal v0.9.9.6. I don't understand why I'm getting an invalid content-type error when the HTTP headers contain the correct value. I looked into the source code and found this condition:
if ( isset($_SERVER['CONTENT_LENGTH']) && $_SERVER['CONTENT_LENGTH'] > 7) {...
After a little research I found that PHP sets $_SERVER['CONTENT_LENGTH'] only with the POST method and when uploading a file. Is there any way to configure PHP to always set $_SERVER['CONTENT_LENGTH']?
I'm asking generally, not only for this case...
//EDIT
I'm making an HTTP PUT request to the DAViCal server (using PHP cURL).
PUT /caldav.php/testuser/contacts/newc.vcf HTTP/1.1
Host: davical
Content-Type: text/vcard;
BEGIN:VCARD
VERSION:3.0
FN:ME
...
On the DAViCal side there is a condition testing CONTENT_LENGTH, which is not set. So is this a DAViCal bug?
//EDIT 2
Finally I figured it out!
A PUT request with a readfunc callback requires setting CURLOPT_INFILESIZE via curl_setopt(...).
There is no automatic value, and putting a Content-Length field into the header manually is also wrong.
Example (incorrect):
// PUT REQUEST
curl_setopt($ch,CURLOPT_HTTPHEADER,array("Content-Length: $length")); //mistake
curl_setopt($ch,CURLOPT_PUT,true);
curl_setopt($ch,CURLOPT_READFUNCTION,array($this,'readfunc'));
....
--------------------------------------------------------------
// WIRESHARK TCP STREAM DUMP
PUT /caldav.php/testuser/contacts/novy.vcf HTTP/1.1
Authorization: Basic xxxxxxxxxxxxxxx
Host: davical
Accept: */*
Content-Type: text/vcard
Content-Length: xxx
Expect: 100-continue
HTTP/1.1 100 Continue
155
BEGIN:VCARD
VERSION:3.0
...
END:VCARD
0
HTTP/1.1 200 OK
----------------------------------------------------------------
// On server side
isset($_SERVER['CONTENT_LENGTH'])==false
Second (correct) example
// PUT REQUEST
curl_setopt($ch,CURLOPT_INFILESIZE,$length);
curl_setopt($ch,CURLOPT_PUT,true);
curl_setopt($ch,CURLOPT_READFUNCTION,array($this,'readfunc'));
....
--------------------------------------------------------------
// WIRESHARK TCP STREAM DUMP
PUT /caldav.php/testuser/contacts/novy.vcf HTTP/1.1
Authorization: Basic xxxxxxxxxxxxxxx
Host: davical
Accept: */*
Content-Type: text/vcard
Content-Length: xxx
Expect: 100-continue
HTTP/1.1 100 Continue
BEGIN:VCARD
VERSION:3.0
...
END:VCARD
HTTP/1.1 200 OK
----------------------------------------------------------------
// On server side
isset($_SERVER['CONTENT_LENGTH'])==true
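Pulling the working variant together, here is a self-contained sketch (untested against DAViCal; the URL, credentials and vCard payload are placeholders, and a closure stands in for the readfunc method). The point is that CURLOPT_INFILESIZE lets cURL announce the body length up front instead of falling back to chunked transfer encoding:
$vcard  = "BEGIN:VCARD\r\nVERSION:3.0\r\nFN:ME\r\nEND:VCARD\r\n";
$length = strlen($vcard);
$stream = fopen('php://temp', 'r+');   // feed the payload to the read callback via a stream
fwrite($stream, $vcard);
rewind($stream);

$ch = curl_init('http://davical/caldav.php/testuser/contacts/newc.vcf');
curl_setopt($ch, CURLOPT_PUT, true);
curl_setopt($ch, CURLOPT_INFILESIZE, $length);   // this is what makes Content-Length appear
curl_setopt($ch, CURLOPT_READFUNCTION, function ($ch, $fh, $maxBytes) use ($stream) {
    return (string) fread($stream, $maxBytes);   // returning '' signals the end of the body
});
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, 'testuser:password');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/vcard'));
$response = curl_exec($ch);
curl_close($ch);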
Although I have never used CONTENT_LENGTH, I can tell you why this is probably happening:
In a request, you don't have to set the Content-Length header... IT IS NOT MANDATORY, except in specific situations. If your POSTed content is of type "multipart/form-data", it becomes necessary to use Content-Length for each part, because each part is separated by a boundary and each part has its own headers...
For example:
Content-Type: MultiPart/Form-Data
Boundary: #FGJ4823024562DGGRT3455
MyData=1&Username=Blabla&Password=Blue
#FGJ4823024562DGGRT3455==
Content-Type: image/jpeg;base64
Content-Length: 256
HNSIFRTGNOHVDFNSIAH$5346twSADVni56hntgsIGHFNR$Iasdf==
This is a crude example of how a multipart request works; you can see that the second part has a Content-Length. That is why the content length is sometimes set and sometimes not: the parser needs to read X bytes before finding another boundary and extracting the correct data.
It doesn't mean your server will never send it in other cases, but my two cents are that this is what is happening right now: you are not using POST, but some other method.
Only requests that have a request body carry a Content-Length request header (or at least only then does it make sense), and only then is the $_SERVER variable set.
If you need it to always be set (which I think is bogus), you can do this yourself at the very beginning of your script:
if (!isset($_SERVER['CONTENT_LENGTH'])) $_SERVER['CONTENT_LENGTH'] = 0;
Assuming that if it is not set, it's of zero length. See as well Improved handling of HTTP requests in PHP.
You could probably set them yourself. Why do you need these values to be set? And what should they be set to?
Maybe you're missing information on $_SERVER['CONTENT_TYPE'] or $_SERVER['CONTENT_LENGTH'] as I did. On POST requests these are available in addition to those listed above.
-> http://www.php.net/manual/en/reserved.variables.server.php#86495

CURL response different than response to request sent from browser

I'm attempting to submit a form with cURL, both via PHP and from the command line. The response from the server consists of no content (just the headers posted below).
When the same URL is submitted via a browser, the response is a proper webpage.
I have tried submitting the cURL request parameters via POST and GET with each of the following command-line curl flags: "-d", "-F", and "-G".
If the parameters are posted with the "-d" flag, the resulting header is:
HTTP/1.1 302 Moved Temporarily
Date: Thu, 02 Jun 2011 21:41:54 GMT
Server: Apache
Set-Cookie: JSESSIONID=DC5F435A96A353289F58593D54B89570; Path=/XXXXXXX
P3P: CP="CAO PSA OUR"
Location: http://www.XXXXXXXX.com/
Content-Length: 0
Connection: close
Content-Type: text/html;charset=UTF-8
Set-Cookie: XXXXXXXXXXXXXXXX=1318103232.20480.0000; path=/
If the query string parameters are posted with "-F" flag, the resulting header is:
HTTP/1.1 100 Continue
HTTP/1.1 500 Internal Server Error
Date: Thu, 02 Jun 2011 21:52:54 GMT
Server: Apache
Content-Length: 1677
Connection: close
Content-Type: text/html;charset=utf-8
Set-Cookie: XXXXXXXXXXXXXX=1318103232.20480.0000; path=/
Vary: Accept-Encoding
<html><head><title>Apache Tomcat/5.5.26 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 500 - </h1><HR size="1" noshade="noshade"><p><b>type</b> Exception report</p><p><b>message</b> <u></u></p><p><b>description</b> <u>The server encountered an internal error () that prevented it from fulfilling this request.</u></p><p><b>exception</b> <pre>javax.servlet.ServletException: Servlet execution threw an exception<br>
</pre></p><p><b>root cause</b> <pre>java.lang.NoClassDefFoundError: com/oreilly/servlet/multipart/MultipartParser<br>
com.corsis.tuesday.servlet.mp.MPRequest.<init>(MPRequest.java:27)<br>
com.corsis.tuesday.servlet.mp.MPRequest.<init>(MPRequest.java:21)<br>
com.corsis.tuesday.servlet.TuesdayServlet.doPost(TuesdayServlet.java:494)<br>
javax.servlet.http.HttpServlet.service(HttpServlet.java:710)<br>
javax.servlet.http.HttpServlet.service(HttpServlet.java:803)<br>
</pre></p><p><b>note</b> <u>The full stack trace of the root cause is available in the Apache Tomcat/5.5.26 logs.</u></p><HR size="1" noshade="noshade"><h3>Apache Tomcat/5.5.26</h3></body></html>
Questions:
What might cause a server to respond differently depending on the nature of the cURL request?
How can the request be submitted successfully via cURL?
HTTP/1.1 100 Continue
I had problems associated with this header before. Some servers simply do not understand it. Try this option to override the Expect header:
curl_setopt( $curl_handle, CURLOPT_HTTPHEADER, array( 'Expect:' ) );
To add to what Richard said, I have seen cases where servers check the User-Agent string and behave differently based on its value.
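If that turns out to be the case here, spoofing a browser User-Agent is a one-line change (the UA string below is just an example):
curl_setopt($ch, CURLOPT_USERAGENT,
    'Mozilla/5.0 (Windows NT 6.1; rv:5.0) Gecko/20100101 Firefox/5.0');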
I have just had an experience with this and what fixed it was surprising. In my situation I was logging into a server so I could upload a file, have the server do work on it, and then download the new file. I did this in Chrome first and used the dev tools to capture over 100 HTTP requests in this simple transaction. Most are simply grabbing resources I don't need if I am trying to do all of this from the command line, so I filtered out only the ones I knew at a minimum I should need.
Initially this boiled down to a GET to set the cookie and log in with a username and password, a POST to upload the file, a POST to execute the work on the file, and a GET to retrieve the new file. I could not get the first POST to actually work though. The response from that POST is supposed to be information containing the upload ID, time uploaded, etc, but instead I was getting empty JSON lists even though the status was 200 OK.
I used cURL to spoof the requests from the browser exactly (copying the User-Agent, overriding Expect, etc.) and was still getting nothing. Then I started arbitrarily adding in some of the requests that I had captured from Chrome between the first GET and the POST, and lo and behold, after adding a GET request for the JSON history before the POST, the POST actually returned what it was supposed to.
TL;DR Some websites require more requests after the initial login before you can POST. I would try to capture a successful exchange between the server and browser and look at all of the requests. Some requests might not be as superfluous as they seem.
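A rough sketch of replaying such a sequence from PHP, reusing one handle and a cookie jar so the session survives the intermediate GET (all URLs and fields here are placeholders):
$jar = tempnam(sys_get_temp_dir(), 'cookies');
$ch  = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_COOKIEJAR      => $jar,    // write cookies here...
    CURLOPT_COOKIEFILE     => $jar,    // ...and send them back on later requests
    CURLOPT_FOLLOWLOCATION => true,
));

// 1. log in
curl_setopt($ch, CURLOPT_URL, 'https://example.com/login?user=u&pass=p');
curl_exec($ch);

// 2. the "extra" GET the browser makes before uploading
curl_setopt($ch, CURLOPT_URL, 'https://example.com/history.json');
curl_exec($ch);

// 3. now the POST behaves as it does in the browser
curl_setopt($ch, CURLOPT_URL, 'https://example.com/upload');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('file' => '@/tmp/input.dat'));   // old-style @ upload; newer PHP uses CURLFile
$response = curl_exec($ch);
curl_close($ch);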

PHP HTTP POST fails when cURL data > 1024

Note: solution at the end
If I attempt to do an HTTP POST of over 1024 characters, it fails. Why? Here is a minimal example:
recipient.php:
<?php
if (strlen(file_get_contents('php://input')) > 1000
        || strlen($HTTP_RAW_POST_DATA) > 1000) {
    echo "This was a triumph.";
}
?>
sender.php:
<?php
function try_to_post($char_count) {
    $url = 'http://gpx3quaa.joyent.us/test/recipient.php';
    $post_data = str_repeat('x', $char_count);
    $c = curl_init();
    curl_setopt_array($c,
        array( CURLOPT_URL => $url,
               CURLOPT_HEADER => false,
               CURLOPT_CONNECTTIMEOUT => 999,
               CURLOPT_RETURNTRANSFER => true,
               CURLOPT_POST => 1,
               CURLOPT_POSTFIELDS => $post_data
        )
    );
    $result = curl_exec($c);
    echo "{$result}\n";
    curl_close($c);
}
for ($i = 1020; $i < 1030; $i++) {
    echo "Trying {$i} - ";
    try_to_post($i);
}
?>
output:
Trying 1020 - This was a triumph.
Trying 1021 - This was a triumph.
Trying 1022 - This was a triumph.
Trying 1023 - This was a triumph.
Trying 1024 - This was a triumph.
Trying 1025 -
Trying 1026 -
Trying 1027 -
Trying 1028 -
Trying 1029 -
configuration:
PHP Version 5.2.6
libcurl/7.18.0 OpenSSL/0.9.8g zlib/1.2.3 libidn/1.8
lighttpd-1.4.19
Solution
Add the following option for cURL:
curl_setopt($ch,CURLOPT_HTTPHEADER,array("Expect:"));
The reason seems to be that any POST over 1024 characters causes the "Expect: 100-continue" HTTP header to be sent, and Lighttpd 1.4.* does not support it. I found a ticket for it: http://redmine.lighttpd.net/issues/show/1017
They say it works in 1.5.
You can convince PHP's curl backend to stop doing the 100-continue-thing by setting an explicit request header:
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Expect:'));
This way you can post a request of whatever length you want and curl will not do the two-phase post.
I blogged about this nearly two years ago.
First thoughts...
The manual page for curl_setopt says of CURLOPT_POSTFIELDS
"The full data to post in a HTTP "POST"
operation. To post a file, prepend a
filename with # and use the full path.
This can either be passed as a
urlencoded string like
'para1=val1&para2=val2&...' or as an
array with the field name as key and
field data as value."
Could it be that your value is being treated as if it were urlencoded, and thus looks like a big long name with no value. Something somewhere is deciding to truncate that name.
Maybe you could alter it to something like
$post_data = "data=".str_repeat('x', $char_count);
Turns out this was too easy, and the problem was a little deeper. So, how to debug?
Find out exactly what CURL sends to the server
Another debugging tactic might be to formulate a curl command line which achieves the same thing, and have it output the HTTP request details as it makes them.
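From PHP you can get much the same information by turning on verbose output and capturing it; a small sketch against the $c handle from sender.php (the log path is arbitrary):
$log = fopen('/tmp/curl_debug.log', 'w');
curl_setopt($c, CURLOPT_VERBOSE, true);        // dump the full conversation...
curl_setopt($c, CURLOPT_STDERR, $log);         // ...into the log file instead of stderr
curl_setopt($c, CURLINFO_HEADER_OUT, true);    // also record the outgoing request headers
$result = curl_exec($c);
$sentHeaders = curl_getinfo($c, CURLINFO_HEADER_OUT);
fclose($log);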
Testing the server by hand
You can eliminate the server from the equation by performing a request by hand, e.g. telnetting to port 80 on your server and sending it a request of more than 1024 chars:
POST /test/recipient.php HTTP/1.0
Host: gpx3quaa.joyent.us
Content-Length:1028
xxxxx(I put 1028 chars here, no point copying them all here!)
I got this response
HTTP/1.0 200 OK
Connection: close
Content-type: text/html; charset=UTF-8
Content-Length: 19
Date: Tue, 20 Jan 2009 21:35:16 GMT
Server: lighttpd/1.4.19
This was a triumph.Connection closed by foreign host.
So at least you now know it's all on the client side, possibly some cURL option or configuration setting somewhere :(
Final Answer!
The problem intrigued me so I dug deeper
If you use CURLOPT_VERBOSE => true, you'll see that cURL sends an extra header on the bigger posts: Expect: 100-Continue. Your lighttpd server doesn't like this, it would seem.
You can stop CURL from doing this by forcing it to use HTTP/1.0 with CURLOPT_HTTP_VERSION=>CURL_HTTP_VERSION_1_0 in your curl_setopt options array.
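For illustration, here is how that could slot into the curl_setopt_array call in sender.php (either this or the Expect: override above is enough on its own):
curl_setopt_array($c,
    array( CURLOPT_URL => $url,
           CURLOPT_HEADER => false,
           CURLOPT_RETURNTRANSFER => true,
           CURLOPT_POST => 1,
           CURLOPT_POSTFIELDS => $post_data,
           // downgrade to HTTP/1.0 so curl never sends Expect: 100-continue
           CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_0
           // alternatively: CURLOPT_HTTPHEADER => array('Expect:')
    )
);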
I had a similar problem with an IIS server, using SSL v3.
I kept getting the following cURL error when CURLOPT_POSTFIELDS was longer than 1024:
52 - SSL read: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number, errno 0
Adding "Expect:" via CURLOPT_HTTPHEADER solved the problem for me.
Thank you so much for this thread!
Check whether you have the Suhosin patch enabled. By default, it cuts off POST data after a certain number of indexes. You can raise the limits in the Suhosin config, though.
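A quick way to see whether Suhosin is loaded and what its request limits are (the directive names below are the commonly documented ones; check them against your Suhosin version):
if (extension_loaded('suhosin')) {
    foreach (array('suhosin.post.max_vars',
                   'suhosin.post.max_value_length',
                   'suhosin.request.max_vars',
                   'suhosin.request.max_value_length') as $directive) {
        echo $directive, ' = ', var_export(ini_get($directive), true), "\n";
    }
}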
