I have some PHP code that calls a web service. It works perfectly when running on my local machine, but when I try to run it on the remote server (where it will ultimately reside), I get the following error:
Warning: simplexml_load_file(http://XXXXXXXXX:8080/symws/rest/standard/searchCatalog?clientID=StorageClient&term1=ocm00576702): failed to open stream: HTTP request failed! HTTP/1.1 400 Bad Request
in /var/www/ShoppingCart/storage_request_button.php on line 42
My local machine is running OS X; the server is running Debian Linux.
Any idea what could be causing this different behavior? Is there another package I need to install on the server?
UPDATE:
While putting the URL in a browser works fine, when I try to wget the URL from the Linux server, I get the 400 error. The server the URL points to is also running Debian Linux. There's no firewall on that server, and I've never had to configure it to allow access from anywhere else.
You could try to suppress errors, as this question mentions, to get rid of the 400 status. Perhaps with simplexml_load_file('url', null, LIBXML_NOERROR);
If it still doesn't work, there are a lot of things that can go wrong. simplexml_load_file doesn't have a lot of options to debug with. You could try using cURL:
<?php
// make the request
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://XXXXXXXXX:8080/symws/rest/standard/searchCatalog?clientID=StorageClient&term1=ocm00576702");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);

// on a non-200 response, dump the raw body for debugging
if (curl_getinfo($ch, CURLINFO_HTTP_CODE) !== 200) {
    var_dump($output);
} else {
    // load the XML
    $xml = simplexml_load_string($output);
}
curl_close($ch);

if (empty($xml)) {
    echo 'Something went wrong';
    exit;
}
?>
UPDATE:
Please give this a try and see whether you get some info back:
<?php
$content = file_get_contents('http://XXXXXXXXX:8080/symws/rest/standard/searchCatalog?clientID=StorageClient&term1=ocm00576702');
var_dump($http_response_header);
?>
My guess is that your URL is malformed.
Here's what the 400 code means.
"The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications."
Well, I don't think anyone was going to be able to help me with this one. Here's what it was: the server sending the request, storagews, was originally cloned from the server receiving the request, sws. Because of this, in storagews's /etc/hosts file, the hostname "sws" resolved to localhost, so the server was trying to send the request to itself. It was raj_nt's comment that clued me in to this. If you want to make an answer, I'll give you the bounty.
Thanks for trying everyone!
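For anyone hitting a similar symptom, a quick way to check what a hostname resolves to from inside PHP (the hostname below is a placeholder; substitute the host your request actually targets):

```php
<?php
// Placeholder hostname; substitute the host your request targets (e.g. 'sws').
$host = 'localhost';
$ip = gethostbyname($host);
echo "$host resolves to $ip\n";
// If a supposedly remote host resolves to 127.0.0.1, check /etc/hosts first.
```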
Check out the request being sent by simplexml_load_file().
You could also try accessing the URL directly in the browser (assuming you are performing a GET request).
You also need to consider the server side. Your server might be returning a 400 response even though everything is working normally, or it might not be programmed to accept the request you are sending.
I assume that the file loads directly in your browser. Check with cURL and spoof the user agent to match your browser's. Also try enabling the cookie feature.
Here's an example: http://www.electrictoolbox.com/php-curl-user-agent/
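As a rough sketch (the URL is a placeholder, and the user-agent string is just an example; copy the real one from your browser), the cURL version might look like:

```php
<?php
// Placeholder URL; substitute the real service endpoint.
$ch = curl_init('http://example.com/service');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// Spoof the user agent (example string; use the one your browser sends).
curl_setopt($ch, CURLOPT_USERAGENT,
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7) AppleWebKit/537.36');
// Passing an empty string enables the cookie engine for this handle
// without reading any file, so cookies set by the server are resent.
curl_setopt($ch, CURLOPT_COOKIEFILE, '');
$output = curl_exec($ch);
curl_close($ch);
```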
Related
I am using file_get_contents in PHP to get information from a client's collections on contentDM. CDM has an API so you can get that info by making php queries, like, say:
http://servername:port/webutilities/index.php?q=function/arguments
It has worked pretty well thus far, across computers and operating systems. However, this time things work a little differently.
http://servername/utils/collection/mycollectionname/id/myid/filename/myname
For this query I fill in mycollectionname, myid, and myname with relevant values. myid and mycollectionname have to exist in the system, obviously. However, myname can be anything you want. When you run the query, it doesn't return a web page or anything to your browser; it just downloads a file with myname as its name into your local /Downloads folder.
I DON'T WISH TO DOWNLOAD THIS FILE. I just want to read the contents of the file it returns directly into PHP as a string. The file I am trying to get just contains xml data.
file_get_contents works to get the data in that file if I use it with PHP 7 and Apache on my laptop running Ubuntu. But on my desktop, which runs Windows 10 and XAMPP (Apache and PHP 5), I get this error (I've replaced sensitive data with ###):
Warning:
file_get_contents(###/utils/collection/###/id/1110/filename/1111.cpd):
failed to open stream: No such file or directory in
D:\Titus\Documents\GitHub\NativeAmericanSCArchive\NASCA-site\api\update.php
on line 18
My coworkers have been unable to help me so I am curious if anyone here can confirm or deny whether this is an operating system issue, or a PHP version issue, and whether there's a solid alternative method that is likely to work in PHP5 and on both Windows and Ubuntu.
file_get_contents() is a simple screwdriver. It's great for simple GET requests where the header, HTTP request method, timeout, cookie jar, redirects, and other important things do not matter.
fopen() with a stream context, or cURL with setopt, are power drills with every bit and option you can think of.
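To illustrate the stream-context route (the URL and header values here are placeholders), file_get_contents() can in fact send custom headers, a timeout, and more:

```php
<?php
// Placeholder URL and header values.
$context = stream_context_create([
    'http' => [
        'method'        => 'GET',
        'header'        => "User-Agent: MyClient/1.0\r\nAccept: application/xml\r\n",
        'timeout'       => 10,   // seconds before giving up
        'ignore_errors' => true, // return the body even on 4xx/5xx responses
    ],
]);
$data = file_get_contents('http://example.com/api', false, $context);
```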
In addition to this, due to some recent website hacks, we had to secure our sites more. In doing so, we discovered that file_get_contents failed to work while cURL still would.
Not 100% sure, but I believe this php.ini setting may have been blocking the file_get_contents request:
; Disable allow_url_fopen for security reasons
allow_url_fopen = 0
Either way, our code now works with curl.
reference :
http://25labs.com/alternative-for-file_get_contents-using-curl/
http://phpsec.org/projects/phpsecinfo/tests/allow_url_fopen.html
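You can confirm this from the affected script itself; ini_get() reads the effective setting at runtime:

```php
<?php
// allow_url_fopen controls whether filesystem functions may open http:// URLs.
if (!ini_get('allow_url_fopen')) {
    echo "allow_url_fopen is disabled; file_get_contents('http://...') will fail\n";
} else {
    echo "allow_url_fopen is enabled\n";
}
```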
So, you can solve this problem by using the PHP cURL extension. Here is an example that does the same thing you were trying:
function curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

$url = 'your_api_url';
$data = curl($url);
$url = 'your_api_url';
$data = curl($url);
And finally you can check your data with print_r($data). Hope this works for you.
Reference : http://php.net/manual/en/book.curl.php
According to the description of the Google Custom Search API you can invoke it using the GET verb of the REST interface, like with the example:
GET https://www.googleapis.com/customsearch/v1?key=INSERT-YOUR-KEY&cx=017576662512468239146:omuauf_lfve&q=lectures
I set up my API key and custom search engine, and when I pasted my test query directly into my browser it worked fine, and I got the JSON file displayed to me.
Then I tried to invoke the API from my PHP code by using:
$json = file_get_contents("$url") or die("failed");
Where $url was the same one that worked on the browser, but my PHP code was dying when trying to open it.
After that I tried with curl, and it worked. The code was this:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$body = curl_exec($ch);
Questions:
How come file_get_contents() didn't work and curl did?
Could I use fsocket for this as well?
Question 1:
First, you should check the ini setting allow_url_fopen; AFAIK this is the only reason why file_get_contents() shouldn't work. The deprecated safe_mode may also cause this.
Oh, based on your comment: you have to add http:// to the URL when using it with filesystem functions. It's a wrapper that tells PHP you want to make an HTTP request; without it, the function thinks you want to open ./google.com (the same as google.txt).
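To illustrate (using google.com as in the comment): without the scheme, PHP's filesystem functions look for a local file of that name; with it, the http:// wrapper makes a request.

```php
<?php
// No scheme: PHP looks for a local file literally named "google.com".
$local = @file_get_contents('google.com');
var_dump($local); // false, unless such a file happens to exist locally

// With the scheme, the http:// wrapper performs an HTTP request instead:
// $remote = file_get_contents('http://google.com');
```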
Question 2:
Yes, you can build almost any cURL request with sockets.
My personal opinion is that you should stick with cURL because:
timeout settings
handles all possible HTTP states
easy and detailed configuration (there is no need for detailed knowledge of HTTP headers)
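For instance, the timeout handling mentioned above is a couple of options away (the URL is a placeholder):

```php
<?php
$ch = curl_init('http://example.com/slow-endpoint'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);  // give up connecting after 5 s
curl_setopt($ch, CURLOPT_TIMEOUT, 15);        // abort the whole transfer after 15 s
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);  // transparently follow redirects
$body = curl_exec($ch);
curl_close($ch);
```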
file_get_contents() will probably rewrite your request after resolving the IP, producing the same thing as:
file_get_contents("xxx.yyy.www.zzz/app1",...)
Many servers will deny you access if you address them by IP in the request.
With cURL this problem doesn't exist. It resolves the hostname but leaves the request as you set it, so the server doesn't refuse you.
This could be the cause, too.
1) Why are you using the quotes when calling file_get_contents?
2) As it was mentioned in the comment, file_get_contents requires allow_url_fopen to be enabled on your php.ini.
3) You could use fsockopen, but you would have to handle the HTTP requests and responses manually, which would be reinventing the wheel when you have cURL. The same goes for socket_create.
4) Regarding the title of this question: cURL is more customizable and better suited to complex HTTP transactions than file_get_contents. Though it should be mentioned that working with stream contexts lets you apply many of the same settings to your file_get_contents calls, I think cURL is still more complete, since it gives you, for instance, the possibility of working with multiple parallel handles.
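To illustrate point 3: a minimal raw GET over fsockopen (host and path are placeholders) means writing the request line and headers yourself; this is exactly the wheel cURL already provides.

```php
<?php
// Placeholder host; substitute the server you are querying.
$fp = fsockopen('example.com', 80, $errno, $errstr, 10);
if (!$fp) {
    die("$errstr ($errno)");
}
// You must build the HTTP request by hand...
$request = "GET / HTTP/1.1\r\n"
         . "Host: example.com\r\n"
         . "Connection: close\r\n\r\n";
fwrite($fp, $request);
// ...and read the raw response, status line and headers included.
$response = '';
while (!feof($fp)) {
    $response .= fgets($fp, 1024);
}
fclose($fp);
```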
I have a PHP script, called from a web page (the server is Apache on Debian 6.03), which does a GET and a POST, both using cURL. The GET is fine. The POST fails if PHP cURL goes directly to the network, but works fine if I use Charles as a proxy. (I haven't tried other proxies.)
In particular, if I add
curl_setopt($ch, CURLOPT_PROXY, "localhost:8888" );
to my script (with Charles running on 8888), it succeeds. Otherwise I get:
"HTTP/1.1 400 Bad Request".
Any ideas greatly appreciated.
Oops. My script used cookies in the POST, and there was whitespace at the beginning of the cookie string I constructed. Adding a trim() fixed the problem.
Sorry.
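For anyone with the same symptom, the fix amounts to something like this (the cookie value and URL are made up):

```php
<?php
// Made-up cookie string with accidental leading whitespace.
$cookie = ' PHPSESSID=abc123; path=/';
$ch = curl_init('http://example.com/'); // placeholder URL
// The stray leading space can malform the Cookie header and trigger a
// 400 Bad Request from strict servers; trim() removes it.
curl_setopt($ch, CURLOPT_COOKIE, trim($cookie));
curl_close($ch);
```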
I'm trying to stream an ipcamera through PHP using the following code;
<?php
# Stop Apache timing out
set_time_limit(0);
# Set the correct header
header('Content-Type: multipart/x-mixed-replace;boundary=ipcamera');
# Read the images
readfile('http://[USER]:[PASSWORD]@[IPADDRESS]:[PORT]/videostream.cgi');
?>
This script works just fine on my localhost running Apache and PHP; however, on my web server (tested on 2 servers), I receive a 400 Bad Request error. I was previously receiving a 'connection refused' error, but this was resolved by my host forwarding the correct port.
Isn't 400 something like incorrect syntax? Could this be because I have the "[USER]:[PASSWORD]@" part in the URL? If so, is there another way I can authenticate before running readfile()?
I have run the following cases to determine the response code:
readfile('http://[USER]:[PASSWORD]@[IPADDRESS]:[PORT]/NONEXISTINGFILE.cgi');
// Returns 400 Bad Request (should be 404)
readfile('http://[IPADDRESS]:[PORT]/NONEXISTINGFILE.cgi');
// Returns 404 Not Found (correct response)
readfile('http://[IPADDRESS]:[PORT]/videostream.cgi');
// Returns 401 Unauthorized (correct response)
readfile('http://[USER]:[PASSWORD]@[IPADDRESS]:[PORT]/videostream.cgi');
// Returns 400 Bad Request (incorrect; should be 200 OK)
readfile('http://[USER]:[PASSWORD]@[IPADDRESS]:[PORT]/FILETHATDOESEXIST.jpg');
// Returns 400 Bad Request (should be 200 OK)
readfile('http://[IPADDRESS]:[PORT]/FILETHATDOESEXIST.jpg');
// Returns 200 OK (correct response)
If someone is able to give me the curl equivalent of this script, perhaps this is the solution. The Bounty still stands to anyone who can solve this for me :)
Regards,
For curl, try something like:
<?php
# Stop Apache timing out
set_time_limit(0);
# Set the correct header
header('Content-Type: multipart/x-mixed-replace;boundary=ipcamera');
# Stream the images, passing the credentials separately instead of in the URL
$ch = curl_init('http://[IPADDRESS]:[PORT]/videostream.cgi');
curl_setopt($ch, CURLOPT_USERPWD, '[USER]:[PASSWORD]');
curl_exec($ch);
curl_close($ch);
If you want to continue to use readfile(), you could use stream_context_create() to construct a stream context for it. See this documentation on php.net to see how that could be done; specifically, you will want to create an HTTP stream context that passes an Authorization header.
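A sketch of that approach (credentials and addresses are placeholders, as in the question): build the Basic Authorization header by hand and hand the context to readfile(), keeping the credentials out of the URL entirely.

```php
<?php
# Stop Apache timing out
set_time_limit(0);
# Set the correct header
header('Content-Type: multipart/x-mixed-replace;boundary=ipcamera');

# Placeholder credentials, as in the question.
$user = '[USER]';
$pass = '[PASSWORD]';
$context = stream_context_create([
    'http' => [
        'header' => 'Authorization: Basic ' . base64_encode("$user:$pass") . "\r\n",
    ],
]);
# Credentials travel in the header, not in the URL.
readfile('http://[IPADDRESS]:[PORT]/videostream.cgi', false, $context);
```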
Simply check your [USER] and [PASSWORD] and make sure they do not contain URL-special characters such as :, /, @, or #.
If they do, percent-encode them rather than trying to backslash-escape them.
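A safe way to embed URL-special characters in credentials is to percent-encode each part (the credentials below are made up):

```php
<?php
// Made-up credentials containing URL-special characters.
$user = 'camuser';
$pass = 'p@ss:word/1';
$url  = 'http://' . rawurlencode($user) . ':' . rawurlencode($pass)
      . '@[IPADDRESS]:[PORT]/videostream.cgi';
echo $url . "\n"; // the @, :, and / in the password become %40, %3A, %2F
```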
You might have something set in the php.ini on the production server that is disabling readfile()'s ability to load external URLs.
I'm specifically thinking of allow_url_fopen, which may be set to Off, though it may be something else.
You can also check your web server's error log for clues. PHP will usually emit a specific warning.
I have the following PHP code that uses cURL:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://area51.stackexchange.com/users/flair/31.json");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$a_data = json_decode(curl_exec($ch));
echo curl_error($ch);
I then get the following error when I try to access the page over HTTP:
Failed to connect to 0.0.0.31: Invalid argument
However, the code works fine when run from the command line.
What could possibly cause cURL to try to connect to 0.0.0.31, which is, AFAIK, not even a valid IP address?
"What could possibly cause cURL to try to connect to 0.0.0.31, which is, AFAIK, not even a valid IP address?"
Your DNS is botched. I tested your code and it works.
When you say "the code works fine when run from the command line," do you mean you ran it with the PHP CLI interpreter? If that's the case, you might check for any noticeable mismatches between the output of php -i and phpinfo() on the web server. It might be some strange version mismatch or environment problem, although your guess is as good as mine.
If you are (as I had originally thought) just talking about running the curl command, you could try checking version numbers or environment variables there too.
I had the same problem.
I think what you might have done was to construct the URL like so:
$url = "http://area51.stackexchange.com/users/flair/" + 31 + ".json";
In PHP, + is numeric addition, not concatenation, so the non-numeric strings coerce to 0 and the whole expression evaluates to 31; cURL then interprets the URL "31" as the single-number IP address 0.0.0.31.
Instead, concatenate the string, like so:
$url = "http://area51.stackexchange.com/users/flair/" . 31 . ".json";
Solved my issue at least!
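The coercion is easy to see (PHP 5/7 behaviour; PHP 8 instead throws a TypeError for non-numeric strings used with +):

```php
<?php
$base = "http://area51.stackexchange.com/users/flair/";
// A non-numeric string coerces to 0 in arithmetic, so "+" yields a number:
var_dump((int)$base); // int(0) — hence $base + 31 evaluates to 31,
// and cURL reads the URL "31" as the single-number IP address 0.0.0.31.

// Concatenation with "." keeps everything a string:
var_dump($base . 31 . ".json"); // the intended URL
```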