I am in the process of providing an option similar to Facebook's share feature, where content from an external web page can be displayed on my site. I am coding this using PHP and Ajax, but when I hosted my page on a free server such as www/0009.ws, I got the error below:
Warning: file_get_contents() [function.file-get-contents]: URL file-access is disabled in the server configuration in /www/0009.ws
I will surely be moving this to a paid server later.
Is there a workaround if a paid service provider also does not allow me to use these options?
Do I have to set up my own server?
You can use the cURL set of functions:
<?php
// create a new cURL resource
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, "http://www.example.com/");
curl_setopt($ch, CURLOPT_HEADER, 0);
// grab URL and pass it to the browser
curl_exec($ch);
// close cURL resource, and free up system resources
curl_close($ch);
?>
That assumes cURL support is built in, which is admittedly another assumption. Honestly, whether file_get_contents() is allowed URL file access is something you should check beforehand when choosing a provider.
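If you want to check this yourself on a new host, a short probe script (a minimal sketch) reports whether either mechanism is available:
<?php
// Probe the two fetch mechanisms before relying on them.
if (ini_get('allow_url_fopen')) {
    echo "file_get_contents() can fetch URLs\n";
} else {
    echo "allow_url_fopen is disabled\n";
}
if (function_exists('curl_init')) {
    echo "the cURL extension is available\n";
}
?>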
Related
I am making cURL calls from PHP scripts on one domain (mac2cash.com) to another (thebookyard.com), both hosted on the same Apache server and the same IP address. This has been working fine, but I needed to add some new functionality to the site, so I created a new PHP script at the root level of the same target domain as the working cURL call. When I call this new script using the same code I used on the working script, it returns the message "Found: the document has moved here".
The target scripts for the working and failing cURL calls are at the root level of the same domain, and I have checked that they have the same Unix permissions. But if I simply change the PHP file name in the working script to the name of the target script in the failing call, it now fails too, with the same 302 redirect message.
I even duplicated the working target script (byasd_api.php) on the target domain to a new file (byasd_api_copy.php), and I get the 302 message when I make a cURL call to that from the calling script that was working, even though the code is exactly the same!
I cannot see what the difference is between the two files. Is there some kind of caching going on where newly created files are not treated the same?
For reference, here is the calling code:
$header = array("Host: thebookyard.com");
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, HTTP_SERVER_IP."/byasd_api.php");
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_REFERER, 'http://www.mac2cash.com');
curl_setopt($ch, CURLOPT_POST, 1); // CURLOPT_POST expects a boolean, not a field count
curl_setopt($ch, CURLOPT_POSTFIELDS, $post_data);
$output = curl_exec($ch);
curl_close($ch);
The 'byasd_api.php' script name is the only thing I am changing.
I've spent some hours googling for a solution so would appreciate any suggestions.
Your Apache is configured to look for favicon.ico on each call; the 302 occurs because it cannot find the icon:
GET http://theboo....com/favicon.ico [HTTP/1.1 302 Found 151ms]
Change the configuration or add a favicon.ico file. The configuration may only be looking for the icon file in the root.
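If the rewrite rules turn out to be the cause, one option is to exclude favicon requests from them in .htaccess. A hypothetical sketch of such an exclusion (your actual rule set will differ):
# Hypothetical exclusion: skip the rewrite chain for favicon requests.
# Place this condition directly before the existing RewriteRule.
RewriteCond %{REQUEST_URI} !^/favicon\.ico$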
It turns out the reason for the difference in behaviour was that the working script's name was included as a rewrite condition in the .htaccess file that redirects HTTP to HTTPS. Changing the cURL URL to "https://".HTTP_SERVER_IP."/byasd_api.php" stopped the "Found: the document has moved here" error, but the call then failed because cURL was trying to validate the SSL certificate against the IP address rather than the domain.
The solution to that was to add the following:
curl_setopt($ch, CURLOPT_RESOLVE, array("www.thebookyard.com:443:".HTTP_SERVER_IP,));
This still allows the call to go to the IP address (which is much faster than resolving the domain name), but cURL validates the SSL certificate against the domain name.
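Putting the pieces together, a minimal sketch of the resulting call (assuming HTTP_SERVER_IP is the constant from the question holding the server's IP):
$ch = curl_init();
// Use the domain in the URL so the certificate name matches...
curl_setopt($ch, CURLOPT_URL, "https://www.thebookyard.com/byasd_api.php");
// ...but pin the hostname to the known IP so no DNS lookup is made.
curl_setopt($ch, CURLOPT_RESOLVE, array("www.thebookyard.com:443:".HTTP_SERVER_IP));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
curl_close($ch);
Note that CURLOPT_RESOLVE requires a reasonably recent stack (cURL 7.21.3+, PHP 5.5+).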
I've tried to retrieve the image data of my Facebook profile picture using both file_get_contents and cURL.
The problem occurs on my Google Compute Engine instance, while on any other server (localhost with MAMP, AWS) the script works fine.
An example of one of the scripts I was using:
<?php
var_dump(
    json_decode(
        file_get_contents("https://graph.facebook.com/_FACEBOOK_ID_/picture?width=720&height=720")
    )
);
Please keep in mind that I've tried using the redirect=false parameter, and accessing the image URL I got in the JSON response returned false as well.
Also, I've tried using wget over SSH on the image's URL, which returned (403) Forbidden.
My assumption is that I need to configure something differently on my server, not in PHP, but since I'm able to retrieve any other image with the same script, I'm not sure what.
I've already experienced this exact problem; ignoring SSL verification while using cURL did the trick for me.
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // Ignore SSL verification
curl_setopt($ch, CURLOPT_URL, "https://graph.facebook.com/_FACEBOOK_ID_/picture?width=720&height=720");
$data = curl_exec($ch);
curl_close($ch);
var_dump($data);
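Disabling verification does work, but it leaves the connection open to man-in-the-middle attacks. If the underlying cause is a missing or stale CA bundle on the Compute Engine instance, the safer fix is to keep verification on and point cURL at a bundle. A sketch, assuming a Debian/Ubuntu-style path (adjust for your distribution):
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// Keep SSL verification enabled, but tell cURL where the CA bundle lives.
curl_setopt($ch, CURLOPT_CAINFO, '/etc/ssl/certs/ca-certificates.crt');
curl_setopt($ch, CURLOPT_URL, "https://graph.facebook.com/_FACEBOOK_ID_/picture?width=720&height=720");
$data = curl_exec($ch);
curl_close($ch);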
How can I send a custom HTTP request to a server whose URL is "http://[IP]:[Port]/"?
What I mean is that, instead of the first line being a normal GET or POST like so:
GET /index.html HTTP/1.1
Host: www.example.com
How can it be replaced with something like:
CUSTOM
Host: [IP]
I don't mind using additional libraries if necessary, such as cURL.
UPDATE:
I've tried using the following cURL code:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://[IP]:[Port]/");
curl_setopt($ch, CURLOPT_PORT, [Port]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
//curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "CUSTOM");
$output = curl_exec($ch);
curl_close($ch);
print($output);
But it just keeps loading for about two minutes until it reports an internal error (both with and without the CURLOPT_CUSTOMREQUEST line). However, if I use a standard website such as http://google.com, it works fine.
Also, I forgot to mention that the port my server uses is 7899; is this OK? Opening it in my web browser works fine, but cURL doesn't seem to be able to reach it.
Looks like there's nothing wrong with your code. If you're using a shared hosting provider, ask them to open up outbound TCP port 7899.
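To confirm whether outbound traffic on that port is blocked before contacting them, a quick TCP probe helps (a sketch; 203.0.113.10 is a placeholder, substitute your server's IP):
<?php
// Try a raw TCP connection to the target port with a 5-second timeout.
$fp = @fsockopen('203.0.113.10', 7899, $errno, $errstr, 5);
if ($fp) {
    echo "Outbound TCP to port 7899 works\n";
    fclose($fp);
} else {
    echo "Blocked or unreachable: $errstr ($errno)\n";
}
?>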
You can create a custom HTTP request method using the http_request_method_register function, provided by the pecl_http extension:
http://www.php.net/manual/en/function.http-request-method-register.php
This code needs to run on the server that will be handling the request. If you send a non-standard HTTP request method to an arbitrary server, it may be ignored or rejected with an error code, depending on the server.
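On the receiving side, a plain PHP script can also branch on whatever method the client sent, with no extension required, assuming the web server passes unrecognized methods through to PHP. A minimal sketch:
<?php
// Branch on the raw request method, e.g. "GET", "POST", or "CUSTOM".
$method = $_SERVER['REQUEST_METHOD'];
if ($method === 'CUSTOM') {
    $body = file_get_contents('php://input'); // raw request body, if any
    echo "Handled CUSTOM request\n";
} else {
    header('Allow: CUSTOM', true, 405); // reject everything else
}
?>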
I was using cURL to scrape content from a site, and just recently my page started hanging when it reached curl_exec($ch). After some tests I noticed that it could load any page from my own domain, but when attempting to load anything external I get a connect() timeout! error.
Here's a simplified version of what I was using:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.google.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 0);
$contents = curl_exec($ch);
curl_close($ch);
echo $contents;
?>
Here's some info I have about my host from my phpinfo():
PHP Version 5.3.1
cURL support enabled
cURL Information 7.19.7
Host i686-pc-linux-gnu
I don't have access to SSH or to modifying the php.ini file (however, I can read it). Is there a way to tell if something was recently set to block cURL access to external domains? Or is there something else I might have missed?
Thanks,
Dave
I'm not aware of any setting like that; it would not make much sense.
Since you said you are on a remote web server without console access, my guess is that your activity was detected by the host, or more likely it caused issues, and so they firewalled you.
A silent iptables DROP rule would cause exactly this behaviour.
When scraping Google you need to use proxies for anything more than a handful of requests, and you should never abuse your web server's primary IP if it's not your own. That's likely a breach of your host's TOS, and it could even result in legal action if they get banned by Google (which can happen).
Take a look at Google rank checker; that's a PHP script that does exactly what you want using cURL and proper IP management.
I can't think of anything other than a firewall on your side that would cause a timeout.
I'm not sure why you're getting the connect() timeout! error, but note the following line:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 0);
Unless this is set to 1, curl_exec() prints the page directly and does not return its content into $contents.
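Either way, adding explicit timeouts and an error check makes the failure visible instead of hanging. A sketch of the adjusted call:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.google.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);  // return content into $contents
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // fail fast if connect() is blocked
curl_setopt($ch, CURLOPT_TIMEOUT, 30);        // cap the whole transfer
$contents = curl_exec($ch);
if ($contents === false) {
    echo 'cURL error: ' . curl_error($ch);
} else {
    echo $contents;
}
curl_close($ch);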
Hello
I am working with a legacy system where an ASP.NET application posts an XML file to a server via curl.exe (the URL it posts to is configurable in a .config file).
Due to legacy system limitations, I need curl to post this XML to my Ubuntu server by changing the said .config file, then modify the received XML as I need, and finally post it on to the real server.
How can this be done? My guess is a PHP or Python script running under an Apache2 server, listening for POST requests: once the XML file is received, make the required modifications and post it to the real server.
How can this be done in PHP or Python?
Since the ASP.NET application is posting XML, you simply need to handle a normal POST request, modify the XML to match your requirements, and post it using cURL to the real server. In PHP it would look something like this (more or less pseudocode; error checking and additional logic are needed):
$xml = $_POST['xml'];
// do something with the posted XML
// ...
// post it to the "real" cURL server
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // capture the response in $result
curl_setopt($ch, CURLOPT_POSTFIELDS, array('xml' => $xml));
$result = curl_exec($ch);
curl_close($ch);
That's about it. Check the cURL documentation, use whatever options are necessary for the POST to work with your server, and you are all good.
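One caveat: $_POST['xml'] only exists if curl.exe sends the XML as a form field named xml. If the legacy application posts the raw XML document as the request body instead, read it from the input stream. A sketch under that assumption:
// Read the raw request body when the XML is not a form field.
$xml = file_get_contents('php://input');
$doc = new SimpleXMLElement($xml); // parse it...
// ...modify nodes here as required...
$xml = $doc->asXML(); // ...and serialize it back before re-posting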