I have a music database PHP script that automatically fetches album covers from a remote server via file_get_contents. For some time now it hasn't been working. I tried to do the same thing with curl and the GD library, but I get the same problem: it returns "403 - Forbidden". I guess it's some kind of hotlink protection on the remote server; I can open the remote image URL in a browser, but I can't grab it onto my server.
Is there any alternative way to bypass this and grab the remote image?
To spoof the user agent (and other request headers) in a cURL request, you can use this code:
curl_setopt($ch,CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
This will probably bypass the hotlink protection; it bypasses my own ;-)
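For context, here is a rough sketch of how that option fits into a complete download. The cover URL, the referer value and the local filename are placeholders for your own values; hotlink checks often look at the Referer header too, so I set that as well.
<?php
// Rough sketch: fetch a remote album cover with a spoofed User-Agent and Referer.
// The cover URL, referer value and local filename are placeholders.
$imageUrl = 'http://example.com/covers/album.jpg';
$ch = curl_init($imageUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
// Many hotlink checks also look at the Referer, so pretend the request came from the remote site itself.
curl_setopt($ch, CURLOPT_REFERER, 'http://example.com/');
$image = curl_exec($ch);
if ($image !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200) {
    file_put_contents('covers/album.jpg', $image); // save the cover locally
}
curl_close($ch);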
You can use AJAX to determine which image you need and load it directly in the browser.
It will not violate the hotlinking rules, and it should work fine.
Related
How would I force HTTP (not HTTPS) while getting the source code of http://www.youtube.com/watch?v=2YqEDdzf-nY?
I've tried using file_get_contents, but it goes to HTTPS.
There is no way, because Google forces you to use HTTPS; it will no longer accept insecure connections.
They have even started to downrank websites that are not on SSL.
As for your comment, I have done a little more research.
Maybe it depends on the user agent; I have not had time to confirm this.
Try CURL with this User Agent:
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101
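As a small, untested sketch (using the video URL from the question), that would look something like this with cURL:
<?php
// Untested sketch: request the video page over plain HTTP with the user agent above
// and keep the response headers so you can see whether a redirect to HTTPS still happens.
$ch = curl_init('http://www.youtube.com/watch?v=2YqEDdzf-nY');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 1); // include headers (look for a Location: line)
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101');
$response = curl_exec($ch);
curl_close($ch);
echo $response;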
Well, I have the following problem. I made a tool that checks the status of a website.
For example if I enter www.youtube.com it will say
http://www.youtube.com HTTP/1.0 200 OK
and for a website with a redirect it will say:
http://www.imgur.com HTTP/1.1 302 Moved Temporarily
http://imgur.com/ HTTP/1.1 200 OK
Alright, this works just as it should. However, I would like to make it so that you can select the user agent, for example Android, because YouTube on Android will redirect to m.youtube.com.
I already made a dropdown list with different user agents, but what I don't know is how to change the user agent. When I search Google, it just gives me browser plugins or add-ons.
I hope someone knows of a way to do this.
Thanks in advance!
You can send a CURL request and change the user agent like this.
curl_setopt($ch,CURLOPT_USERAGENT,'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
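To tie that into your tool, here is a rough sketch. The $_POST['useragent'] field and the $agents array are assumptions about how your dropdown submits its value, not part of your original code:
<?php
// Sketch of wiring a selectable user agent into the status check.
$agents = array(
    'desktop' => 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13',
    'android' => 'Mozilla/5.0 (Linux; U; Android 4.0.3; en-us; GT-I9100 Build/IML74K) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30',
);
$useragent = isset($agents[$_POST['useragent']]) ? $agents[$_POST['useragent']] : $agents['desktop'];

$ch = curl_init('http://www.youtube.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 1);   // include response headers in the output
curl_setopt($ch, CURLOPT_NOBODY, 1);   // we only need the status line, not the page body
curl_setopt($ch, CURLOPT_USERAGENT, $useragent);
$headers = curl_exec($ch);
echo $headers; // e.g. "HTTP/1.1 302 Found" followed by a Location: header
curl_close($ch);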
I'm using Reddit's API to get the vote count for a given URL (I'm doing that like this: http://www.reddit.com/api/info.json?url=$url). I always get an Error 500 message. I've included a snippet of my code below. Can anyone help me?
$useragent="Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1";
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL,$url);
curl_setopt($ch,CURLOPT_RETURNTRANSFER,1);
curl_setopt($ch,CURLOPT_CONNECTTIMEOUT,15);
curl_setopt($ch, CURLOPT_USERAGENT, $useragent);
$content = curl_exec($ch);
echo $content;
curl_close($ch);
The echo always prints the following line:
<html><body><h1>500 Server Error</h1>An internal server error occured.</body></html>
Thanks for reading.
--- EDITED ---
It is working locally.
$useragent="Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1";
reddit's API rules state the following about user agents:
Change your client's User-Agent string to something unique and descriptive, preferably referencing your reddit username.
Example: User-Agent: super happy flair bot v1.0 by /u/spladug
Many default User-Agents (like "Python/urllib" or "Java") are drastically limited to encourage unique and descriptive user-agent strings.
If you're making an application for others to use, please include a version number in the user agent. This allows us to block buggy versions without blocking all versions of your app.
NEVER lie about your user-agent. This includes spoofing popular browsers and spoofing other bots. We will ban liars with extreme prejudice.
That doesn't explain the 500 error; nevertheless, this is where I'd start if the same URL works just fine when you use your browser. If you are getting 500 errors in your browser as well, then you probably aren't using the info API correctly (or you have found a bug).
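As a sketch, I would try the same cURL call as yours but with a unique, descriptive user agent. The app name and /u/yourusername are placeholders, and the target URL is passed through urlencode():
<?php
// Sketch following reddit's rules: a unique, descriptive user agent instead of a spoofed browser.
$pageUrl = 'http://example.com/some-page';
$url = 'http://www.reddit.com/api/info.json?url=' . urlencode($pageUrl);
$useragent = 'vote-counter/1.0 by /u/yourusername'; // placeholder name and username

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 15);
curl_setopt($ch, CURLOPT_USERAGENT, $useragent);
$content = curl_exec($ch);
curl_close($ch);
echo $content; // the JSON listing returned by the info endpoint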
I have PHP hosting with GoDaddy. For the last hour or so, I have not been able to load Facebook content from my PHP scripts, as it always says
'You are using an Incompatible Web Browser'.
I know this seems like a browser issue, but I am sure it is not, because I have tried Firefox, Chrome, and IE on two Windows machines, and Firefox and Safari on a Mac. I get the same error every time.
Could you please let me know what could be a possible reason for this?
[Try loading http://cabbie.apprumble.in/index.php?r=site/test]
Under normal circumstances, this should load the Facebook home page properly instead of showing the error that you have an incompatible browser.
[PS: I am loading the Facebook page using the PHP call file_get_contents("http://facebook.com"), which was working perfectly fine until an hour ago. Also, if I load the URL outside of PHP (directly in the browser), it works perfectly fine, but if it's invoked from within PHP using the file_get_contents call, the said error appears.]
Could someone please reply soon, as I am stuck in my development because of this.
Thanks,
Kshitij
file_get_contents uses the user agent set by the user_agent setting in your php.ini file. You probably cannot change this, as you are on GoDaddy hosting.
You will need to switch from file_get_contents to something that lets you control the user agent. You could use curl or sockets. Here is a curl example:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.facebook.com/");
curl_setopt($ch, CURLOPT_HEADER, 0);          // don't include response headers in the output
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);  // return the page instead of printing it immediately
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13'); // set the user agent here
$data = curl_exec($ch);
curl_close($ch);
echo $data; // this is the homepage
Facebook is attempting to block bots by not allowing certain user agents to request pages. You need to spoof the user agent to look like a normal browser.
Some sites are blocking file_get_contents and the cURL code as well. I need PHP code that circumvents that problem. I only need to get the page contents so I can extract the title.
You probably need to set the user agent string to emulate a "real" browser:
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; rv:2.0) Gecko/20110319 Firefox/4.0');
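A rough, self-contained sketch of that, assuming a placeholder target URL and a simple regex for the title:
<?php
// Rough sketch: fetch the page with a browser-like user agent and pull the <title> out with a regex.
// The target URL is a placeholder and error handling is kept minimal.
$url = 'http://example.com/';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; rv:2.0) Gecko/20110319 Firefox/4.0');
$html = curl_exec($ch);
curl_close($ch);

if ($html !== false && preg_match('/<title[^>]*>(.*?)<\/title>/is', $html, $m)) {
    echo trim($m[1]); // the page title
}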