With permission from the contributing site, I periodically download an image hosted there and include it in a collection of webcam photos on our site, with an external link back to the contributing site.
This hasn't been an issue with any of the other sites, but with this particular one, I can't open the image to resize it and save it to our server. It's not a hotlinking issue, because I can put a plain old <img src="http://THEIRIMAGE" /> on a page on our site and it works fine.
I've tried using $img = new Imagick($sourceFilePath) directly, as with all the others, as well as PHP's copy() and copying the image with cURL, but in every case the page just times out with no result at all.
Here's the image in question: http://island-alpaca.selfip.com:10202/SnapshotJPEG?Resolution=640x480&Quality=Standard
As I've said, I'm able to do this sort of thing with several other webcams, but it isn't working with this one, and I'm stuck as to why. Any help would be greatly appreciated.
Thank you.
In order to reduce bandwidth and server load, some sites block certain bots from accessing their content. Your cURL request needs to mimic an actual browser more closely, which usually means sending a referrer, a user agent, and so on. It could also be that there is a redirect and you haven't told cURL to follow redirects.
Try setting more options, like this:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_ENCODING, "gzip");         // accept gzip-compressed responses
curl_setopt($ch, CURLOPT_REFERER, $url);            // send a referrer header
curl_setopt($ch, CURLOPT_USERAGENT, 'PHP');         // identify with some user agent
curl_setopt($ch, CURLOPT_HEADER, true);             // include response headers in the output
curl_setopt($ch, CURLOPT_AUTOREFERER, true);        // update the referrer on redirects
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);     // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);     // follow redirects
curl_setopt($ch, CURLOPT_TIMEOUT, 10);              // give up after 10 seconds
If that doesn't work, get the user agent from an actual browser and put it in there and see if that makes a difference.
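For instance, here is a minimal sketch of completing the request with a browser user-agent string. The UA value, error handling, and output path are illustrative additions, not part of the original answer:

// Sketch: fetch the snapshot while mimicking a desktop browser.
$url = 'http://island-alpaca.selfip.com:10202/SnapshotJPEG?Resolution=640x480&Quality=Standard';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
// Example user-agent string copied from a real browser:
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36');
$data = curl_exec($ch);
if ($data === false) {
    // curl_error() distinguishes a timeout from a refused connection, etc.
    die('cURL error: ' . curl_error($ch));
}
curl_close($ch);
file_put_contents('/tmp/snapshot.jpg', $data);   // hypothetical destination path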
Try downloading the image with wget, then reading it with file_get_contents().
For example:
$url = 'http://island-alpaca.selfip.com:10202/SnapshotJPEG?Resolution=640x480&Quality=Standard';
$outputfile = "tmp/image" . date("Y-m-d_H.i.s");
// Shell-escape the URL and output path before handing them to wget
$cmd = "wget -q " . escapeshellarg($url) . " -O " . escapeshellarg($outputfile);
exec($cmd);
$temp_img = file_get_contents($outputfile);
// Imagick's constructor expects a filename; raw bytes go through readImageBlob()
$img = new Imagick();
$img->readImageBlob($temp_img);
Can you try this and get back to me?
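If shelling out to wget isn't possible on your host, here is a sketch doing the whole fetch in PHP with file_get_contents() and a stream context (the timeout and user-agent values are assumptions, not from the original answer):

// Hypothetical alternative: fetch the snapshot without exec()/wget.
$url = 'http://island-alpaca.selfip.com:10202/SnapshotJPEG?Resolution=640x480&Quality=Standard';
$context = stream_context_create(array(
    'http' => array(
        'timeout'    => 10,             // don't hang forever on a slow camera
        'user_agent' => 'Mozilla/5.0',  // some hosts reject requests without a UA
    ),
));
$temp_img = file_get_contents($url, false, $context);
if ($temp_img === false) {
    die('Failed to fetch image');
}
$img = new Imagick();
$img->readImageBlob($temp_img);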
Currently I have a page, say page1.php. On a certain event, I just want to request another URL, say http://example.com, without actually refreshing this page. The URL points to a script which updates my database. I tried using shell_exec('php '.$url); where $url = 'http://example.com', but it gave me a "could not open file" error, so I suppose shell_exec only works for files present on the server. Is there a way to do this directly, or do I have to go with AJAX? Thanks in advance.
Try using cURL to send the request to the server from PHP.
$url = 'http://example.com';
$ch = curl_init();
curl_setopt($ch, CURLOPT_AUTOREFERER, TRUE);
curl_setopt($ch, CURLOPT_NOBODY, TRUE);   // send a HEAD request; we only need to trigger the script, not read the body
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
curl_exec($ch);
curl_close($ch);
Alternatively, you could try file_get_contents (this requires allow_url_fopen to be enabled in php.ini):
file_get_contents('http://example.com');
I would do this on the front end using JSONP: much cleaner and safer, IMHO.
How does one download a file from a web page without a direct path to the file, for example a URL with GET parameters instead of a path? The code below seems to be downloading the actual page HTML instead of the file...
Not sure what I'm doing wrong. I would also like to extend this to work on sites that require logins, but I think I would just have to add
curl_setopt($ch, CURLOPT_USERPWD, "$username:$password");
to the code?
$output_filename = "advanced.exe";
$host = "http://download.cnet.com/Advanced-SystemCare-Free/3001-2086_4-10407614.html?hlndr=1";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $host);
curl_setopt($ch, CURLOPT_VERBOSE, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_AUTOREFERER, false);
curl_setopt($ch, CURLOPT_REFERER, "http://download.cnet.com");
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
curl_setopt($ch, CURLOPT_HEADER, 0);
$result = curl_exec($ch);
curl_close($ch);
$fp = fopen($output_filename, 'wb');   // write in binary mode; the payload is an executable
fwrite($fp, $result);
fclose($fp);
The link you have there isn't the actual link to the file, only the page that initiates the download. By the looks of it, the page uses JavaScript to trigger the download, so you would want to dig through their code to find out exactly how they do it. Then you can find the real URL to the file.
A simple way, if you are doing this only for one file, would be to download the file in your browser, and then access the URL it used from the browser's download manager. (In Firefox, for example, right click the file and choose "Copy Download Link")
I also would like to augment this to also perform on sites that require logins but I think I would just have to add ...
That would work only for HTTP-based authentication. If the site uses a traditional login form, it will not. You'd have to submit several sequential HTTP requests via cURL, using cookies to store the session state.
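A rough sketch of that flow, assuming a hypothetical login form at /login.php whose field names are username and password (all URLs and field names here are made up for illustration):

// 1. POST the login form, storing the session cookie in a cookie jar.
$cookieJar = tempnam(sys_get_temp_dir(), 'cookies');
$ch = curl_init('http://example.com/login.php');           // hypothetical login URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'username' => $username,                               // field names depend on the site's form
    'password' => $password,
)));
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieJar);           // write received cookies here
curl_exec($ch);
curl_close($ch);

// 2. Request the protected file, sending the stored cookies back.
$ch = curl_init('http://example.com/protected/file.exe');  // hypothetical file URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar);          // read cookies from the jar
$result = curl_exec($ch);
curl_close($ch);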
I'm working on an android application that interacts with a forum I visit. The staff of the forum allows this app, but won't give an API to work with.
In order to get the information I need, I use an intermediate PHP script that scrapes the forum with cURL. Everything works just great, except for one small detail.
To view topics I scrape all the data I need, such as poster name, date, and post content. But since the images stored on their server are hotlink protected, I am unable to see them. The funny thing is that viewing individual images is no problem, but whenever they are placed in context, they are replaced by the site's copyright image.
I have the feeling that the website checks the HTTP referer that I send (which is empty) and responds with the copyright image instead (hotlink protection).
Can someone give me some tips how to solve this problem?
The code I use:
$url = 'someurliwanttoscrape';
$cookie_string = 'somecookies';
$useragent = 'someuseragent';
$timeout = 60;
$rawhtml = curl_init();
curl_setopt($rawhtml, CURLOPT_URL, $url);
curl_setopt($rawhtml, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($rawhtml, CURLOPT_REFERER, '');
curl_setopt($rawhtml, CURLOPT_COOKIE, $cookie_string);
curl_setopt($rawhtml, CURLOPT_CONNECTTIMEOUT, $timeout);
curl_setopt($rawhtml, CURLOPT_USERAGENT, $useragent);
$output = curl_exec($rawhtml);
curl_close($rawhtml);
This works whenever I put the URL of the image in there: no problem, I can see the image, no hotlink protection. But as soon as I put in the URL of the page where the image is embedded in the text, the hotlink protection kicks in.
You can use curl_setopt to tell cURL what referrer to send:
curl_setopt($ch, CURLOPT_REFERER, 'http://www.google.com');
See the documentation for more details, but that's pretty much all there is to it.
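Applied to your scraper, a sketch of fetching one protected image with the topic page as the referrer ($image_url is a hypothetical variable holding an image URL scraped from the post; which referrer the forum actually checks is an assumption):

// Fetch a hotlink-protected image, pretending we navigated from the topic page.
$img = curl_init();
curl_setopt($img, CURLOPT_URL, $image_url);        // image URL scraped from the post
curl_setopt($img, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($img, CURLOPT_REFERER, $url);          // the topic page the image is embedded in
curl_setopt($img, CURLOPT_COOKIE, $cookie_string);
curl_setopt($img, CURLOPT_USERAGENT, $useragent);
$image_data = curl_exec($img);
curl_close($img);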
I have been searching for a way to proxy an MJPEG stream from the AXIS M1114 Network Camera,
using the following URL setup:
http://host:port/axis-cgi/mjpg/video.cgi?resolution=320x240&camera=1
I try to capture the output and make it available to users with a PHP script running on an Apache server on Ubuntu.
Having browsed the web looking for an answer, to no avail, I come to you.
My ultimate goal is to have users able to link to the proxy like this:
<img src='proxy.php'>
and have all the details live in proxy.php.
I have tried the cURL approach (advised in a similar thread here), but I can't get it to work, probably due to a lack of knowledge of the inner workings.
Currently my very simple proxy.php looks like this:
<?php
$camurl = "http://ip:port";
$campath = "axis-cgi/mjpg/video.cgi";
$userpass = "user:pw";
$ch = curl_init();
curl_setopt ($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_URL, $camurl + $campath);
curl_setopt($ch, CURLOPT_POSTFIELDS, 'resolution=320x240&camera=1');
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
curl_setopt($ch, CURLOPT_USERPWD, $userpass);
$result = curl_exec($ch);
header('Content-type: image/jpeg');
echo $result;
curl_close($ch);
?>
My understanding is that this should produce an acceptable output for my plan. But alas.
My question is whether there is a blatant error I do not see. Any simpler option/way of getting the result I aim for is welcome too.
Please point me in the right direction. I'll happily provide any relevant information I might have missed. Thank you in advance.
Solved edit:
After commenting out:
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
changing
curl_setopt($ch, CURLOPT_URL, $camurl + $campath);
to
curl_setopt($ch, CURLOPT_URL, $camurl . $campath); (I was mixing up languages; + does not concatenate strings in PHP)
and, most importantly, removing a stray space in the .php file so that the header() call actually sends the header, it sort of does what I wanted.
Adding
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
seems to be needed to get the image displayed as an image and not as raw data.
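Putting those fixes together, the corrected proxy.php would look roughly like this. It is a sketch keeping the placeholder URL and credentials from above; one extra assumption is passing the camera parameters in the query string (as in the original video.cgi URL) rather than via CURLOPT_POSTFIELDS:

<?php
// No whitespace before this opening tag: any output before header() breaks the header.
$camurl   = "http://ip:port/";
$campath  = "axis-cgi/mjpg/video.cgi?resolution=320x240&camera=1";
$userpass = "user:pw";

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $camurl . $campath);   // '.' concatenates strings in PHP, not '+'
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
curl_setopt($ch, CURLOPT_USERPWD, $userpass);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);      // return the bytes instead of printing them early
$result = curl_exec($ch);
curl_close($ch);

header('Content-type: image/jpeg');
echo $result;
?>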
I'm currently using this plugin http://wordpress.org/extend/plugins/repress/ which basically makes my website a proxy, so that users can access censored websites like this:
www.mywebsite.com/proxy/www.cnn.com
The plugin works well enough, but it doesn't parse absolute links properly, so those links are still blocked. Development of that plugin has stopped, so I need to write my own script. I've been searching everywhere and reading every tutorial I can find, but none specifically addresses this.
I know how to use PHP cURL to fetch a website and echo it on a blank page. What I don't know is how to set up a proxy script so that, as in the example above, users can type
www.mywebsite.com
followed by
/proxy.php
then their target website
/www.cnn.com
Currently I have this set up:
<?php
$url = 'http://www.cnn.com';
$proxy_port = 80;
$proxy = '92.105.140.115';
$timeout = 0;
$referer = 'http://www.mydomain.com';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 0);
curl_setopt($ch, CURLOPT_PROXYPORT, $proxy_port);
curl_setopt($ch, CURLOPT_PROXY, $proxy);
curl_setopt($ch, CURLOPT_PROXYTYPE, CURLPROXY_HTTP);   // expects the constant, not the string 'HTTP'
curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, 0);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_REFERER, $referer);
$data = curl_exec($ch);
curl_close($ch);
echo $data;
?>
This pulls in the home page, but no CSS or images are retrieved, and likewise all relative links are broken. I have no idea how to apply the $proxy_port and $proxy variables. I tried
92.105.140.115:80/www.cnn.com
but that doesn't work. I don't fully understand this code either, since I found it on an example site.
Any answers or links to tutorials are greatly welcome.
Thank you!
Having a completely functioning proxy isn't that simple; there are many such projects already available. Give any of them a shot:
http://www.surrogafier.info/
https://github.com/Alexxz/Simple-php-proxy-script
http://www.glype.com/
Have fun!
You can't just echo the result of a cURL fetch into a page, because the browser will then resolve the page's URIs incorrectly. When a user clicks a link, they should go to your proxy site, not to the original site, so before printing the fetched page you need to rewrite every link in it. I have a fully functional proxy made with PHP at p.listascuba.com that you can try.
Contact me for more info.
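For illustration, here is a minimal sketch of that rewriting step. It assumes the target site is passed as PATH_INFO (www.mywebsite.com/proxy.php/www.cnn.com) and that rewriting href/src attributes with regexes is good enough; a real proxy also has to handle CSS url(...) references, JavaScript, forms, and cookies:

<?php
// proxy.php - hypothetical minimal rewriting proxy; a sketch, not production-ready.
$target = isset($_SERVER['PATH_INFO']) ? ltrim($_SERVER['PATH_INFO'], '/') : '';
if ($target === '') {
    die('No target site given, e.g. /proxy.php/www.cnn.com');
}

$ch = curl_init('http://' . $target);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$html = curl_exec($ch);
curl_close($ch);

// Rewrite root-relative links (/path) against the target host first...
$html = preg_replace(
    '#(href|src)=(["\'])/(?!/)([^"\']*)\2#i',
    '$1=$2/proxy.php/' . $target . '/$3$2',
    $html
);
// ...then absolute links (http://other.site/...), so they route back through the proxy.
$html = preg_replace(
    '#(href|src)=(["\'])https?://([^"\']+)\2#i',
    '$1=$2/proxy.php/$3$2',
    $html
);

echo $html;
?>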