Images from a remote server sometimes won't show on my site; they appear broken, and the console reports a 403 (Forbidden) error for them.
I want to pre-check whether they are forbidden and, if so, not show them at all rather than display a broken image.
My first thought was to check the HTTP status with get_headers(), but it returns 200 OK even for the broken images.
I then found this suggestion from a Google search, to be used before calling get_headers():
$default_opts = array(
    'http' => array(
        'method' => "GET",
        'header' => "Referer: http://www.fakesite.com/hotlink-check/",
    )
);
stream_context_set_default($default_opts);
But that doesn't work either. I have also tried file_get_contents(), but that returned content for both working and broken images.
So, how can I check whether a remote file returns a 403 error?
EDIT:
Okay, here are the tests I've tried recently, all using:
$image = 'http://www.arabnews.com/sites/default/files/2018/02/28/1114386-582723106.jpg';
Test 1:
<?php
$default_opts = array(
    'http' => array(
        'method' => "GET",
        'header' => "Referer: http://www.fakesite.com/hotlink-check/",
        'user_agent' => 'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6',
    )
);
stream_context_set_default($default_opts);
$output = @get_headers($image, 1);
echo '<pre>'.print_r($output, true).'</pre>';
?>
Output:
Array
(
[0] => HTTP/1.1 200 OK
[Date] => Wed, 28 Feb 2018 14:50:39 GMT
[Content-Type] => image/jpeg
[Content-Length] => 43896
[Connection] => close
[Set-Cookie] => __cfduid=d2a28d117e8a32d5755219bcd6e1892561519829439; expires=Thu, 28-Feb-19 14:50:39 GMT; path=/; domain=.arabnews.com; HttpOnly
[X-Content-Type-Options] => nosniff
[Last-Modified] => Tue, 27 Feb 2018 22:55:30 GMT
[Cache-Control] => public, max-age=1209600
[Expires] => Wed, 14 Mar 2018 14:50:39 GMT
[X-Request-ID] => v-e065c0be-1c11-11e8-9caa-22000a6508a2
[X-AH-Environment] => prod
[X-Varnish] => 175579289 178981578
[Via] => 1.1 varnish-v4
[X-Cache] => HIT
[X-Cache-Hits] => 8
[CF-Cache-Status] => HIT
[Accept-Ranges] => bytes
[Server] => cloudflare
[CF-RAY] => 3f44328fd0247179-ORD
)
Test 2:
<?php
function get_response_code($url) {
    @file_get_contents($url);
    list($version, $status, $text) = explode(' ', $http_response_header[0], 3);
    return $status;
}
$output = get_response_code($image);
echo '<pre>'.print_r($output, true).'</pre>';
?>
Output:
200
Test 3:
<?php
$output = getimagesize($image);
echo '<pre>'.print_r($output, true).'</pre>';
?>
Output:
Array
(
[0] => 650
[1] => 395
[2] => 2
[3] => width="650" height="395"
[bits] => 8
[channels] => 3
[mime] => image/jpeg
)
Check the image size. This is the simplest and most effective approach:
list($width, $height, $type, $attr) = getimagesize("http://www.url.com/img.jpg");
If $width and $height in particular are set, and $type is a valid value, you are good to go, and in that case I would display the image.
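For what it's worth, a minimal sketch of how that check could be wired up (the helper name and the fallback comment are my own; the URL is the example image from the question):
<?php
// Suppress the warning getimagesize() emits for unreadable URLs and only
// print the <img> tag when real dimensions come back.
function show_image_if_available($url)
{
    $info = @getimagesize($url); // false on failure (403, 404, timeout, ...)
    if ($info !== false && $info[0] > 0 && $info[1] > 0) {
        // $info[3] already holds the width="..." height="..." attribute string.
        echo '<img src="' . htmlspecialchars($url) . '" ' . $info[3] . ' alt="">';
    } else {
        echo '<!-- image skipped: not accessible -->';
    }
}
show_image_if_available('http://www.arabnews.com/sites/default/files/2018/02/28/1114386-582723106.jpg');
?>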
Related
When I use the function get_headers($url), where $url = "https://www.example.com/product.php?id=15", on my live site, it does not return any array for the given URL; I get nothing. But when the same code runs on my localhost, I get the following:
Array
(
[0] => HTTP/1.1 200 OK
[1] => Cache-Control: private
[2] => Content-Type: text/html; charset=utf-8
[3] => Server: Microsoft-IIS/8.5
[4] => Set-Cookie: ASP.NET_SessionId=wumg0dyscw3c4pmaliwehwew; path=/; HttpOnly
[5] => X-AspNetMvc-Version: 4.0
[6] => X-AspNet-Version: 4.0.30319
[7] => X-Powered-By: ASP.NET
[8] => Date: Fri, 18 Aug 2017 13:06:18 GMT
[9] => Connection: close
[10] => Content-Length: 73867
)
So why isn't the function working on the live server?
EDIT
<?php
if (isset($_POST['prdurl']))
{
    $url = $_POST['prdurl'];
    print_r(get_headers($url)); // prints an array on localhost but nothing on live
    if (is_array(@get_headers($url)))
    {
        // some code goes here...
    }
    else
    {
        echo "URL doesn't exist!";
    }
}
?>
One more thing to note: I am also using file_get_html() to retrieve the HTML page from the remote URL. That too works on my localhost but not on the live site.
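One quick check that may narrow this down (purely a diagnostic sketch on my part; both get_headers() and file_get_html() rely on the HTTP stream wrapper, which live hosts sometimes disable) is to test allow_url_fopen and compare against a cURL request:
<?php
// Diagnostic sketch: is the HTTP stream wrapper allowed at all?
if (!ini_get('allow_url_fopen')) {
    echo "allow_url_fopen is disabled on this server.\n";
}

$url = 'https://www.example.com/product.php?id=15'; // the example URL from above

// Fetch only the status code via cURL as a cross-check.
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);         // header-only request
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
echo 'HTTP status via cURL: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
curl_close($ch);
?>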
Hopefully someone can help. I'm using jQuery dropzone.js to upload the video, and I can upload videos fine, but I can't "complete" the process, so the videos always remain in a processing/uploading state. I'm following the procedure described in the Vimeo API docs. Here are some headers/responses in case they help; I've replaced some values with xxxx:
Upload request headers:
PUT /upload?ticket_id=xxxx&video_file_id=514311645&signature=acd2a6c4ba8c147651604793b081e053&v6=1 HTTP/1.1
Host: 1511923755.cloud.vimeo.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0 FirePHP/0.7.4
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Content-Type: video/mp4
Referer: http://local.xxxx.co.uk/vimeo
Content-Length: 29158540
Origin: http://local.xxxx.co.uk
x-insight: activate
Connection: keep-alive
Upload Response headers:
HTTP/1.1 200 OK
Server: Vimeo/1.0
Content-Type: text/plain
Access-Control-Allow-Origin: *
Timing-Allow-Origin: *
Access-Control-Expose-Headers: Range
Access-Control-Allow-Headers: Content-Type, Content-Range, X-Requested-With
X-Requested-With: XMLHttpRequest
Access-Control-Allow-Methods: POST, PUT, GET, OPTIONS
Content-Length: 0
Connection: close
Date: Thu, 14 Apr 2016 08:05:19 GMT
X-Backend-Server: kopiluwak
Upload response:
<pre>Array
(
[body] =>
[status] => 308
[headers] => Array
(
[] =>
[HTTP/1.1 308 Resume Incomplete] =>
[Server] => Vimeo/1.0
[Content-Type] => text/plain
[Access-Control-Allow-Origin] => *
[Timing-Allow-Origin] => *
[Access-Control-Expose-Headers] => Range
[Access-Control-Allow-Headers] => Content-Type, Content-Range, X-Requested-With
[X-Requested-With] => XMLHttpRequest
[Access-Control-Allow-Methods] => POST, PUT, GET, OPTIONS
[Content-Length] => 0
[Connection] => close
[Range] => bytes=0-29158540
[Date] => Thu, 14 Apr 2016 08
[X-Backend-Server] => kopiluwak
)
)
</pre>
CURL DELETE:
<pre>Array
(
[47] => 1
[10036] => DELETE
[10015] =>
[10023] => Array
(
[0] => Accept: application/vnd.vimeo.*+json; version=3.2
[1] => User-Agent: vimeo.php 1.0; (http://developer.vimeo.com/api/docs)
[2] => Authorization: Bearer xxxx
)
)
</pre>
Response from DELETE:
<pre>Array
(
[body] => Array
(
[error] => Invalid state
)
[status] => 500
[headers] => Array
(
[Server] => nginx
[Content-Type] => application/vnd.vimeo.error+json
[Cache-Control] => no-cache, max-age=315360000
[Strict-Transport-Security] => max-age=15120000; includeSubDomains; preload
[Expires] => Sun, 12 Apr 2026 08
[Accept-Ranges] => bytes
[Via] => 1.1 varnish
[Fastly-Debug-Digest] => 771e16bfeec90f734db73b1b0ee67af1dae1f86d0e6c56d4585eb9958a1684b7
[Content-Length] => 25
[Date] => Thu, 14 Apr 2016 08
[Connection] => keep-alive
[X-Served-By] => cache-iad2138-IAD, cache-lcy1126-LCY
[X-Cache] => MISS, MISS
[X-Cache-Hits] => 0, 0
[X-Timer] => S1460621123.195320,VS0,VE593
[Vary] => Accept,Vimeo-Client-Id,Accept-Encoding
)
)
</pre>
I just replied to the same issue over on the Vimeo forum and on another SO thread I read. I had the same issue and am posting it here as well, since there didn't seem to be a solution in this particular thread.
Also, regarding your post: there isn't much information provided. Your DELETE request is not all that's required; the assumption is that you created a valid ticket request, uploaded properly, and THEN tried the DELETE request you posted.
Your response is similar to mine below: if your upload script tried to get a ticket AFTER you had already obtained one on your backend, this issue would pop up, as it did in my code.
Vimeo post:
https://vimeo.com/forums/api/topic:278394
My solution:
I solved my version of the issue. I think Vimeo changed something on their API recently, because my code did not have a bug and then one suddenly appeared. My guess is that they added rate limiting on their API gateway, or that they now overwrite existing upload tickets to clean up old requests...
Anyhow, here is my fix:
In order to complete a video upload via "Resumable HTTP PUT uploads" (developer.vimeo.com/api/upload/videos), there are 5 steps.
I do everything except the upload itself through my PHP backend. I was requesting a ticket through PHP so as not to expose some secret information through my modified JS frontend (github.com/websemantics/vimeo-upload), but I had not edited the ticket request out of the JS code properly, so the current bug was probably being triggered by that second, invalid request (i.e. it overwrote or rate-limited my initial valid request made through PHP). Once I bypassed the JS "upload" function properly and jumped straight to the JS "sendFile_" function, the upload worked properly again.
Hope that helps somebody out there!
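For context, here is a rough sketch of the server-side ticket request in my setup. It uses the vimeo.php library already shown in the question; the 'streaming' type and the response fields follow the old v3.2 resumable-upload flow, so treat the details as assumptions rather than a drop-in implementation:
<?php
// Sketch only: request a single resumable-upload ticket server-side so the JS
// frontend never asks for a second one (which is what triggered my bug).
// Adjust the class name / autoloading to your version of the vimeo.php library.
$lib = new Vimeo('CLIENT_ID', 'CLIENT_SECRET', 'ACCESS_TOKEN');

$ticket = $lib->request('/me/videos', array('type' => 'streaming'), 'POST');

if ($ticket['status'] == 201) {
    // Hand only these two values to the frontend; do NOT let the JS request its own ticket.
    $uploadLink  = $ticket['body']['upload_link_secure'];
    $completeUri = $ticket['body']['complete_uri'];

    // The JS side PUTs the file to $uploadLink (sendFile_); afterwards the
    // backend finishes the upload with a DELETE on $completeUri, e.g.:
    // $done = $lib->request($completeUri, array(), 'DELETE');
}
?>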
I have a PHP question. My problem is that I am trying to extract the image extension from a URL of the form below:
http://lh3.googleusercontent.com/i_qpu5lXHddZgNaEbzEEz1CaArLCHEmVNuhwVOuDUl0aIyZHuez3s4Uf878y1n9CqB5rld2a7GSAoWzoMgrC
The URL above is generated by Google and does not expose the file name or extension. I have of course tried the function below, but it does not work:
$image_name = basename($url);
Could anyone help me?
If you are downloading the image, you can get the extension using finfo_file().
Otherwise, you can look at the Content-Type in the headers sent by the server using get_headers().
Example code:
<?php
$url = 'http://lh3.googleusercontent.com/i_qpu5lXHddZgNaEbzEEz1CaArLCHEmVNuhwVOuDUl0aIyZHuez3s4Uf878y1n9CqB5rld2a7GSAoWzoMgrC';
print_r(get_headers($url));
?>
Sample output:
Array
(
[0] => HTTP/1.1 200 OK
[1] => Access-Control-Allow-Origin: *
[2] => ETag: "v1"
[3] => Expires: Wed, 22 Apr 2015 09:10:30 GMT
[4] => Cache-Control: public, max-age=86400, no-transform
[5] => Content-Disposition: inline;filename="unnamed.png"
[6] => Content-Type: image/png
[7] => Date: Tue, 21 Apr 2015 09:10:30 GMT
[8] => Server: fife
[9] => Content-Length: 20365
[10] => X-XSS-Protection: 1; mode=block
[11] => Alternate-Protocol: 80:quic,p=1
)
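Building on that, here is a small helper (my own sketch, not part of the original answer) that turns the Content-Type header into a file extension; the mapping table only covers a few common image types:
<?php
// Sketch: derive an extension from the Content-Type returned by get_headers().
function extension_from_url($url)
{
    $headers = @get_headers($url, 1); // 1 = return an associative array
    if ($headers === false || empty($headers['Content-Type'])) {
        return null;
    }

    // With redirects, Content-Type can be an array; take the final one.
    $type = is_array($headers['Content-Type'])
        ? end($headers['Content-Type'])
        : $headers['Content-Type'];

    $map = array(
        'image/jpeg' => 'jpg',
        'image/png'  => 'png',
        'image/gif'  => 'gif',
        'image/webp' => 'webp',
    );

    return isset($map[$type]) ? $map[$type] : null;
}

echo extension_from_url('http://lh3.googleusercontent.com/i_qpu5lXHddZgNaEbzEEz1CaArLCHEmVNuhwVOuDUl0aIyZHuez3s4Uf878y1n9CqB5rld2a7GSAoWzoMgrC'); // png
?>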
On my site I have a couple of links for downloading a file, and I want to write a PHP script that checks whether each download link is still online.
This is the code I'm using:
$cl = curl_init($url);
curl_setopt($cl, CURLOPT_CONNECTTIMEOUT, 10);
curl_setopt($cl, CURLOPT_HEADER, true);
curl_setopt($cl, CURLOPT_NOBODY, true);
curl_setopt($cl, CURLOPT_RETURNTRANSFER, true);
if (!curl_exec($cl)) {
    echo 'The download link is offline';
    die();
}
$code = curl_getinfo($cl, CURLINFO_HTTP_CODE);
if ($code != 200) {
    echo 'The download link is offline';
} else {
    echo 'The download link is online!';
}
The problem is that it downloads the whole file, which makes it really slow, and I only need to check the headers. I saw that cURL has a CURLOPT_CONNECT_ONLY option, but my web host runs PHP 5.4, which doesn't have it. Is there any other way I can do this?
CURLOPT_CONNECT_ONLY would be good, but it's only available in PHP 5.5 and above. So instead, try using get_headers(). Or use another method built on fopen(), stream_context_create() and stream_get_meta_data(). First, the get_headers() method:
// Set a test URL.
$url = "https://www.google.com/";
// Get the headers.
$headers = get_headers($url);
// Check if the headers are empty.
if (empty($headers)) {
    echo 'The download link is offline';
    die();
}
// Use a regex to see if the response code is 200.
preg_match('/\b200\b/', $headers[0], $matches);
// Act on whether the matches are empty or not.
if (empty($matches)) {
    echo 'The download link is offline';
}
else {
    echo 'The download link is online!';
}
// Dump the array of headers for debugging.
echo '<pre>';
print_r($headers);
echo '</pre>';
// Dump the array of matches for debugging.
echo '<pre>';
print_r($matches);
echo '</pre>';
And the output of this—including the dumps used for debugging—would be:
The download link is online!
Array
(
[0] => HTTP/1.0 200 OK
[1] => Date: Sat, 14 Jun 2014 15:56:28 GMT
[2] => Expires: -1
[3] => Cache-Control: private, max-age=0
[4] => Content-Type: text/html; charset=ISO-8859-1
[5] => Set-Cookie: PREF=ID=6e3e1a0d528b0941:FF=0:TM=1402761388:LM=1402761388:S=4YKP2U9qC6aMgxpo; expires=Mon, 13-Jun-2016 15:56:28 GMT; path=/; domain=.google.com
[6] => Set-Cookie: NID=67=Wun72OJYmuA_TQO95WXtbFOK5g-xU53PQZ7dAIBtzCaBWxhXzduHQZfBVPf4LpaK3MVH8ZKbrBIc3-vTKuMlEnMdpWH0mcft5pA_0kCoe4qolDmednpPJqezZF_HyfXD; expires=Sun, 14-Dec-2014 15:56:28 GMT; path=/; domain=.google.com; HttpOnly
[7] => P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
[8] => Server: gws
[9] => X-XSS-Protection: 1; mode=block
[10] => X-Frame-Options: SAMEORIGIN
[11] => Alternate-Protocol: 443:quic
)
Array
(
[0] => 200
)
And here is another method using fopen(), stream_context_create() and stream_get_meta_data(). The benefit of this method is that, in addition to the headers, it gives you a bit more information about what was done to fetch the URL:
// Set a test URL.
$url = "https://www.google.com/";
// Set the stream_context_create options.
$opts = array(
    'http' => array(
        'method' => 'HEAD'
    )
);
// Create context stream with stream_context_create.
$context = stream_context_create($opts);
// Use fopen with rb (read binary) set and the context set above.
// Suppress the warning and bail out early if the URL cannot be opened at all.
$handle = @fopen($url, 'rb', false, $context);
if ($handle === false) {
    echo 'The download link is offline';
    die();
}
// Get the headers with stream_get_meta_data.
$headers = stream_get_meta_data($handle);
// Close the fopen handle.
fclose($handle);
// Use a regex to see if the response code is 200.
preg_match('/\b200\b/', $headers['wrapper_data'][0], $matches);
// Act on whether the matches are empty or not.
if (empty($matches)) {
    echo 'The download link is offline';
}
else {
    echo 'The download link is online!';
}
// Dump the array of headers for debugging.
echo '<pre>';
print_r($headers);
echo '</pre>';
And here is the output of that:
The download link is online!
Array
(
[wrapper_data] => Array
(
[0] => HTTP/1.0 200 OK
[1] => Date: Sat, 14 Jun 2014 16:14:58 GMT
[2] => Expires: -1
[3] => Cache-Control: private, max-age=0
[4] => Content-Type: text/html; charset=ISO-8859-1
[5] => Set-Cookie: PREF=ID=32f21aea66dcfd5c:FF=0:TM=1402762498:LM=1402762498:S=NVP-y-kW9DktZPAG; expires=Mon, 13-Jun-2016 16:14:58 GMT; path=/; domain=.google.com
[6] => Set-Cookie: NID=67=mO_Ihg4TgCTizpySHRPnxuTp514Hou5STn2UBdjvkzMn4GPZ4e9GHhqyIbwap8XuB8SuhjpaY9ZkVinO4vVOmnk_esKKTDBreIZ1sTCsz2yusNLKA9ht56gRO4uq3B9I; expires=Sun, 14-Dec-2014 16:14:58 GMT; path=/; domain=.google.com; HttpOnly
[7] => P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
[8] => Server: gws
[9] => X-XSS-Protection: 1; mode=block
[10] => X-Frame-Options: SAMEORIGIN
[11] => Alternate-Protocol: 443:quic
)
[wrapper_type] => http
[stream_type] => tcp_socket/ssl
[mode] => rb
[unread_bytes] => 0
[seekable] =>
[uri] => https://www.google.com/
[timed_out] =>
[blocked] => 1
[eof] =>
)
Try adding curl_setopt($cl, CURLOPT_CUSTOMREQUEST, 'HEAD'); to send a HEAD request.
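For reference, a minimal sketch of the question's cURL check with that option added (everything else is unchanged from the original code):
$cl = curl_init($url);
curl_setopt($cl, CURLOPT_CONNECTTIMEOUT, 10);
curl_setopt($cl, CURLOPT_NOBODY, true);            // don't download the body
curl_setopt($cl, CURLOPT_CUSTOMREQUEST, 'HEAD');   // explicitly send a HEAD request
curl_setopt($cl, CURLOPT_RETURNTRANSFER, true);
curl_exec($cl);
$code = curl_getinfo($cl, CURLINFO_HTTP_CODE);
curl_close($cl);

echo ($code == 200) ? 'The download link is online!' : 'The download link is offline';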
I'm trying to get my subscription list through OAuth2 from Google Reader.
I'm using CodeIgniter and a library, CI_OPAUTH. I can get the permissions screen with no problems and grant Google Reader permissions. The problem is that when I receive the callback after authentication is done, it gives me a 403 Forbidden header.
Here are my options for the library:
$config['opauth_config'] = array(
    'path' => '/mixordia/index.php/oauth/login/', // example: /ci_opauth/auth/login/
    'callback_url' => '/mixordia/index.php/oauth/authenticate/', // example: /ci_opauth/auth/authenticate/
    'callback_transport' => 'get', // CodeIgniter doesn't use native sessions
    'security_salt' => 'mixordias_salt_whwhwh_wayacross',
    'debug' => true,
    'Strategy' => array(
        'Google' => array(
            'client_id' => 'MY_ID',
            'client_secret' => 'MY_SECRET',
            'service' => 'reader',
            'source' => 'APP Name',
            'scope' => 'http://www.google.com/reader/api'
        )
    )
);
Here is what I get:
Array
(
[error] => Array
(
[provider] => Google
[code] => userinfo_error
[message] => Failed when attempting to query for user information
[raw] => Array
(
[response] => 0
[headers] => HTTP/1.0 403 Forbidden
WWW-Authenticate: Bearer realm="https://www.google.com/accounts/AuthSubRequest", error=insufficient_scope, scope="https://www.googleapis.com/auth/userinfo.id https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/plus.me https://www.googleapis.com/auth/plus.login https://www.google.com/accounts/OAuthLogin"
Content-Type: application/json; charset=UTF-8
Date: Thu, 13 Jun 2013 12:26:25 GMT
Expires: Thu, 13 Jun 2013 12:26:25 GMT
Cache-Control: private, max-age=0
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Server: GSE
)
)
[timestamp] => 2013-06-13T12:26:25+00:00
)
I've been stuck on this for 2 days and I don't know what I'm doing wrong in the request with the library. Can you help me?
The library tries to get the user's profile, which requires the userinfo.profile scope (https://www.googleapis.com/auth/userinfo.profile). Add that scope to your config and it should work.
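For example, the 'scope' value in the question's Strategy config would become something like this (Google accepts multiple scopes separated by spaces; everything else stays as it was):
'Google' => array(
    'client_id' => 'MY_ID',
    'client_secret' => 'MY_SECRET',
    'service' => 'reader',
    'source' => 'APP Name',
    // Reader scope plus the profile scope the library needs, space-separated.
    'scope' => 'http://www.google.com/reader/api https://www.googleapis.com/auth/userinfo.profile'
)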