Context
I have the following POST pipeline:
index.php -> submit.php -> list/item/new/index.php
index.php has a normal form with an action="submit.php" attribute.
submit.php decides where to send the subsequent POST request, based on logic that inspects the POST variable content.
The problem is that I haven't found a successful way to debug this pipeline. Somewhere, something is failing and I would appreciate a fresh pair of eyes.
What I have tried
I have tried running list/item/new/index.php with dummy parameters through a regular GET request. DB updates successfully.
I have tried running submit.php (below) with dummy parameters through a regular GET request. The value of $result is not FALSE, indicating the file_get_contents request was successful, but its value is the literal source code of list/new/index.php instead of the generated output, which I expect to be the result of
echo $db->new($hash,$content) && $db->update_content_key($hash);
Here is submit.php
$url = 'list/new/index.php';

if ($test) {
    $content = $_GET["i"];
    $hash = $_GET["h"];
} else {
    $content = $_POST["item"]["content"];
    $hash = $_POST["list"]["hash"];
}

$data = array(
    'item' => array('content' => $content),
    'list' => array('hash' => $hash)
);

$post_content = http_build_query($data);

$options = array(
    'http' => array(
        'header' => "Content-type: application/x-www-form-urlencoded\r\n".
                    "Content-Length: " . strlen($post_content) . "\r\n",
        'method' => 'POST',
        'content' => $post_content
    )
);

$context = stream_context_create($options);
$result = file_get_contents($url, false, $context);

if ($result === FALSE) {
    echo "error";
    // commenting out for testing. should go back to index.php when it's done
    //header('Location: '.$root_folder_path.'list/?h='.$hash.'&f='.$result);
} else {
    var_dump($result);
    // commenting out for testing. should go back to index.php when it's done
    //header('Location: '.$root_folder_path.'list/?h='.$hash.'&f='.str_replace($root_folder_path,"\n",""));
}
And here is list/item/new/index.php
$db = new sql($root_folder_path."connection_details.php");

if ($test) {
    $content = $_GET["i"];
    $hash = $_GET["h"];
} else {
    $content = $_POST["item"]["content"];
    $hash = $_POST["list"]["hash"];
}

// insert into DB, use preformatted queries to prevent SQL injection
echo $db->new($hash,$content) && $db->update_content_key($hash);
The worst thing about this is that I don't know enough PHP to debug this effectively (I actually had it working at some point today, but I did not commit right then...).
All comments and suggestions are welcome. I appreciate your time.
Got it.
I'm not sure what to call the error I was making (or what is actually going on behind the scenes), but it was the following:
for the POST request I was using a relative path,
$url = 'list/item/new/index.php';
Instead, I needed to use the whole URL scheme:
$url = 'https://example.com/list/item/new/index.php';
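If you'd rather not hard-code the host there, here is a minimal sketch of building that absolute URL from the current request (the $_SERVER-based scheme detection is an assumption about the server setup, so adjust as needed):
// Build an absolute URL to the handler so file_get_contents() performs a real
// HTTP request instead of returning the local PHP source verbatim.
$scheme = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ? 'https' : 'http';
$base   = $scheme . '://' . $_SERVER['HTTP_HOST'] . rtrim(dirname($_SERVER['SCRIPT_NAME']), '/');
$url    = $base . '/list/item/new/index.php';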
Here's my problem. A few months ago, I wrote a PHP script to log in to my account on a website. I was using cURL to connect and everything was fine. Then they updated the website, and now I am no longer able to log in. The problem is not with cURL, as I do not get any error from cURL; it is the website itself which tells me that I cannot log in.
Here's my script:
<?php
require('simple_html_dom.php');
//Getting the website main page
$url = "http://www.kijiji.ca/h-ville-de-quebec/1700124";
$main = file_get_html($url);
$links = $main -> find('a');
//Finding the login page
foreach($links as $link){
    if($link->innertext == "Ouvrir une session"){
        $page = $link;
    }
}
$to_go = "http://www.kijiji.ca/".$page->href;
//Getting the login page
$main = file_get_html($to_go);
$form = $main -> find("form");
//Parsing the page for the login form
foreach($form as $f){
    if($f->id == "login-form"){
        $cform = $f;
    }
}
$form = str_get_html($cform);
//Getting my post data ready
$postdata = "";
$tot = count($form->find("input"));
$count = 0;
/*I've got here a foreach loop to find all the inputs in the form. As there are hidden input for security, I make my script look for all the input and get the value of each, and then add them in my post data. When the name of the input is emailOrNickname or password, I enter my own info there, then it gets added to the post data*/
foreach($form->find("input") as $input){
    $count++;
    $postdata .= $input->name;
    $postdata .= "=";
    if($input->name == "emailOrNickname"){
        $postdata .= "my email address ";
    }else if($input->name == "password"){
        $postdata .= "my password";
    }else{
        $postdata .= $input->value;
    }
    if($count < $tot){
        $postdata .= "&";
    }
}
//Getting my curl session
$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL => $to_go,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST => true,
    CURLOPT_POSTFIELDS => $postdata,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_COOKIESESSION => true,
    CURLOPT_COOKIEJAR => 'cookie.txt'
));
$result = curl_exec ($ch);
curl_close ($ch);
echo $result;
?>
Neither cURL nor PHP returns any error. In fact, it returns the webpage of the website, but this webpage tells me that an error occurred, as if some post data were missing.
What do you think can cause that? Could it be some missing curl_setopt calls? I've got no idea, do you have any?
$form = $main->find("form") finds the first occurrence of the element,
and that is <form id="SearchForm" action="/b-search.html">.
You will need to change that into $form = $main->find('#login-form')
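For example, a minimal sketch with simple_html_dom (passing 0 as the second argument to find() returns the first matching element instead of an array; this assumes the page really contains a form with id "login-form"):
// Grab the login form directly by its id rather than looping over every <form>
$cform = $main->find('#login-form', 0);
if ($cform === null) {
    die('login form not found');
}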
Most likely the problem is that the site (server) checks cookies. This process mainly consists of two phases:
1) When you visit the site first time on some page, e.g. on the login page, the server sets cookies with some data.
2) On each subsequent page visit or POST request the server checks cookies it has set.
So you have to reproduce this process in your script, which means you have to use cURL to get any page from the site, including the login page, which should be fetched with cURL, not file_get_html.
Furthermore, you have to set both the CURLOPT_COOKIEJAR and CURLOPT_COOKIEFILE options to the same absolute path value ('cookie.txt' is a relative path) on each request. This is necessary in order to enable automatic cookie handling (session maintenance) across the entire series of requests (including redirects) the script will perform.
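A minimal sketch of that flow (the cookie path is a placeholder; the key points are that the login page itself is fetched with cURL, and that both cookie options point to the same absolute path on every request):
$cookieFile = __DIR__ . '/cookies.txt';   // absolute path, reused for both options

$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL            => $to_go,       // the login page, fetched with cURL this time
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_COOKIEJAR      => $cookieFile,  // where cURL writes cookies it receives
    CURLOPT_COOKIEFILE     => $cookieFile,  // where cURL reads cookies to send back
));
$loginPage = curl_exec($ch);                // the server sets its cookies here

// ...parse $loginPage and build $postdata as before...

curl_setopt_array($ch, array(
    CURLOPT_POST       => true,
    CURLOPT_POSTFIELDS => $postdata,
));
$result = curl_exec($ch);                   // cookies from the first request are sent back
curl_close($ch);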
So, I'm working with the Instagram API, but I cannot figure out how to create a like (on a photo) for the logged in user.
So my demo app is currently displaying the feed of a user, and it's requesting the permission to like and comment on behalf of that user. I'm using PHP and Curl to make this happen, creds to some guide I found on the internet:
<?php
if(isset($_GET['code'])) {
$code = $_GET['code'];
$url = "https://api.instagram.com/oauth/access_token";
$access_token_parameters = array(
'client_id' => '*MY_CLIENT_ID*',
'client_secret' => '*MY_CLIENT_SECRET*',
'grant_type' => 'authorization_code',
'redirect_uri' => '*MY_REDIRECT_URI*',
'code' => $code
);
$curl = curl_init($url); // we init curl by passing the url
curl_setopt($curl,CURLOPT_POST,true); // to send a POST request
curl_setopt($curl,CURLOPT_POSTFIELDS,$access_token_parameters); // indicate the data to send
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1); // to return the transfer as a string of the return value of curl_exec() instead of outputting it out directly.
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false); // to stop cURL from verifying the peer's certificate.
$result = curl_exec($curl); // to perform the curl session
curl_close($curl); // to close the curl session
$arr = json_decode($result,true);
$pictureURL = 'https://api.instagram.com/v1/users/self/feed?access_token='.$arr['access_token'];
// to get the user's photos
$curl = curl_init($pictureURL);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
$pictures = curl_exec($curl);
curl_close($curl);
$pics = json_decode($pictures,true);
// display the url of the last image in standard resolution
for($i = 0; $i < 17; $i++) {
$id = $pics['data'][$i]['id'];
$lowres_pic = $pics['data'][$i]['images']['low_resolution']['url'];
$username = $pics['data'][$i]['user']['username'];
$profile_pic = $pics['data'][$i]['user']['profile_picture'];
$created_time = $pics['data'][$i]['created_time'];
$created_time = date('d. M - h:i', $created_time);
$insta_header = '<div class="insta_header"><div class="insta_header_pic"><img src="'.$profile_pic.'" height="30px" width="30px"/></div><div class="insta_header_name">'.$username.'</div><div class="insta_header_date">'.$created_time.'</div></div>';
$insta_main = '<div class="insta_main"><img src="'.$lowres_pic.'" /></div>';
$insta_footer = '<div class="insta_footer"><div class="insta_footer_like"><button onClick="insta_like(\''.$id.'\')"> Like </button></div><div class="insta_footer_comment"><form onSubmit="return insta_comment(\''.$id.'\')"><input type="text" id="'.$id.'" value="Comment" /></form></div></div>';
echo '<div class="insta_content">'. $insta_header . $insta_main . $insta_footer .'</div>';
}
}
?>
Now, it might be a stupid question, but how do I make a like on a particular photo on behalf of the user? I'm used to using JavaScript for these kinds of things, so I've set up the Like button with a JS function (which does not exist yet). But since the Instagram part has been using cURL and PHP, I'm guessing I have to do the same thing here? I have no experience with cURL, and I do not understand how it works, so it would be great if someone could give me a heads-up on that as well. But first off, the liking. If it's possible to do it with JS, I'd be very glad. If not, please show me how to do it with PHP and cURL.
Here's a link to the Instagram developers site, which contains the URL you should send a POST request to: http://instagram.com/developer/endpoints/likes/.
And if you're not too busy, I'd be really glad if you could show me how to make a comment on behalf of a user as well :)
Thanks in advance.
Aleksander.
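Going by the likes endpoint linked above, a minimal sketch of what that POST could look like, in the same cURL style as the code in the question ($id and $arr['access_token'] are the media id and access token already obtained there; the response is JSON, and commenting should work the same way against the /comments endpoint with an extra 'text' field):
$likeURL = 'https://api.instagram.com/v1/media/' . $id . '/likes';

$curl = curl_init($likeURL);
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_POSTFIELDS, array('access_token' => $arr['access_token']));
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
$likeResult = curl_exec($curl); // e.g. {"meta":{"code":200}} on success
curl_close($curl);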
I'm using file_get_contents as follows:
file_get_contents( $url1 );
However, the actual URL's contents are coming from $url2.
Here is a specific case:
$url1 = gmail.com
$url2 = mail.google.com
I need a way to grab $url2 programmatically in PHP or JavaScript.
I believe you can do this by creating a context with:
$context = stream_context_create(array('http' =>
array(
'follow_location' => false
)));
$stream = fopen($url, 'r', false, $context);
$meta = stream_get_meta_data($stream);
The $meta should include (among other things) the status code and the Location header that holds the redirection URL. If $meta indicates a 200, then you can fetch the data with:
$data = stream_get_contents($stream);
The downside is that when you get a 301/302, you have to set up the request again with the URL from the Location header. Lather, rinse, repeat.
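For illustration, a minimal sketch of pulling the status code and Location header out of that metadata (for the http wrapper the raw response headers land in $meta['wrapper_data']; this assumes the wrapper returned a stream for the 3xx response):
$status   = 0;
$location = null;
foreach ($meta['wrapper_data'] as $header) {
    if (sscanf($header, 'HTTP/%*d.%*d %d', $status) === 1) {
        continue; // status line, e.g. "HTTP/1.1 301 Moved Permanently"
    }
    if (stripos($header, 'Location:') === 0) {
        $location = trim(substr($header, strlen('Location:')));
    }
}

if ($status == 200) {
    $data = stream_get_contents($stream);
} elseif ($location !== null) {
    // 301/302: repeat the request against $location
}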
If you're looking to pull the current URL, in JS you can use window.location.hostname
I don't get why you would want either PHP or JavaScript. I mean... they are kind of different in approaching the problem.
Assuming you want a server-side PHP solution, there's a comprehensive solution here. Too much code to copy verbatim but:
function follow_redirect($url){
    $redirect_url = null;

    // they've also coded up an fsockopen alternative if you don't have curl installed
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_HEADER, true);
    curl_setopt($ch, CURLOPT_NOBODY, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);

    // extract the new url from the header
    $pos = strpos($response, "Location: ");
    if($pos === false){
        return false; // no new url means it's the "final" redirect
    } else {
        $pos += strlen("Location: ");
        $redirect_url = substr($response, $pos, strpos($response, "\r\n", $pos) - $pos);
        return $redirect_url;
    }
}
//output all the urls until the final redirect
//you could do whatever you want with these
while(($newurl = follow_redirect($url)) !== false){
    echo $url, '<br/>';
    $url = $newurl;
}
The best I could find, an if (fclose(fopen(...))) type of check, makes the page load really slowly.
Basically what I'm trying to do is the following: I have a list of websites, and I want to display their favicons next to them. However, if a site doesn't have one, I'd like to replace it with another image rather than display a broken image.
You can instruct curl to use the HTTP HEAD method via CURLOPT_NOBODY.
More or less
$ch = curl_init("http://www.example.com/favicon.ico");
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_exec($ch);
$retcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
// $retcode >= 400 -> not found, $retcode = 200, found.
curl_close($ch);
Anyway, you only save the cost of the HTTP transfer, not the TCP connection establishment and closing. And since favicons are small, you might not see much improvement.
Caching the result locally seems a good idea if it turns out to be too slow.
A HEAD request returns the file's modification time in the headers, so you can do what browsers do and read the CURLINFO_FILETIME of the icon.
In your cache you can store URL => [favicon, timestamp]. You can then compare the timestamps and reload the favicon when it has changed.
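A minimal sketch of that idea (here the $cache is just a hypothetical array keyed by URL; a real cache would live in a file, APCu, memcached, and so on):
function favicon_filetime($url) {
    // HEAD-style request that also asks cURL to report the remote file time
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);
    curl_setopt($ch, CURLOPT_FILETIME, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    $time = curl_getinfo($ch, CURLINFO_FILETIME); // -1 if the server didn't report it
    curl_close($ch);
    return $time;
}

// cache: url => array(favicon bytes, timestamp of last fetch)
if (!isset($cache[$url]) || favicon_filetime($url) > $cache[$url][1]) {
    $cache[$url] = array(file_get_contents($url), time());
}
$favicon = $cache[$url][0];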
As Pies says, you can use cURL. You can get cURL to only give you the headers, and not the body, which might make it faster. A bad domain could always take a while because you will be waiting for the request to time out; you could probably change the timeout length using cURL.
Here is example:
function remoteFileExists($url) {
    $curl = curl_init($url);

    // don't fetch the actual page, you only want to check the connection is ok
    curl_setopt($curl, CURLOPT_NOBODY, true);

    // do request
    $result = curl_exec($curl);

    $ret = false;

    // if request did not fail
    if ($result !== false) {
        // if request was ok, check response code
        $statusCode = curl_getinfo($curl, CURLINFO_HTTP_CODE);
        if ($statusCode == 200) {
            $ret = true;
        }
    }

    curl_close($curl);

    return $ret;
}

$exists = remoteFileExists('http://stackoverflow.com/favicon.ico');
if ($exists) {
    echo 'file exists';
} else {
    echo 'file does not exist';
}
CoolGoose's solution is good but this is faster for large files (as it only tries to read 1 byte):
if (false === file_get_contents("http://example.com/path/to/image",0,null,0,1)) {
$image = $default_image;
}
This is not an answer to your original question, but a better way of doing what you're trying to do:
Instead of actually trying to get the site's favicon directly (which is a royal pain given it could be /favicon.png, /favicon.ico, /favicon.gif, or even /path/to/favicon.png), use google:
<img src="http://www.google.com/s2/favicons?domain=[domain]">
Done.
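For example, for a list of sites like the one in the question (the domain parameter takes a bare hostname, and Google serves a generic default icon when a site has none, so broken images shouldn't appear; $sites is a placeholder list):
$sites = array('stackoverflow.com', 'example.com'); // placeholder list of websites
foreach ($sites as $site) {
    echo '<img src="http://www.google.com/s2/favicons?domain=' . urlencode($site) . '" alt="" /> '
       . htmlspecialchars($site) . '<br/>';
}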
A complete function based on the most-voted answer:
function remote_file_exists($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // handles 301/302 redirects
    curl_exec($ch);
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    return $httpCode == 200;
}
You can use it like this:
if(remote_file_exists($url))
{
//file exists, do something
}
If you are dealing with images, use getimagesize. Unlike file_exists, this built-in function supports remote files. It will return an array that contains the image information (width, height, type, etc.). All you have to do is check the first element in the array (the width). Use print_r to output the content of the array:
$imageArray = getimagesize("http://www.example.com/image.jpg");
if($imageArray[0])
{
    echo "it's an image and here is the image's info<br>";
    print_r($imageArray);
}
else
{
    echo "invalid image";
}
if (false === file_get_contents("http://example.com/path/to/image")) {
$image = $default_image;
}
Should work ;)
This can be done by obtaining the HTTP status code (404 = not found), which is possible with file_get_contents making use of context options. The following code takes redirects into account and will return the status code of the final destination:
$url = 'http://example.com/';
$code = FALSE;
$options['http'] = array(
'method' => "HEAD",
'ignore_errors' => 1
);
$body = file_get_contents($url, NULL, stream_context_create($options));
foreach($http_response_header as $header)
sscanf($header, 'HTTP/%*d.%*d %d', $code);
echo "Status code: $code";
If you don't want to follow redirects, you can do it similarly:
$url = 'http://example.com/';
$code = FALSE;
$options['http'] = array(
'method' => "HEAD",
'ignore_errors' => 1,
'max_redirects' => 0
);
$body = file_get_contents($url, NULL, stream_context_create($options));
sscanf($http_response_header[0], 'HTTP/%*d.%*d %d', $code);
echo "Status code: $code";
Some of the functions, options and variables in use are explained in more detail in a blog post I've written: HEAD first with PHP Streams.
PHP's built-in functions may not work for checking a URL if the allow_url_fopen setting is turned off for security reasons. cURL is a better option, as we would not need to change our code at a later stage. Below is the code I used to verify a valid URL:
$url = str_replace(' ', '%20', $url);
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_exec($ch);
$httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if($httpcode>=200 && $httpcode<300){ return true; } else { return false; }
Kindly note the CURLOPT_SSL_VERIFYPEER option, which also allows verifying URLs that start with HTTPS.
To check for the existence of images, exif_imagetype should be preferred over getimagesize, as it is much faster.
To suppress the E_NOTICE, just prepend the error control operator (@).
if (@exif_imagetype($filename)) {
    // Image exists
}
As a bonus, with the returned value (IMAGETYPE_XXX) from exif_imagetype we could also get the mime-type or file-extension with image_type_to_mime_type / image_type_to_extension.
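For instance, a small sketch combining the two ($filename is the same remote path as above):
$type = @exif_imagetype($filename); // IMAGETYPE_XXX constant, or false on failure
if ($type !== false) {
    $mime = image_type_to_mime_type($type);        // e.g. "image/png"
    $ext  = image_type_to_extension($type, false); // e.g. "png" (false = no leading dot)
    echo "Image exists: $mime (.$ext)";
}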
A radical solution would be to display the favicons as background images in a div above your default icon. That way, all overhead would be placed on the client while still not displaying broken images (missing background images are ignored in all browsers AFAIK).
You could use the following:
$file = 'http://mysite.co.za/images/favicon.ico';
$file_exists = (@fopen($file, "r")) ? true : false;
This worked for me when checking whether an image exists at the URL:
function remote_file_exists($url){
    return (bool) preg_match('~HTTP/1\.\d\s+200\s+OK~', @current(get_headers($url)));
}

$ff = "http://www.emeditor.com/pub/emed32_11.0.5.exe";
if(remote_file_exists($ff)){
    echo "file exists!";
}
else{
    echo "file does not exist!";
}
This works for me to check if a remote file exist in PHP:
$url = 'https://cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico';
$header_response = get_headers($url, 1);
if ( strpos( $header_response[0], "404" ) !== false ) {
    echo 'File does NOT exist';
} else {
    echo 'File exists';
}
You can use:
$url = getimagesize("http://www.flickr.com/photos/27505599@N07/2564389539/");
if(!is_array($url))
{
    $default_image = "…/directoryFolder/junal.jpg";
}
If you're using the Laravel framework or guzzle package, there is also a much simpler way using the guzzle client, it also works when links are redirected:
$client = new \GuzzleHttp\Client(['allow_redirects' => ['track_redirects' => true]]);
try {
    $response = $client->request('GET', 'your/url');

    if ($response->getStatusCode() != 200) {
        // not exists
    }
} catch (\GuzzleHttp\Exception\GuzzleException $e) {
    // not exists
}
More in Document : https://docs.guzzlephp.org/en/latest/faq.html#how-can-i-track-redirected-requests
You should issue HEAD requests, not GET ones, because you don't need the URI contents at all. As Pies said above, you should check the status code (in the 200-299 range; you may optionally follow 3xx redirects).
The answers to this question contain a lot of code examples which may be helpful: PHP / Curl: HEAD Request takes a long time on some sites
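A minimal sketch of that check with cURL (CURLOPT_NOBODY turns the request into a HEAD; CURLOPT_FOLLOWLOCATION is the optional 3xx part; $url is whatever you want to test):
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);          // HEAD request, no body transferred
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // optionally follow 3xx redirects
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

$exists = ($code >= 200 && $code < 300);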
There's an even more sophisticated alternative: you can do the checking all client-side using a jQuery trick.
$('a[href^="http://"]').filter(function(){
return this.hostname && this.hostname !== location.hostname;
}).each(function() {
var link = jQuery(this);
var faviconURL =
link.attr('href').replace(/^(http:\/\/[^\/]+).*$/, '$1')+'/favicon.ico';
var faviconIMG = jQuery('<img src="favicon.png" alt="" />')['appendTo'](link);
var extImg = new Image();
extImg.src = faviconURL;
if (extImg.complete)
faviconIMG.attr('src', faviconURL);
else
extImg.onload = function() { faviconIMG.attr('src', faviconURL); };
});
From http://snipplr.com/view/18782/add-a-favicon-near-external-links-with-jquery/ (the original blog is presently down)
All the answers here that use get_headers() are doing a GET request.
It's much faster/cheaper to just do a HEAD request.
To make sure that get_headers() does a HEAD request instead of a GET, you should add this:
stream_context_set_default(
array(
'http' => array(
'method' => 'HEAD'
)
)
);
so to check if a file exists, your code would look something like this:
stream_context_set_default(
array(
'http' => array(
'method' => 'HEAD'
)
)
);
$headers = get_headers('http://website.com/dir/file.jpg', 1);
$file_found = stristr($headers[0], '200');
$file_found will return either false or true, obviously.
If the file is not hosted externally, you might translate the remote URL to an absolute path on your webserver. That way you don't have to call cURL or file_get_contents, etc.
function remoteFileExists($url) {
    $root = realpath($_SERVER["DOCUMENT_ROOT"]);
    $urlParts = parse_url($url);

    if (!isset($urlParts['path'])) {
        return false;
    }

    if (is_file($root . $urlParts['path'])) {
        return true;
    } else {
        return false;
    }
}

remoteFileExists('https://www.yourdomain.com/path/to/remote/image.png');
Note: Your webserver must populate DOCUMENT_ROOT to use this function
I don't know whether is_file() is any faster when the file does not exist remotely, but you could give it a shot.
$favIcon = 'default FavIcon';
if(is_file($remotePath)) {
$favIcon = file_get_contents($remotePath);
}
If you're using the Symfony framework, there is also a much simpler way using the HttpClientInterface:
private function remoteFileExists(string $url, HttpClientInterface $client): bool {
    $response = $client->request(
        'GET',
        $url //e.g. http://example.com/file.txt
    );

    return $response->getStatusCode() == 200;
}
The docs for the HttpClient are also very good and maybe worth looking into if you need a more specific approach: https://symfony.com/doc/current/http_client.html
You can use the filesystem:
use Symfony\Component\Filesystem\Filesystem;
use Symfony\Component\Filesystem\Exception\IOExceptionInterface;
and check
$fileSystem = new Filesystem();
if ($fileSystem->exists('path_to_file')==true) {...
Please check this URL. I believe it will help you. They provide two ways to overcome this with a bit of explanation.
Try this one.
// Remote file url
$remoteFile = 'https://www.example.com/files/project.zip';
// Initialize cURL
$ch = curl_init($remoteFile);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_exec($ch);
$responseCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
// Check the response code
if($responseCode == 200){
    echo 'File exists';
}else{
    echo 'File not found';
}
or visit the URL
https://www.codexworld.com/how-to/check-if-remote-file-exists-url-php/