PHP - How to re-display images on a website from fetched image URLs - php

So I'm a bit stuck, and I've been given various solutions, none of which work. Any hotshot PHP folks out there? Here's the deal: I'm trying to get an image to display on my website, from another website that serves a randomly generated image. Though I'm actually trying to do this with a personal art site of mine, this example will serve perfectly.
http://commons.wikimedia.org/wiki/Special:Random/File
A random file page with an image on it pops up with that link. Now, I'd like to display THAT random image, or whatever image comes up, on another site. The two possible solutions I have encountered are gathering an array of URL links from a given page, and then re-displaying that array as images on another site, like a: < a href="https
The output I get back from what I'm talking about looks like this:
Array
(
[0] => https://kfjhiakwhefkiujahefawef/awoefjoiwejfowe.jpg
[1] => https://oawiejfoiaewjfoajfeaweoif/awoeifjao;iwejfoawiefj.png
)
Instead of the printout, however, I'd like the actual images displayed; specifically array[0], but one thing at a time. The code that's actually doing this is:
<?php
/*
Credits: Bit Repository
URL: http://www.bitrepository.com/
*/
$url = 'http://commons.wikimedia.org/wiki/Special:Random/File';
// Fetch page
$string = FetchPage($url);
// Regex that extracts the images (full tag)
$image_regex = '/<img[^>]*'.
'src=[\"|\'](.*)[\"|\']/Ui';
preg_match_all($image_regex, $string, $out, PREG_PATTERN_ORDER);
$img_tag_array = $out[0];
echo "<pre>"; print_r($img_tag_array); echo "</pre>";
// Regex for SRC Value
$image_regex_src_url = '/<img[^>]*'.
'src=[\"|\'](.*)[\"|\']/Ui';
preg_match_all($image_regex_src_url, $string, $out, PREG_PATTERN_ORDER);
$images_url_array = $out[1];
echo "<pre>"; print_r($images_url_array); echo "</pre>";
// Fetch Page Function
function FetchPage($path)
{
$file = fopen($path, "r");
if (!$file)
{
exit("The was a connection error!");
}
$data = '';
while (!feof($file))
{
// Extract the data from the file / url
$data .= fgets($file, 1024);
}
return $data;
}
// Display each extracted image
for($i=0; $i<count($images_url_array); $i++) {
echo '<img src="'.$images_url_array[$i].'">';
}
?>
Solution two: use the file_get_contents function, which is this:
<?php
$html = file_get_contents("http://commons.wikimedia.org/wiki/Special:Random/File");
libxml_use_internal_errors(true);
$dom = new DOMDocument();
$dom->loadHTML($html);
$xpath = new DOMXPath($dom);
$image_src = $xpath->query('//div[contains(@class,"fullImageLink")]/a/img')[0]->getAttribute('src');
echo "<img src='$image_src'><br>";
?>
However, there's unfortunately an error message I get: Fatal error: Cannot use object of type DOMNodeList as array in /home/wilsons888/public_html/wiki.php on line 11. Or, if I remove a "}" at the end, I just get a blank page.
I have been told that the above code will work, but with openssl extension included. Problem is, I have no idea how to do this. (I'm very new to PHP). Anyone know how to plug it in, so to speak? Thank you so much! I feel like I'm close, just missing the last element.
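On the openssl question: as a hedged sketch (not from the thread), one common approach is to check whether the openssl extension and allow_url_fopen are available, and fall back to cURL for https:// URLs when they are not. The URL below is simply the https form of the page from the question:
<?php
// Sketch only: fetch an https URL with file_get_contents() when openssl is
// available, otherwise fall back to cURL (which handles TLS on its own).
$url = 'https://commons.wikimedia.org/wiki/Special:Random/File';

if (extension_loaded('openssl') && ini_get('allow_url_fopen')) {
    $html = file_get_contents($url);
} else {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $html = curl_exec($ch);
    curl_close($ch);
}

echo ($html === false) ? 'Fetch failed' : 'Fetched ' . strlen($html) . ' bytes';
?>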

I was able to load the random image, and "print it" as an image directly (so you can embed the php file directly on the IMG tag) using this code:
<?php
// Fetch the random file page and pull the full-size image URL out of <div id="file">
$html = file_get_contents("http://commons.wikimedia.org/wiki/Special:Random/File");
libxml_use_internal_errors(true); // keep HTML parse warnings from breaking the headers below
$dom = new DOMDocument();
$dom->loadHTML($html);
$remoteImage = $dom->getElementById("file")->firstChild->attributes[0]->textContent;
// Fetch the image once, then send it with its length (the type is assumed to be PNG)
$imageData = file_get_contents($remoteImage);
header("Content-type: image/png");
header('Content-Length: ' . strlen($imageData));
echo $imageData;
?>
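Since the random file is not always a PNG, here is a rough variant of the same script that sniffs the actual MIME type with the fileinfo extension instead of hard-coding image/png. This is a sketch, not part of the original answer, and it assumes the same Wikimedia page structure (the image URL sits in the first attribute of the first child of <div id="file">):
<?php
// Sketch (not from the original answer): detect the real content type instead of assuming PNG.
$html = file_get_contents("http://commons.wikimedia.org/wiki/Special:Random/File");
libxml_use_internal_errors(true);
$dom = new DOMDocument();
$dom->loadHTML($html);
$remoteImage = $dom->getElementById("file")->firstChild->attributes[0]->textContent;

$imageData = file_get_contents($remoteImage);
$finfo = new finfo(FILEINFO_MIME_TYPE);      // requires the (normally bundled) fileinfo extension
header("Content-type: " . $finfo->buffer($imageData));
header("Content-Length: " . strlen($imageData));
echo $imageData;
?>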
Save the PHP code above as test.php, then create a new file called showImage.php and put this markup in it:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
</head>
<body>
<img src="test.php">
</body>
</html>
Next, open the showImage.php path in your browser, and it will show a random image from the site you asked about...

Related

php getimagesize with persian file name

I'm trying to write a Joomla plugin that adds width and height attributes to each <img> in an HTML file.
Some image file names are Persian, and getimagesize fails on them.
The code is this:
$dom = new DOMDocument();
@$dom->loadHTML('<?xml version="1.0" encoding="UTF-8"?>' . "\n" . '
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<img src="images\banners\س.jpg" style="max-width: 90%;" >
</body>
</html>
');
$x = new DOMXPath($dom);
foreach($x->query("//img") as $node)
{
$imgtag = $node->getAttribute("src");
$imgtag = pathinfo($imgtag);
$imgtag = $imgtag['dirname'].'\\'.$imgtag['basename'];
$imgtag = getimagesize($imgtag);
$node->setAttribute("width",$imgtag[0]);
$node->setAttribute("height",$imgtag[1]);
}
$newHtml = urldecode($dom->saveHtml($dom->documentElement));
And when Persian characters exist in file name, getimagesize shows:
Warning: getimagesize(images\banners\س.jpg): failed to open stream: No such file or directory in C:\wamp64\www\plugin.php
How can I solve this?
Thanks to all.
I couldn't get results on a WAMP server (a local server on Windows),
but when I migrated to a Linux server, this code finally worked properly.
$html = $app->getBody();
setlocale(LC_ALL, '');
$dom = new DOMDocument();
@$dom->loadHTML($html);
$x = new DOMXPath($dom);
foreach($x->query("//img") as $node)
{
$imgtag = $node->getAttribute("src");
if(strpos($imgtag,"data:image")===false)
{
$imgtag = getimagesize($imgtag);
$node->setAttribute("width",$imgtag[0]);
$node->setAttribute("height",$imgtag[1]);
}
}
$bodytag = $x->query("//body");
$node = $dom->createElement("script", ' /* java script which may be necessary on client */ ');
$bodytag[0]->appendChild($node);
$html = '<!DOCTYPE html>'."\n" . $dom->saveHtml($dom->documentElement);
Some hints:
The code shouldn't touch base64 image sources, so I added a condition for that.
If a script (or any other element: div, p, ...) needs to be added to the body tag, you can use the appendChild method, as shown above.
<!DOCTYPE html> should be added to the final DOM output :)
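Not part of the answer above, but related to the original error: if the Persian file name reaches the plugin URL-encoded in the src attribute, decoding it before calling getimagesize() is one thing to try. A minimal sketch, assuming a hypothetical encoded path:
<?php
// Sketch only: decode a URL-encoded src before measuring it, and skip the node
// if the file still cannot be read.
$src  = 'images/banners/%D8%B3.jpg';   // hypothetical encoded form of س.jpg
$path = rawurldecode($src);            // -> images/banners/س.jpg

$size = @getimagesize($path);
if ($size !== false) {
    list($width, $height) = $size;
    echo "{$width}x{$height}\n";
} else {
    echo "Could not read $path\n";
}
?>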

how to find http from saved file in php

I created a program in PHP using cURL, in which I can take the data of any site and display it in the browser. Another part of the program saves the data to a file using file handling; after saving this data, I want to find all the HTTP links within the body tag of the saved file. My code shows the site I fetched in the browser, but I cannot find the HTTP links, and some unnecessary markup also shows up, as in this image, though I don't want it to.
https://www.screencast.com/t/Nwaz93oU
PHP Code:
<!DOCTYPE html>
<html>
<?php
function get_all_links($url){
$html = file_get_contents($url);
$dom = new DOMDocument();
@$dom->loadHTML($html);
$xpath = new DOMXPath($dom);
$hrefs = $xpath->evaluate("/html/body//a");
for ($i = 0; $i < $hrefs->length; $i++) {
$href = $hrefs->item($i);
$url = $href->getAttribute('href');
echo $url.'<br />';
}
}
function get_site_data($uc_url){
$get_uc = curl_init();
curl_setopt($get_uc,CURLOPT_URL,$uc_url);
curl_setopt($get_uc,CURLOPT_RETURNTRANSFER,true);
$output=curl_exec($get_uc);
curl_close($get_uc);
$fp=fopen("mohit.txt","w");
fputs($fp,$output);
return $output;
}
?>
<body>
<div>
<?php
$site_content = get_site_data("http://www.ucertify.com");
echo $site_content;
?>
</div>
<div >
<?php
echo get_all_links("http://www.ucertify.com");
?>
</div>
</body>
</html>
In the get_all_links method, validate whether the $url variable is a valid URL, since some pages may have onclick handlers pointing to JavaScript. To validate a URL you can use a regex with PHP's preg_match. You can also look at "What is a good regular expression to match a URL?" for the regex needed to validate a URL. A quick sketch of that check is below.
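A minimal sketch of the validation (the sample hrefs are made up; filter_var() is used here alongside the regex approach the answer suggests):
<?php
// Sketch: keep only real http(s) links, skipping things like javascript: handlers
// or in-page anchors. The candidate values below are just examples.
$candidates = array(
    'http://www.ucertify.com/about',
    'javascript:void(0)',
    '#top',
    'https://example.com/page',
);

foreach ($candidates as $url) {
    if (filter_var($url, FILTER_VALIDATE_URL) && preg_match('#^https?://#i', $url)) {
        echo "valid:   $url<br />";
    } else {
        echo "skipped: $url<br />";
    }
}
?>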

How to extract name, description and favicon from a site? [duplicate]

This question already has answers here:
How do you parse and process HTML/XML in PHP?
(31 answers)
Closed 9 years ago.
I'm trying to create a social bookmarking site using php and mysql.
When I save a website's URL, I want to be able to save the site's title, favicon and description in a table in my database, then print them on my page using ajax.
How can I extract those elements from a website?
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
</head>
<body>
<?php
$myServer = "localhost";
$myUser = "root";
$myPass = "'100pushups'";
$myDB = "social_bookmarking";
//connection to the database
$connect = mysqli_connect($myServer,$myUser, $myPass)
or die("Couldn't connect to SQLServer on $myServer");
//select a database to work with
$selected = mysqli_select_db($connect, $myDB)
or die("Couldn't open database $myDB");
var_dump($_POST);
//declare the SQL statement that will query the database
$url = "INSERT INTO url (url ) VALUES ('$_POST[url]')";
if (isset($_POST['value']))
{
// Instructions if $_POST['value'] exist
echo 'Your url is ' .$url;
}
$data = get_meta_tags($url);
print_r($data);
if (!mysqli_query($connect, $url)) {
die('Error: ' . mysqli_error($connect));
}
else
{
echo "Your information was added to the database";
}
mysqli_close($connect);
?>
</body>
</html>
I know I'm doing something wrong with my url there, but I don't know how to use a variable as an argument in get_meta_tags, since the function only accepts filenames or strings.
You can get the title by using: (courtesy of https://stackoverflow.com/users/54680/jonathan-sampson)
<?php
if ( $_POST["url"] ) {
$doc = new DOMDocument();
@$doc->loadHTML( file_get_contents( $_POST["url"] ) );
$xpt = new DOMXPath( $doc );
$output = $xpt->query("//title")->item(0)->nodeValue;
} else {
$output = "URL not provided";
}
echo $output;
?>
You can get the favicon using:
<?php
$url = $_POST['url'];
$doc = new DOMDocument();
$doc->strictErrorChecking = FALSE;
$doc->loadHTML(file_get_contents($url));
$xml = simplexml_import_dom($doc);
$arr = $xml->xpath('//link[@rel="shortcut icon"]');
echo $arr[0]['href'];
?>
Finally for the description you can use:
<?php
$tags = get_meta_tags($_POST['url']);
$description = $tags['description'];
echo $description;
?>
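As for the part the question flags ("something wrong with my url there"): get_meta_tags() wants the page's URL, not the SQL string that happens to be stored in $url. A minimal sketch of keeping the two apart (variable names are just for illustration, and the SQL still needs escaping or a prepared statement):
<?php
// Sketch: separate the posted page URL from the SQL statement.
$pageUrl = isset($_POST['url']) ? $_POST['url'] : '';

// The INSERT statement, kept in its own variable (use prepared statements in real code)
$sql = "INSERT INTO url (url) VALUES ('" . $pageUrl . "')";

// get_meta_tags() is called on the URL itself, not on $sql
$meta = ($pageUrl !== '') ? get_meta_tags($pageUrl) : array();
print_r($meta);
?>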
There are very smart scripts/classes out there that help with getting content from the DOM, for instance using smart selectors. I recommend using one of those.
This is a nice example:
http://simplehtmldom.sourceforge.net/
To get the content of the page, use file_get_contents or an equivalent function.
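A rough sketch of that library's selector style, based on its documented file_get_html()/find() helpers (details are approximate and the target URL is arbitrary):
<?php
// Sketch using the Simple HTML DOM library linked above; assumes you have
// downloaded simple_html_dom.php next to this script.
include_once 'simple_html_dom.php';

$page = file_get_html('http://stackoverflow.com/');     // hypothetical target page

$titleNode = $page->find('title', 0);
echo 'Title: ' . ($titleNode ? $titleNode->plaintext : '(none)') . "<br />";

$descNode = $page->find('meta[name=description]', 0);
echo 'Description: ' . ($descNode ? $descNode->content : '(none)') . "<br />";

// Favicon: walk the <link> tags and pick the first rel containing "icon"
foreach ($page->find('link') as $link) {
    if ($link->rel && stripos($link->rel, 'icon') !== false) {
        echo 'Favicon: ' . $link->href . "<br />";
        break;
    }
}
?>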
You can use the file_get_contents() function to get the favicon for a site (unless it blocks you over HTTPS). Example:
$icon = file_get_contents("http://stackoverflow.com/favicon.ico");
// now save it
Another option is using cURL. It's an awesome PHP extension if you know how to use it.
Using these methods, you can fetch the HTML content from the sites too, and then parse it with any HTML parser library for PHP, or with regex (which experts don't often recommend). A rough sketch of the cURL route is below.
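This is only a sketch of the cURL route; the favicon URL reuses the earlier example, so adjust it for your target:
<?php
// Sketch: fetch the favicon with cURL instead of file_get_contents().
$ch = curl_init("http://stackoverflow.com/favicon.ico");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);   // follow redirects (e.g. http -> https)
$icon = curl_exec($ch);
curl_close($ch);

if ($icon !== false) {
    file_put_contents('favicon.ico', $icon);      // now save it
}
?>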

php get all files from a remote directory

I have searched and searched for 3+ hours this morning and tried over 10 different setups for how to grab and display a list of images from a URL, and none of them worked correctly. I would either end up with no info displaying, or a 500 error. Can someone point me to an example or help me out here on how to do this properly? file_get_contents is not a viable option.
Example Directory: http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/
Files i know that are in that directory:
001.jpg,
002.jpg,
003.jpg
I would like the output to be the exact url to the file.
Let me know if more info is needed; I'm not 100% sure exactly how to explain it right, lol.
Edit:
OK, so what I guess I actually want to do is check the URL for all the image tags and display a list with the full URL to each image.
I'm new to working with this URL+images+PHP stuff, so please don't hit me too hard with your downvote hammer without comments, lol.
Code I Tried:
<?php
/*
Credits: Bit Repository
URL: http://www.bitrepository.com/
*/
$url = $location;
// Fetch page
$string = FetchPage($url);
// Regex that extracts the images (full tag)
$image_regex_src_url = '/<img[^>]*'.
'src=[\"|\'](.*)[\"|\']/Ui';
preg_match_all($image_regex, $string, $out, PREG_PATTERN_ORDER);
$img_tag_array = $out[0];
echo "<pre>"; print_r($img_tag_array); echo "</pre>";
// Regex for SRC Value
$image_regex_src_url = '/<img[^>]*'.
'src=[\"|\'](.*)[\"|\']/Ui';
preg_match_all($image_regex_src_url, $string, $out, PREG_PATTERN_ORDER);
$images_url_array = $out[1];
echo "<pre>"; print_r($images_url_array); echo "</pre>";
// Fetch Page Function
function FetchPage($path)
{
$file = fopen($path, "r");
if (!$file)
{
exit("The was a connection error!");
}
$data = '';
while (!feof($file))
{
// Extract the data from the file / url
$data .= fgets($file, 1024);
}
return $data;
}
?>
and it returned a blank page
Based loosely on the code you already tried (which was riddled with problems). This grabs the full contents of the URL $url, parses out the <img> src attributes, and then outputs them.
Because this particular web host uses a <base href=""/> tag to reset the base part of all URLs on the page, I've added a $base variable, which you should set to the contents of the base tag.
Additionally, it looks like this particular web host has some pretty smart anti-hotlinking in place, so not all images may be visible.
But! Give it a whirl, let me know if it does what you need it to, and any questions.
<?php
$url = 'http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/';
$base = 'http://www.webtoonlive.com/';
// Pull in the external HTML contents
$contents = file_get_contents( $url );
// Use Regular Expressions to match all <img src="???" />
preg_match_all( '/<img[^>]*src=[\"|\'](.*)[\"|\']/Ui', $contents, $out, PREG_PATTERN_ORDER);
foreach ( $out[1] as $k=>$v ){ // Step through all SRC's
// Prepend the URL with the $base URL (if needed)
if ( strpos( $v, 'http://' ) !== 0 ) $v = $base . $v;
// Output a link to the URL
echo '<a href="' . $v . '">' . $v . '</a><br/>';
}
Sample output:
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/000.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/001.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/002.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/003.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/004.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/005.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/006.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/007.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/008.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/009.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/010.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/011.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/012.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/013.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/014.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/015.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/016.jpg
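The answer above mentions anti-hotlinking. As a hedged aside (not part of the original answer), if the host checks the Referer header, sending one through a stream context is one thing to try when fetching the page or the images; whether it helps depends entirely on the host:
<?php
// Sketch only: send a Referer and User-Agent with file_get_contents() via a
// stream context, in case the host's anti-hotlinking checks those headers.
$context = stream_context_create(array(
    'http' => array(
        'header' => "Referer: http://www.webtoonlive.com/\r\n" .
                    "User-Agent: Mozilla/5.0\r\n",
    ),
));

$url = 'http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/';
$contents = file_get_contents($url, false, $context);

echo $contents === false ? 'Blocked or unreachable' : 'Fetched ' . strlen($contents) . ' bytes';
?>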

Colour Extract Script - ForEach Error

So I downloaded and edited a script off the internet to pull an image and find out the hex values it contains and their percentages:
The script is here:
<?php
$delta = 5;
$reduce_brightness = true;
$reduce_gradients = true;
$num_results = 5;
include_once("colors.inc.php");
$ex=new GetMostCommonColors();
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<title>Colour Verification</title>
</head>
<body>
<div id="wrap">
<img src="http://www.image.come/image.png" alt="test image" />
<?php
$colors=$ex->Get_Color("http://www.image.come/image.png", $num_results, $reduce_brightness, $reduce_gradients, $delta);
$success = true;
foreach ( $colors as $hex => $count ) {
if ($hex !== 'e6af23') {$success = false; }
if ($hex == 'e6af23' && $count > 0.05) {$success = true; break;}
}
if ($success == true) { echo "This is the correct colour. Success!"; } else { echo "This is NOT the correct colour. Failure!"; }
?>
</div>
</body>
</html>
Here is a pastebin link to the file colors.inc.php
http://pastebin.com/phUe5Pad
Now the script works absolutely fine if I use an image that is on the server, e.g. using /image.png in the Get_Color function. However, if I try to use an image from another website, such as http://www.site.com/image.png, then the script no longer works and this error appears:
Warning: Invalid argument supplied for foreach() in ... on line 22
Is anyone able to see a way that I would be able to hotlink to images? Because this was the whole point of using the script!
You must download the file to the server and pass its full filename to the Get_Color($img) method as the $img parameter.
So you need to investigate another SO question: Download File to server from URL
The error indicates that the value returned by Get_Color is not something that can be iterated over, probably not an array at all. You need to know how Get_Color works internally and what is returned when it doesn't get what it wants.
In the meantime, you can download the image (with PHP) from the external URL into the required folder on your site and read it from there.
$image = "http://www.image.come/image.png";
download($image, 'folderName'); //your custom function
dnld_image_name = getNameOfImage();
$colors=$ex->Get_Color("/foldername/dnld_image_name.png");
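A more concrete version of the sketch above, with its assumptions spelled out: allow_url_fopen is enabled, a writable ./downloads folder exists, and colors.inc.php (from the question) provides the GetMostCommonColors class. copy() stands in for the hypothetical download() helper.
<?php
// Sketch: download the remote image to a local folder, then run the same
// colour extraction on the local copy.
include_once "colors.inc.php";              // class from the question's script
$ex = new GetMostCommonColors();

$image = "http://www.image.come/image.png"; // placeholder URL from the question
$local = __DIR__ . '/downloads/' . basename(parse_url($image, PHP_URL_PATH));

if (@copy($image, $local)) {
    // Same arguments as in the question: 5 results, reduce brightness/gradients, delta 5
    $colors = $ex->Get_Color($local, 5, true, true, 5);
    print_r($colors);
} else {
    echo "Could not download $image";
}
?>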
By the way, did you confirm that the image url was correct?
