I'm fetching an image in PHP from another website, which works. And I'm getting info from a webpage, which also works. But together they don't work, and only the image is shown.
If I cut the picture part and put it at the end, it gives an error.
(Use, for example, the nickname Mazey if you need to check what that JSON page looks like.)
Does it have to do with combining file_get_contents and my curl function?
<?php
$profPic = json_decode( file_get_contents( 'https://api.kag2d.com/player/'.urlencode( $_GET['nickname'] ).'/avatar/s' ) );
if ( $profPic ) {
    if ( isset( $profPic->small ) ) {
        $profPic   = $profPic->small;
        $extension = strtolower( pathinfo( $profPic, PATHINFO_EXTENSION ) );
        $content   = file_get_contents( $profPic );
        if ( $content ) {
            header( "Content-type: image/".$extension );
            echo $content;
            exit();
        }
    }
} else {
    echo "error";
}

$stats = json_decode( file_get_contents( 'https://api.kag2d.com/player/'.urlencode( $_GET['nickname'] ).'/status' ) );
if ( $stats ) {
    if ( isset( $stats->playerInfo ) ) {
        echo $stats->playerInfo->username;
    } else {
        echo 'Error';
    }
} else {
    echo 'Error';
}
?>
You are attempting to return two different HTTP responses from the same file. If you look at the section for images, it calls exit() after printing the image data. This is why processing stops there.
When I ran the URLs included in the source, the username returned was the same as the value in $_GET['nickname']. Do you actually need to look this value up? It would work if you just returned the image.
One solution is to return a JSON document, like your third-party server does, and embed all the information in it. It is legal HTML to embed image data directly in a URL (a data: URI), so the entire thing could be returned as a single JSON document; a sketch of such a response follows the example below. Here is an HTML-based example from RFC 2397:
<IMG SRC="data:image/gif;base64,R0lGODdhMAAwAPAAAAAAAP///ywAAAA
AMAAwAAAC8IyPqcvt3wCcDkiLc7C0qwyGHhSWpjQu5yqmCYsapyuvUUlvONmOZt
fzgFzByTB10QgxOR0TqBQejhRNzOfkVJ+5YiUqrXF5Y5lKh/DeuNcP5yLWGsEbt
LiOSpa/TPg7JpJHxyendzWTBfX0cxOnKPjgBzi4diinWGdkF8kjdfnycQZXZeYG
ejmJlZeGl9i2icVqaNVailT6F5iJ90m6mvuTS4OK05M0vDk0Q4XUtwvKOzrcd3i
q9uisF81M1OIcR7lEewwcLp7tuNNkM3uNna3F2JQFo97Vriy/Xl4/f1cf5VWzXy
ym7PHhhx4dbgYKAAA7" ALT="Larry">
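A minimal sketch of that combined JSON response, reusing the endpoints from your code (the exact response shape here is my own assumption, not something your client already expects):
<?php
// Build one JSON response that carries both the avatar and the stats.
$nickname = urlencode( $_GET['nickname'] );

$profPic = json_decode( file_get_contents( 'https://api.kag2d.com/player/'.$nickname.'/avatar/s' ) );
$stats   = json_decode( file_get_contents( 'https://api.kag2d.com/player/'.$nickname.'/status' ) );

$response = array( 'error' => true );

if ( $profPic && isset( $profPic->small ) && $stats && isset( $stats->playerInfo ) ) {
    $imageData = file_get_contents( $profPic->small );
    $extension = strtolower( pathinfo( $profPic->small, PATHINFO_EXTENSION ) );
    if ( $imageData !== false ) {
        $response = array(
            'error'    => false,
            'username' => $stats->playerInfo->username,
            // RFC 2397 data: URI -- usable directly as an <img> src on the client
            'avatar'   => 'data:image/'.$extension.';base64,'.base64_encode( $imageData ),
        );
    }
}

header( 'Content-Type: application/json' );
echo json_encode( $response );
The client can then drop the avatar value straight into an <img> tag's src attribute, exactly as in the RFC example above.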
Unlike email, no one seems to be using multipart MIME responses in HTTP. I did a quick Google search but found no useful results.
Related
This has to do with WordPress and PHP. I have a function that takes an array of names from JSON. For each name I loop through, it creates a div with the name and an image inside. This div is then attached to a post and displayed on the front end. The problem is the image doesn't display correctly due to a 404 error. When I looked at the image source, the path to the image looked like this:
<img src="\"http://localhost/card-store/wp-content/themes/card-store-theme/images/baseball/team2.jpg\"">
Clearly the path is broken, so the 404 makes sense. It seems like PHP is trying to escape some quotes, so I tried removing them with str_replace, and as a shot in the dark I also tried html_entity_decode. I also tried an absolute path to my images, but that did not work either. When I refresh the page, the images appear fine, so I think it's something to do with it not compiling right away? If that is true, how can I get it to display correctly without refreshing the page?
function test_function() {
    if ( isset($_POST) ) {
        $nameData = $_POST['nameData'];
        // Strip any double escapes then use json_decode to create an array.
        $nameDecode = json_decode(str_replace('\\', '', $_POST['nameData']));
        // Loop through the names array and create a container for each.
        $html_string = "";
        foreach ($nameDecode as $keyIndex => $name) {
            $html_string .= '<div class="team-container team-container--inline col col--md-2 col--lg-2 col--xl-2"><img src="'. get_template_directory_uri() .'/images/baseball/team' . $keyIndex . '.jpg"> <p>'.$name.'</p></div>';
        }
        echo $html_string;
        //$html_final = str_replace('\\', '', $html_string);
        $html_final = html_entity_decode($html_string);
        // Update the teams post.
        if ($html_string != "") {
            $my_post = array(
                'ID'           => 9,
                'post_content' => $html_final,
            );
            // Update the post in the database.
            wp_update_post( $my_post );
        } else {
            echo 'html string is empty!';
        }
    }
    die();
}
If you want to remove the slashes, try this:
$str = "Is your name O\'reilly?";
// Outputs: Is your name O'reilly?
echo stripslashes($str);
$nameData = $_POST['nameData'];
$stringObject = json_decode(stripslashes($nameData));
For json_encode, pass JSON_UNESCAPED_SLASHES so slashes are not escaped in the output:
json_encode($html_string, JSON_UNESCAPED_SLASHES);
I'll start by saying I'm fairly new to coding, so I'm probably going about this the wrong way.
Basically, I've got the PHP function below, which replaces a URL with the page title of that page instead of a plain web address. So instead of www.google.com it would appear as Google.
<?php
function get_title($url){
    $str = file_get_contents($url);
    if(strlen($str)>0){
        $str = trim(preg_replace('/\s+/', ' ', $str)); // supports line breaks inside <title>
        preg_match("/\<title\>(.*)\<\/title\>/i",$str,$title); // ignore case
        return $title[1];
    }
}
?>
This is great, but to implement it I have to use the code below.
echo get_title("http://www.google.com/");
However, this only works on a predefined URL. What I have set up on my site at the moment is a shortcode in an HTML widget.
<a href='[rwmb_meta meta_key="link_1"]'>[rwmb_meta meta_key="link_1"]</a>
This shortcode takes a URL that the user enters in the WordPress backend and displays it on the frontend as a link. However, I want to apply the get_title function to the shortcode's output, so that it shows the page title instead of the web address.
Is this possible?
Thanks in advance.
To get the host name of a URL from a link, you can use parse_url($url, PHP_URL_HOST);
An easier way would be to have an array of links, for example:
$links[] = 'some1 url here';
$links[] = 'some2 url here';
Then just loop over your $links array with the function:
foreach ($links as $link) echo get_title($link);
https://metabox.io/docs/get-meta-value/
try:
$files = rwmb_meta( 'info' ); // Since 4.8.0
$files = rwmb_meta( 'info', 'type=file' ); // Prior to 4.8.0
if ( !empty( $files ) ) {
    foreach ( $files as $file ) {
        echo $file['url'];
    }
}
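To tie this back to the question, here is a minimal sketch that feeds the stored link into get_title() (using the meta key link_1 from your shortcode; running this in a theme template rather than the HTML widget is an assumption):
// Fetch the URL the user entered in the backend (Meta Box plugin).
$link = rwmb_meta( 'link_1' );
if ( !empty( $link ) ) {
    // Use the page title as the link text; fall back to the raw URL.
    $title = get_title( $link );
    if ( empty( $title ) ) {
        $title = $link;
    }
    echo '<a href="' . esc_url( $link ) . '">' . esc_html( $title ) . '</a>';
}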
Suppose this code prints YouTube:
<?php ytio_empt(); ?>
I want a dynamic way to put the output of the above function in place of 'YouTube' in the following call that fetches XML data:
$xmlData = file_get_contents( 'http://gdata.youtube.com/feeds/api/users/'. 'YouTube' );
I have tried:
$xmlData = file_get_contents( 'http://gdata.youtube.com/feeds/api/users/'. ytio_empt() );
But in vain; file_get_contents() always fails to open the stream.
P.S.: Perhaps using HTML would work: putting <?php ytio_empt(); ?> in place of ytio_empt() in $xmlData. I just don't know how to end a PHP function and resume it later.
So as you posted your function in the comments:
function ytio_empt() {
    if(empty(get_option('ytio_username'))) {
        echo esc_attr( get_option('ytio_id') );
        //^^^^
    } else {
        echo esc_attr( get_option('ytio_username') );
        //^^^^
    }
}
You will see that you don't return the values, you just print them! So in order to return them, you simply have to change echo to return.
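A minimal sketch of the corrected function:
function ytio_empt() {
    if ( empty( get_option('ytio_username') ) ) {
        return esc_attr( get_option('ytio_id') );
    } else {
        return esc_attr( get_option('ytio_username') );
    }
}

// Now the return value is concatenated into the URL as intended.
$xmlData = file_get_contents( 'http://gdata.youtube.com/feeds/api/users/' . ytio_empt() );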
And if you want to read more about return values see the manual: http://php.net/manual/en/functions.returning-values.php
I have searched and searched for 3+ hours this morning and tried over 10 different setups for grabbing and displaying a list of images from a URL, and none of them worked correctly. I would either end up with no info displaying, or a 500 error. Can someone point me to an example, or help me out here with how to do this properly? file_get_contents is not a viable option.
Example Directory: http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/
Files I know are in that directory:
001.jpg,
002.jpg,
003.jpg
I would like the output to be the exact URL to each file.
Let me know if more info is needed; I'm not 100% sure how to explain it right, lol.
Edit:
OK, so what I guess I actually want to do is check the URL for all the image tags and display a list with the full URL to each image.
I'm new to working with this URL+images+PHP stuff, so please don't hit me too hard with your downvote hammer without comments, lol.
Code I Tried:
<?php
/*
Credits: Bit Repository
URL: http://www.bitrepository.com/
*/
$url = $location;

// Fetch the page
$string = FetchPage($url);

// Regex that extracts the images (full tag)
$image_regex = '/<img[^>]*'.
    'src=[\"|\'](.*)[\"|\']/Ui';
preg_match_all($image_regex, $string, $out, PREG_PATTERN_ORDER);
$img_tag_array = $out[0];
echo "<pre>"; print_r($img_tag_array); echo "</pre>";

// Regex for the src value only
$image_regex_src_url = '/<img[^>]*'.
    'src=[\"|\'](.*)[\"|\']/Ui';
preg_match_all($image_regex_src_url, $string, $out, PREG_PATTERN_ORDER);
$images_url_array = $out[1];
echo "<pre>"; print_r($images_url_array); echo "</pre>";

// Fetch page function
function FetchPage($path)
{
    $file = fopen($path, "r");
    if (!$file)
    {
        exit("There was a connection error!");
    }
    $data = '';
    while (!feof($file))
    {
        // Read the data from the file / URL
        $data .= fgets($file, 1024);
    }
    return $data;
}
?>
and it returned a blank page.
Based loosely on the code you already tried (which was riddled with problems), the snippet below grabs the full contents of the URL $url, parses out the <img> src attributes, and then outputs them.
Because this particular web host uses a <base href=""/> tag to reset the base of all URLs on the page, I've added a $base variable, which you should set to the contents of the base tag.
Additionally, it looks like this particular web host has some pretty smart anti-hotlinking in place, so not all images may be visible.
But give it a whirl, let me know if it does what you need it to, and ask any questions.
<?php
$url = 'http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/';
$base = 'http://www.webtoonlive.com/';

// Pull in the external HTML contents
$contents = file_get_contents( $url );

// Use a regular expression to match all <img src="..." />
preg_match_all( '/<img[^>]*src=[\"|\'](.*)[\"|\']/Ui', $contents, $out, PREG_PATTERN_ORDER );

foreach ( $out[1] as $k => $v ) { // Step through all src values
    // Prepend the $base URL if the src is not already absolute
    if ( strpos( $v, 'http://' ) !== 0 ) $v = $base . $v;
    // Output the URL
    echo $v . '<br/>';
}
Sample output:
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/000.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/001.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/002.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/003.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/004.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/005.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/006.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/007.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/008.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/009.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/010.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/011.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/012.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/013.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/014.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/015.jpg
http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/016.jpg
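As an aside, regex-based HTML parsing is fragile. If the markup gets messier, PHP's built-in DOMDocument does the same job more reliably; here is a minimal sketch under the same assumptions ($url and $base as above):
<?php
$url  = 'http://www.webtoonlive.com/webtoon/fantasy_world_survival/ch02/';
$base = 'http://www.webtoonlive.com/';

$contents = file_get_contents( $url );

// Parse the HTML, suppressing warnings about imperfect markup.
$doc = new DOMDocument();
@$doc->loadHTML( $contents );

// Walk every <img> tag and print its resolved src.
foreach ( $doc->getElementsByTagName( 'img' ) as $img ) {
    $src = $img->getAttribute( 'src' );
    if ( strpos( $src, 'http://' ) !== 0 ) $src = $base . $src;
    echo $src . '<br/>';
}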
I'm looking to create a PHP script where a user provides a link to a webpage, and it gets the contents of that webpage and, based on what it finds, parses them.
For example, if a user provides a YouTube link:
http://www.youtube.com/watch?v=xxxxxxxxxxx
Then, it will grab the basic information about that video (thumbnail, embed code?)
Or they might provide a vimeo link:
http://www.vimeo.com/xxxxxx
Or even if they were to provide any link, without a video attached, such as:
http://www.google.com/
And it could grab just the page title or some meta content.
I'm thinking I'd have to use file_get_contents, but I'm not exactly sure how to use it in this context.
I'm not looking for someone to write the entire code, but perhaps provide me with some tools so that I can accomplish this.
You can use either the cURL or the HTTP library. You send an HTTP request, and can use the library to get the information from the HTTP response.
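For example, a minimal cURL sketch that fetches a page and pulls out its <title> (the URL is only a placeholder):
<?php
// Fetch a page with cURL and extract its <title>.
$ch = curl_init( 'http://www.example.com/' );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true ); // return the body instead of printing it
curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true ); // follow redirects
$body = curl_exec( $ch );
curl_close( $ch );

if ( $body !== false && preg_match( '/<title>(.*?)<\/title>/is', $body, $m ) ) {
    echo trim( $m[1] ); // the page title, e.g. "Google" for www.google.com
}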
I know this question is quite old, but I'll answer just in case someone hits it looking for the same thing.
Use oEmbed (http://oembed.com/) for YouTube, Vimeo, WordPress, SlideShare, Hulu, Flickr and many other services. If the service is not on the list, or you want to be more precise, you can use this:
http://simplehtmldom.sourceforge.net/
It's a sort of jQuery for PHP, meaning you can use HTML selectors to get portions of the code (e.g. all the images, the contents of a div, the text-only contents of a node, etc.).
You could do something like this (could be done more elegantly but this is just an example):
require_once("simple_html_dom.php");

function getContent ($item, $contentLength)
{
    $raw = "";
    $content = "";
    $html = null;
    $images = "";
    if (isset($item->content) && $item->content != "")
    {
        $raw = $item->content;
        $html = str_get_html($raw);
        $content = str_replace("\n", "<BR /><BR />\n\n", trim($html->plaintext));
        try
        {
            foreach ($html->find('img') as $image) {
                if ($image->width != "1")
                {
                    // Don't include images smaller than 100px in height
                    $include = false;
                    $height = $image->height;
                    if ($height != "" && $height >= 100)
                    {
                        $include = true;
                    }
                    /*else
                    {
                        list($width, $height, $type, $attr) = getimagesize($image->src);
                        if ($height != "" && $height >= 100)
                            $include = true;
                    }*/
                    if ($include == true)
                    {
                        $images = $images . '<div class="theImage"><img src="'.$image->src.'" alt="'.$image->alt.'" class="postImage" border="0" /></div>';
                    }
                }
            }
        }
        catch (Exception $e) {
            // Do nothing
        }
        $images = '<div id="images">'.$images.'</div>';
    }
    else
    {
        $raw = $item->summary;
        $content = str_get_html($raw)->plaintext;
    }
    return (substr($content, 0, $contentLength) . (strlen($content) > $contentLength ? "..." : "") . $images);
}
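For example, assuming $item is a feed item object exposing content and summary properties (the shape this function implies; the 300 is an arbitrary cut-off):
// Print the first 300 characters of the item, plus any large images found in it.
echo getContent( $item, 300 );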
file_get_contents() would work in this case, assuming that you have allow_url_fopen set to true in your php.ini. What you would do is something like:
$pageContent = @file_get_contents($url);
if ($pageContent) {
    preg_match_all('#<embed.*</embed>#', $pageContent, $matches);
    $embedStrings = $matches[0];
}
That said, file_get_contents() won't give you much in the way of error handling, other than receiving the content on success or false on failure. If you would like richer control over the request and access to the HTTP response codes, use the curl functions and, in particular, curl_getinfo() to look at the response codes, MIME types, encoding, etc. Once you get the content, via either curl or file_get_contents(), your code for parsing it to look for the HTML of interest will be the same.
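A minimal sketch of the cURL route, checking the status code before parsing (where $url would come from the user, as in the question):
$ch = curl_init( $url );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true ); // capture the body instead of printing it
$pageContent = curl_exec( $ch );

// Inspect the response metadata before trusting the content.
$statusCode  = curl_getinfo( $ch, CURLINFO_HTTP_CODE );
$contentType = curl_getinfo( $ch, CURLINFO_CONTENT_TYPE );
curl_close( $ch );

if ( $pageContent !== false && $statusCode == 200 ) {
    // Same parsing as with file_get_contents() from here on.
    preg_match_all( '#<embed.*</embed>#', $pageContent, $matches );
    $embedStrings = $matches[0];
}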
Maybe Thumbshots or Snap already have some of the functionality you want?
I know that's not exactly what you are looking for, but at least for the embedded stuff it might be handy. Also, txwikinger already answered your other question, but maybe that helps you anyway.