So I'm trying to check whether a WebP version of an image exists at the URL returned by get_the_post_thumbnail_url().
This is not working how I would expect, though.
Here is the code I'm working with:
if (!file_exists($thePostThumbUrl))
$thePostThumbUrl = str_replace("_result.webp", "." . $ext, $thePostThumbUrl);
If I echo the thumbnail URL, it shows the correct image with a .webp extension:
echo $thePostThumbUrl . '<br />';
Displays:
image url + _result.webp
The PHP version I'm working with is PHP/5.6.30.
OK, so as Akintunde suggested, the file_exists() function won't work with the URL of the image, so the code needed to be modified to use the server path instead.
This code does the trick:
$ext = pathinfo($thePostThumbUrl, PATHINFO_EXTENSION);
$thePostThumbPath = str_replace("http://localhost", "", $thePostThumbUrl);
if (!file_exists($_SERVER['DOCUMENT_ROOT'] . $thePostThumbPath)) {
$thePostThumbUrl = str_replace("_result.webp", "." . $ext, $thePostThumbUrl);
}
Thanks, Akintunde, for pointing me in the right direction :)
I wrote a function that checks whether a given image exists in webp format on the server:
function webpExists($img_src) {
    // Hostnames to strip, so we're left with a document-root-relative path.
    $env = array("YOUR_LOCAL_ENV", "YOUR_STAGING_ENV", "YOUR_PROD_ENV");
    $img_src_webp = str_replace(array(".jpeg", ".png", ".jpg"), ".webp", $img_src);
    $img_path = str_replace($env, "", $img_src_webp);
    return file_exists($_SERVER['DOCUMENT_ROOT'] . $img_path);
}
You need to use cURL in this case because it's a URL.
Example:
function checkRemoteFile($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    // Don't download the body, and treat HTTP errors (>= 400) as failure.
    curl_setopt($ch, CURLOPT_NOBODY, 1);
    curl_setopt($ch, CURLOPT_FAILONERROR, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $result = (curl_exec($ch) !== false);
    curl_close($ch);
    return $result;
}
I have a script running for a Laravel 5.4 web application that is supposed to download a large number of images (10k). I'm wondering what the best way to handle this would be. I currently grab the base64_encode() data from the remote image and write it to a local folder with file_put_contents(). This works fine, but some images can take more than 10 seconds to download and write; imagine that times ten thousand. Fair enough, these images are rather big, but I would like to see this process happen faster, and thus I am asking for advice!
My current process is like this;
I read a JSON file containing all the image links I have to download.
I convert the JSON data to an array with json_decode(), loop through all the links with a foreach() loop, and let cURL handle the rest.
All the relevant parts of the code look like this:
<?php
// Defining the paths for easy access.
$__filePath = public_path() . DIRECTORY_SEPARATOR . "importImages" . DIRECTORY_SEPARATOR . "images" . DIRECTORY_SEPARATOR . "downloadList.json";
$__imagePath = public_path() . DIRECTORY_SEPARATOR . "importImages" . DIRECTORY_SEPARATOR . "images";
// Decode the json array into an array readable by PHP.
$this->imagesToDownloadList = json_decode(file_get_contents($__filePath));
// Let's loop through the image list and try to download
// all of the images that are present within the array.
foreach ($this->imagesToDownloadList as $IAN => $imageData) {
    $__imageGetContents = $this->curl_get_contents($imageData->url);
    $__imageBase64 = ($__imageGetContents) ? base64_encode($__imageGetContents) : false;
    if (!file_put_contents($__imagePath . DIRECTORY_SEPARATOR . $imageData->filename, base64_decode($__imageBase64))) {
        return false;
    }
}
return true;
And the curl_get_contents functions looks like this:
<?php
private function curl_get_contents($url)
{
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
$data = curl_exec($ch);
curl_close($ch);
return $data;
}
I hope someone could enlighten me with possible improvements that I could apply to the way I'm currently handling this mass download.
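One likely improvement (a sketch, not the asker's code): skip the base64 round-trip entirely, since base64_encode() followed immediately by base64_decode() only burns CPU and memory, and download several images concurrently with curl_multi. The function name downloadBatch and the batch size are my own choices:

```php
<?php
// Download a list of images in parallel batches with curl_multi.
// $downloadList holds objects with ->url and ->filename, like the
// decoded JSON in the question; $batchSize caps concurrent transfers.
function downloadBatch(array $downloadList, $targetDir, $batchSize = 10)
{
    foreach (array_chunk($downloadList, $batchSize) as $batch) {
        $mh = curl_multi_init();
        $handles = array();
        foreach ($batch as $imageData) {
            $ch = curl_init($imageData->url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
            curl_multi_add_handle($mh, $ch);
            $handles[$imageData->filename] = $ch;
        }
        // Drive all transfers in this batch until they finish.
        do {
            curl_multi_exec($mh, $running);
            curl_multi_select($mh);
        } while ($running > 0);
        // Write each finished body straight to disk, no base64 involved.
        foreach ($handles as $filename => $ch) {
            file_put_contents($targetDir . DIRECTORY_SEPARATOR . $filename,
                              curl_multi_getcontent($ch));
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);
    }
}
```

Batching keeps memory bounded; tuning $batchSize against the remote server's limits is left to measurement.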
I have a PHP file index.php running on a webserver (WS) on which clients upload files.
I have another server which is powerful enough (GPUs) to process these files.
My use case is: clients upload images, which are sent via a POST request to index.php. It then has to send the file to another server (GPU), where another PHP file, say process.php, takes the image and processes it.
So far, I think I can implement the above with PHP's cURL library.
My question is mostly about how do I get the processed image back to the client?
How do I make process.php send back the processed image to index.php and get it back to the client?
This must be a routine task but I would appreciate any help in implementing this.
Here is the code for index.php. I am storing the file on the web server because I need to show a comparison (before/after) once the processing is done. I have not yet implemented process.php.
<?php
$ds = DIRECTORY_SEPARATOR;
$storeFolder = 'uploads';
if (!empty($_FILES)) {
$tempFile = $_FILES['file']['tmp_name'];
$targetPath = dirname( __FILE__ ) . $ds. $storeFolder . $ds;
$targetFile = $targetPath. $_FILES['file']['name'];
move_uploaded_file($tempFile,$targetFile);
}
function cURLcheckBasicFunctions() {
    // All four functions must exist for cURL to be usable.
    return function_exists("curl_init")
        && function_exists("curl_setopt")
        && function_exists("curl_exec")
        && function_exists("curl_close");
}
if( !cURLcheckBasicFunctions() )
{ echo "UNAVAILABLE: cURL Basic Functions"; }
// $url = "129.132.102.52/process.php";
$url = "dump_test.php";
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL,$url);
curl_setopt($ch,CURLOPT_POST, 1);
$fp = fopen($targetFile, "w");
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$reply = curl_exec($ch);
curl_close($ch);
fclose($fp);
echo $_FILES['file']['name'];
?>
Sorry for the wait.
This is the script in WS that will receive the file from the client and will send it to GPU server. Notice I changed how the file is sent through curl (it was incorrect):
<?php
$ds = DIRECTORY_SEPARATOR;
$storeFolder = 'uploads';
if (!empty($_FILES)) {
$tempFile = $_FILES['file']['tmp_name'];
$targetPath = dirname( __FILE__ ) . $ds. $storeFolder . $ds;
$targetFile = $targetPath. $_FILES['file']['name'];
move_uploaded_file($tempFile,$targetFile);
}
if(!cURLcheckBasicFunctions() )
{ echo "UNAVAILABLE: cURL Basic Functions"; }
// $url = "129.132.102.52/process.php";
$url = "dump_test.php";
$file = new CURLFile($targetFile); // use the stored copy: move_uploaded_file() has already moved the tmp file
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL, $url);
curl_setopt($ch,CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, [
'file' => $file,
]);
/**
* As you can see in the script below, the GPU will echo the processed
* file and we will capture it here.
*/
$processedImage = curl_exec($ch);
curl_close($ch);
/**
* And now you can do anything with the processed file.
* For example, let's save it into a file.
*/
file_put_contents('processed_image.jpg', $processedImage);
function cURLcheckBasicFunctions() {
    // All four functions must exist for cURL to be usable.
    return function_exists("curl_init")
        && function_exists("curl_setopt")
        && function_exists("curl_exec")
        && function_exists("curl_close");
}
And here's the script in the GPU server (this would be process.php):
<?php
$tempFile = $_FILES['file']['tmp_name'];
// Here you would process the file....
// Let's pretend you have the full path to the processed image in the $processedFilePath var.
// Now we will output the processed file contents so the WS server will receive it.
// The header isn't strictly necessary, but let's send it anyway.
header('Content-Type: image/jpeg');
echo file_get_contents($processedFilePath);
This script will work on PHP 5.5+. If you're using an older version, we would have to change the way the file is sent in the WS script.
Hope this is what you're looking for.
I am trying to get song name / artist name / song length / bitrate etc from a remote .mp3 file such as http://shiro-desu.com/scr/11.mp3 .
I have tried the getID3 script, but from what I understand it doesn't work for remote files, as I got this error: "Remote files are not supported - please copy the file locally first"
Also, this code:
<?php
$tag = id3_get_tag( "http://shiro-desu.com/scr/11.mp3" );
print_r($tag);
?>
did not work either.
"Fatal error: Call to undefined function id3_get_tag() in /home4/shiro/public_html/scr/index.php on line 2"
The error you get ("Call to undefined function id3_get_tag()") is a common one: it means the PECL id3 extension is not installed or enabled in your PHP build.
If you don't have the id3 extension, just check here for installation info.
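A quick runtime sanity check along those lines (just a sketch; the fallback suggestion is mine):

```php
<?php
// Check whether the PECL id3 extension is actually available
// before calling id3_get_tag(), and fall back otherwise.
if (!extension_loaded('id3') || !function_exists('id3_get_tag')) {
    // Extension missing: install/enable it, or use a userland
    // library such as getID3 instead.
    echo "id3 extension not available\n";
}
```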
Firstly, I didn't create this; I've just made it easy to understand with a full example.
You can read more about it here, preserved thanks to archive.org:
https://web.archive.org/web/20160106095540/http://designaeon.com/2012/07/read-mp3-tags-without-downloading-it/
To begin, download this library from here: http://getid3.sourceforge.net/
When you open the zip folder, you’ll see ‘getid3’. Save that folder in to your working folder.
Next, create a folder called “temp” in that working folder that the following script is going to be running from.
Basically, what it does is download the first (and, when the size is known, the last) 64 KB of the file, and then read the metadata from that partial copy.
I enjoy a simple example. I hope this helps.
<?php
require_once("getid3/getid3.php");
$url_media = "http://example.com/myfile.mp3";
$a = getfileinfo($url_media);
echo "<pre>";
echo $a['tags']['id3v2']['album'][0] . "\n";
echo $a['tags']['id3v2']['artist'][0] . "\n";
echo $a['tags']['id3v2']['title'][0] . "\n";
echo $a['tags']['id3v2']['year'][0] . "\n";
echo "\n-----------------\n";
//print_r($a['tags']['id3v2']['album']);
echo "-----------------\n";
//print_r($a);
echo "</pre>";
function getfileinfo($remoteFile)
{
$url = $remoteFile;
$uuid = uniqid("designaeon_", true);
$file = "temp/" . $uuid . ".mp3";
$size = 0;
$ch = curl_init($remoteFile);
//==============================Get Size==========================//
$contentLength = 'unknown';
$ch1 = curl_init($remoteFile);
curl_setopt($ch1, CURLOPT_NOBODY, true);
curl_setopt($ch1, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch1, CURLOPT_HEADER, true);
curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, true); //not necessary unless the file redirects (like the PHP example we're using here)
$data = curl_exec($ch1);
curl_close($ch1);
if (preg_match('/Content-Length: (\d+)/i', $data, $matches)) {
    $contentLength = (int)$matches[1];
    $size = $contentLength;
}
//==============================Get Size==========================//
if (!$fp = fopen($file, "wb")) {
echo 'Error opening temp file for binary writing';
return false;
} else if (!$urlp = fopen($url, "r")) {
echo 'Error opening URL for reading';
return false;
}
try {
$to_get = 65536; // 64 KB
$chunk_size = 4096; // Haven't bothered to tune this, maybe other values would work better??
$got = 0; $data = null;
// Grab the first 64 KB of the file
// Grab the first 64 KB of the file.
while (!feof($urlp) && $got < $to_get) {
    $data = $data . fgets($urlp, $chunk_size);
    $got += $chunk_size;
}
fwrite($fp, $data);
// Grab the last 64 KB of the file, if we know how big it is.
if ($size > 0) {
    curl_setopt($ch, CURLOPT_FILE, $fp);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RESUME_FROM, $size - $to_get);
    curl_exec($ch);
}
curl_close($ch);
// Now $fp holds the first and last 64 KB of the file!
fclose($fp);
fclose($urlp);
}
catch (Exception $e) {
    fclose($fp);
    fclose($urlp);
    echo 'Error transferring file using fopen and cURL!';
    return false;
}
$getID3 = new getID3;
$filename=$file;
$ThisFileInfo = $getID3->analyze($filename);
getid3_lib::CopyTagsToComments($ThisFileInfo);
unlink($file);
return $ThisFileInfo;
}
?>
I'm pretty new to PHP, and we're trying to write a plugin for WordPress. We have a server with images on it, and we'd like the plugin to have a list of images to download from that server. It then needs to go through the list and read each image from the server into the $_FILES variable, which we can then pass to WordPress's media_handle_upload() function.
I've been able to read a remote file with the following code. But I'm not sure where to go from here.
$url = 'http://www.planet-source-code.com/vb/2010Redesign/images/LangugeHomePages/PHP.png';
$img = curl_init();
curl_setopt($img, CURLOPT_URL, $url);
curl_setopt($img, CURLOPT_HEADER, 1);
curl_setopt($img, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($img, CURLOPT_BINARYTRANSFER, 1);
$file = curl_exec($img);
curl_close($img);
// HTTP headers end with a blank line, i.e. "\r\n\r\n" (not "\n\r").
$headers = array();
$file_array = explode("\r\n\r\n", $file, 2);
$header_array = explode("\r\n", $file_array[0]);
foreach ($header_array as $header_value) {
    $header_pieces = explode(':', $header_value, 2);
    if (count($header_pieces) == 2) {
        $headers[$header_pieces[0]] = trim($header_pieces[1]);
    }
}
header('Content-type: ' . $headers['Content-Type']);
header('Content-Disposition: ' . $headers['Content-Disposition']);
$imgFile = $file_array[1];
echo $imgFile;
Solution: lookup table
Create an image name list, or an image file path (link) list, in JSON, XML, or TXT format, so it acts as a lookup table. It can then be parsed easily (much like a customized RSS feed): load the JSON or XML file, get the data as an array, and process it from there.
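A minimal sketch of that lookup-table idea, assuming a hypothetical images.json file that holds a plain array of image URLs (the file name and helper function are mine):

```php
<?php
// images.json (hypothetical) would contain something like:
// ["http://example.com/a.jpg", "http://example.com/b.png"]
function loadImageList($jsonPath)
{
    $raw = @file_get_contents($jsonPath);
    $list = ($raw === false) ? null : json_decode($raw, true);
    // Fall back to an empty list if the file is missing or malformed.
    return is_array($list) ? $list : array();
}

// Process each known-good link from the lookup table:
// foreach (loadImageList('images.json') as $imageUrl) { ... }
```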
I need to check whether a URL is an image URL or not. How can I do this?
Examples :
http://www.google.com/ is not an image url.
http://www.hoax-slayer.com/images/worlds-strongest-dog.jpg is an image url.
https://stackoverflow.com/search?q=.jpg is not an image url.
http://www.google.com/profiles/c/photos/private/AIbEiAIAAABECK386sLjh92M4AEiC3ZjYXJkX3Bob3RvKigyOTEzMmFmMDI5ODQ3MzQxNWQxY2VlYjYwYmE2ZTA4YzFhNDhlMjBmMAEFQ7chSa4PMFM0qw02kilNVE1Hpw is an image url.
If you want to be absolutely sure, and your PHP installation allows remote connections, you can just use
getimagesize('url');
If it returns an array, it is an image type recognized by PHP, even if the image extension is not in the URL (as in your second link). Keep in mind that this method makes a remote connection for each request, so perhaps cache URLs that you have already probed in a database to reduce connections.
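That caching suggestion can be sketched like this (the function name and the in-memory array are my own; a database table would serve the same role):

```php
<?php
// Probe a URL (or local path) with getimagesize() at most once,
// remembering the boolean result in a cache passed by reference.
function cachedIsImage($url, array &$cache)
{
    if (!array_key_exists($url, $cache)) {
        $cache[$url] = (@getimagesize($url) !== false);
    }
    return $cache[$url];
}

$probed = array();
// Repeated checks for the same URL now cost a single remote probe:
// $isImage = cachedIsImage('http://www.example.com/image.jpg', $probed);
```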
You can send a HEAD request to the server and then check the Content-Type. That way you at least know what the server "thinks" the type is.
You can check if a url is an image by using the getimagesize function like below.
function validImage($file) {
    // getimagesize() returns false on failure, so guard before reading 'mime'.
    $size = @getimagesize($file);
    return $size !== false && strtolower(substr($size['mime'], 0, 5)) == 'image';
}
$image = validImage('http://www.example.com/image.jpg');
echo 'this image ' . ($image ? ' is' : ' is not') . ' an image file.';
I think the idea is to get the headers of the URL via cURL and check them.
After calling curl_exec() to get the page, call curl_getinfo() to get the content-type string from the HTTP header.
This link shows how to do it:
http://nadeausoftware.com/articles/2007/06/php_tip_how_get_web_page_content_type#IfyouareusingCURL
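A short sketch of that approach (isImageContentType() and getContentType() are hypothetical helper names, not from the linked article):

```php
<?php
// Decide whether a Content-Type string denotes an image.
function isImageContentType($type)
{
    return is_string($type) && strpos($type, 'image/') === 0;
}

// Fetch headers only and let cURL report the Content-Type.
function getContentType($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);          // no body download
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_exec($ch);
    $type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
    curl_close($ch);
    return $type;
}

// $probablyImage = isImageContentType(getContentType($url));
```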
You can use this:
$is = @getimagesize($link);
if (!$is) {
    $link = '';
} elseif (!in_array($is[2], array(1, 2, 3))) { // 1 = GIF, 2 = JPEG, 3 = PNG
    $link = '';
} elseif ($is['bits'] >= 8) {
    $srcs[] = $link;
}
Here is a way that requires cURL but is faster than getimagesize(), as it does not download the whole image.
Disclaimer: it checks the headers, and they are not always correct.
function is_url_image($url){
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url );
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_NOBODY, 1);
$output = curl_exec($ch);
curl_close($ch);
$headers = array();
foreach(explode("\n",$output) as $line){
$parts = explode(':', $line, 2); // limit 2: header values may themselves contain colons
if(count($parts) == 2){
$headers[trim($parts[0])] = trim($parts[1]);
}
}
return isset($headers["Content-Type"]) && strpos($headers['Content-Type'], 'image/') === 0;
}
// end(explode(...)) raises a "passed by reference" notice; pathinfo() avoids it.
$ext = strtolower(pathinfo($filename, PATHINFO_EXTENSION));
switch ($ext)
{
    case 'jpg':
        // Blah
        break;
}
Hard version (just trying):
// Suppress the warning getimagesize() raises on failure
if (@getimagesize($url) !== false)
{
    // It's an image
}