PHP: Get metadata of a remote .mp3 file

I am looking for a function that gets the metadata of a .mp3 file from a URL (NOT local .mp3 file on my server).
Also, I don't want to install the ID3 extension (http://php.net/manual/en/id3.installation.php) or anything similar on my server.
I am looking for a standalone function.
Right now I am using this function:
<?php
function getfileinfo($remoteFile)
{
    $url = $remoteFile;
    $uuid = uniqid("designaeon_", true);
    $file = "../temp/" . $uuid . ".mp3";
    $size = 0;
    $ch = curl_init($remoteFile);
    //==============================Get Size==========================//
    $contentLength = 'unknown';
    $ch1 = curl_init($remoteFile);
    curl_setopt($ch1, CURLOPT_NOBODY, true);
    curl_setopt($ch1, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch1, CURLOPT_HEADER, true);
    curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, true); // not necessary unless the file redirects (like the PHP example we're using here)
    $data = curl_exec($ch1);
    curl_close($ch1);
    if (preg_match('/Content-Length: (\d+)/i', $data, $matches)) { // case-insensitive, in case the server lowercases headers
        $contentLength = (int)$matches[1];
        $size = $contentLength;
    }
    //==============================Get Size==========================//
    if (!$fp = fopen($file, "wb")) {
        echo 'Error opening temp file for binary writing';
        return false;
    } else if (!$urlp = fopen($url, "r")) {
        echo 'Error opening URL for reading';
        return false;
    }
    try {
        $to_get = 65536;    // 64 KB
        $chunk_size = 4096; // Haven't bothered to tune this, maybe other values would work better??
        $got = 0;
        $data = null;
        // Grab the first 64 KB of the file
        while (!feof($urlp) && $got < $to_get) {
            $data .= fgets($urlp, $chunk_size);
            $got += $chunk_size;
        }
        fwrite($fp, $data);
        // Grab the last 64 KB of the file, if we know how big it is
        if ($size > 0) {
            curl_setopt($ch, CURLOPT_FILE, $fp);
            curl_setopt($ch, CURLOPT_HEADER, 0);
            curl_setopt($ch, CURLOPT_RESUME_FROM, $size - $to_get);
            curl_exec($ch);
        }
        // Now $fp should be the first and last 64 KB of the file
        curl_close($ch);
        fclose($fp);
        fclose($urlp);
    } catch (Exception $e) {
        fclose($fp);
        fclose($urlp);
        echo 'Error transferring file using fopen and cURL!';
        return false;
    }
    $getID3 = new getID3;
    $ThisFileInfo = $getID3->analyze($file);
    getid3_lib::CopyTagsToComments($ThisFileInfo);
    unlink($file);
    return $ThisFileInfo;
}
?>
This function downloads 64 KB from the URL of an .mp3 file, returns the metadata array by using the getID3 library (which works on local .mp3 files only) and then deletes the 64 KB it downloaded.
Problem with this function is that it is way too slow by its nature (it downloads 64 KB per .mp3; imagine 1000 mp3 files).
To make my question clear: I need a fast standalone function that reads the metadata of a remote .mp3 file by URL.

This function downloads 64 KB from the URL of an .mp3 file, returns the metadata array by using the getID3 library (which works on local .mp3 files only) and then deletes the 64 KB it downloaded. Problem with this function is that it is way too slow by its nature (it downloads 64 KB per .mp3; imagine 1000 mp3 files).
Yeah, well what do you propose? How do you expect to get data if you don't get data? There is no way to have a generic remote HTTP server send you that ID3 data. Really, there is no magic. Think about it.
What you're doing now is already pretty solid, except that it doesn't handle all versions of ID3 and won't work for files with more than 64KB of ID3 tags. What I would do to improve it is to use multi-cURL.
There are several PHP classes available that make this easier:
https://github.com/jmathai/php-multi-curl
$mc = EpiCurl::getInstance();
$results[] = $mc->addUrl(/* Your stream URL here */); // Run this in a loop, 10 at a time or so
foreach ($results as $result) {
    // Do something with the data.
}
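If you'd rather avoid an extra class, here is a rough sketch of the same idea with PHP's built-in curl_multi functions, combined with an HTTP Range request so each transfer stops after the first 64 KB. The URLs are placeholders, and it assumes the servers honor Range requests:
<?php
// A sketch: fetch the first 64 KB of several files in parallel.
$urls = array(
    'http://example.com/a.mp3', // placeholder URLs
    'http://example.com/b.mp3',
);
$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_RANGE, '0-65535'); // only the first 64 KB
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}
// Run all transfers in parallel
$running = null;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);
foreach ($handles as $url => $ch) {
    $data = curl_multi_getcontent($ch); // first 64 KB of this file
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
    // Write $data to a temp file and pass it to $getID3->analyze(), as in the question
}
curl_multi_close($mh);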

Related

Upload large files to Dropbox via HTTP API

I am currently implementing an upload mechanism for files on my webserver into my Dropbox app directory.
As stated in the API docs, there is the /upload endpoint (https://www.dropbox.com/developers/documentation/http/documentation#files-upload) which accepts files up to 150MB in size. However, I'm dealing with images and videos with a potential size of up to 2GB.
Therefore I need to use the upload_session endpoints. There is an endpoint to start the session (https://www.dropbox.com/developers/documentation/http/documentation#files-upload_session-start), to append data and to finish the session.
What is currently unclear to me is how exactly to use these endpoints. Do I have to split my file on my server into 150MB chunks (how would I do that with a video file?) and then upload the first chunk with /start, the next chunks with /append and the last one with /finish? Or can I just specify the file and the API somehow (??) does the splitting for me? Obviously not, but I somehow can't get my head around how I should calculate, split and store the chunks on my webserver without losing the session in between...
Any advice or further leading links are greatly appreciated. Thank you!
As Greg mentioned in the comments, you decide how to manage the "chunks" of the files. In addition to his .NET example, Dropbox has a good upload session implementation in the JavaScript upload example of the Dropbox API v2 JavaScript SDK.
At a high-level, you're splitting up the file into smaller sizes (aka "chunks") and passing those to the upload_session mechanism in a specific order. The upload mechanism has a few parts that need to be used in the following order:
Call /files/upload_session/start. Use the resulting session_id as a parameter in the following methods so Dropbox knows which session you're interacting with.
Incrementally pass each "chunk" of the file to /files/upload_session/append_v2. A couple things to be aware of:
The first call will return a cursor, which is used to iterate over the file's chunks in a specific order. It gets passed as a parameter in each consecutive call to this method (with the cursor being updated on every response).
The final call must include the property "close": true, which closes the session so it can then be finished and committed.
Pass the final cursor (and commit info) to /files/upload_session/finish. If you see the new file metadata in the response, then you did it!!
If you're uploading many files instead of large ones, then the /files/upload_session/finish_batch and /files/upload_session/finish_batch/check are the way to go.
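Not part of the original answer, but to make the order concrete, here is a minimal sketch of the three calls for a single file. The token, paths and chunk size are illustrative, and error handling is omitted:
<?php
// Sketch of start -> append_v2 -> finish for one file (illustrative values).
$token = '<ACCESS TOKEN>';
$path  = '/local/big_video.mp4';
$chunk = 8 * 1024 * 1024; // 8 MB per call; anything under 150 MB works

// Helper: POST one chunk to a content endpoint with a Dropbox-API-Arg header
function dbx_call($url, $token, $args, $body) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array(
        'Authorization: Bearer ' . $token,
        'Content-Type: application/octet-stream',
        'Dropbox-API-Arg: ' . json_encode($args),
    ));
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $resp = curl_exec($ch);
    curl_close($ch);
    return json_decode($resp, true);
}

$fp = fopen($path, 'rb');

// 1. start: send the first chunk and receive a session_id
$data   = fread($fp, $chunk);
$start  = dbx_call('https://content.dropboxapi.com/2/files/upload_session/start',
                   $token, array('close' => false), $data);
$cursor = array('session_id' => $start['session_id'], 'offset' => strlen($data));

// 2. append_v2: send the remaining chunks, updating the cursor offset each time
while (!feof($fp)) {
    $data = fread($fp, $chunk);
    if ($data === '' || $data === false) break;
    dbx_call('https://content.dropboxapi.com/2/files/upload_session/append_v2',
             $token, array('cursor' => $cursor, 'close' => false), $data);
    $cursor['offset'] += strlen($data);
}

// 3. finish: pass the final cursor and the commit info
dbx_call('https://content.dropboxapi.com/2/files/upload_session/finish',
         $token, array('cursor' => $cursor,
                       'commit' => array('path' => '/uploads/big_video.mp4')), '');
fclose($fp);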
I know this is an old post, but here is a fully functional solution for your problem. Maybe someone else finds it useful. :)
<?php
$backup_folder = glob('/var/www/test_folder/*.{sql,gz,rar,zip}', GLOB_BRACE); // Accepted file types (sql,gz,rar,zip)
$token = '<ACCESS TOKEN>'; // Dropbox Access Token;
$append_url = 'https://content.dropboxapi.com/2/files/upload_session/append_v2';
$start_url = 'https://content.dropboxapi.com/2/files/upload_session/start';
$finish_url = 'https://content.dropboxapi.com/2/files/upload_session/finish';
if (!empty($backup_folder)) {
    foreach ($backup_folder as $single_folder_file) {
        $file_name = basename($single_folder_file); // File name
        $destination_folder = 'destination_folder'; // Dropbox destination folder
        $info_array = array();
        $info_array["close"] = false;
        $headers = array(
            'Authorization: Bearer ' . $token,
            'Content-Type: application/octet-stream',
            'Dropbox-API-Arg: ' . json_encode($info_array)
        );
        $chunk_size = 50000000; // 50 MB
        $fp = fopen($single_folder_file, 'rb');
        $fileSize = filesize($single_folder_file); // File size
        $tosend = $fileSize;
        $first = $tosend > $chunk_size ? $chunk_size : $tosend;

        // Start the session with the first chunk
        $ch = curl_init($start_url);
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, fread($fp, $first));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        $tosend -= $first;

        // Parse the session id from the JSON response
        $resp = json_decode($response, true);
        $session = $resp['session_id'];
        $position = $first;
        $info_array["cursor"] = array();
        $info_array["cursor"]["session_id"] = $session;

        // Append the middle chunks, updating the cursor offset each time
        while ($tosend > $chunk_size) {
            $info_array["cursor"]["offset"] = $position;
            $headers[2] = 'Dropbox-API-Arg: ' . json_encode($info_array);
            curl_setopt($ch, CURLOPT_URL, $append_url);
            curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
            curl_setopt($ch, CURLOPT_POSTFIELDS, fread($fp, $chunk_size));
            curl_exec($ch);
            $tosend -= $chunk_size;
            $position += $chunk_size;
        }

        // Finish the session with the last chunk and the commit info
        unset($info_array["close"]);
        $info_array["cursor"]["offset"] = $position;
        $info_array["commit"] = array();
        $info_array["commit"]["path"] = '/' . $destination_folder . '/' . $file_name;
        $info_array["commit"]["mode"] = array();
        $info_array["commit"]["mode"][".tag"] = "overwrite";
        $info_array["commit"]["autorename"] = true;
        $info_array["commit"]["mute"] = false;
        $info_array["commit"]["strict_conflict"] = false;
        $headers[2] = 'Dropbox-API-Arg: ' . json_encode($info_array);
        curl_setopt($ch, CURLOPT_URL, $finish_url);
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $tosend > 0 ? fread($fp, $tosend) : null);
        curl_exec($ch);
        curl_close($ch);
        fclose($fp);
        unlink($single_folder_file); // Remove files from server folder
    }
}

PHP Get metadata of remote .mp3 file (from URL)

I am trying to get the song name / artist name / song length / bitrate etc. from a remote .mp3 file such as http://shiro-desu.com/scr/11.mp3.
I have tried the getID3 script, but from what I understand it doesn't work for remote files, as I got this error: "Remote files are not supported - please copy the file locally first"
Also, this code:
<?php
$tag = id3_get_tag( "http://shiro-desu.com/scr/11.mp3" );
print_r($tag);
?>
did not work either.
"Fatal error: Call to undefined function id3_get_tag() in /home4/shiro/public_html/scr/index.php on line 2"
The error you get (undefined function) means the ID3 extension is not enabled in your PHP configuration.
If you don't have the ID3 extension, see the installation page (http://php.net/manual/en/id3.installation.php) for setup info.
Firstly, I didn't create this; I've just made it easy to understand with a full example.
You can read more about it here, preserved thanks to archive.org:
https://web.archive.org/web/20160106095540/http://designaeon.com/2012/07/read-mp3-tags-without-downloading-it/
To begin, download this library from here: http://getid3.sourceforge.net/
When you open the zip folder, you'll see 'getid3'. Save that folder into your working folder.
Next, create a folder called "temp" in the working folder that the following script will run from.
Basically, what it does is download the first 64 KB of the file (and, when the total size is known, the last 64 KB as well), and then read the metadata from that partial file.
I enjoy a simple example. I hope this helps.
<?php
require_once("getid3/getid3.php");
$url_media = "http://example.com/myfile.mp3";
$a = getfileinfo($url_media);
echo "<pre>";
echo $a['tags']['id3v2']['album'][0] . "\n";
echo $a['tags']['id3v2']['artist'][0] . "\n";
echo $a['tags']['id3v2']['title'][0] . "\n";
echo $a['tags']['id3v2']['year'][0] . "\n";
echo "\n-----------------\n";
//print_r($a['tags']['id3v2']['album']);
echo "-----------------\n";
//print_r($a);
echo "</pre>";
function getfileinfo($remoteFile)
{
    $url = $remoteFile;
    $uuid = uniqid("designaeon_", true);
    $file = "temp/" . $uuid . ".mp3";
    $size = 0;
    $ch = curl_init($remoteFile);
    //==============================Get Size==========================//
    $contentLength = 'unknown';
    $ch1 = curl_init($remoteFile);
    curl_setopt($ch1, CURLOPT_NOBODY, true);
    curl_setopt($ch1, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch1, CURLOPT_HEADER, true);
    curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, true); // not necessary unless the file redirects (like the PHP example we're using here)
    $data = curl_exec($ch1);
    curl_close($ch1);
    if (preg_match('/Content-Length: (\d+)/i', $data, $matches)) { // case-insensitive, in case the server lowercases headers
        $contentLength = (int)$matches[1];
        $size = $contentLength;
    }
    //==============================Get Size==========================//
    if (!$fp = fopen($file, "wb")) {
        echo 'Error opening temp file for binary writing';
        return false;
    } else if (!$urlp = fopen($url, "r")) {
        echo 'Error opening URL for reading';
        return false;
    }
    try {
        $to_get = 65536;    // 64 KB
        $chunk_size = 4096; // Haven't bothered to tune this, maybe other values would work better??
        $got = 0;
        $data = null;
        // Grab the first 64 KB of the file
        while (!feof($urlp) && $got < $to_get) {
            $data .= fgets($urlp, $chunk_size);
            $got += $chunk_size;
        }
        fwrite($fp, $data);
        // Grab the last 64 KB of the file, if we know how big it is
        if ($size > 0) {
            curl_setopt($ch, CURLOPT_FILE, $fp);
            curl_setopt($ch, CURLOPT_HEADER, 0);
            curl_setopt($ch, CURLOPT_RESUME_FROM, $size - $to_get);
            curl_exec($ch);
        }
        // Now $fp should be the first and last 64 KB of the file
        curl_close($ch);
        fclose($fp);
        fclose($urlp);
    } catch (Exception $e) {
        fclose($fp);
        fclose($urlp);
        echo 'Error transferring file using fopen and cURL!';
        return false;
    }
    $getID3 = new getID3;
    $ThisFileInfo = $getID3->analyze($file);
    getid3_lib::CopyTagsToComments($ThisFileInfo);
    unlink($file);
    return $ThisFileInfo;
}
?>

Download file from google drive api to my server using php

1 - I have configured the Google Picker and it is working fine; I select the file from the picker and get the file ID.
2 - After the refresh-token process and the rest, I get the file metadata and the file export link
$downloadExpLink = $file->getExportLinks();
$downloadUrl = $downloadExpLink['application/vnd.openxmlformats-officedocument.wordprocessingml.document'];
3 - After that I use this
if ($downloadUrl) {
    $request = new Google_HttpRequest($downloadUrl, 'GET', null, null);
    $httpRequest = Google_Client::$io->authenticatedRequest($request);
    if ($httpRequest->getResponseHttpCode() == 200) {
        $content = $httpRequest->getResponseBody();
        print_r($content);
    } else {
        // An error occurred.
        return null;
    }
}
and get this response
[responseBody:protected] => PK... (raw binary .docx/zip bytes, mangled here)
4 - I use some cURL functions to get the file from Google Drive and save it to my server. A file is created in the server directory, but it is truncated/corrupted. I use this code:
$downloadExpLink = $file->getExportLinks();
$downloadUrl = $downloadExpLink['application/vnd.openxmlformats-officedocument.wordprocessingml.document'];
//$downloadUrl value is
/*https://docs.google.com/feeds/download/documents/export/Export?id=1CEt1ya5kKLtgK************IJjDEY5BdfaGI&exportFormat=docx*/
When I put this URL into a browser it downloads the file successfully, but when I fetch it with cURL or any PHP code and try to save it on the server, the saved file is corrupted:
$ch = curl_init();
$source = $downloadUrl;
curl_setopt($ch, CURLOPT_URL, $source);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec ($ch);
curl_close ($ch);
$destination = "test/afile5.docx";
$file = fopen($destination, "w+");
fputs($file, $data);
fclose($file);
It results in a corrupted file stored on the server, yet when I use this same code to get any file other than the Google Drive export, it downloads successfully.
Can anyone please help with how to download the file from $downloadUrl to my server using PHP?
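One thing worth checking (a sketch, not from the original thread): the browser session supplies your Google credentials and follows redirects automatically, while a bare cURL call does not, so the saved "file" is often an HTML error page. Passing the OAuth token and enabling redirects may fix it; $downloadUrl and $accessToken are assumed to already exist:
$ch = curl_init($downloadUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // export links often redirect
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Authorization: Bearer ' . $accessToken,    // authenticate like the browser session does
));
$data = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($status == 200 && $data !== false) {
    file_put_contents("test/afile5.docx", $data);
} else {
    // A non-200 status usually means an auth or redirect problem, not a real .docx
}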

Safe image download from PHP

I want to allow my users to upload a file by providing a URL to the image.
Pretty much like imgur, you enter http://something.com/image.png and the script downloads the file, then keeps it on the server and publishes it.
I tried using file_get_contents() and getimagesize(). But I'm thinking there would be problems:
how can I protect the script from 100 users supplying 100 URLs to large images?
how can I determine if the download process will take or already takes too long?
This is actually interesting.
It appears that you can actually track and control the progress of a cURL transfer. See documentation on CURLOPT_NOPROGRESS, CURLOPT_PROGRESSFUNCTION and CURLOPT_WRITEFUNCTION
I found this example and changed it to:
<?php
file_put_contents('progress.txt', '');
$target_file_name = 'targetfile.zip';
$target_file = fopen($target_file_name, 'w');
$ch = curl_init('http://localhost/so/testfile2.zip');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_NOPROGRESS, FALSE);
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, 'progress_callback');
curl_setopt($ch, CURLOPT_WRITEFUNCTION, 'write_callback');
curl_exec($ch);
if ($target_file) {
    fclose($target_file);
}

$_download_size = 0;

// Note: on PHP 5.5+, cURL passes the handle as the first argument, so the
// signature there would be progress_callback($ch, $download_size, ...).
function progress_callback($download_size, $downloaded_size, $upload_size, $uploaded_size) {
    global $_download_size;
    $_download_size = $download_size;
    static $previous_progress = 0;
    if ($download_size == 0) {
        $progress = 0;
    }
    else {
        $progress = round($downloaded_size * 100 / $download_size);
    }
    if ($progress > $previous_progress) {
        $previous_progress = $progress;
        $fp = fopen('progress.txt', 'a');
        fputs($fp, $progress . '% (' . $downloaded_size . '/' . $download_size . ")\n");
        fclose($fp);
    }
}

// Returning anything other than the number of bytes received aborts the transfer
function write_callback($ch, $data) {
    global $target_file_name;
    global $target_file;
    global $_download_size;
    if ($_download_size > 1000000) {
        return '';
    }
    return fwrite($target_file, $data);
}
write_callback checks whether the expected size of the download (as reported through progress_callback) is greater than a specified limit. If it is, it returns an empty string, which aborts the transfer. I tested this on 2 files of 80K and 33M, respectively, with a 1M limit. In your case, progress_callback is pointless beyond the second line, but I kept everything in there for debugging purposes.
One other way to get the size of the data is to do a HEAD request, but I don't think servers are required to send a Content-Length header.
To answer question one, you simply need to add the appropriate limits in your code. Define how many requests you want to accept in a given amount of time, track your requests in a database, and go from there. Also put a cap on file size.
For question two, you can set appropriate timeouts if you use cURL.
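For example (a sketch; the limit values are illustrative, and CURLOPT_MAXFILESIZE is only honored when the server announces a Content-Length):
$ch = curl_init('http://something.com/image.png');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);    // give up if no connection within 5 s
curl_setopt($ch, CURLOPT_TIMEOUT, 30);          // abort the whole transfer after 30 s
curl_setopt($ch, CURLOPT_MAXFILESIZE, 1000000); // only works when Content-Length is sent
$data = curl_exec($ch);
curl_close($ch);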

PHP server to server file request

I have script-1 on server A, where a user asks for a file.
I have script-2 on server B (the file repository) where I check that user can access it and return the correct file (I'm using Smart File Download http://www.zubrag.com/scripts/download.php).
I've tried cURL and file_get_contents, and I've changed the Content Header in various ways, but I still wasn't able to download the file.
This is my request:
$request = "http://mysite.com/download.php?f=test.pdf";
and it works fine.
What should I call in script-1 to force the file to be downloaded?
Some of my tries:
This works, but I don't know how to handle unauthorized or broken downloads:
header('Content-type: application/pdf');
$handle = fopen($request, "r");
if ($handle) {
    while (!feof($handle)) {
        $buffer = fgets($handle, 4096);
        echo $buffer;
    }
    fclose($handle);
}
This prints the raw PDF source (not the rendered document) straight in the browser (I think it's a header problem):
$c = curl_init();
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($c, CURLOPT_URL, $request);
$contents = curl_exec($c);
curl_close($c);
if ($contents) return $contents;
else return FALSE;
This generates a white page:
file_get_contents($request);
To force the download, add
header('Content-Disposition: attachment');
But note that it's no longer in the HTTP 1.1 spec; see the first answer of Uses of content-disposition in an HTTP response header.
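A minimal sketch combining that header with readfile, using the URL from the question (this assumes allow_url_fopen is enabled so readfile can open a remote URL):
$request = "http://mysite.com/download.php?f=test.pdf";
header('Content-Type: application/pdf');
header('Content-Disposition: attachment; filename="test.pdf"');
readfile($request); // streams the remote response straight to the client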
Without your code I don't know what you've tried, but you need to get the contents of the file via cURL and then save it to your server. Something like...
$url = 'http://website.com/file.pdf';
$path = '/tmp/file.pdf';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$contents = curl_exec($ch);
curl_close($ch);
file_put_contents($path, $contents);
If you want to download files from an FTP server, you can use PHP's File Transfer Protocol (FTP) extension. Please find the code below:
<?php
$SERVER_ADDRESS = "";
$SERVER_USERNAME = "";
$SERVER_PASSWORD = "";
$conn_id = ftp_connect($SERVER_ADDRESS);

// login with username and password
$login_result = ftp_login($conn_id, $SERVER_USERNAME, $SERVER_PASSWORD);

$server_file = "test.pdf"; // FTP server file path
$local_file = "new.pdf";   // Local server file path

##----- DOWNLOAD $SERVER_FILE AND SAVE IT TO $LOCAL_FILE --------##
if (ftp_get($conn_id, $local_file, $server_file, FTP_BINARY)) {
    echo "Successfully written to $local_file\n";
} else {
    echo "There was a problem\n";
}
ftp_close($conn_id);
?>
Download the file with cURL, then check this: http://php.net/function.readfile
It shows how to force a download.
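A sketch of that two-step approach, with illustrative paths: download to a temp file first, so unauthorized or broken downloads can be detected before any output is sent.
$request = "http://mysite.com/download.php?f=test.pdf";
$tmp = tempnam(sys_get_temp_dir(), 'dl');

// 1. Download the file with cURL
$ch = curl_init($request);
$fp = fopen($tmp, 'wb');
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_exec($ch);
$ok = curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200;
curl_close($ch);
fclose($fp);

// 2. Serve it with forced-download headers, or bail out on errors
if ($ok) {
    header('Content-Type: application/pdf');
    header('Content-Disposition: attachment; filename="test.pdf"');
    header('Content-Length: ' . filesize($tmp));
    readfile($tmp);
}
unlink($tmp);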
SOLVED
I ended up simply redirecting the request with:
header("Location: $request");
