Read remote mp3 file information using PHP

I am working on a site that fetches MP3 details from a remote URL. I need to write a cron job that gets all the song information (file name, path, artist, genre, bitrate, playing time, etc.) and puts it in a database table.
I tried the getID3 package, but it is very slow when fetching more than one URL at a time, and I hit the maximum execution time error.
Example:
require_once('getid/getid3/getid3.php');

$urls = array(
    'http://stackoverflow.com/test1.mp3',
    'http://stackoverflow.com/test2.mp3',
    'http://stackoverflow.com/test3.mp3'
);
foreach ($urls as $ur) {
    $mp3_det = getMp3Info($ur);
    print_r($mp3_det);
}
function getMp3Info($url) {
    if (!$url) {
        return false;
    }

    // Download only the first ~35 KB of the file: the ID3/MPEG headers
    // live at the beginning, so the full file is not needed.
    $filename = tempnam('/tmp', 'getid3');
    if (!file_put_contents($filename, file_get_contents($url, false, null, 0, 35000))) {
        return false;
    }

    $getID3 = new getID3;
    $ThisFileInfo = $getID3->analyze($filename);
    unlink($filename);

    $bitratez = isset($ThisFileInfo['audio']['bitrate']) ? $ThisFileInfo['audio']['bitrate'] : '';

    // The real file size comes from the HTTP headers, since only part of the file was downloaded.
    $headers = get_headers($url, 1);
    if (!array_key_exists('Content-Length', $headers)) {
        return false;
    }
    $filesize = round($headers['Content-Length'] / 1000); // size in KB
    $contentLengthKBITS = $filesize * 8;                  // size in kilobits

    if ($bitratez) {
        $bitrate = round($bitratez / 1000);               // kbit/s
        $seconds = $contentLengthKBITS / $bitrate;        // estimated duration
        $playtime_mins = floor($seconds / 60);
        $playtime_secs = $seconds % 60;
        $playtime_string = $playtime_mins . ':' . str_pad($playtime_secs, 2, '0', STR_PAD_LEFT);
    } else {
        $bitrate = 0;
        $playtime_string = '0:00';
    }

    $ret = array();
    $ret['playtime'] = $playtime_string;
    $ret['filesize'] = $filesize;
    $ret['bitrate']  = $bitrate;
    return $ret;
}

You may be able to improve the execution time by using a socket connection and reading the file in chunks, continuously trying to analyze what you have so far.
Since ID3 data is stored at the beginning of the MP3, there is no point in downloading the entire thing. The biggest problem I see right now is that the analyze function only takes a filename, not binary data (which is what you would have). So you would either have to update the code, or write a function similar to analyze that works with your binary data.
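As a rough sketch of that idea, here is one way to pull down only the start of the remote file while keeping the temp-file workaround, since analyze() wants a path. The 64 KB cutoff is an arbitrary assumption, and getid3.php is assumed to be already included:
function analyzeRemoteMp3Head($url, $maxBytes = 65536) {
    // Stream only the first $maxBytes of the remote MP3 into a temp file.
    $remote = fopen($url, 'rb');
    if ($remote === false) {
        return false;
    }
    $tmp = tempnam(sys_get_temp_dir(), 'id3');
    $out = fopen($tmp, 'wb');
    $read = 0;
    while (!feof($remote) && $read < $maxBytes) {
        $chunk = fread($remote, 8192);
        if ($chunk === false || $chunk === '') {
            break;
        }
        fwrite($out, $chunk);
        $read += strlen($chunk);
    }
    fclose($remote);
    fclose($out);

    // analyze() only accepts a path, so the partial download is analyzed from disk.
    $getID3 = new getID3;
    $info = $getID3->analyze($tmp);
    unlink($tmp);
    return $info;
}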

MP3 files carry metadata in much the same way as some other binary file formats: in ID tags. There are several tag versions, such as ID3v1 and ID3v2. There is an easy way to extract the ID3 tag information supplied with an MP3 file in PHP.
You can download a PHP library such as getID3 from SourceForge. With it you can extract the artist name, genre, duration, bitrate, file size and other information from an MP3 file. Later tag versions (ID3v2) can also carry additional information such as album art.
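A minimal usage sketch with getID3 on a local file; the array keys and the CopyTagsToComments() helper follow the library's usual demo code, but the exact keys vary with the file and tag version, so treat them as illustrative:
require_once('getid3/getid3.php');

$getID3 = new getID3;
$info = $getID3->analyze('song.mp3');

// Merge the different tag versions (ID3v1, ID3v2, ...) into $info['comments'].
getid3_lib::CopyTagsToComments($info);

echo $info['playtime_string'], "\n";   // e.g. "3:45"
echo $info['audio']['bitrate'], "\n";  // bitrate in bits per second
echo isset($info['comments']['artist'][0]) ? $info['comments']['artist'][0] : 'unknown', "\n";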

Related

Style & layout is not copied while creating new pptx from pptx in PHPPresentation

I want to split the slides of one pptx file into separate pptx files, each containing one slide. The content/text is copied, but the layout & styling is not. Here is the code.
Can anyone please help?
<?php
use PhpOffice\PhpPresentation\PhpPresentation;
use PhpOffice\PhpPresentation\IOFactory;
use PhpOffice\PhpPresentation\Style\Color;
use PhpOffice\PhpPresentation\Style\Alignment;
use PhpOffice\PhpPresentation\Slide\SlideLayout;

$objReader = \PhpOffice\PhpPresentation\IOFactory::createReader('PowerPoint2007');
$objPHPPowerPoint = $objReader->load('a.pptx');
$totalSlides = $objPHPPowerPoint->getSlideCount();
$oMasterSlide = $objPHPPowerPoint->getAllMasterSlides()[0];
$documentProperties = $objPHPPowerPoint->getDocumentProperties();

for ($count = 0; $count < $totalSlides; $count++) {
    $objPHPPresentation = new PhpPresentation();
    $slide = $objPHPPowerPoint->getSlide($count);
    $background = $slide->getBackground();
    $newSlide = $objPHPPresentation->addSlide($slide);
    $newSlide->setBackground($background);
    $objPHPPresentation->setAllMasterSlides($oMasterSlide);
    $objPHPPresentation->removeSlideByIndex(0);
    $oWriterPPTX = \PhpOffice\PhpPresentation\IOFactory::createWriter($objPHPPresentation, 'PowerPoint2007');
    $oWriterPPTX->save($count.'.pptx');
}
I don't think it's an issue with your code - more an issue with the underlying libraries - as mentioned here: PhpPresentation imagecreatefromstring(): Data is not in a recognized format - PHP7.2
I ran a test to see if it was something I could replicate, and I was able to. The key difference in my test was that one presentation had a simple background, and in the other it was a gradient. The slide with the gradient background caused problems, while the one with the simple background copied over fine. With the more complex background I got errors like:
PHP Warning: imagecreatefromstring(): Data is not in a recognized format
My code is even less complicated than yours, I just clone the original slideshow and remove all except a single slide before saving it:
for ($count = 0; $count < $totalSlides; $count++) {
    $copyVersion = clone $objPHPPowerPoint;
    foreach ($copyVersion->getAllSlides() as $index => $slide) {
        if ($index !== $count) {
            $copyVersion->removeSlideByIndex($index);
        }
    }
    $oWriterPPTX = \PhpOffice\PhpPresentation\IOFactory::createWriter($copyVersion, 'PowerPoint2007');
    $oWriterPPTX->save($count.'.pptx');
}
Sorry if this doesn't exactly solve your problem, but hopefully it can help identify why it's happening. The other answer I linked to has more information about finding unsupported images types in your slides.
You can try using Aspose.Slides Cloud SDK for PHP to split a presentation into separate slides and save them to many formats. You can evaluate this REST-based API with 150 free API calls per month for API learning and presentation processing. The following code example shows you how to split a presentation and save the slides in PPTX format using Aspose.Slides Cloud:
use Aspose\Slides\Cloud\Sdk\Api\Configuration;
use Aspose\Slides\Cloud\Sdk\Api\SlidesApi;
use Aspose\Slides\Cloud\Sdk\Model;

$configuration = new Configuration();
$configuration->setAppSid("my_client_id");
$configuration->setAppKey("my_client_key");

$slidesApi = new SlidesApi(null, $configuration);

$filePath = "example.pptx";

// Upload the file to the default storage.
$fileStream = fopen($filePath, 'r');
$slidesApi->uploadFile($filePath, $fileStream);

// Split the file and save the slides in PPTX format in the same folder.
$response = $slidesApi->split($filePath, null, Model\SlideExportFormat::PPTX);

// Download files of the slides.
foreach ($response->getSlides() as $slide) {
    $slideFilePath = pathinfo($slide->getHref())["basename"];
    $slideFile = $slidesApi->downloadFile($slideFilePath);
    echo $slideFile->getRealPath(), "\r\n";
}
Sometimes it is necessary to split a presentation without using any code. In this case, you can use Online PowerPoint Splitter.
I work as a Support Developer at Aspose.

php scraper scripts need to be changed

This script harvests links from a seed URL and only prints them to the command shell (or browser) rather than saving them anywhere. I want the script to store its output in a .txt file in the folder where the script resides. I need suggestions on an efficient way to do that. Please give me hints.
<?php
# Initialization
include("LIB_http.php");               // http library
include("LIB_parse.php");              // parse library
include("LIB_resolve_addresses.php");  // address resolution library
include("LIB_exclusion_list.php");     // list of excluded keywords
include("LIB_simple_spider.php");      // spider routines used by this app.

set_time_limit(3600);                  // Don't let PHP timeout

$SEED_URL        = "http://www.schrenk.com"; // First URL spider downloads
$MAX_PENETRATION = 1;                        // Set spider penetration depth
$FETCH_DELAY     = 1;                        // Wait one second between page fetches
$ALLOW_OFFSITE   = false;                    // Don't allow spider to roam from the SEED_URL's domain

$spider_array = array();

# Get links from $SEED_URL
echo "Harvesting Seed URL \n";
$temp_link_array = harvest_links($SEED_URL);
$spider_array = archive_links($spider_array, 0, $temp_link_array);

# Spider links in remaining penetration levels
for ($penetration_level = 1; $penetration_level <= $MAX_PENETRATION; $penetration_level++)
{
    $previous_level = $penetration_level - 1;
    for ($xx = 0; $xx < count($spider_array[$previous_level]); $xx++)
    {
        unset($temp_link_array);
        $temp_link_array = harvest_links($spider_array[$previous_level][$xx]);
        echo "Level=$penetration_level, xx=$xx of ".count($spider_array[$previous_level])." <br>\n";
        $spider_array = archive_links($spider_array, $penetration_level, $temp_link_array);
    }
}
?>
Use the file_put_contents PHP function with the FILE_APPEND flag enabled.
$file = 'file_name.txt';
file_put_contents($file, $text_to_write_to_file, FILE_APPEND);
Ref: http://www.php.net/manual/en/function.file-put-contents.php
I would recommend first creating a variable to store the output in the script. So at the top (under the $spider_array = array() line) add:
$output = "";
Then change all the lines that use echo to append to $output with .= instead.
This will store all the content that was being sent to the screen or the browser in the $output variable.
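For example, applied to the first echo line of the script above:
// Before:
echo "Harvesting Seed URL \n";

// After:
$output .= "Harvesting Seed URL \n";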
Now at the bottom of the script, after everything has been scraped and the spider is finished, save the output to a file:
$filename = date('Y_m_d_H_i_s') . '.txt';
$filepath = dirname(__FILE__);
file_put_contents($filepath . '/' . $filename, $output);
This should save the output in a file within the same folder as the script, with a date/time file name. (This code was written using examples from php.net; the exact implementation may need a bit of debugging, but it should get you close enough.)

Crunch lots of files to generate stats file

I have a bunch of files I need to crunch, and I'm worried about scalability and speed.
The filename and file data (only the first line) are stored in an array in RAM, to create some statistics files later in the script.
The files must remain files and can't be put into a database.
The filenames are formatted in the following fashion:
Y-M-D-title.ext (where Y is the year, M the month and D the day)
I'm currently using glob to list all the files and create my array.
Here is a sample of the code creating the array for "year" or "month" (it's used in a function with only one parameter -> $period):
[...]
function create_data_info($period = NULL) {
    $data = array();
    $files = glob(ROOT_DIR.'/'.'*.ext');
    $size = sizeOf($files);
    $existing_title = array(); // Used so we can handle having the same title twice at different dates.

    if (isSet($period)) {
        if ("year" === $period) {
            for ($i = 0; $i < $size; $i++) {
                $info = extract_info($files[$i], $existing_title);
                // Create the data array with all the data ordered by year/month/day
                $data[(int)$info[5]][] = $info;
                unset($info);
            }
        } elseif ("month" === $period) {
            for ($i = 0; $i < $size; $i++) {
                $info = extract_info($files[$i], $existing_title);
                $key = $info[5].$info[6];
                // Create the data array with all the data ordered by year/month/day
                $data[(int)$key][] = $info;
                unset($info);
            }
        }
    }
    [...]
}

function extract_info($file, &$existing) {
    $full_path_file = $file;
    $file = basename($file);
    $info_file = explode("-", $file, 4);
    $filetitle = explode(".", $info_file[3]);

    $info[0] = $filetitle[0];
    if (!isSet($existing[$info[0]]))
        $existing[$info[0]] = -1;
    $existing[$info[0]] += 1;
    if ($existing[$info[0]] > 0)
        // We have already found a post with this title;
        // the creation of the cache is based on info[4] data for the filename,
        // so we need to tune it.
        $info[0] = $info[0]."-".$existing[$info[0]];

    $info[1] = $info_file[3];
    $info[2] = $full_path_file;

    $post_content = file(ROOT_DIR.'/'.$file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $info[3] = $post_content[0]; // first line of the file
    unset($post_content);

    $info[4] = filemtime(ROOT_DIR.'/'.$file);
    $info[5] = $info_file[0]; // year
    $info[6] = $info_file[1]; // month
    $info[7] = $info_file[2]; // day

    return $info;
}
So in my script I only call create_data_info(PERIOD) (PERIOD being "year", "month", etc.).
It returns an array filled with the info I need, and then I can loop through it to create my statistics files.
This process is done every time the PHP script is launched.
My question is: is this code optimal (certainly not), and what can I do to squeeze some juice out of my code?
I don't know how I can cache this (even if it's possible), as there is a lot of I/O involved.
I can change the tree structure if it could change things compared to a flat structure, but from what I found out in my tests, flat seems to be the best.
I already thought about creating a little "booster" in C doing only the crunching, but since it's I/O bound, I don't think it would make a huge difference and the application would be a lot less compatible for shared-hosting users.
Thank you very much for your input. I hope I was clear enough here; let me know if you need clarification (and forgive my English mistakes).
To begin with, you should use DirectoryIterator instead of the glob function. When it comes to scandir vs opendir vs glob, glob is as slow as it gets.
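For example, a minimal sketch of listing the same files with DirectoryIterator instead of glob, assuming the ROOT_DIR constant and the .ext extension from your question:
$files = array();
foreach (new DirectoryIterator(ROOT_DIR) as $entry) {
    // Collect only regular *.ext files, like the original glob(ROOT_DIR.'/*.ext') call.
    if ($entry->isFile() && $entry->getExtension() === 'ext') {
        $files[] = $entry->getPathname();
    }
}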
Also, when you are dealing with a large number of files, you should try to do all your processing inside one loop; PHP function calls are rather slow.
I see you are using unset($info); yet in every loop iteration $info gets a new value. PHP does its own garbage collection, if that's your concern. unset is a language construct, not a function, and should be pretty fast, but when it isn't needed it still makes the whole thing a bit slower.
You are passing $existing as a reference. Is there a practical outcome for this? In my experience, references make things slower.
And finally, your script seems to do a lot of string processing. You might want to consider some kind of "serialize data and base64 encode/decode" solution, but you should benchmark that specifically; it might be faster or slower depending on your whole code. (My idea is that serialize/unserialize MIGHT run faster, as these are native PHP functions, while custom functions doing string processing are slower.)
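As a rough illustration of that serialize idea, the whole $data array could be cached on disk and rebuilt only when stale; the cache file name and the one-hour expiry below are arbitrary assumptions:
// Sketch: cache the array returned by create_data_info() with serialize().
$cache_file = ROOT_DIR.'/data_cache.ser';
if (is_file($cache_file) && (time() - filemtime($cache_file)) < 3600) {
    $data = unserialize(file_get_contents($cache_file));
} else {
    $data = create_data_info("year");
    file_put_contents($cache_file, serialize($data));
}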
My answer was not very I/O related, but I hope it was helpful.

How to merge docx documents in PHP?

Does anyone know how to merge (concatenate) docx documents with PHP (or Python if it's not possible in PHP)?
To clarify, my server is Linux based. I have 2 existing docx documents, and I need to put them into a new docx document using PHP or possibly Python.
Merging two different Docx files may be very complicated, because headers, styles, charts, comments, user modification traces and other special contents are saved in separate inner XML sub-files in each Docx. Thus, two Docx files may have different objects with the same ids. So it would be a huge job to list all possible objects in the two documents, give them new inner ids, and re-assign them in a single document. Probably only Ms Office can do this currently.
Nevertheless, if you know that your two documents to be merged have the same styles, and if you know you have no charts, headers or other special objects, then the merging becomes quite easy to perform.
In this case, you only have to use a Zip reader, such as TbsZip, to open the first Docx file (which is technically a zip archive containing XML sub-files), then read the sub-file "word/document.xml" and extract the part which is between the tags <w:body> and </w:body>.
In the second Docx file, open "word/document.xml" and insert the previous content just before the tag </w:body>. Save the result in a new Docx file.
This can be done using TbsZip, like this:
<?php
include_once('tbszip.php');
$zip = new clsTbsZip();
// Open the first document
$zip->Open('doc1.docx');
$content1 = $zip->FileRead('word/document.xml');
$zip->Close();
// Extract the content of the first document
$p = strpos($content1, '<w:body');
if ($p===false) exit("Tag <w:body> not found in document 1.");
$p = strpos($content1, '>', $p);
$content1 = substr($content1, $p+1);
$p = strpos($content1, '</w:body>');
if ($p===false) exit("Tag </w:body> not found in document 1.");
$content1 = substr($content1, 0, $p);
// Insert into the second document
$zip->Open('doc2.docx');
$content2 = $zip->FileRead('word/document.xml');
$p = strpos($content2, '</w:body>');
if ($p===false) exit("Tag </w:body> not found in document 2.");
$content2 = substr_replace($content2, $content1, $p, 0);
$zip->FileReplace('word/document.xml', $content2, TBSZIP_STRING);
// Save the merge into a third file
$zip->Flush(TBSZIP_FILE, 'merge.docx');
You may merge two Word documents with PHPDocX with a single line of code: (Source: Merging Word documents with PHPDocX)
require_once 'path/classes/DocxUtilities.inc';

$newDocx = new DocxUtilities();
$myOptions = array('mergeType' => 0);
$newDocx->mergeDocx('firstWordDoc.docx', 'secondWordDoc.docx', 'mergedWord.docx', $myOptions);
This merging lets you preserve the whole section structure (paper size, margins, associated footers and headers, ...), includes all the required styles, manages all lists (this may seem trivial but it is not so in the OOXML standard), and preserves images and charts as well as footnotes, endnotes and comments.
Moreover, there is an option to preserve the original numbering (by default the page numbering continues).
One may also, via the mergeType option, discard the section structure of the merged document and add it at the end of the first document as part of its last section. In this case, of course, the headers and footers are not imported, but all other elements are still preserved.
Aspose.Words Cloud SDK for PHP can merge/join several Word documents into one Word document while keeping the formatting of the appended or destination document, depending on the ImportFormatMode parameter value. It is a commercial API, but the free pricing plan allows 150 free monthly API calls.
<?php
require_once('D:\xampp\htdocs\aspose-words-cloud-php-master\vendor\autoload.php');

// TODO: Get your ClientId and ClientSecret at https://dashboard.aspose.cloud (free registration is required).
$ClientSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxx";
$ClientId = "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx";

$wordsApi = new Aspose\Words\WordsApi($ClientId, $ClientSecret);

try {
    $remoteDataFolder = "Temp";
    $localFile = "C:/Temp/02_pages_adobe.docx";
    $remoteFileName = "02_pages_adobe.docx";
    $localFile1 = "C:/Temp/Sections.docx";
    $remoteFileName1 = "Sections.docx";
    $outputFileName = "TestAppendDocument.docx";

    // Upload both documents to the cloud storage.
    $uploadRequest = new Aspose\Words\Model\Requests\UploadFileRequest($localFile, $remoteDataFolder."/".$remoteFileName, null);
    $wordsApi->uploadFile($uploadRequest);
    $uploadRequest1 = new Aspose\Words\Model\Requests\UploadFileRequest($localFile1, $remoteDataFolder."/".$remoteFileName1, null);
    $wordsApi->uploadFile($uploadRequest1);

    // Append the second document to the first one, keeping its formatting.
    $requestDocumentListDocumentEntries0 = new Aspose\Words\Model\DocumentEntry(array(
        "href" => $remoteDataFolder . "/" . $remoteFileName1,
        "import_format_mode" => "KeepSourceFormatting",
    ));
    $requestDocumentListDocumentEntries = [
        $requestDocumentListDocumentEntries0,
    ];
    $requestDocumentList = new Aspose\Words\Model\DocumentEntryList(array(
        "document_entries" => $requestDocumentListDocumentEntries,
    ));
    $request = new Aspose\Words\Model\Requests\AppendDocumentRequest(
        $remoteFileName,
        $requestDocumentList,
        $remoteDataFolder,
        NULL,
        NULL,
        NULL,
        $remoteDataFolder . "/" . $outputFileName,
        NULL,
        NULL
    );
    $result = $wordsApi->appendDocument($request);

    // Download the merged file.
    $request = new Aspose\Words\Model\Requests\DownloadFileRequest($remoteDataFolder."/".$outputFileName, NULL, NULL);
    $result = $wordsApi->downloadFile($request);
    copy($result->getPathName(), "AppendOutput.docx");
} catch (Exception $e) {
    echo "Something went wrong: ", $e->getMessage(), "\n";
}
?>
P.S.: I'm a developer evangelist at Aspose.

Small help saving to txt file

Hello there, so I just set up this basic AJAX poll, inspired by something I found out there; it saves the results in a text file.
I was wondering, since I do not want users to simply mass-click to skew the results, about adding a second text file that simply saves the IPs, one per line, and then checking whether the current IP is already logged: if yes, display the results; if not, show the poll.
My lines of code to save the result are:
<?php
$vote = $_REQUEST['vote'];
$filename = "votes.txt";

$content = file($filename);
$array = explode("-", $content[0]);
$yes = $array[0];
$no = $array[1];

if ($vote == 0) {
    $yes = $yes + 1;
}
if ($vote == 1) {
    $no = $no + 1;
}

$insert = $yes."-".$no;
$fp = fopen($filename, "w");
fputs($fp, $insert);
fclose($fp);
?>
So I'd like to know how I could check the IPs, basically in the same way.
I'm not interested in a database; even for security reasons, I'm alright with what I've got.
Thanks for any help!
To stop multiple votes, I'd set a cookie once a user has voted. If the user reloads the page with the voting form on it and has the cookie, you could show just the results, or a "You have already voted." message. Note that this will not stop craftier people from double-voting: all they would have to do is remove the saved cookie, and they could re-vote.
Keep in mind, though, that IPs can be shared, so your idea of storing IPs might backfire: people behind a shared external-facing IP won't be able to vote, because your system will have registered a previous vote from someone else at the same IP address.
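A minimal sketch of that cookie check (the cookie name and the 30-day lifetime are arbitrary choices):
<?php
// setcookie() must be called before any HTML output is sent.
if (isset($_COOKIE['has_voted'])) {
    // Show only the results, or a "You have already voted." message.
} else {
    // Show the poll form here.
}

// ...and in the script that records the vote, once it has been saved:
setcookie('has_voted', '1', time() + 30 * 24 * 60 * 60); // remember for 30 days
?>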
The easiest way to write data to a file is
file_put_contents($filename, $data);
and to read data from a file
file_get_contents($filename);
To get the IP address of the user:
$_SERVER['REMOTE_ADDR']
See the PHP manual for file_put_contents and file_get_contents for more information.
Here is sample code
<?php
// File path
$file = 'votedips.txt';

// Get the user's IP address
$ip = $_SERVER['REMOTE_ADDR'];

// Get data from the file (if it exists) or initialize to an empty string
$votedIps = file_exists($file) ? file_get_contents($file) : '';

$ips = explode("\n", $votedIps);
// in_array() avoids the pitfall of array_search() returning 0 (a falsy index) for the first entry.
if (in_array($ip, $ips)) {
    // USER ALREADY VOTED
} else {
    $ips[] = $ip;
}

// Write data to file
$data = implode("\n", $ips);
file_put_contents($file, $data);
?>
You can use file_get_contents to save the file's content into a variable and then use the strpos function to check if the IP exists in that variable.
For example:
$ipfile = file_get_contents('ip.txt');
if (strpos($ipfile, $_SERVER['REMOTE_ADDR']) !== FALSE) {
    // show the results
} else {
    // show the poll
}
Be careful with storing IPs in a text file and then using file_get_contents() and similar functions to load and parse the data. As an absolute worst case, assuming that every possible IP address used your system to vote, you'd end up with a text file many gigabytes in size, and you'd exceed PHP's memory_limit very quickly.
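If the file ever does grow that large, a line-by-line scan keeps memory use constant; a rough sketch, assuming the same one-IP-per-line votedips.txt layout used above:
$found = false;
$handle = fopen('votedips.txt', 'r');
if ($handle !== false) {
    // Read one line at a time instead of loading the whole file into memory.
    while (($line = fgets($handle)) !== false) {
        if (trim($line) === $_SERVER['REMOTE_ADDR']) {
            $found = true;
            break;
        }
    }
    fclose($handle);
}
// $found is true if this IP has already voted.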
