What is the best method and player for offering an audio preview on an OpenCart store? This would involve uploading the full track and then extracting a portion to be played.
mp3splt is by far your best bet.
It can sometimes be a little dicey to install (particularly on CentOS and other RHEL-based distros), but it's really the only solution I've found.
I usually run a script that analyzes the MP3 with getID3 to get its length, calculates the halfway point, and passes that point plus thirty seconds to mp3splt via exec.
It works great once you get it installed properly. If you're on Debian/Ubuntu it's actually a cinch to install via aptitude.
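A rough sketch of that flow, assuming getID3 and mp3splt are both installed; the file paths and the previews directory are placeholders:

<?php
require_once 'getid3/getid3.php';

$source = '/var/media/full/track.mp3';   // placeholder paths
$outDir = '/var/media/previews';

// Ask getID3 for the track length in seconds
$getID3 = new getID3();
$info   = $getID3->analyze($source);
$length = (int) $info['playtime_seconds'];

// Preview runs from the halfway point for thirty seconds
$start = (int) ($length / 2);
$end   = $start + 30;

// mp3splt expects split points as minutes.seconds
$fmt = fn (int $sec) => sprintf('%d.%02d', intdiv($sec, 60), $sec % 60);

// Hand the job to mp3splt; -d puts the split file into the previews directory
exec(sprintf('mp3splt -d %s %s %s %s',
    escapeshellarg($outDir),
    escapeshellarg($source),
    $fmt($start),
    $fmt($end)
));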
The only other thing I could think to do would be to wrap your command-line Unix audio editing utilities in a PHP script to basically create a "grab the first 2 minutes of an MP3" function, then run that on files when they are uploaded. Then, yes, save them in a "previews" area of the file system and store the filename in a DB table for later reference.
I've found a PHP script that could fit your needs (please note I haven't tested it). You can find it here. The class interface seems simple and functional. Either way, you will need to modify your OpenCart product template to expose the preview.
I need to add some custom fonts to the TCPDF library, but after surfing the internet for hours, I couldn't come up with a working solution.
Generally, two main ways are offered to create new fonts for the TCPDF library. One is using online websites that do the conversion; the other is using tcpdf_addfont.php and, more specifically, the addTTFfont method.
With the first way, there is a big stumbling block: the best-known website doing the job, fonts.snm-portal, is effectively gone. It is not actually down, but it no longer performs the conversion. The second website, xml-convert, just produces the .php and .z files and completely ignores .ctg.z. I guess my custom fonts are not being recognized without this .ctg.z file available.
For the second way, I really couldn't do much, as I don't know a lot about terminals. I just opened the path and copy-pasted this command:
./tcpdf_addfont.php -b -t TrueTypeUnicode -f 32 -i blah.ttf
Yet nothing happened. The command just opened the tcpdf_addfont.php file in my IDE, and no font files were created in the fonts folder, as is supposed to happen. Something is wrong with this command line: looking inside the PHP file, you can see that if everything goes fine there should at least be some output, either an error or a confirmation that the files have been created. Instead, nothing shows up, and running the command in PowerShell just opens tcpdf_addfont.php in my PHP IDE. That's the whole story. With this in mind, could you please let me know how I can get the .ctg.z file for my custom fonts, via either the first method or the second? Any help would be welcome.
THANKS in advance.
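For reference, the addTTFfont route mentioned in the question can also be invoked from a small PHP script rather than through the CLI wrapper; a minimal sketch with placeholder paths, assuming TCPDF is on the include path (when it succeeds, the generated .php, .z and .ctg.z files land in TCPDF's fonts/ directory):

<?php
require_once 'tcpdf/tcpdf.php';

// Convert the TTF and register it; returns the font name to use with SetFont()
$fontname = TCPDF_FONTS::addTTFfont('/path/to/blah.ttf', 'TrueTypeUnicode', '', 32);

// Quick smoke test that the converted font actually renders
$pdf = new TCPDF();
$pdf->AddPage();
$pdf->SetFont($fontname, '', 14);
$pdf->Write(0, 'Testing the custom font');
$pdf->Output('font-test.pdf', 'I');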
The thing is that the client wants to upload a PDF containing images as a way of batch-processing multiple images at once.
I already looked around, and out of the box PHP can't read PDFs.
What are my alternatives?
I already know the host has not installed ImageMagick or any PDF library, and the exec function is disabled. That basically leaves me with nothing to work with, I guess?
Does anyone know if there is an online service that can do this, with an api of sorts?
Thanks in advance.
AFAIK, there is no PHP module to do it. There is a command line tool, pdfimages (part of xpdf). For reference, here's how that works:
pdfimages -j source.pdf image
This will extract all images from source.pdf as image-000.jpg, image-001.jpg, etc. Note that with -j, images stored as JPEG inside the PDF come out as .jpg; anything else is written as PPM/PBM.
Possible Options
Since it's a command-line tool, you need exec (or system, passthru, or any of the other command-executing functions built into PHP). As your environment doesn't have that, I see four options:
1. Beg that exec be turned on for you (your hosting provider can limit what you can exec to a single command)
2. Change the design -- how about a ZIP upload?
3. Roll your own, using the source code of pdfimages as a model
4. Let pdfimages do the heavy lifting, by running it on a remote host you do control
Regarding #3, rolling your own to meet a very narrow definition of the requirements shouldn't be too difficult. I seem to recall that the image boundaries in PDF are well defined: just read the file up to a boundary, cut at the end of the boundary, decode the image stream, and write it to a file -- repeat. However, that may be too much...
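For the very narrow case of PDFs whose images are plain JPEG (DCTDecode) streams, an even cruder hack than the boundary parsing described above is to scan the raw bytes for the JPEG start/end markers. This is not a general PDF parser, and the function name is only illustrative:

<?php
// Pull out baseline-JPEG images by scanning for the JPEG SOI/EOI markers.
function extract_jpegs(string $pdfPath, string $outDir): int
{
    $data  = file_get_contents($pdfPath);
    $count = 0;
    $pos   = 0;
    while (($start = strpos($data, "\xFF\xD8\xFF", $pos)) !== false) {
        $end = strpos($data, "\xFF\xD9", $start);
        if ($end === false) {
            break;
        }
        $jpeg = substr($data, $start, $end - $start + 2);
        file_put_contents(sprintf('%s/image-%03d.jpg', $outDir, $count++), $jpeg);
        $pos = $end + 2;
    }
    return $count;
}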
If rolling your own is too complicated, then option #4 is kind of like what Joel Spolsky describes for working with complicated Excel objects (see the numbered list under the bold heading "Let Office do the heavy work for you").
Find a cheap hosting environment (e.g. Amazon EC2) that lets you exec and curl
Install pdfimages
Write a PHP script that takes the URL of a PDF, fetches it with curl, writes it to disk, passes it to pdfimages, then returns the URLs of the resulting images.
An example exchange could look like this:
GET http://www.cheaphost.com/pdfimages.php?extract=http://www.limitedhost.com/path/to/uploaded.pdf
Content-type: text/html
<html>
<body>
<ul>
<li>http://www.cheaphost.com/pdfimages.php?retrieve=ab9895v/image-000.jpg</li>
<li>http://www.cheaphost.com/pdfimages.php?retrieve=ab9895v/image-001.jpg</li>
</ul>
</body>
</html>
So your single pdfimages.php script (running on the host with exec functionality) can both extract images and give you access to the extracted images. When extracting, it fetches the PDF you point it at, runs pdfimages on it, and gives you back a list of URLs to call to retrieve the extracted images. When retrieving, it just gives you back a straight image.
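A rough sketch of what that pdfimages.php could look like; the work directory, the token scheme, and the host name are all illustrative, it assumes exec is available on this host, and it omits real input validation:

<?php
$workRoot = '/var/tmp/pdfimages';   // assumed writable scratch directory

if (isset($_GET['extract'])) {
    // Fetch the remote PDF, run pdfimages on it, return the list of image URLs
    $token = bin2hex(random_bytes(8));
    $dir   = $workRoot . '/' . $token;
    mkdir($dir, 0700, true);

    $pdf = $dir . '/uploaded.pdf';
    file_put_contents($pdf, file_get_contents($_GET['extract'])); // or use curl_*

    exec('pdfimages -j ' . escapeshellarg($pdf) . ' ' . escapeshellarg($dir . '/image'));

    echo "<html><body><ul>\n";
    foreach (glob($dir . '/image-*.jpg') as $img) {
        $name = basename($img);
        echo "<li>http://www.cheaphost.com/pdfimages.php?retrieve=$token/$name</li>\n";
    }
    echo "</ul></body></html>";
} elseif (isset($_GET['retrieve'])) {
    // Hand back one extracted image, then delete it (a simple cleanup policy)
    $rel  = basename(dirname($_GET['retrieve'])) . '/' . basename($_GET['retrieve']);
    $path = $workRoot . '/' . $rel;
    if (is_file($path)) {
        header('Content-Type: image/jpeg');
        readfile($path);
        unlink($path);
    } else {
        http_response_code(404);
    }
}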
You would need to deal with cleanup, perhaps the thing to do would be to delete the image after retrieval. You would also need to handle security -- don't know what's in these images, but the content might need to be wrapped in SSL and other precautions taken.
You can use pdfimages and install it this way:
apt install poppler-utils
Then use it this way to get all the images as PNG files:
pdfimages -png mypdf.pdf image
Images will be placed in the same folder under image-000.png, image-001.png, etc.
There are many options available, including some to change the output format; more information here.
I hope this helps!
I am trying to display the images from a PDF document that I uploaded to the server as hyperlinks in PHP, so that when a user clicks one they get the corresponding document.
Please help me, thanks in advance!
Use pdfimages, which comes with the open-source xpdf software package (for *nix operating systems). You'll have to call it through exec or the like, then work with the output from PHP. I am not aware of any PHP library that provides this functionality, so you're going to have to experiment.
EDIT
You mentioned that you aren't experienced with PHP... I thought I'd add that this isn't a quick-and-easy type of task; you certainly aren't going to find a bunch of tutorials around the internet for this.
To get started, you'll have to install the xpdf package on your server. There's a lot of different ways to do this depending on which OS you've got.
After that is set up, you'll be using the command line to execute a program on your server; you'll want to capture the output of that command in PHP and work with it further. So initially, work out exactly what your command line will look like, as well as what the output looks like and means - do this from the command line, don't worry about the PHP part yet. In this case, the result is going to be a set of image files extracted from a given PDF; your command-line call will look something like "pdfimages mypdf.pdf img". Play around, find out what happens.
After you work out exactly what command line you need to send and what the command does, you can focus on the PHP angle. In a nutshell, you want PHP to execute the exact command that you've already worked out. Look at the manual for exec for information on how to call a command line and get the output back. Write your script to make the correct call and show the call's output.
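A minimal sketch of that call, assuming pdfimages is installed and the paths are placeholders:

<?php
$pdf    = '/path/to/mypdf.pdf';
$outDir = '/path/to/output';

// Redirect stderr so error messages end up in $output as well
$cmd = sprintf('pdfimages -j %s %s 2>&1',
    escapeshellarg($pdf),
    escapeshellarg($outDir . '/image'));

exec($cmd, $output, $status);

if ($status !== 0) {
    die('pdfimages failed: ' . implode("\n", $output));
}

// The extracted files follow the image-NNN naming convention
$images = glob($outDir . '/image-*');
print_r($images);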
Next, move on to doing something with that output. I presume you'll want to somehow store the extracted images in a web-accessible place, put them in the database, show them to the user, etc. That is the very last stage after you've worked out the initial steps.
Good luck!
Could someone please advise me on what my options are when it comes to video conversion in PHP? I have just discovered that our system uses something called ffmpeg. That isn't a problem in itself, but when a website is transferred it does create a problem, as this absolute command breaks websites:
system ('/usr/bin/ffmpeg -i '.$video.' -y -f flv -qmin 5 -qmax 9 -ar 22050 '.DATA_DIR . $new_filename);
As you can see, a transferred website would need to have this path on its host, and most don't.
So the question is this: I need to replace this call. Is there some sort of PHP script or API that will make this work?
Is there any option other than sending the video to our own servers and having our server send back the video in the new format?
Thanks.
Is there some sort of PHP script or API that will make this work?
No. This is well beyond the scope of PHP. FFmpeg is indeed the household name for video conversion - the best thing is probably to stick with that.
One workaround would be to set up a conversion service script on a server that supports ffmpeg, and have all the other websites send the material to that server (if file sizes and traffic rates allow).
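As a sketch of the client side of such a service, one site could POST the video to the conversion server with curl and save whatever comes back; the URL and field names here are made up:

<?php
// $video is the local source file; DATA_DIR and $new_filename as in the question
$ch = curl_init('https://convert.example.com/convert.php');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => [
        'video'  => new CURLFile($video),
        'format' => 'flv',
    ],
]);
$converted = curl_exec($ch);
curl_close($ch);

if ($converted !== false) {
    file_put_contents(DATA_DIR . $new_filename, $converted);
}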
There is a PHP ffmpeg library, but you can just install the Linux version of ffmpeg on the host and change this path in the command.
No, there are no native PHP alternatives to ffmpeg for transcoding videos, so you must work around that somehow.
As mentioned before, there is no PHP extension that does video conversion (the ffmpeg-php extension cannot convert videos) - you will have to call something outside of PHP to get the actual video conversion done.
I see two possible problems on the "transferred websites":
If it is simply a path problem: look at this page for how to call ffmpeg - you should not have to include the "/usr/bin/" part in your command.
If the problem is that you cannot install ffmpeg on the transferred websites, you can do two things, depending on which drawback is more acceptable:
You may convert all videos to .flv beforehand, and serve them either from the transferred websites or from your own servers. Use that method for videos that will be watched often, or whose converted version will be watched often.
The transferred websites will point to the video stream from your own servers, which will handle the on-the-fly conversion. Do that for videos that will not be watched as often.
Feel free to install ffmpeg into your home directory on your hosting provider; many, if not most, hosts allow you to install programs in addition to scripts.
However, please do not place this code on a production system. Or, any computer you care about. If some smartass uploads a video named
Puppy;/bin/rm -rf /;.avi
then you can kiss all your data goodbye. If it is named:
Puppy;`nc -l 11111`;.avi
then they have a shell they can use for whatever they please.
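One mitigation sketch: escape every user-influenced piece of the command with escapeshellarg, so a filename like the ones above is passed to ffmpeg as data rather than interpreted by the shell (this assumes an ffmpeg binary reachable as plain "ffmpeg"; adjust the path to wherever you installed it):

<?php
$cmd = 'ffmpeg -i ' . escapeshellarg($video)
     . ' -y -f flv -qmin 5 -qmax 9 -ar 22050 '
     . escapeshellarg(DATA_DIR . $new_filename);
system($cmd, $status);

if ($status !== 0) {
    error_log('ffmpeg conversion failed with exit code ' . $status);
}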
I've bumped into a problem while working on a project. I want to "crawl" certain websites of interest and save them as a "full web page", including styles and images, in order to build a mirror of them. It has happened to me several times that I bookmarked a website to read later and, a few days on, the website was down because it got hacked and the owner didn't have a backup of the database.
Of course, I can read the files very easily in PHP with fopen("http://website.com", "r") or fsockopen(), but the main goal is to save the full web pages so that, in case a site goes down, it is still available to others, like a "programming time machine" :)
Is there a way to do this without read and save each and every link on the page?
Objective-C solutions are also welcome since I'm trying to figure out more of it also.
Thanks!
You actually need to parse the HTML and all the CSS files that are referenced, which is NOT easy. However, a fast way to do it is to use an external tool like wget. After installing wget, you could run the following from the command line:
wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://example.com/mypage.html
This will download mypage.html and all linked CSS files, images, and the images referenced inside the CSS.
After installing wget on your system, you could use PHP's system() function to control wget programmatically.
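A minimal sketch of driving that from PHP, assuming wget (1.12 or newer) is on the PATH and that the target URL and save directory are placeholders:

<?php
$url     = 'http://example.com/mypage.html';
$saveDir = '/var/www/mirrors/' . md5($url);

mkdir($saveDir, 0755, true);

$cmd = 'wget --no-parent --timestamping --convert-links --page-requisites'
     . ' --no-directories --no-host-directories -erobots=off'
     . ' --directory-prefix=' . escapeshellarg($saveDir)
     . ' ' . escapeshellarg($url) . ' 2>&1';

system($cmd, $status);

if ($status !== 0) {
    error_log("wget exited with code $status for $url");
}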
NOTE: You need at least wget 1.12 to properly save images that are referenced through CSS files.
Is there a way to do this without read and save each and every link on the page?
Short answer: No.
Longer answer: if you want to save every page in a website, you're going to have to read every page in a website with something on some level.
It's probably worth looking into the Linux app wget, which may do something like what you want.
One word of warning - sites often have links out to other sites, which have links to other sites, and so on. Make sure you put some kind of "stop if different domain" condition in your spider!
If you prefer an Objective-C solution, you could use the WebArchive class from WebKit.
It provides a public API that allows you to store whole web pages as a .webarchive file (like Safari does when you save a webpage).
Some nice features of the webarchive format:
completely self-contained (incl. CSS, scripts, images)
QuickLook support
Easy to decompose
Whatever app is going to do the work (your code, or code that you find) is going to have to do exactly that: download a page, parse it for references to external resources and links to other pages, and then download all of that stuff. That's how the web works.
But rather than doing the heavy lifting yourself, why not check out curl and wget? They're standard on most Unix-like OSes, and do pretty much exactly what you want. For that matter, your browser probably does, too, at least on a single page basis (though it'd also be harder to schedule that).
I'm not sure if you need a programming solution to 'crawl websites' or personally need to save websites for offline viewing, but if it's the latter, there are great apps for this: Teleport Pro for Windows and SiteCrawler for Mac.
You can use IDM (Internet Download Manager) for downloading full web pages; there's also HTTrack.