So I want to download YouTube videos as MP3 to my server, so I can listen to them easily as MP3 files.
I have some experience in PHP, and none at all with the YouTube API, but if I have understood correctly, YouTube videos are loaded in chunks, and one of them is the MP3 soundtrack of the video.
If that is incorrect, could you advise another way to do the downloading?
So far I have managed to write a small PHP script to get the necessary information to save the audio.
$id = $_POST["A"]; // YouTube video ID posted from the HTML form

// Fetch the video title from the GData API feed
$xmlData = simplexml_load_string(file_get_contents("http://gdata.youtube.com/feeds/api/videos/{$id}?fields=title"));
$title = (string)$xmlData->title;

// Replace characters that are not allowed in file names
$bad = array_merge(
    array_map('chr', range(0, 31)),
    array("<", ">", ":", '"', "/", "\\", "|", "?", "*"));
$result = str_replace($bad, "-", $title);
$file = "$result.mp3";

// Create the target file (or truncate it if it already exists)
$handle = fopen($file, 'w+') or die('Cannot open file: ' . $file);
fclose($handle);

//echo "http://youtubeinmp3.com/fetch/?video=http://www.youtube.com/watch?v=$id";
$current = file_get_contents($file);
//$current .= "http://youtubeinmp3.com/fetch/?video=http://www.youtube.com/watch?v=$id";
file_put_contents($file, $current);
(POST field A comes from my HTML form and contains the YouTube video ID.)
So, in that code I don't actually use YouTube itself to download the MP3 track...
It would be great if any of you knew how to download it directly from YouTube, and how to save it, because by my logic that code should work perfectly, and it does, until it comes to saving the audio: it saves about 50 bytes and then stops. I don't get any errors or anything, so it's really hard to debug.
Also, if you notice any errors or better ways to do things in my code, please let me know.
Sorry for my bad English, I'm from Finland, and yes, sorry for the crappy code; this is actually the first practical program I'm going to use :)
The YouTube API doesn't provide direct links to audio or video content, only thumbnails. The only way is to parse the regular page using a regular expression to extract the link to the media. That's how all the "get from YouTube" sites do it.
After you get the content you can process it with ffmpeg or another tool. I don't think there is a separate MP3 track; normally the audio track is embedded in an MP4 container.
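If you do get hold of the video file, a minimal sketch of that ffmpeg step, assuming ffmpeg is installed on the server and the downloaded file is called video.mp4 (both names are assumptions on my part):
<?php
// Hypothetical sketch: extract the audio track of an already-downloaded
// MP4 file into an MP3 by shelling out to ffmpeg.
$input  = 'video.mp4';  // assumed name of the downloaded video
$output = 'audio.mp3';  // target MP3 file

// -vn drops the video stream, libmp3lame encodes the audio as MP3
$cmd = 'ffmpeg -i ' . escapeshellarg($input)
     . ' -vn -acodec libmp3lame -ab 192k '
     . escapeshellarg($output);

exec($cmd, $lines, $exitCode);
if ($exitCode !== 0) {
    die('ffmpeg failed with exit code ' . $exitCode);
}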
Yet it is illegal and you shouldn't be doing this, etc. etc.
I warned you ;)
Related
There is a Word doc with a known URL. If I point my browser at it, it downloads to my computer. If I try to download it via PHP, it downloads, but with differences: in this case, it mixes up right foot and left foot (RF and LF) all over the place. What is wrong with my PHP code that it downloads a version of the Word doc different from what my browser gets?
$wordurl = 'https://www.copperknob.co.uk/downloadsheet.aspx?StepsheetID=142209';
$saveFileName = 'sample.doc';
file_put_contents($saveFileName, file_get_contents( $wordurl ) );
Maybe the file server does some validation, e.g. on the User-Agent header.
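A minimal sketch of sending a browser-like User-Agent with file_get_contents() via a stream context (the exact User-Agent string below is just an example):
<?php
$wordurl = 'https://www.copperknob.co.uk/downloadsheet.aspx?StepsheetID=142209';

// Build a stream context that adds a browser-like User-Agent header to the request
$context = stream_context_create(array(
    'http' => array(
        'method' => 'GET',
        'header' => "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)\r\n",
    ),
));

file_put_contents('sample.doc', file_get_contents($wordurl, false, $context));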
I have a live stream from a Tenvis IP camera over HTTP live streaming, and it uses MJPEG compression.
I am trying to save it to a file, and I have tried using PHP to do this. My code looks like this:
<?php
$input = fopen("http://xxx.xxx.xxx.xxx:81/videostream.cgi?user=user&pwd=admin&resolution=8");
$output = fopen("video.mpg", "c+");
$end = time() + 60;
do {
    fwrite($output, (fread($input, 30000)), 30000);
} while (time() <= $end);
fclose($output);
fclose($input);
echo "<h1>Recording</h1>";
?>
The code I have creates the file but doesn't write anything to it. Any suggestions will be appreciated
According to the Wikipedia page about MJPEG (http://en.wikipedia.org/wiki/Motion_JPEG#M-JPEG_over_HTTP), an MJPEG stream over HTTP is basically a sequence of JPEG frames, accompanied by a special MIME type. Because of that, I am not sure you can simply write the incoming data to an .mpg file and get a working video.
To be honest, I am not quite sure why your script does not write anything at all, but I came across the following page, which, although it is written for specific software, provides examples on how to capture an MJPEG stream and pass it on to a browser:
http://www.lavrsen.dk/foswiki/bin/view/Motion/MjpegFrameGrabPHP
You could try one of their examples, and instead of passing it to the browser, save it to a file. You can see they read one image at a time:
while (substr_count($r, "Content-Length") != 2) $r .= fread($input, 512);
$start = strpos($r, "\xff");               // 0xFF is the first byte of the JPEG SOI marker
$end   = strpos($r, $boundary, $start) - 1;
$frame = substr($r, $start, $end - $start);
If this does fix the stream capturing part but not saving it as a video, another option would be to save all frames individually as a JPEG file, then later stitch them together using a tool such as ffmpeg to create a video: Image sequence to video quality
Update
If you decide to take the ffmpeg road, it is also possible to capture the stream using ffmpeg only. See this question for an example.
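As a rough sketch of that ffmpeg-only approach (the stream URL and the 60-second duration are taken from the question; the exact input options may vary by camera):
<?php
// Hypothetical sketch: let ffmpeg read the MJPEG stream directly and
// re-encode 60 seconds of it into an MP4 file.
$streamUrl = 'http://xxx.xxx.xxx.xxx:81/videostream.cgi?user=user&pwd=admin&resolution=8';

$cmd = 'ffmpeg -f mjpeg -i ' . escapeshellarg($streamUrl)
     . ' -t 60 -c:v libx264 capture.mp4';

exec($cmd, $out, $code);
echo $code === 0 ? 'Recording saved' : 'ffmpeg failed';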
Hope this helps.
Most of the time, when a camera supports MJPEG it also supports RTSP, and as such you might want to pursue that as a solution for what you are trying to accomplish. With that, it's fairly simple to record using an app like VLC.
I am trying to create a video clip from .jpg images using ffmpeg. I am creating the .jpg images as below:
$str = $_REQUEST['data_array'];   // gets the base64-encoded image data
$count = $_REQUEST['count'];      // gets the number of the image I have to create
$edited_str = substr($str, 23, strlen($str) - 1); // drops the first 23 characters (presumably a data-URL prefix)
$edited_str = base64_decode($edited_str);
$f = fopen("Images/temp" . $count . ".jpg", "w"); // I am creating the temp<count>.jpg file
fwrite($f, $edited_str);          // placing my decoded data into the file
fclose($f);
Are the images I am creating above different from normal .jpg images?
This line:
$edited_str=substr($str,23,strlen($str)-1);
makes it different. If this is the full base64 string of the file, then this cuts it up and corrupts it. Maybe you are adding some stuff on the front.
If you are just removing stuff from the front that was added, then it should be the same as the original file that was encoded with base64.
If you want to get the information this way from another page, I suggest using $_POST as opposed to $_REQUEST for a number of reasons.
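If the extra content on the front is a data-URL prefix (which is what canvas.toDataURL() produces), here is a rough sketch of stripping it by splitting on the first comma instead of hard-coding 23 characters; the field name is taken from the question:
<?php
// Hypothetical sketch: split the data URL on the first comma rather than
// assuming the prefix ("data:image/jpeg;base64,") is exactly 23 characters.
$str = $_POST['data_array'];            // e.g. "data:image/jpeg;base64,/9j/4AAQ..."
$parts = explode(',', $str, 2);         // [0] => prefix, [1] => base64 payload
$payload = isset($parts[1]) ? $parts[1] : $parts[0];

$decoded = base64_decode($payload);
file_put_contents('Images/temp.jpg', $decoded);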
EDIT: I wouldn't say video manipulation in PHP is impossible. I think there is even a toolkit... here's one:
http://phpvideotoolkit.sourceforge.net/
which states:
It can perform video format conversion, extract video frames into separate image files, and assemble a video stream from a set of separate video images.
Haven't tested it, but plan to try it out one day.
EDIT2: On the PHP site there were some suggestions you could try, but to help more, there needs to be more information. Check with a file directly to make sure it's being sent and decoded properly.
I haven't got to test any of these yet.
One piece of advice was for large files use:
$decodedstring=base64_decode(chunk_split($encodedstring));
Another was: if you use JavaScript's canvas.toDataURL(), then you need to convert spaces back to pluses:
$encodedData = str_replace(' ', '+', $encodedData);
$decodedData = base64_decode($encodedData);
http://php.net/manual/en/function.base64-decode.php
I have a photo community (www.jungledragon.com) that allows users to upload photos. My platform is PHP/CodeIgniter.
As part of the upload process I'm already reading EXIF info using PHP's exif_read_data function, which works fine. I read camera details and show these on an info tab.
On top of that, users are expected to manually set the photo title, description and tags on the website after uploading the photo. However, some users manage these fields in their image management program, for example Lightroom. It would be great if I could read those as well; uploading would become a total joy.
I already improved my EXIF reading to read the "caption", so users don't have to set the image title after uploading anymore. Now I'm looking to read the keywords, which is where I am stuck. Here's a partial screenshot of an image in Lightroom:
I can read the Metadata section, but how do I read the keywords? The fact that they are not inside that metadata section makes me wonder whether it's possible at all. I've tried reading every value I can get (ANY_TAG, IFD0, EXIF, APP12) using exif_read_data, but the keywords are nowhere to be found.
Any thoughts?
As suggested you may have to use another method of reading metadata.
http://www.foto-biz.com/Lightroom/Exif-vs-iptc-vs-xmp
Image keywords may be stored in IPTC and not in EXIF. I don't know if there is a standard platform method for reading IPTC, but a quick Google search shows this:
http://php.net/manual/en/function.iptcparse.php
Try using PEL, a much more comprehensive library than exif_read_data() for EXIF data.
After a lot of research, I found the solution to get the keywords exported by Lightroom into a JPG file:
$image = getimagesize($imagepath, $info);
if (isset($info['APP13']))
{
    $iptc = iptcparse($info['APP13']);   // parse the IPTC block stored in the APP13 segment
    if (isset($iptc["2#025"]))           // 2#025 is the IPTC "Keywords" record
    {
        $keywordcount = count($iptc["2#025"]);
        for ($i = 0; $i < $keywordcount; $i++)
        {
            echo "keyword : " . $iptc["2#025"][$i] . "<br/>";
        }
    }
}
Okay, I have a question, guys. I want to remote upload (copy an image from a site to my server) MULTIPLE images by putting links into a TEXTAREA and hitting submit. I just don't know how to make this work with multiple images.
I am able to do it with a single image using the copy() function, but not for multiple entries in a TEXTAREA.
I also want to limit the remote uploading feature to 30 remote links, and one image should not exceed 10 MB, but I don't know how to start. I heard cURL can do this, and I also heard that file_get_contents() with file_put_contents() can do a similar thing, but I still cannot figure out how to do it myself.
Help anyone? :)
You can use the same procedure as you do now with a single image, but do it in a loop.
$lines = explode("\n", $_POST['textarea']);
if (count($lines) > 30) {
    die('Too many files');
}
foreach ($lines as $i => $line) {
    $srcfile = trim($line);
    if ($srcfile === '') {
        continue;
    }
    // copy $srcfile here (the local naming scheme is just an example)
    $dest = 'uploads/image' . $i . '.jpg';
    copy($srcfile, $dest);
    // check the size of the file with filesize() and drop it if it is over 10 MB
    if (filesize($dest) > 10 * 1024 * 1024) {
        unlink($dest);
    }
}
You need to parse the URLs out of the textarea. You could do this on the PHP side with a regular expression.
You could then examine the parsed URLs and array_slice() the first 30, or raise an error if there are more than 30.
You'd then need to copy the files from the remote server. You could inspect the Content-Length header to ensure each file is under 10 MB. You could get just the headers by using HEAD instead of GET.
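A sketch of that header check with get_headers(); the third (context) argument that switches it to a HEAD request needs PHP 7.1 or newer:
<?php
// Hypothetical sketch: ask the server for the Content-Length with a HEAD
// request before copying the file.
$url = trim($line);   // one URL taken from the textarea

$context = stream_context_create(array('http' => array('method' => 'HEAD')));
$headers = get_headers($url, 1, $context);   // 1 => associative array of headers
$size = isset($headers['Content-Length']) ? (int)$headers['Content-Length'] : null;

if ($size !== null && $size > 10 * 1024 * 1024) {
    echo "Skipping $url: larger than 10 MB\n";
} else {
    copy($url, 'uploads/' . basename(parse_url($url, PHP_URL_PATH)));
}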
I am not familiar with PHP but I suggest the following:
Solving the multiple-file upload issue:
split the content of the textarea on the line breaks,
then iterate over the lines to get each image,
and keep the size of each file in a variable. But how do you get the size?
You can make an exec (system) call to find out the file size (this requires a full image download, but it's the most convenient way), or you can make use of the Content-Length header value; if the content length is more than 10 MB, skip it and move to the next item.
How do you download the image?
Use file_put_contents, but make sure the content is written as binary data so the image content is preserved; a rough sketch of this step is shown below.
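A rough PHP sketch of that download step (the URL and file name are placeholders); file_get_contents() returns the raw bytes and file_put_contents() writes them unchanged, so no extra encoding step is needed:
<?php
// Hypothetical sketch: download one image and write the raw bytes to disk.
$url  = 'http://example.com/image.jpg';   // placeholder URL from the textarea
$data = file_get_contents($url);          // raw, binary-safe response body

if ($data !== false) {
    file_put_contents('image.jpg', $data);
}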