When I try to upload videos captured from my iPhone in my app, the server performs a conversion from .mov to .mp4 so that the video can be played on other platforms. However, the problem is that when I shoot the video in portrait orientation, convert it (using ffmpeg), and play it back from the server, it appears rotated. Any ideas?
FFmpeg changed its default behavior in 2015 to auto-rotate video sources that carry rotation metadata. This was released as v2.7.
If your ffmpeg version is v2.7 or newer but the rotation still comes out wrong, the problem is likely that you are also applying your own rotation based on the metadata. The same logic then gets applied twice, changing or cancelling out the rotation.
In addition to removing your custom rotation (recommended), there's an option to turn auto-rotation off with -noautorotate:
ffmpeg -noautorotate -i input.mp4...
This will also work in some older releases.
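So with a reasonably recent build, a plain conversion should already come out upright with no rotation filter at all; a minimal sketch (the codec choices here are just an illustration):
ffmpeg -i input.mov -c:v libx264 -c:a aac output.mp4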
For the sake of completeness, the reason this happens is that iPhones only actually capture video in one fixed orientation. The measured orientation is then recorded in Apple-specific metadata.
The effect is that QuickTime Player reads the metadata and rotates the video to the correct orientation during playback, but other software (e.g., VLC) does not, and shows it as oriented in the actual codec data.
This is why rotate=90 (or vflip, or transpose, etc.) will work for some people but not others. Depending on how the camera is held during recording, the rotation needed could be 90, 180, or even 270 degrees. Without reading the metadata, you're just guessing at how much rotation is necessary, and the change that fixes one video will fail for another.
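If you want to check whether a given file actually carries the tag, ffprobe can print it; a quick sketch (newer ffmpeg builds may report the rotation as Display Matrix side data in the regular ffprobe output instead of a rotate tag):
ffprobe -v error -select_streams v:0 -show_entries stream_tags=rotate -of default=noprint_wrappers=1:nokey=1 input.mov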
What you can also do is remove the QuickTime-specific metadata when rotating the .mov.
This will make sure that the video is rotated the same way in VLC and QuickTime:
ffmpeg -i in.mov -vf "transpose=1" -metadata:s:v:0 rotate=0 out.mov
Here's the documentation on the -metadata option (from http://ffmpeg.org/ffmpeg.html):
-metadata[:metadata_specifier] key=value (output,per-metadata)
Set a metadata key/value pair.
An optional metadata_specifier may be given to set metadata on streams or chapters. See -map_metadata documentation for details.
This option overrides metadata set with -map_metadata. It is also possible to delete metadata by using an empty value.
For example, for setting the title in the output file:
ffmpeg -i in.avi -metadata title="my title" out.flv
To set the language of the first audio stream:
ffmpeg -i INPUT -metadata:s:a:1 language=eng OUTPUT
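Tying that back to the rotation question: using the "empty value" rule above, you can also drop the QuickTime rotate tag without re-encoding. A sketch, assuming you only want to touch the metadata and not the pixels:
ffmpeg -i in.mov -c copy -metadata:s:v:0 rotate= out.mov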
Depending on which version of ffmpeg you have and how it's compiled, one of the following should work...
ffmpeg -i input.mov -vf "transpose=1" output.mp4
...or, on very old builds...
ffmpeg -i input.mov -vfilters "rotate=90" output.mp4
Use the vflip filter:
ffmpeg -i input.mov -vf "vflip" output.mp4
The rotate filter did not work for me, and transpose=1 was rotating by 90 degrees.
So - I too ran into this issue, and here's my $0.02 on it:
1.) Some videos DO have orientation/rotation metadata, some don't:
The MTS (Sony AVCHD) and AVI files I have DO NOT have an orientation tag.
MOVs and MP4s (iPad/iPhone or Samsung Galaxy Note 2) DO HAVE it.
You can check the setting via 'exiftool -Rotation file'.
My videos often have 90 or 180 as the rotation.
2.) ffmpeg - regardless of what the man page says about the metadata option - just doesn't EVER seem to set it in the output file; the rotation tag is ALWAYS '0'.
It correctly reports the rotation in its console output, but the tag is never set so that exiftool can read it. But hey - at least it's there and always 0.
3.) rotation angles:
If you want to rotate +/- 90 degrees: transpose=1 is 90 clockwise, transpose=2 is 90 counter-clockwise.
If you need 180 degrees, just add the transpose filter TWICE.
Remember - it's a filter chain you specify. :-) - see further down.
4.) rotate then scale:
This is tricky, because you quickly run into MP4 output format violations.
Let's say you have a 1920x1080 MOV.
Rotating by 90 gives 1080x1920.
Then we rescale to -1:720 -> 1080*(720/1920) = 405 pixels wide.
And 405 is NOT divisible by 2 - ERROR. You have to fix this manually.
Fixing it automatically requires a bit of shell-script work - see the sketch after the full command below.
5.) scale then rotate:
You could do it this way, but then you end up with 720x1280. Yuck.
The filter example for that would be:
"-vf yadif=1,scale=-1:720,transpose=1"
It's just not what I want, but it could work quite OK.
Putting it all together - NOTE the 'intentionally WRONG Rotation tag', just to demonstrate that it won't show up AT ALL in the output!
This takes the input, rotates it by 180 degrees, THEN rescales it, resetting the rotation tag. Typically iPhone/iPad 2 material can come out rotated 180 degrees.
You can just leave '-metadata Rotation=x' out of the line...
/usr/bin/ffmpeg -i input-movie.mov -timestamp "2012-06-23 08:58:10" -map_metadata 0:0 -metadata Rotation=270 -sws_flags lanczos -vcodec libx264 -x264opts me=umh -b 2600k -vf yadif=1,transpose=1,transpose=1,scale=1280:720 -f mp4 -y output-movie.MP4
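And here is a rough sketch of the shell-script work mentioned in point 4, computing an even target width instead of fixing it by hand (assumes ffprobe is available; newer ffmpeg builds can also do this rounding for you with scale=-2:720):
H=720
# width after transpose = original height * H / original width, rounded down to an even number
W=$(ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 input-movie.mov | awk -F, -v h=$H '{printf "%d", int($2*h/$1/2)*2}')
ffmpeg -i input-movie.mov -vf "yadif=1,transpose=1,scale=${W}:${H}" output-movie.mp4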
I have multiple devices - a set-top box, iPad 2, Note 2 - and I convert ALL my input material (regardless of whether it's MP4, MOV, MTS or AVI) to 720p MP4, and so far ALL the resulting videos play correctly (orientation and sound) on every device.
Hope it helps.
For including my portrait-format iPhone videos in web pages, I just discovered the following recipe for getting .mp4 files that display in portrait.
Step 1: In QuickTime Player, Export your file to 480p (I assume that 720p or 1080p would work as well). You get a .mov file again.
Step 2: Take the new file in QT Player, and export to “iPad, iPhone…”. You get a .m4v file.
Step 3: I'm using Miro Video Converter, but probably any readily available converter will work, to get your .mp4 file.
Works like a (long-winded) charm.
I filmed the video with an iPad 3 and it was oriented upside down, which I suppose is a common situation on Apple devices of certain versions. Besides that, the 3-minute MOV file (1920x1090) took about 500 MB, which made it hard to share. I had to convert it to MP4, and after going through all the threads I found on Stack Overflow, here's the final ffmpeg command I used (ffmpeg v2.8.4):
ffmpeg -i IN.MOV -s 960x540 -metadata:s:v rotate="0" -acodec libmp3lame OUT.mp4
I suppose you can keep just '-metadata:s:v rotate="0"' if you don't need the resize and the audio codec change. Note that if you resize the video, the width and height should be evenly divisible by 4.
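If you'd rather not work out the numbers by hand, the scale filter also accepts expressions; a variant of the command above that halves the frame and rounds each dimension down to a multiple of 4 (the exact divisibility requirement depends on the encoder, so treat this as a sketch):
ffmpeg -i IN.MOV -vf "scale=trunc(iw/2/4)*4:trunc(ih/2/4)*4" -metadata:s:v rotate="0" -acodec libmp3lame OUT.mp4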
Although the topic is old, hope this will help someone.
Get the latest ffmpeg version: https://www.ffmpeg.org/download.html
The command that worked for me (to rotate 180 degrees):
ffmpeg -noautorotate -i input.mp4 -filter:v "rotate=PI" output.mp4
The rotate filter takes the angle in radians, so an angle in degrees is written as PI/180*degrees,
for example:
-filter:v "rotate=45*PI/180" for 45 degrees
A nice explanation is here:
https://superuser.com/questions/578321/how-to-rotate-a-video-180-with-ffmpeg
Or... to simply change the tag in an existing file:
Read the current rotation
exiftool -Rotation <file>
then, for example:
exiftool -Rotation=180 <file>
to set it to 180
Related
I have some PDF files I need to crop (crop to the trimbox, etc.), which I can do with the following command:
convert -define pdf:use-trimbox=true -density 300 original.pdf outcome.pdf
It does the job, however the outcome.pdf is not as sharp as the original PDF. When I crop them in my desktop software (Acrobat Pro) the result is the same quality, but with ImageMagick I cannot keep the same quality in the output.
My question is: how can I crop a PDF page without compromising the quality?
I have been searching and trying different settings for weeks but have not been successful.
Most likely the problem is that ImageMagick has Ghostscript render the PDF to a bitmap, and then exports that bitmap wrapped up in a PDF file. Without seeing the original I can't say for sure, but if the original contained JPEG images, then most likely you are ending up with JPEG compression being applied twice, or simply the rendering itself is causing the problem.
Your best bet is going to be to use a tool which can simply apply a CropBox to the page(s). You can do this with Ghostscript, for example (which may also modify the PDF in other ways, including the double JPEG quantisation, so beware).
gs -sDEVICE=pdfwrite \
-sOutputFile=cropped.pdf \
-dBATCH -dNOPAUSE \
-c "<</ColorImageFilter /FlateEncode>> setdistillerparams" \
-f <input.pdf> \
-c "[ /CropBox [ 0 0 100 100] /PAGES pdfmark" \
-f
The first section between -c and -f tells the pdfwrite device to use FlateEncode for colour images; the default is JPEG, and using Flate ensures you don't get JPEG quantisation applied twice.
The second section between -c and -f tells the pdfwrite device to write a CropBox to the file and to make it 0,0 to 100,100. The units are the usual PDF units of 1/72 inch, and you can use fractional values.
I'm sure there are other tools which will do this, possibly even more easily.
Have you tried increasing the density? That's what it's for:
http://www.imagemagick.org/script/command-line-options.php#density
Otherwise try:
-quality 100
From:
Convert PDF to image with high resolution
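Combining both suggestions with the command from the question might look like this (600 DPI is just an example value; a higher density means a sharper but larger result):
convert -define pdf:use-trimbox=true -density 600 -quality 100 original.pdf outcome.pdf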
So, I have kind of accepted this task at work, but I'm really not sure if it's possible.
We are going to build a website where users can upload videos from their computers and mobile phone browsers. The video files can have a wide range of aspect ratios, widths, heights, codecs and file formats.
I will have access to ffmpeg via PHP's exec command on a web server.
Is it possible to use this to convert the user files to one file format that works on computers, Android and iPhone?
The requirement is that we can set a max width to which the video will be scaled, with the height adjusted dynamically to match.
Does anyone know if this can be done, and done in a reasonable amount of time? I have 2 days for the project. If so, some pointers in the right direction would be nice.
I had the same problem but solved it by using HandBrake, the open source video transcoder:
https://handbrake.fr/
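If you need this on a server rather than through the desktop GUI, HandBrake also ships a command-line build; a minimal sketch (the preset name is only an example - run HandBrakeCLI --preset-list to see what your version offers):
HandBrakeCLI -i input.mov -o output.mp4 --preset "Fast 720p30"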
If your target can only be one file format, then I would choose MP4 with the H.264 baseline profile. (However, some browsers won't play it, which is why the HTML <video> tag accepts multiple <source> elements, usually including WebM and Ogg video as well...)
Using ffprobe -show_streams $uploadedFile you can get the dimensions (and aspect ratio) of the file. Using some math you can then compute the new dimensions based on your needs.
$newDim = $new_width.":".$new_height;
$result = shell_exec("/usr/bin/ffmpeg -i $uploadedFile -f mp4 \
    -c:a libfdk_aac -b:a 128k -c:v libx264 -vprofile baseline \
    -movflags faststart -vf scale=$newDim $outputFile");
Here is the breakdown:
-f mp4 > format mp4
-c:a libfdk_aac > audio codec
-c:v libx264 > video codec
-vprofile baseline > minimal codec features for mobile compatibility
-movflags faststart > put the moov atom at the beginning of the file
$outputFile > should have '.mp4' as the file extension
Of course the devil is in the details (and the number of processing cores you can throw at an online converter), but this will get you up and running at least.
Edit: Actually answered the question. :)
By the way, ffmpeg does offer the scale filter via the -vf flag: -vf scale=320:-1, but sometimes that gives you a dimension not divisible by 2, which throws an error in x264 encoding. It's better to do the math yourself.
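A rough sketch of that math in shell form (640 is a hypothetical max width; in PHP you would do the same arithmetic on the ffprobe values before building the command string):
MAX_W=640
IFS=, read -r W H < <(ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 input.mp4)
NEW_W=$MAX_W
NEW_H=$(( (H * MAX_W / W) / 2 * 2 ))   # scale the height proportionally, then force it even
echo "scale=${NEW_W}:${NEW_H}"         # e.g. scale=640:360 for a 1920x1080 source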
I am using PHP-FFMpeg in a Laravel project to do multiple things: probe, extract a frame and encode. I am having an issue when creating a frame from the uploaded video file.
This is how the frame is created:
$video = $ffmpeg->open($destinationPath.'/'.$filename);
$video
->frame(FFMpeg\Coordinate\TimeCode::fromSeconds(10))
->save(public_path().$frame_path);
This sometimes works and creates the frame, but other times it does not. I noticed that this bug comes up when I am trying to open a .mov file.
It's possible that your version of ffmpeg does not support the codec that is used in the source video file, and hence it is not able to decompress the video and extract an image.
You could try processing the file from the command line to see if you can extract an image that way, and ffmpeg may give you some more information on the problem.
An example command line to extract a png frame from a video file
ffmpeg -y -ss 30 -i [source_file] -vframes 1 [target_file]
Add -f image2 as an output option if your output name is a variable.
The PHP-FFMpeg library places the -ss argument before the input file by default, which requires the seek to be accurate in order to obtain the frame. I ran into this problem with an mkv file; formats such as mkv and mov cannot always be seeked accurately.
https://github.com/PHP-FFMpeg/PHP-FFMpeg/blob/master/src/FFMpeg/Media/Frame.php#L79
You need to pass true as the second argument to the save function in order to get the frame closest to the given point; it changes the position of the -ss argument in the ffmpeg command.
-ss position (input/output)
When used as an input option (before -i), seeks in this input file to position. Note that in most formats it is not possible to seek exactly, so ffmpeg will seek to the closest seek point before position. When transcoding and -accurate_seek is enabled (the default), this extra segment between the seek point and position will be decoded and discarded. When doing stream copy or when -noaccurate_seek is used, it will be preserved.
When used as an output option (before an output filename), decodes but discards input until the timestamps reach position.
position must be a time duration specification, see (ffmpeg-utils) the Time duration section in the ffmpeg-utils(1) manual.
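To make the two placements concrete, here is a hedged pair of commands (30 seconds and the file names are just examples):
# -ss before -i: seek in the input first (fast); with the default -accurate_seek the
# segment between the seek point and 30s is decoded and discarded, as the docs describe.
ffmpeg -ss 30 -i input.mkv -vframes 1 frame_input_seek.png
# -ss after -i: decode and discard everything until the timestamps reach 30s (slower).
ffmpeg -i input.mkv -ss 30 -vframes 1 frame_output_seek.png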
Here is the code I've been using with PHP:
https://totaldev.com/extract-image-frame-video-php-ffmpeg/
<?php
// Full path to ffmpeg (make sure this binary has execute permission for PHP)
$ffmpeg = "/full/path/to/ffmpeg";
// Full path to the video file
$videoFile = "/full/path/to/video.mp4";
// Full path to output image file (make sure the containing folder has write permissions!)
$imgOut = "/full/path/to/frame.jpg";
// Number of seconds into the video to extract the frame
$second = 0;
// Setup the command to get the frame image
$cmd = $ffmpeg." -i \"".$videoFile."\" -an -ss ".$second.".001 -y -f mjpeg \"".$imgOut."\" 2>&1";
// Get any feedback from the command
$feedback = `$cmd`;
// Use $imgOut (the extracted frame) however you need to
// ...
I am trying to use the code below to merge an audio file at a specific time (the 6th second of my input video) and create a new video output file, but I can't make it work.
<?php
exec("/usr/local/bin/ffmpeg -sameq -i /home/xxx/public_html/xxx/video/full.mp4 -itsoffet 6 -i /home/xxx/public_html/xxx/sounds/names/george.mp3 /home/xxx/public_html/xxx/upload/sample.mpg");
?>
Thank you a lot!
Firstly, you have a typo in your command: -itsoffet should be -itsoffset.
Secondly, it seems there's a bug affecting -itsoffset.
Take a look at these identical/similar questions:
Add audio (with an offset) to video with FFMPEG
delay audio with ffmpeg
Add multiple audio files to video at specific points using FFMPEG
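For reference, the command with the typo fixed would look roughly like this (paths shortened, -sameq dropped since it has been removed from ffmpeg, and the output changed to .mp4 for simplicity); if -itsoffset still doesn't shift the audio for you, the linked questions cover workarounds such as the adelay filter:
# -itsoffset applies to the input that follows it, so the MP3 starts 6 seconds in
ffmpeg -i full.mp4 -itsoffset 6 -i george.mp3 -map 0:v -map 1:a -c:v copy sample.mp4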
A while back I used a PNG optimisation service called (I think) "smush it". You fed it a web link and it returned a zip of all the PNG images with their file sizes nicely, well, smushed...
I want to implement a similar optimisation feature as part of my website's image upload process; does anyone know of a pre-existing library (PHP or Python preferably) that I can tap into for this? A brief Google search has pointed me towards several command-line tools, but I'd rather not go down that route if possible.
Execute these command-line tools with PHP:
pngcrush -rem gAMA -rem cHRM -rem iCCP -rem sRGB -brute -l 9 -max -reduce -m 0 -q IMAGE
optipng -o7 -q pngout.png
pngout pngout.png -q -y -k0 -s0
advpng -z -4 pngout.png > /dev/null
pngcrush
OptiPNG
pngout
advpng
As long as your PHP is compiled with GD2 support (quite common nowadays):
<?php
$image = imagecreatefromstring(file_get_contents('/path/to/image.original.png'));
imagepng($image, '/path/to/image.smushed.png', 9);
This will read in any image format GD2 understands (not just PNG) and output a PNG compressed at the maximum zlib compression level without sacrificing quality.
It might be of less use today than years ago, though; most image editors already do this, since the compression doesn't cost as much CPU-wise as it used to.
Have you heard of PNGCrush? You could check out the source, part of PNG and MNG Tools at SourceForge, and transcribe or wrap it in Python.
I would question the wisdom of throwing away other chunks (like gAMA and iCCP), but if that's what you want to do it's fairly easy to use PyPNG to remove chunks:
#!/usr/bin/env python
import png
import sys

# Read the PNG from stdin, write the stripped copy to stdout.
inp = sys.stdin
out = sys.stdout

def critical_chunks(chunks):
    # Keep only critical chunks: the four-letter type of a critical
    # chunk starts with an uppercase letter (IHDR, PLTE, IDAT, IEND).
    for chunk_type, data in chunks:
        if chunk_type[0].isupper():
            yield chunk_type, data

chunks = png.Reader(file=inp).chunks()
png.write_chunks(out, critical_chunks(chunks))
The critical_chunks function essentially filters out all but the critical PNG chunks (the four-letter type of a critical chunk starts with an uppercase letter).
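If you saved that snippet as, say, strip_chunks.py (a hypothetical name), you would use it as a stdin/stdout filter:
python strip_chunks.py < original.png > stripped.png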