I want to use the font "SegoeUI" with the style combination "bold" + "italic" in my PDF. I have to use writeHTMLCell(), because HTML is what my editor (Quill) delivers. But only "italic" or "bold" works on its own, not the combination of both.
I converted the SegoeUI font files:
segoeui.ttf (regular)
segoeuib.ttf (bold)
segoeuii.ttf (italic)
segoeuibi.ttf (bold italic)
php tcpdf_addfont.php -b -t TrueTypeUnicode -f 32 -i .\segoeui\segoeui.ttf -o .\segoeui_out
php tcpdf_addfont.php -b -t TrueTypeUnicode -f 32 -i .\segoeui\segoeuib.ttf -o .\segoeui_out
php tcpdf_addfont.php -b -t TrueTypeUnicode -f 32 -i .\segoeui\segoeuibi.ttf -o .\segoeui_out
php tcpdf_addfont.php -b -t TrueTypeUnicode -f 32 -i .\segoeui\segoeuii.ttf -o .\segoeui_out
and copied the output files to "\tcpdf\fonts".
I set the font in code:
$pdf->SetFont('segoeui', '', 11);
And write the text with:
$text = '<p><strong><em>bold italic text</em></strong></p>';
$pdf->writeHTMLCell(0, 0, '', 0, $text, 0, 1, '', true, 'L');
The text in the output PDF is not printed bold and italic. I don't know how TCPDF decides which font file to use for bold+italic.
I tried the default font "Helvetica" and it works.
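For what it's worth, TCPDF appears to pick the variant by appending the lowercase style letters to the family name, so a 'BI' request looks for a "segoeuibi" font definition in the fonts directory. A minimal sketch of registering and using the four faces at runtime, assuming the TTF files sit in ./segoeui/ (the paths here are placeholders):

require_once 'tcpdf/tcpdf.php';

$pdf = new TCPDF();

// Register the four converted faces; TCPDF_FONTS::addTTFfont() returns the
// registered font name, e.g. "segoeuibi" for the bold-italic file.
foreach (['segoeui.ttf', 'segoeuib.ttf', 'segoeuii.ttf', 'segoeuibi.ttf'] as $ttf) {
    TCPDF_FONTS::addTTFfont('./segoeui/' . $ttf, 'TrueTypeUnicode', '', 32);
}

$pdf->AddPage();

// Requesting style 'BI' should resolve to "segoeuibi"; <strong><em> inside
// writeHTMLCell() resolves to the same bold-italic variant.
$pdf->SetFont('segoeui', 'BI', 11);
$text = '<p><strong><em>bold italic text</em></strong></p>';
$pdf->writeHTMLCell(0, 0, '', 0, $text, 0, 1, false, true, 'L');
$pdf->Output('test.pdf', 'I');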
I have a script taken from the correct answer at https://superuser.com/questions/1002562/convert-multiple-images-to-a-gif-with-cross-dissolve, but I am getting no results.
Even copying the code word for word does not work for me.
However, if I run $ffmpeg = shell_exec("ffmpeg -i images/image001.jpg -vf palettegen palette_test.png"); on a single image, it works fine. But because I need it for an animation, I need to create a palette for all the images.
Unless of course, on each image in the GIF sequence I can load a new palette?
$q = "ffmpeg -f image2 -framerate 0.5 -i ../view/client/files/".$dir."/%*.jpg -i palettePro.png -lavfi paletteuse -y .." . $save_path . '/' . $filename . " -report";
palettegen can analyze each frame and generate a matching sequence of per-frame palettes:
ffmpeg -i images/image%03d.jpg -vf palettegen=stats_mode=single palette_%03d.png
And then
ffmpeg -framerate 0.5 -i images/image%03d.jpg -framerate 0.5 -i palette_%03d.png -lavfi paletteuse=new=1 -y $filename
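If you want to drive those two steps from PHP like the rest of your script, a rough sketch (the images/ pattern and the output name are placeholders taken from the commands above):

// 1) generate one palette per frame
shell_exec('ffmpeg -y -i images/image%03d.jpg -vf palettegen=stats_mode=single palette_%03d.png');

// 2) apply the per-frame palettes while assembling the GIF
$filename = 'animation.gif'; // placeholder output name
shell_exec('ffmpeg -y -framerate 0.5 -i images/image%03d.jpg -framerate 0.5 -i palette_%03d.png'
    . ' -lavfi paletteuse=new=1 ' . escapeshellarg($filename));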
I am using the Laravel framework and also the FFmpeg PHP library. I have done almost 70% of the work, but the problem I face is showing a watermark in multiple areas of the video. I have added a watermark in the top-left corner and it works fine on that video. But I want to add a watermark in the top-left, bottom-left and bottom-right. I have used this code for the top-left watermark (for video):
$inputVideo = public_path('input/airplane_flight_airport_panorama_1080.mp4');
$outputVideo = public_path('uploads/output.mp4');
$watermark = public_path('input/watermark.jpg');
$wmarkvideo = "ffmpeg -i ".$inputVideo." -i ".$watermark." -filter_complex ". '"overlay=x=(main_w-overlay_w):y=(main_h-overlay_h)/(main_h-overlay_h)"'." ".$outputVideo;
exec($wmarkvideo );
Please help me: how can I add a watermark in the top-left, bottom-left and bottom-right areas? Thanks in advance :)
This is the ffmpeg command you would use for multiple watermarks
ffmpeg -i inputVideo -i watermark-tr -i watermark-tl -i watermark-br -i watermark-bl
-filter_complex "[0][1]overlay=x=W-w:y=0[tr];
[tr][2]overlay=x=0:y=0[tl];
[tl][3]overlay=x=W-w:y=H-h[br];
[br][4]overlay=x=0:y=H-h" outputfile
tr = top-right; tl = top-left; br = bottom-right; bl = bottom-left
With a center watermark as well:
ffmpeg -i inputVideo -i watermark-tr -i watermark-tl -i watermark-br -i watermark-bl -i watermark-c
-filter_complex "[0][1]overlay=x=W-w:y=0[tr];
[tr][2]overlay=x=0:y=0[tl];
[tl][3]overlay=x=W-w:y=H-h[br];
[br][4]overlay=x=0:y=H-h[bl];
[bl][5]overlay=x=(W-w)/2:y=(H-h)/2" outputfile
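A sketch of how that could be wired into the PHP/Laravel code from the question, using the same exec() approach (the four watermark file names are assumptions; reuse one file if the images are identical):

$inputVideo  = public_path('input/airplane_flight_airport_panorama_1080.mp4');
$outputVideo = public_path('uploads/output.mp4');
$wmTR = public_path('input/watermark_tr.jpg');
$wmTL = public_path('input/watermark_tl.jpg');
$wmBR = public_path('input/watermark_br.jpg');
$wmBL = public_path('input/watermark_bl.jpg');

// Chain the overlays exactly as in the command above: each overlay's output
// label feeds the next overlay in the chain.
$filter = '[0][1]overlay=x=W-w:y=0[tr];'
        . '[tr][2]overlay=x=0:y=0[tl];'
        . '[tl][3]overlay=x=W-w:y=H-h[br];'
        . '[br][4]overlay=x=0:y=H-h';

$cmd = 'ffmpeg -y -i ' . escapeshellarg($inputVideo)
     . ' -i ' . escapeshellarg($wmTR)
     . ' -i ' . escapeshellarg($wmTL)
     . ' -i ' . escapeshellarg($wmBR)
     . ' -i ' . escapeshellarg($wmBL)
     . ' -filter_complex ' . escapeshellarg($filter)
     . ' ' . escapeshellarg($outputVideo);

exec($cmd);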
I am trying to insert text overlay, and can do this, but I cannot add spaces to the text.
ffmpeg -i meme.mp4 -y -vf drawtext='/Users/me/Library/Fonts/Champagne & Limousines.ttf:text='testtext': fontcolor=white: fontsize=24' -codec:a copy outputtexttest.mp4 2>&1
The error when I make testtext test text is:
Unable to find a suitable output format for 'text: fontcolor=white: fontsize=24' text: fontcolor=white: fontsize=24: Invalid argument
Add double quotes around the text, not single quotes like you have:
echo shell_exec("$ffmpeg -i meme.mp4 -y -vf drawtext='/Users/me/Library/Fonts/Champagne & Limousines.ttf:text=''test + infinity spaces :) text'': fontcolor=white: fontsize=24' -codec:a copy outputtexttest.mp4 2>&1");
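Another option is to build the filter string in PHP and let escapeshellarg() do the shell quoting, so spaces in the text and in the font path survive; this sketch spells out fontfile= explicitly and uses placeholder text:

$font = '/Users/me/Library/Fonts/Champagne & Limousines.ttf';
$text = 'test text with spaces';

// Single quotes inside the filtergraph protect the spaces and the '&' from the
// filter parser; escapeshellarg() protects the whole argument from the shell.
$vf = "drawtext=fontfile='" . $font . "':text='" . $text . "':fontcolor=white:fontsize=24";

$cmd = 'ffmpeg -y -i meme.mp4 -vf ' . escapeshellarg($vf)
     . ' -codec:a copy outputtexttest.mp4 2>&1';

echo shell_exec($cmd);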
I am trying to generate a waveform image using ffmpeg.
I have successfully made a waveform image, however it doesn't look very nice...
I have been looking around to try and style the image to make it look nicer, however I have been unable to find any information on this or any tutorials on this.
I am using PHP and shell_exec to create the waveform.
I am aware that there are PHP libraries that can do this, but due to the file format this is a lengthy process.
The code I am using is as follows:
$command = 'convertvid\bin\ffmpeg -i Temp\\'.$file.' -y -lavfi showwavespic=split_channels=0:s='.$width.'x50 Temp\\'.$PNGFileName;
shell_exec($command);
Basically I would like to add a line through the middle as there are blank spots at the moment and would like to be able to set the background and wave colour.
Default waveform
ffmpeg -i input.wav -filter_complex showwavespic -frames:v 1 output.png
Notes
Notice the segment of silent audio in the middle (see "Fancy waveform" below if you want to see how to add a line).
The background is transparent.
Default colors are red (left channel) and green (right channel) for a stereo input. The color is mixed where the channels overlap.
You can change the channel colors with the colors option, such as "showwavespic=colors=blue|yellow". See a list of valid color names or use hexadecimal notation, such as #ffcc99.
See the showwavespic filter documentation for additional options.
If you want a video instead of an image use the showwaves filter.
Fancy waveform
ffmpeg -i input.mp4 -filter_complex \
"[0:a]aformat=channel_layouts=mono, \
compand=gain=-6, \
showwavespic=s=600x120:colors=#9cf42f[fg]; \
color=s=600x120:color=#44582c, \
drawgrid=width=iw/10:height=ih/5:color=#9cf42f@0.1[bg]; \
[bg][fg]overlay=format=auto,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=#9cf42f" \
-frames:v 1 output.png
Explanation of options
aformat downsamples the audio to mono. Otherwise, by default, a stereo input would result in a waveform with a different color for each channel (see Default waveform example above).
compand modifies the dynamic range of the audio to make the waveform look less flat. It makes a less accurate representation of the actual audio, but can be more visually appealing for some inputs.
showwavespic makes the actual waveform.
color source filter is used to make a colored background that is the same size as the waveform.
drawgrid adds a grid over the background. The grid does not represent anything, but is just for looks. The grid color is the same as the waveform color (#9cf42f), but opacity is set to 10% (@0.1).
overlay will place [bg] (what I named the filtergraph for the background) behind [fg] (the waveform).
Finally, drawbox will make the horizontal line so any silent areas are not blank.
Gradient example
Using the gradients filter:
ffmpeg -i input.mp3 -filter_complex "gradients=s=1920x1080:c0=000000:c1=434343:x0=0:x1=0:y0=0:y1=1080,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=#0000ff[bg];[0:a]aformat=channel_layouts=mono,showwavespic=s=1920x1080:colors=#0068ff[fg];[bg][fg]overlay=format=auto" -frames:v 1 output.png
Color background
ffmpeg -i input.opus -filter_complex "color=c=blue[color];aformat=channel_layouts=mono,showwavespic=s=1280x720:colors=white[wave];[color][wave]scale2ref[bg][fg];[bg][fg]overlay=format=auto" -frames:v 1 output.png
The scale2ref filter automatically makes the background the same size as the waveform.
Image background
Of course you can use an image or video instead for the background:
ffmpeg -i audio.flac -i background.jpg -filter_complex \
"[1:v]scale=600:-1,crop=iw:120[bg]; \
[0:a]showwavespic=s=600x120:colors=cyan|aqua[fg]; \
[bg][fg]overlay=format=auto" \
-q:v 3 showwavespic_bg.jpg
Getting waveform stats and data
Use the astats filter. Many stats are available: RMS, peak, min, max, difference, etc.
RMS level per audio frame
Example to get standard RMS level measured in dBFS per audio frame:
ffprobe -v error -f lavfi -i "amovie=input.wav,astats=metadata=1:reset=1" -show_entries frame_tags=lavfi.astats.Overall.RMS_level -of csv=p=0 > rms.log
Peak level per second
Add the asetnsamples filter.
ffprobe -v error -f lavfi -i "amovie=input.wav,asetnsamples=44100,astats=metadata=1:reset=1" -show_entries frame_tags=lavfi.astats.Overall.Peak_level -of csv=p=0
Same as above but with timestamps
ffprobe -v error -f lavfi -i "amovie=input.wav,asetnsamples=44100,astats=metadata=1:reset=1" -show_entries frame=pkt_pts_time:frame_tags=lavfi.astats.Overall.Peak_level -of csv=p=0
Output to file
Just append > output.log to the end of your command:
ffprobe -v error -f lavfi -i "amovie=input.wav,asetnsamples=44100,astats=metadata=1:reset=1" -show_entries frame_tags=lavfi.astats.Overall.RMS_level -of csv=p=0 > output.log
JSON
ffprobe -v error -f lavfi -i "amovie=input.wav,asetnsamples=44100,astats=metadata=1:reset=1" -show_entries frame_tags=lavfi.astats.Overall.RMS_level -of json > output.json
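Since the rest of this thread drives ffmpeg/ffprobe from PHP, a rough sketch of reading those per-frame RMS values back in (input.wav is a placeholder; ffprobe's JSON writer normally puts the tags under a "frames" array):

$cmd = 'ffprobe -v error -f lavfi -i "amovie=input.wav,asetnsamples=44100,astats=metadata=1:reset=1"'
     . ' -show_entries frame_tags=lavfi.astats.Overall.RMS_level -of json';

$data = json_decode(shell_exec($cmd), true);

$rms = [];
foreach ($data['frames'] ?? [] as $frame) {
    // each frame carries the metadata injected by astats
    $rms[] = (float) $frame['tags']['lavfi.astats.Overall.RMS_level'];
}
print_r($rms);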
This is my ffmpeg process:
exec("/usr/local/bin/ffmpeg -y -i source.avi dest.mp4 >/dev/null 2>/dev/null &
Now, I wish to execute a PHP file after the conversion is complete. Logically, this is what I have:
exec("/usr/local/bin/ffmpeg -y -i source.avi dest.mp4 >/dev/null 2>/dev/null ; php proceed.php &
This doesn't work though, since PHP then holds up the request and waits until the ffmpeg conversion is complete. What I want is basically to call proceed.php after the conversion completes, with both running in the background.
If anyone can provide the Windows server solution, that will be awesome too.
Write an external (bash/php) script that executes both the ffmpeg and the php process, and tack an & onto the end when you call it.
For windows, please open a new question on SO.
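A minimal sketch of that idea without a separate script file, chaining the two commands inside one backgrounded shell so exec() returns immediately (paths as in the question):

// ffmpeg runs first; proceed.php only runs if the conversion succeeds.
// Redirecting output and backgrounding the whole chain keeps PHP from waiting.
$cmd = '/bin/sh -c ' . escapeshellarg(
    '/usr/local/bin/ffmpeg -y -i source.avi dest.mp4 && php proceed.php'
) . ' > /dev/null 2>&1 &';

exec($cmd);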
To add on to what Evert had posted, here is an example of what I use for my FFMPEG bash script... it's far from done (it doesn't alert if the program crashes, for instance) but it's somewhere to start:
#!/bin/sh
## Set our paths
FFMPEG_PATH=/usr/local/bin
SITE_PATH=path_to_file
VIDEO_PATH=$SITE_PATH/public_html/videos
## Make sure we have permissions to do this stuff
chown -R wwwrun:www $VIDEO_PATH/$2
chmod -R 765 $VIDEO_PATH/$2
## Set the options for mp4 compression
options="-vcodec libx264 -b 512k -ar 22050 -flags +loop+mv4 -cmp 256 \
-partitions +parti4x4+parti8x8+partp4x4+partp8x8+partb8x8 \
-me_method hex -subq 7 -trellis 1 -refs 5 -bf 3 \
-flags2 +bpyramid+wpred+mixed_refs+dct8x8 -coder 1 -me_range 16 \
-g 250 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -qmin 10 \
-qmax 51 -qdiff 4"
## Start the conversion.
$FFMPEG_PATH/ffmpeg -y -i $VIDEO_PATH/$2/original/$1 -an -pass 1 -threads 2 $options $VIDEO_PATH/$2/$2.mp4 2> $VIDEO_PATH/$2/pass_one.log
$FFMPEG_PATH/ffmpeg -y -i $VIDEO_PATH/$2/original/$1 -acodec libfaac -ab 96k -pass 2 -threads 2 $options $VIDEO_PATH/$2/$2.mp4 2> $VIDEO_PATH/$2/pass_two.log
## Create the thumbnail for the video
. $SITE_PATH/bin/create_thumbnail $2 00:00:15 2> $VIDEO_PATH/$2/generate_thumbnails.log
## Clean up the log files that were created
## find /log_path/ -name *log* -exec rm {} \;
## Update database and send email that we're done here.
php $SITE_PATH/public_html/admin/includes/video_status.php converting_finished $2
And this all gets called from a PHP file that does (along with some other code):
proc_close(proc_open(server_path.'/bin/convert_video_mp4 '.mysql_result($next_video, 0, "uid").'.'.mysql_result($next_video, 0, "original_ext").' '.mysql_result($next_video, 0, "uid").' &', array(), $foo));
PS - I know the mysql extension is on its way out; I haven't been using or updating this code in a while, so please adapt it to your needs.