What is the main difference between IMG_CROP_THRESHOLD and IMG_CROP_SIDES?
I have been trying to roughly crop the background out of a photo of a document, but either way I am not able to get the outcome I am aiming for.
Also, I took the null, 16777215 from https://www.php.net/manual/en/function.imagecropauto.php, but I honestly don't understand what they actually do. I would have expected to put something like 0.1, #FFFFFF to crop out a background that is white or close to it. What is the null, 16777215 all about?
$cropped = imagecropauto($img, IMG_CROP_THRESHOLD, null, 16777215);
$cropped = imagecropauto($img, IMG_CROP_SIDES);
IMG_CROP_SIDES works by automatically calculating the average color of the pixels around the border of an image and crops off anything within 50% of that value.
Correction
IMG_CROP_SIDES uses gdGuessBackgroundColorFromCorners, which essentially finds the distance from the corner colors to the closest color in the existing palette and then uses that distance for pixel selection when cropping.
IMG_CROP_THRESHOLD does NOT calculate the background color, but it does provide more flexibility, as it allows the dev to specify both the border color and the threshold. Documentation on the threshold is poor, stating it's a percentage; in practice this means an integer or float between 0 and 100 (e.g. 25, not 0.25).
If IMG_CROP_SIDES is cutting into a bright image, use IMG_CROP_THRESHOLD instead.
For example, with a threshold of 25, approximately anything lighter than #E8E8E8 is treated as border.
$cropped = imagecropauto($img, IMG_CROP_THRESHOLD, 25, 16777215);
https://github.com/libgd/libgd/blob/167ea1f4f0003f3e9f7ca1e586189e99cf33d47f/src/gd.c#L460
https://github.com/libgd/libgd/blob/1e47a89a65d49a9003d8365da4e26a7c1a32aa51/src/gd_crop.c#L112
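For what it's worth, 16777215 is simply 0xFFFFFF (white) written as a decimal integer. Here's a minimal sketch of the threshold approach; the input and output filenames are made up:

// Minimal sketch: crop a near-white background with a 25% threshold ("scan.png" is hypothetical).
$img = imagecreatefrompng('scan.png');
// 16777215 == 0xFFFFFF (white); the threshold is a value between 0 and 100, not a fraction.
$cropped = imagecropauto($img, IMG_CROP_THRESHOLD, 25, 0xFFFFFF);
if ($cropped !== false) { // imagecropauto() returns false on failure
    imagepng($cropped, 'scan_cropped.png');
    imagedestroy($cropped);
}
imagedestroy($img);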
I have 2 images I am placing on top of a 1080x1920 canvas.
One is a rectangle that is 800x400, and it is sitting on the 1080x1920 canvas with top-left coordinates of x=140 and y=1200.
Then I have another image that is the same size as the canvas, 1080x1920, but it also has a rectangle on it at the exact same coordinates as the first rectangle. I am overlaying this 1080x1920 image at x=0 and y=0 on the canvas so that the rectangle already in this image lines up perfectly with the rectangle that is already placed on the canvas.
My problem is, I need to apply a rotation to both of these and the black and red rectangles need to match up in positioning AFTER the rotation is applied. Could be any rotation, but let's say it is a 15 degree rotation.
When each element is placed on the canvas and then the 15 degree rotation is applied, the rectangles no longer align because of the difference in image size and the offset in rotation as they both rotate around the center point which looks to be my only option in this case.
So I am hoping to sort out a formula I can use that would rectify the positioning of the 1080x1920 image so that the object already embedded in that image lines up with the separately overlaid image.
There are of course other ways to deal with this problem, but right now, they would make things quite a bit more difficult, so I wanted to see if this was possible to calculate first.
I have tried several ways to calculate this, but am not super mathematically proficient, so I am grasping at straws at best.
Oh and because I am not extremely mathematically proficient, any dumbing-down of mathematical terms is appreciated. ;)
Oh and possibly this post answers this question, but I can't wrap my head around whether or not it does, so if someone can let me know if it does, I will try harder to understand and apply it to my particular case.
How to recalculate the coordinates of a point after scaling and rotation?
Any rotation is done around a "center of rotation". You don't say which centers you use, but they can be:
Center of the canvas.
Center of each image (the middle point of its four corners).
Some corner.
Any other point.
If both rotations are not the same, then no match is possible.
It seems you use the center of each image. Then, to match the second rectangle to the first one, after you rotate the first image you must do the following, in this order:
Translate the second image so that its center of rotation is exactly the same as the center of rotation of the first image. The translation vector is the difference between the X,Y coordinates of the two centers.
Rotate the second image by the same angle as the first image's rotation.
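As a small worked example with the numbers from the question (assuming, as above, that the centers of rotation are the image centers):

// Hypothetical sketch: treat each image's center as its center of rotation.
$cx1 = 140 + 800 / 2;  $cy1 = 1200 + 400 / 2;   // center of the 800x400 rectangle as placed (540, 1400)
$cx2 = 1080 / 2;       $cy2 = 1920 / 2;         // center of the full-canvas 1080x1920 image (540, 960)
// Step 1: translate the second image by this vector so the two centers coincide.
$dx = $cx1 - $cx2;     // 0
$dy = $cy1 - $cy2;     // 440
// Step 2: rotate both images by the same angle about that shared center.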
This boils down to tracking where the original (0,0) points are on the two images after rotation.
Let's define the problem a bit cleaner:
red.png: 800x400
black.png: 1080x1920
rotate both by 15° (i.e. θ = 15·π/180 rad) with the rotate filter (assuming the actual angle stays within -90° and 90°)
how to place a rotated red.png on rotated black.png at the ORIGINAL top-left coordinates (x=140,y=120)
Consider 2 FFmpeg rotation commands:
ffmpeg -i red.png -vf "rotate=15*PI/180:ow=hypot(iw\,ih):oh=ow" -frames:v 1 red_rotated.png
ffmpeg -i black.png -vf rotate=15*PI/180 -frames:v 1 black_rotated.png
Note that red_rotated.png is enlarged to inscribe the red rectangle while black_rotated.png keeps its original size. Now, the question is: where are the original top-left corners now?
red_rotated.png:
0 < θ < π/2 cases: (xr,yr) = (h sin(θ), 0)
-π/2 < θ < 0 cases: (xr,yr) = (0, w sin(θ))
black_rotated.png: Same as red_rotated.png but now cropped to the original size
new size: ow = w cos(θ) + h sin(|θ|), oh = w sin(|θ|) + h cos(θ)
size delta: dw = (ow - w)/2, dh = (oh - h)/2
0 < θ < π/2 cases: (xb, yb) = (h sin(-θ) - dw, -dh)
-π/2 < θ < 0 cases: (xb, yb) = (-dw, w sin(θ) - dh)
Now, where is the insertion coordinate (x,y) = (140,120) on black_rotated.png:
rotate wrt the original corner: (x1,y1) = (x cos(θ) - y sin(θ), x sin(θ) + y cos(θ))
shift wrt the new black corner: (x2,y2) = (x1 + xb, y1 + yb)
shift wrt the new red corner: (x3,y3) = (x2 - xr, y2 - yr)
Accordingly, overlaying red_rotated.png with the offset (x3,y3) onto black_rotated.png should get you the result you want.
Disclaimer: I have not verified my math, but this should be a good starting point.
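To make the bookkeeping concrete, here is the same chain written out in PHP for the 0 < θ < π/2 case, using the numbers and formulas above exactly as given (so equally unverified):

// Sketch of the corner tracking above (0 < θ < π/2 case), formulas as given.
$theta = 15 * M_PI / 180;
$rw = 800;  $rh = 400;    // red.png dimensions
$bw = 1080; $bh = 1920;   // black.png dimensions
$x  = 140;  $y  = 120;    // insertion coordinates as used above
// original top-left corner of red.png inside red_rotated.png
$xr = $rh * sin($theta);  $yr = 0;
// black_rotated.png is cropped back to its original size, so work out the crop offsets
$ow = $bw * cos($theta) + $bh * sin($theta);
$oh = $bw * sin($theta) + $bh * cos($theta);
$dw = ($ow - $bw) / 2;    $dh = ($oh - $bh) / 2;
$xb = $bh * sin(-$theta) - $dw;  $yb = -$dh;
// track the insertion point through the rotation and the two shifts
$x1 = $x * cos($theta) - $y * sin($theta);  $y1 = $x * sin($theta) + $y * cos($theta);
$x2 = $x1 + $xb;  $y2 = $y1 + $yb;
$x3 = $x2 - $xr;  $y3 = $y2 - $yr;   // overlay red_rotated.png at this offset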
I'm using imagettftext() to write dynamic text on an image and I want it to fit my image width.
How can I calculate the font size from the text length?
You can calculate the bounding box of TTF text before outputting it with the imagettfbbox function. Unfortunately there is no direct way of scaling to fit a width, so you'll have to do it yourself.
One way of doing it is to pass the text with a default font size of, say, 20 to imagettfbbox and retrieve the width from it. You can then calculate how much smaller or bigger the text should be to fit the size you want by calculating a scale factor:
scale = targetWidth / bboxWidth;
Then draw the text with the proper size:
fontSize = 20 * scale;
using the imagettftext function. Fonts don't scale 100% perfectly this way, but you'll get a very good approximation.
See the documentation of imagettfbbox here.
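Putting that together, a minimal sketch (the font path, text, and target width here are made-up values):

// Sketch: scale the font size so the text fits a target width.
$font        = '/path/to/font.ttf';   // hypothetical TTF file
$text        = 'Dynamic text';
$targetWidth = 600;
$baseSize  = 20;
$bbox      = imagettfbbox($baseSize, 0, $font, $text);
$bboxWidth = abs($bbox[2] - $bbox[0]);        // lower-right x minus lower-left x
$fontSize  = $baseSize * ($targetWidth / $bboxWidth);
$img   = imagecreatetruecolor(640, 100);
$white = imagecolorallocate($img, 255, 255, 255);
imagettftext($img, $fontSize, 0, 10, 70, $white, $font, $text);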
// brute-force alternative: keep shrinking the font size until the text fits
while (itsTooBigAccordingToimagettftext() && $fontSize > 0) {
    $fontSize--;
}
I have the following php code:
<?php
$image = imagecreatefrompng("captcha_background.png");
$imgcolor = imagecolorallocate($image, 0, 0, 0);
imagesetthickness($image, 2);
imageline($image, 0, 25, 40, 90, $imgcolor);
?>
The method "imageline" draws a straight line on my image from the coordinates 0 (x) 25 (y) to 40 (x) 90 (y).
The result is the following image:
What I'm confused about is the reversal of the bottom and the top in PHP's coordinate system.
Normally 0 (the starting point) would be in the lower-left corner, but when assigning coordinates in imageline, the origin (starting point) is located in the upper-left corner?
Expected result:
(The image is 300x100 pixels)
Could someone please explain why this is happening?
This is not a mathematical graph. The typical coordinate system used in development (as far as I know) has the first quadrant at the lower right; that is, (0, 0) is at the top left. This applies to all HTML elements that have widths and heights (the elements drop down, they do not fall up).
The motivation appears to be the fact that it's hard to tell how much height you have to work with without knowing the absolute height of the image, which you may not know at any given time, and which may change frequently.
That's how the coordinates are defined in GD, nothing to worry about.
http://www.php.net/manual/en/function.imagedashedline.php :
y1: Upper left y coordinate. 0, 0 is the top left corner of the image.
I believe this is the standard for the GD image library as they define the natural origin as the top-left corner.
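If you prefer to keep thinking in bottom-left-origin coordinates, you can flip the y values yourself before handing them to GD. A small sketch using the 300x100 image from the question:

// Convert bottom-left-origin ("math") y values to GD's top-left-origin y values.
$width  = 300;
$height = 100;
$image  = imagecreatetruecolor($width, $height);
$black  = imagecolorallocate($image, 0, 0, 0);
// In bottom-left coordinates the line runs from (0, 25) to (40, 90);
// in GD the y axis points down, so flip each y value.
$y1 = $height - 1 - 25;   // 74
$y2 = $height - 1 - 90;   // 9
imageline($image, 0, $y1, 40, $y2, $black);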
I'd like to turn this into black and white, but I can't figure out what to use from Imagick.
$handle_data = file_get_contents('http://www.bungie.net/Stats/Reach/Nightmap.ashx');
//http://www.bungie.net/Stats/Halo3/Nightmap.ashx
$time = time(); // assumption: timestamp for the output filename (not defined in the original snippet)
$img = new Imagick();
$img->readImageBlob($handle_data);
$img->writeImage('nightmap/'.$time.'.gif');
Using Imagick::modulateImage could be a quick & dirty solution. Dirty because color theory is a rather complex field, and more can be done to create grayscale images than just desaturating the image (such as applying different weights to the individual color channels).
bool Imagick::modulateImage(float $brightness, float $saturation, float $hue)
Given an image, keep brightness and hue at 100%, while setting saturation to 0%. There is an example at the bottom of the documentation page that does exactly that.
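A minimal sketch of that, reusing $handle_data and $time from the question's snippet:

// Desaturate: brightness 100%, saturation 0%, hue 100%.
$img = new Imagick();
$img->readImageBlob($handle_data);
$img->modulateImage(100, 0, 100);
$img->writeImage('nightmap/' . $time . '.gif');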
There's a much better (and just as simple) solution: $im = $im->fxImage('intensity');
That applies a function to the image, where intensity is equal to 0.299*red+0.587*green+0.114*blue.
That formula is based on how our eyes are more sensitive to different colours, and as such the difference between that and a "flat" grayscale image really is night and day.
More details here:
http://php.net/manual/en/imagick.fximage.php
http://www.imagemagick.org/script/fx.php
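Wired into the question's snippet, it looks roughly like this (note that fxImage() returns a new Imagick object rather than modifying the original):

// Luminosity-weighted grayscale via fxImage('intensity').
$img = new Imagick();
$img->readImageBlob($handle_data);
$gray = $img->fxImage('intensity');
$gray->writeImage('nightmap/' . $time . '.gif');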
I have run into some trouble with the gd library's imagefilledpolygon().
For some reason some of my lines were ending up 1px out of place, so I decided to debug it by using imagesetpixel to set the colour of my shape's points to red.
(image: http://www.degreeshowcase.com/other/1.gif)
If you look at the picture you can see that some of the points are inside the shape and some are outside... it's very illogical.
(the picture has been scaled up to make it more visible)
Does anyone have a solution?
Update:
My points for the shape above were: 0,0 40,0 40,20 20,20 20,40 0,40
I require that the height and width of the shape produced be multiples of 20, but for some reason parts of the shape end up 21px high or wide.
I have worked out what the points would need to be to get the shape I wanted, but I cannot work out why, and so I can't write a script to correct all my shapes.
<?php
// set up array of points for polygon
$values = array(0,0, 39,0, 39,20, 19,20, 19,39, 0,39);
//My original values were 0,0 40,0 40,20 20,20 20,40 0,40
//I do not understand why some values require minus 1 and others can remain as they were (a multiple of 20)
// create image
$image = imagecreatetruecolor(40, 40);
// allocate colors
$bg = imagecolorallocate($image, 200, 200, 200);
$blue = imagecolorallocate($image, 0, 0, 255);
// fill the background
imagefilledrectangle($image, 0, 0, 39, 39, $bg);
// draw a polygon
imagefilledpolygon($image, $values, 6, $blue);
// flush image
header('Content-type: image/png');
imagepng($image);
imagedestroy($image);
?>
My guess is that you're mixing up width with position.
For example, a line from 0px to 9px is 10px long; if you used the length (10) as the end coordinate instead of the position (9), it would end up 11px long.
If I could see some code I could confirm this.
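To illustrate the difference between a position and a length, a tiny self-contained example:

// GD endpoint coordinates are inclusive pixel positions, not lengths.
$img   = imagecreatetruecolor(40, 40);
$white = imagecolorallocate($img, 255, 255, 255);
// this line covers the 10 pixels x = 0 .. 9 (it is 10px long, not 9px)
imageline($img, 0, 0, 9, 0, $white);
// a square meant to be exactly 40px wide runs from 0 to 39, not 0 to 40
imagefilledrectangle($img, 0, 0, 39, 39, $white);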
Normal polygon rendering ensures that each pixel can belong to only one polygon when two polygons share an edge. If you imagine drawing two squares next to each other so they share a common edge, you don't want to render the pixels along that edge twice.
There is an explanation of determining which pixels on the edge of a polygon should be considered inside the polygon here: http://www.gameprogrammer.com/5-poly.html
A common solution is to say that "pixels on the left and top edges of a polygon belong to the polygon and pixels on the right and bottom edges don't". I am not 100% sure what solution GD uses, as I could not find any documentation on it, but I expect it is something similar to this.
I spoke to the guy who currently develops the GD library; he explained that it follows the 'winding number algorithm' (can be found here). Having looked at my example image, it does match how the winding number algorithm works; however, the function should take this into account and produce the shape that was input.
As far as I can see, the only way to accurately (to the pixel) generate a concave polygon with this function is to write another function that also applies the winding rule to your coordinates, adjusts them accordingly, and then passes the result into imagefilledpolygon().