I have the following function that runs about 25 times per page load and delays the site's load time by 10 seconds or more. What the code essentially does is work out the height when the image's width is scaled down or up to 310px. Any suggestions on how I could improve my code, or another option? Maybe jQuery would be better for this?
function img_height($image) {
    $inputwidth = 310;
    list($width, $height) = getimagesize($image);
    if ($width !== $inputwidth) {
        $outputheight = ($inputwidth * $height) / $width;
    } else {
        $outputheight = $height;
    }
    return 'style="height:' . $outputheight . 'px;" ';
}
#Enigmo - I have worked a lot with image loading and dynamically changing sizes. You really cannot make a big difference in the loading time using PHP. I would suggest using AJAX to preload the images, or lazy loading them; that way your site loads first and the images keep showing up as they finish loading.
I would suggest storing image sizes alongside the image names in some DB structure (a cache). Then you would already know all the sizes and your site would work blazingly fast.
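A minimal sketch of that idea, assuming an image_sizes table with path, width and height columns and an existing PDO connection (the table, column and function names here are made up for illustration):

function cached_img_height(PDO $db, $image) {
    $inputwidth = 310;

    // Look the size up in the cache table first
    $stmt = $db->prepare('SELECT width, height FROM image_sizes WHERE path = ?');
    $stmt->execute([$image]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    if (!$row) {
        // Cache miss: read the file once and remember the result
        list($width, $height) = getimagesize($image);
        $ins = $db->prepare('INSERT INTO image_sizes (path, width, height) VALUES (?, ?, ?)');
        $ins->execute([$image, $width, $height]);
        $row = ['width' => $width, 'height' => $height];
    }

    $outputheight = ($row['width'] == $inputwidth)
        ? $row['height']
        : ($inputwidth * $row['height']) / $row['width'];

    return 'style="height:' . $outputheight . 'px;" ';
}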
You can also simply do this with jQuery, since that runs in the client's browser and puts no load or processing on the server side; it also avoids the per-request PHP processing.
I'm building a web-based system which will host loads and loads of highres images, and they will be available for sale. Of course I will never display the highres image; instead, when browsing, people will only see a low-resolution, watermarked image. Currently the workflow is as follows:
A PHP script handles the highres image upload; when an image is uploaded, it is automatically resized to a lowres image and a thumbnail, and both files are saved on the server (no watermark is added).
When people are browsing, the page displays the thumbnail of the image; on click, it enlarges and displays the lowres image with a watermark. For the time being I apply the watermark on the fly whenever the lowres image is opened.
My question is, what is the correct way:
1) Should I save a second, watermarked copy of the lowres image only when it is accessed for the first time? I mean: if somebody accesses the image, I add the watermark on the fly, display the image and store the watermarked copy on the server. The next time the same image is accessed, if a watermarked copy exists I just display that copy, otherwise I apply the watermark on the fly. (In case watermark.png is changed, I just delete the watermarked images and they will be recreated as they are accessed.) See the sketch after these options.
2) Or should I keep applying watermarks on the fly like I'm doing now?
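For clarity, option 1 boils down to something like this (just a rough sketch; the folder layout and variable names are placeholders, not my actual code):

$cachedPath = $cachefolder . DIRECTORY_SEPARATOR . $mit;   // watermarked copy, if it exists

if (!file_exists($cachedPath)) {
    // First access: composite the watermark once and keep the result
    $image = new Imagick($lowresPath);
    $watermark = new Imagick($watermarkPath);
    $image->compositeImage($watermark, Imagick::COMPOSITE_OVER, 0, 0);
    $image->writeImage($cachedPath);
}

header('Content-Type: image/jpeg');
readfile($cachedPath);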
My biggest question is how big is the difference between a PHP file_exists(), and adding a watermark to an image, something like:
$image = new Imagick();
$image->readImage($workfolder.$event . DIRECTORY_SEPARATOR . $cat . DIRECTORY_SEPARATOR .$mit);
$watermark = new Imagick();
$watermark->readImage($workfolder.$event . DIRECTORY_SEPARATOR . "hires" . DIRECTORY_SEPARATOR ."WATERMARK.PNG");
$image->compositeImage($watermark, imagick::COMPOSITE_OVER, 0, 0);
All lowres images are 1024x1024, JPG with a quality setting of 45%, and all unnecessary filters removed, so the file size of a lowres image is about 40Kb-80Kb.
It is somewhat related to this question, but the scale and the scenarios are a bit different.
I'm on a dedicated server (Xeon E3-1245v2 CPU, 32 GB RAM, 2 TB storage); the site does not have big traffic overall, but it has HUGE spikes from time to time. When images are released we get a few thousand hits per hour with people browsing through the images, downloading, purchasing, etc. So while for normal usage I'm sure that generating on the fly is the right approach, I'm a bit worried about the spike periods.
Need to mention that I'm using ImageMagick library for image processing, not GD.
Thanks for your input.
UPDATE
None of the answers was a full, complete solution, but that is fine since I never looked for one. It was a hard decision which one to accept and to whom to award the bounty.
#Ambroise-Maupate's solution is good, but it still relies on PHP to do the job.
#Hugo Delsing proposes using the web server to serve cached files, lowering the number of calls to the PHP script, which means fewer resources used; on the other hand it's not really storage friendly.
I will use a mixed solution merging the two answers, relying on a cron job to remove the garbage.
Thanks for the directions.
Personally I would create a static/cookieless subdomain, in a CDN kind of way, to handle these kinds of images. The main reasons are:
Images are only created once
Only accessed images are created
Once created, an image is served from cache and is a lot faster.
The first step would be to create a website on a subdomain that points to an empty folder. Use the settings for IIS/Apache or whatever to disable sessions for this new website. Also set some long caching headers on the site, because the content shouldn't change.
The second step would be to create an .htaccess file containing the following.
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*) /create.php?path=$1 [L]
This will make sure that if somebody would access an existing image, it will show the image directly without PHP interfering. Every non-existing request will be handled by the create.php script, which is the next thing you should add.
<?php
function NotFound()
{
    if (!headers_sent()) {
        $protocol = (isset($_SERVER['SERVER_PROTOCOL']) ? $_SERVER['SERVER_PROTOCOL'] : 'HTTP/1.0');
        header($protocol . ' 404 Not Found');
        echo '<h1>Not Found</h1>';
        exit;
    }
}

$p = $_GET['path'];

// has path
if (strlen($p) <= 1)
    NotFound();

$clean = explode('?', $p);
$clean = explode('#', $clean[0]);
$params = explode('/', substr($clean[0], 1)); // drop first /

// I check for two parts because I don't allow images in the root folder.
// I also use the path to determine how the image should look,
// e.g. thumb/125/90/imagecode.jpg
if (count($params) < 2)
    NotFound();

$type = $params[0];

// I use the type to handle different methods. For this example I only used the full-sized image.
// You could use the same to handle thumbnails or cropped/watermarked versions.
switch ($type) {
    //case "crop":  if (Crop($params))  return; else break;
    //case "thumb": if (Thumb($params)) return; else break;
    case "image": if (Image($params)) return; else break;
}

NotFound();
?>
<?php
/*
Just an example to show how you could create a response.
Since you already know how to create thumbs, I'm not going into details.

$params looks like:
Array
(
    [0] => image
    [1] => imagecode.JPG
)
*/
function Image($params) {
    $tmp = explode('.', $params[1]);
    if (count($tmp) != 2)
        return false;

    $code = $tmp[0];

    // WARNING!! SQL INJECTION
    // USE PROPER DB METHODS TO GET REALPATH, THIS IS JUST AN EXAMPLE
    $query = "SELECT realpath FROM images WHERE Code='" . $code . "'";
    // exec query here to $row
    $realpath = $row['realpath'];

    $f = file_get_contents($realpath);
    if (strlen($f) <= 0)
        return false;

    // create the folder structure
    @mkdir($params[0]);
    // if you had more folders, continue creating the structure
    //@mkdir($params[0] . '/' . $params[1]);

    // store the image, so a second request won't hit this script at all
    file_put_contents($params[0] . '/' . $params[1], $f);
    // you could directly optimize the image for web to make it even better
    //optimizeImage($params[0] . '/' . $params[1]);

    // now serve the file to the browser, because even the first request needs to show the image
    $finfo = finfo_open(FILEINFO_MIME_TYPE);
    header('Content-Type: ' . finfo_file($finfo, $params[0] . '/' . $params[1]));
    echo $f;
    return true;
}
?>
I would suggest creating watermarked images on the fly and caching them at the same time, as everybody has suggested.
Then you could create a garbage-collector PHP script that is executed every day (using cron). This script will browse your cache folder and read every image's last access time. This can be done using the fileatime() PHP function. Then, when a cached watermarked image has not been accessed within 24 or 48 hours, just delete it.
With this method, you can handle spike periods as images are cached at the first request. AND you will save your HDD space as your garbage-collector script will delete unused images for you.
This method will only work if your server partition has atime updates enabled.
See http://php.net/manual/en/function.fileatime.php
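A minimal sketch of such a garbage collector, assuming the watermarked copies live in a flat cache/ folder of JPEGs (the path and the 48-hour threshold are only placeholders):

<?php
$cacheDir = __DIR__ . '/cache';
$maxAge   = 48 * 3600;   // delete anything not accessed for 48 hours
$now      = time();

foreach (glob($cacheDir . '/*.jpg') as $file) {
    // fileatime() only reflects real usage if the partition has atime updates enabled
    if ($now - fileatime($file) > $maxAge) {
        unlink($file);
    }
}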
For most scenarios, lazily applying the watermark would probably make the most sense (generate the watermarked image on the fly when it is requested, then cache the result). However, if you have big spikes in demand you are creating a mechanism to DOS yourself, so in that case create the watermarked version on upload.
Considering your HDD storage capacity and spikes:
I would only create a watermarked image when it is viewed (so yes, on the fly). That way you don't use too much space on a bunch of files that may never be viewed.
I would not watermark thumbnails; instead I would overlay a fake watermark element on top to protect them from being saved. That overlay applies to all thumbnails without creating a second image.
In this way all your thumbnails are watermarked (faked with another element on top).
Then, if one of these thumbnails is viewed, it generates a watermarked image (only once), since after it is generated you load the new watermarked image.
This would be the most efficient way to deal with your HDD storage and spikes.
The other option would be to upgrade your hosting services. GoDaddy offers unlimited storage and bandwidth for about $50 a year.
A site I am working on has a lot of images that are pulled from a database. The dimensions of the images are not consistent and I am trying to display them in uniformly sized boxes (divs). I do not know the dimensions of any of the images but I can retrieve them with:
document.getElementById( myImage ).width
document.getElementById( myImage ).height
After this I do my tests to see how to resize images to fit the uniform boxes. Finally I set the effects with:
document.getElementById( myImage ).width = theNewWidth
document.getElementById( myImage ).height = theNewHeight
This function is only called once per image by using onload="resizingFunction( imgId );" in the img tag. It takes about 1-2 seconds for every image in the database to complete this function and the function is never run for any of those images again. Despite never running again, the site runs significantly slower if I use this function. After googling I tried adding:
document.getElementById( myImage ).removeAttribute("width")
document.getElementById( myImage ).removeAttribute("height")
Before setting the new width and height. This did improve the speed but it is still slower than if I had not resized the images. Again, just for clarification, each image is resized one time after it has been loaded but for some reason this still slows down the site.
Images are created by being PHP echoed into JavaScript. This is necessary because they need information from the database (PHP), and the JavaScript places them inside the correct box (div). Here is the creation of image code:
echo "\t\t\tdocument.getElementById('gBox".$i."').innerHTML = '<img onload=\"image_applyToGrid(".$i.");\" id=\"img".$i."\" style=\"left:0; top:0;\" src=\"'+gBoxes[".$i."].imgPath+'\"/>';\n";
Here is the image resizing function that images call once onload:
function image_applyToGrid(inId) {
    var inIdImage = document.getElementById("img" + inId);
    var imgW = inIdImage.width;
    var imgH = inIdImage.height;
    if (imgW > imgH) {
        var proportions = imgW / imgH;
        imgH = gridUnit;
        imgW = gridUnit * proportions;
        inIdImage.style.left = -((imgW - gridUnit) >> 1) + "px";
    }
    else {
        var proportions = imgH / imgW;
        imgW = gridUnit;
        imgH = gridUnit * proportions;
        inIdImage.style.top = -((imgH - gridUnit) >> 1) + "px";
    }
    inIdImage.removeAttribute("width");
    inIdImage.removeAttribute("height");
    inIdImage.width = imgW;
    inIdImage.height = imgH;
}
Resizing images with JavaScript is generally not an ideal approach. You are consuming all of the bandwidth to send the full-size images across the web and then scaling them down. A better way would be to pull the images from your image store and resize them server side, then store the result in a server-side cache so you can serve all of your client requests the optimized images. No need to overthink the concept of a cache here; in this case it could be as simple as a directory or a new column in your database. (FWIW, I prefer not to store binary data in databases, but that's probably another discussion.)
See: http://www.white-hat-web-design.co.uk/blog/resizing-images-with-php/
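As a rough illustration of that server-side approach, here is a minimal GD sketch (it assumes JPEG sources; the cache directory, target width and quality are arbitrary):

<?php
function thumbnail_url($srcPath, $targetWidth = 500) {
    $cacheDir  = __DIR__ . '/cache';
    $cachePath = $cacheDir . '/' . $targetWidth . '_' . basename($srcPath);

    // Only do the expensive resize once; later requests get the cached file
    if (!file_exists($cachePath)) {
        list($w, $h) = getimagesize($srcPath);
        $targetHeight = (int) round($targetWidth * $h / $w);

        $src = imagecreatefromjpeg($srcPath);
        $dst = imagecreatetruecolor($targetWidth, $targetHeight);
        imagecopyresampled($dst, $src, 0, 0, 0, 0, $targetWidth, $targetHeight, $w, $h);
        imagejpeg($dst, $cachePath, 85);
        imagedestroy($src);
        imagedestroy($dst);
    }

    return 'cache/' . basename($cachePath);   // path to put in the <img> tag
}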
The images are probably much too large. I would suggest preparing the images for web use. Google "optimizing images for web" for some ideas on how to do this.
It's bad practice to take full-size images and load the entire image at different dimensions. Consider having thumbnails generated and placed on the page.
If you have a 2500x2500 image and you just set its height and width to 500x500, the entire 2500x2500 image still has to be downloaded.
I'm building a site that depends on bookmarklets. These bookmarklets pull the URL and a couple of other elements. However, I need to select 1 image from the page the user bookmarks. Currently I'm trying to use the PHP Simple HTML DOM Parser http://simplehtmldom.sourceforge.net/
It pulls the HTML as expected, and returns the tags as expected. However, I want to take this a step further and only return images with a min width of 40px. I know about the function getimagesize() but from what I understand, this is resource heavy. Is there a better method available to pre-process the image and achieve the results I'm looking for?
Thanks!
First check if the image HTML tag has a width attribute. If it's above 40, skip over it. As Matthew mentioned, it will get false positives where people sized down a large image to 40px wide, but that's no big deal; the point of this step is to quickly weed out the first dozen or so images that are obviously too big.
Once the script catches an image that SAYS it's under 40px wide, check the header information to deduce a general width based on the size of the file. This is faster than getimagesize because you don't have to download the image to get the info.
function get_image_bytes($path) {
    // Index the headers by name so we don't rely on their order in the response
    $headers = get_headers($path, 1);
    return isset($headers['Content-Length']) ? (int) $headers['Content-Length'] : 0;
}

$imageBytes = get_image_bytes('test1.jpg');

// I'm going to gander that a 40x80 image is about 2000 bytes (~2 KB)
$cutoffSize = 2000;

if ($imageBytes < $cutoffSize) {
    // this is the one!
}
else {
    // it was a phoney, keep scraping
}
Setting the cutoff at 2000 bytes will also let through images that are around 100x30, which isn't good.
However, at this point you've weeded out most of the huge 800 KB files that would really slow you down, and because we know the file is under ~2 KB, it's not too taxing to test it with getimagesize() to get an accurate width.
You can tweak the process depending on how picky you are about the 40px mark; as usual, higher accuracy takes more time, and vice versa.
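Putting the steps together, a hedged sketch of the overall loop (written for the question's goal of a minimum width of 40px, reusing the get_image_bytes() helper above; the 2000-byte cutoff is only a rough guess, and Simple HTML DOM's property-style attribute access is assumed to return false for missing attributes):

<?php
include_once 'simple_html_dom.php';

function find_wide_enough_images($url, $minWidth = 40, $minBytes = 2000) {
    $found = array();
    $html  = file_get_html($url);   // Simple HTML DOM loader

    foreach ($html->find('img') as $img) {
        // 1. Cheapest check: trust the declared width attribute when present
        if ($img->width !== false && (int) $img->width > 0) {
            if ((int) $img->width >= $minWidth) {
                $found[] = $img->src;
            }
            continue;
        }
        // 2. Next cheapest: tiny files cannot be very wide
        if (get_image_bytes($img->src) < $minBytes) {
            continue;
        }
        // 3. Most expensive: fetch and measure the remaining candidates
        $size = getimagesize($img->src);
        if ($size !== false && $size[0] >= $minWidth) {
            $found[] = $img->src;
        }
    }
    return $found;
}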
On my server, I have three files per image.
A thumbnail file, which is cropped to 128 by 128.
A small file, which I aspect fit to a max of 160 by 240.
A large file, which I aspect fit to a max of 960 by 540.
My method for returning these URLs to three20's gallery looks like this:
- (NSString*)URLForVersion:(TTPhotoVersion)version {
    switch (version) {
        case TTPhotoVersionLarge:
            return _urlLarge;
        case TTPhotoVersionMedium:
            return _urlSmall;
        case TTPhotoVersionSmall:
            return _urlSmall;
        case TTPhotoVersionThumbnail:
            return _urlThumb;
        default:
            return nil;
    }
}
After having logged when these various values are called, the following happens:
When the thumbnail page loads, only thumbnails are called (as expected)
When an image is tapped, the thumbnail appears, and not the small image.
After that thumbnail appears, the large image is loaded directly (without the small image being displayed).
What I desire to happen is the following
This is the same (thumbnails load as expected on the main page)
When the image is tapped, the small image is loaded first
Then after that, the large image is loaded.
Or, the following
Thumbnails
Straight to large image.
The problem with the thumb is that I crop it so it is square.
This means that when a thumbnail image is displayed in the main viewer (after thumb was tapped), it is oversized, and when the large image loads, it immediately scales down to fit.
That looks really bad, and to me, it would make far more sense if it loaded the thumbs in the thumbnail view, and then the small image followed by the large image in the detail view.
Does anyone have any suggestions on how to fix this?
Is the best way simply to make the thumbs the same aspect ratio?
I would appreciate any advice on this issue
Looking at the three20 source I can see that TTPhotoView loads the preview image using the following logic:
- (BOOL)loadPreview:(BOOL)fromNetwork {
    if (![self loadVersion:TTPhotoVersionLarge fromNetwork:NO]) {
        if (![self loadVersion:TTPhotoVersionSmall fromNetwork:NO]) {
            if (![self loadVersion:TTPhotoVersionThumbnail fromNetwork:fromNetwork]) {
                return NO;
            }
        }
    }
    return YES;
}
The problem is that as your small image is on the server and not locally the code skips the image and uses the Thumbnail for the preview.
I would suggest that your best solution would be to edit the thumbnails so that they have the same aspect ratio as the large images. This is what the developer of this class seems to have expected!
I think you have three ways to go here:
modify the actual loadPreview implementation from TTPhotoView so that it implements the logic you want (i.e., allowing loading the small version from the network);
subclass TTPhotoView and override loadPreview to the same effect as above;
pre-cache the small versions of your photos; i.e., modify/subclass TTThumbView so that when TTPhotoVersionThumbnail is set, it pre-caches the TTPhotoVersionSmall version; in this case, since the image is already present locally, loadPreview will find it without needing to go out to the network; as an aside, you might do the pre-caching at any time that you see fit for your app; to pre-cache the image you would create a TTButton with the proper URL (this will deal with both the TTURLRequest and the cache for you);
otherwise, you could do the crop on the fly from the small version to the thumbnail version by using this UIImage category; in this case you should also tweak the way your TTThumbView is drawn by overriding its imageForCurrentState method so that the cropping is applied when necessary. Again, either you modify TTThumbView directly or you subclass it; alternatively, you can define layoutSubviews in your photo view controller and modify each of the TTThumbViews there:
- (void)layoutSubviews {
    [super layoutSubviews];
    for (NSInteger i = 0; i < _thumbViews.count; ++i) {
        TTThumbView* tv = [_thumbViews objectAtIndex:i];
        [tv contentForCurrentState].image = <cropped image>;
    }
}
If you prefer not using the private method contentForCurrentState, you could simply do:
[tv addSubview:<cropped image>];
As you can see, each option will have its pros and cons; 1 and 2 are the easiest to implement, but the small version will be loaded from the network so it could add some delay; the same holds true for 4, although the approach is different; 3 gives you the most responsive implementation (no additional delay from the network, since you pre-cache), but it is possibly the most complex solution to implement (either you download the image and cache it yourself, or use TTButton to do that for you, which is kind of not very "clean").
Anyway, hope it helps.
I have a site where users can upload images. I process these images directly and resize them into 5 additional formats using the CodeIgniter Image Manipulation class. I do this quite efficiently as follow:
I always resize from the previous format, instead of from the original
I resize using an image quality of 90% which about halves the file size of jpegs
The above way of doing things I implemented after advice I got from another question I asked. My test case is a 1.6MB JPEG in RGB mode with a high resolution of 3872 x 2592. For that image, which is kind of a borderline case, the resize process in total takes about 2 secs, which is acceptable to me.
Now, only one challenge remains. I want the original file to be compressed using that 90% quality but without resizing it, the idea being that that file too will take half the file size. I figured I could simply resize it to its current dimensions, but that doesn't seem to do anything to the file or its size. Here's my code, somewhat simplified:
$sourceimage = "test.jpg";
$resize_settings['image_library'] = 'gd2';
$resize_settings['source_image'] = $sourceimage;
$resize_settings['maintain_ratio'] = false;
$resize_settings['quality'] = '90%';
$this->load->library('image_lib', $resize_settings);
$resize_settings['width'] = $imagefile['width'];
$resize_settings['height'] = $imagefile['height'];
$resize_settings['new_image'] = $filename;
$this->image_lib->initialize($resize_settings);
$this->image_lib->resize();
The above code works fine for all formats except the original. I tried debugging into the CI class to see why nothing happens and I noticed that the script detects that the dimensions did not change. Next, it simply makes a copy of that file without processing it at all. I commented that piece of code to force it to resize but now still nothing happens.
Does anybody know how to compress an image (any image, not just jpegs) to 90% using the CI class without changing the dimensions?
I guess you could do something like this:
$original_size = getimagesize('/path/to/original.jpg');
And then set the following options like this:
$resize_settings['width'] = $original_size[0];
$resize_settings['height'] = $original_size[1];
OK, so that doesn't work because CI tries to be smart; the way I see it, you have three possible options:
Rotate the Image by 360°
Watermark the Image (with a 1x1 Transparent Image)
Do It Yourself
The DIY approach is really simple, I know you don't want to use "custom" functions but take a look:
ImageJPEG(ImageCreateFromString(file_get_contents('/path/to/original.jpg')), '/where/to/save/optimized.jpg', 90);
As you can see, it's even simpler than using CI.
PS: The snippet above can open any type of image (GIF, PNG and JPEG) and it always saves the image as JPEG with 90% quality, which I believe is what you're trying to achieve.