http://www.gidnetwork.com/tools/gzip-test.php is a great tool, but I need to test a site behind some walls and want to find similar results: compression type, markup size, compressed page size, and compression ratio.
How can I get both compressed and uncompressed page sizes? What is a better method for fetching this data (file_get_contents() or curl)?
The response headers contain everything else needed - I'm just not sure about the sizes.
You should use cURL here, for two reasons:
You need the response headers of the request, and cURL gives you access to them easily.
file_get_contents() is generally used internally, to include local resources, not to fetch data from an external source. If you use it for external requests, you have to take great care of things like memory limits and response size.
$headerArray = get_headers($pointer, 1);       // pass 1 to get an associative array
$fileSize    = $headerArray['Content-Length']; // size of the response body as sent (compressed, if the server compressed it)
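If you also want the uncompressed markup size and the compression ratio, a rough cURL sketch like the following should work (the URL is a placeholder, and it assumes the server actually answers with gzip; otherwise gzdecode() will fail):

$url = 'http://example.com/';

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER         => true,                         // keep headers so we can read Content-Encoding
    CURLOPT_HTTPHEADER     => ['Accept-Encoding: gzip'],    // ask for gzip but do NOT let cURL auto-decode
]);
$response = curl_exec($ch);

$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
$headers    = substr($response, 0, $headerSize);            // the compression type is in the Content-Encoding line
$body       = substr($response, $headerSize);               // still gzip-encoded at this point
curl_close($ch);

$compressed   = strlen($body);
$uncompressed = strlen(gzdecode($body));

printf(
    "compressed: %d bytes, uncompressed: %d bytes, ratio: %.1f%%\n",
    $compressed,
    $uncompressed,
    100 * (1 - $compressed / $uncompressed)
);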
I noticed that PHP Imagick changes the IDAT chunks when processing PNGs.
How exactly is this done? Is there a possibility to create IDAT chunks that remain unchanged? Is it possible to predict the outcome of Imagick?
Background information on this question:
I wondered whether the following code (part of a PHP file upload) can prevent hiding PHP code (e.g. webshells) in PNGs:
$image = new Imagick('uploaded_file.png');
$image->stripImage();
$image->writeImage('secure_file.png');
Comments are stripped out, so the only way to bypass this filter is to hide the PHP payload in the IDAT chunk(s). As described here, that is theoretically possible, but Imagick somehow re-encodes the image data even if I set Compression and CompressionQuality to the values I used to create the PNG. I also managed to create a PNG whose zlib header remained unchanged by Imagick, but the raw compressed image data didn't. The only PNGs where I got identical input and output are the ones that had already been through Imagick before. I also tried to find the reason for this in the source code, but couldn't locate it.
I'm aware of the fact that other checks are necessary to ensure the uploaded file is actually a PNG, etc., and that PHP code in PNGs is not a problem if the server is configured properly, but for now I'm just interested in this issue.
IDAT chunks can vary and still produce an identical image. The PNG spec unfortunately forces the IDAT chunks to form a single continuous data stream. What this means is that the data can be grouped/chunked differently, but when reassembled into a single stream it will be identical. Is the actual data different, or has just the "chunking" changed? If the latter, why does it matter, as long as the image is identical? PNG is a lossless compression format; stripping the metadata and even decompressing + recompressing an image shouldn't change any pixel values.
If you're comparing the compressed data and expecting it to be identical, it can be different and still yield an identical image. This is because DEFLATE compression uses an iterative process to find the best matches in previous data. The higher the "quality" number you give it, the more it will search for matches and shrink the output data size. With zlib, a level 9 deflate request will take a lot longer than the default and result in slightly smaller output.
So, please answer the following questions:
1) Are you trying to compare the compressed data before/after your strip operation to see if the image somehow changed? If so, then looking at the compressed data is not the way to do it.
2) If you want to strip metadata without any other aspect of the image file changing, then you'll need to write the tool yourself (see the sketch below). It's actually trivial to walk through the PNG chunks and reassemble a new file while skipping the chunks you want to remove.
Answer my questions and I'll update my answer with more details...
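For illustration, a hedged sketch of that chunk walk in PHP (the file names are placeholders and the list of chunks to drop is an assumption; the point is that IDAT is copied byte for byte):

$in  = fopen('uploaded_file.png', 'rb');
$out = fopen('stripped_file.png', 'wb');

// copy the 8-byte PNG signature
fwrite($out, fread($in, 8));

$drop = ['tEXt', 'zTXt', 'iTXt', 'tIME'];   // ancillary chunks to strip, adjust as needed

while (!feof($in)) {
    $header = fread($in, 8);                // 4-byte big-endian length + 4-byte type
    if (strlen($header) < 8) {
        break;
    }
    $length = unpack('N', substr($header, 0, 4))[1];
    $type   = substr($header, 4, 4);
    $data   = $length > 0 ? fread($in, $length) : '';
    $crc    = fread($in, 4);

    if (!in_array($type, $drop, true)) {
        fwrite($out, $header . $data . $crc);   // chunk copied verbatim, CRC stays valid
    }
    if ($type === 'IEND') {
        break;
    }
}
fclose($in);
fclose($out);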
I wondered whether the following code (part of a PHP file upload) can prevent hiding PHP code (e.g. webshells) in PNGs
You should never need to think about this. If you are worried about people hiding webshells in a file that is uploaded to your server, you are doing something wrong.
For example, serving those files through the PHP parser, which is how a webshell gets invoked to attack a server.
From the Imagick readme file:
5) NEVER directly serve any files that have been uploaded by users directly through PHP, instead either serve them through the webserver, without invoking PHP, or use readfile to serve them within PHP.
readfile doesn't execute the file; it just sends it to the end user without invoking it, and so completely prevents the type of attack you seem to be concerned about.
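Putting the readme's advice together, a minimal sketch of serving an upload through readfile (the upload directory and the way the file name arrives are assumptions; the key points are that the file lives outside the document root and is never run through the PHP parser):

$name = basename($_GET['file'] ?? '');       // strip any path components
$path = '/var/uploads/' . $name;             // directory outside the web root

if ($name === '' || !is_file($path)) {
    http_response_code(404);
    exit;
}

$finfo = new finfo(FILEINFO_MIME_TYPE);
header('Content-Type: ' . $finfo->file($path));
header('Content-Length: ' . filesize($path));
readfile($path);                              // streams the bytes, never executes them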
I need a way to get the biggest 5 images from a generic external webpage.
I know that I can't do this with Ajax alone (maybe I am wrong) due to cross-site security restrictions.
So I must use PHP + JavaScript.
I have just written this PHP code to get all images from an external URL:
$html = file_get_contents($link);

$dom = new DOMDocument();
$dom->preserveWhiteSpace = false;   // must be set before the HTML is loaded to have any effect
@$dom->loadHTML($html);             // suppress warnings from malformed real-world markup

$images = $dom->getElementsByTagName('img');
foreach ($images as $image) {
    echo $image->getAttribute('src'), "\n";
}
So now, what is the fastest way to get only the 5 biggest images from that page?
By biggest I mean the images with the highest resolution.
If you mean "biggest" as in largest file size, then I think you are somewhat on the right track already. You would just need to find all the images in the source document, then likely make a HEAD request to the server hosting each image to (hopefully) get the file size from the response headers without downloading the file.
If "fastest" really is your concern, you could use cURL which has "multi" support for making parallel requests. Once you get the header information from the requests, you can determine the 5 biggest files and display the URL to them.
If the URL you are calling doesn't change much, you could probably cache the results locally to prevent the need to parse through the page and/or make HEAD requests on the images.
If "biggest" as in largest image size, then you are likely going to need to inspect the images on your server using an image library.
What is the fastest way to get images from external webpage?
With any method that you use, the network connection is by far your limiting factor. It makes no sense to optimize.
I need a way to get the biggest 5 images from a generic external webpage.
A HTTP HEAD request should give you information about how many bytes need to be transferred to download the image. The response to a HEAD request should be the HTTP header that would have been sent if it were a GET request; in particular, the HTTP body (which contains the actual image data) is omitted. Notice the word should instead of the (IMHO preferable) word must.
Furthermore, the number of bytes is not an adequate measure of the number of pixels in the image. You might employ some heuristics based on the content type (for the same number of pixels, a PNG, a GIF and a JPEG will all have different byte sizes). I don't know if this is accurate enough for you; for example, JPEG images can vary widely due to different compression levels.
Is it bad practice to retrieve images this way? I have a page that calls this script about 100 times (there are 100 images). Can I cause server overload, too many HTTP requests, or something like that? I have problems with the server and I don't know if this is causing them :(
// SET THE CONTENT TYPE HEADER
header('Content-Type: image/jpeg');
// LOAD THE IMAGE TO DISPLAY
$image = imagecreatefromjpeg('../path/to/image/' . $_SESSION['ID'] . '/thumbnail/' . $_GET['image']);
// OUTPUT IMAGE AND FREE MEMORY
imagejpeg($image);
imagedestroy($image);
I call the script from regular <img> tags. The reason I serve them through PHP is that the images are private to each user.
All help greatly appreciated!!
With this, you are:
Reading the content of a file
Evaluating that content to an in-memory image
Re-rendering that image
If you just want to send an image (that you have on disk) to your users, why not just use readfile(), like this:
header('Content-Type: image/jpeg');
readfile('../path/to/image/' . $_SESSION['ID'] . '/thumbnail/' . $_GET['image']);
With that, you'll just:
Read the file
and send its content
Without evaluating it to an image -- eliminating some useless computations in the process.
As a side note: you should not use $_GET['image'] like that in your path: you must make sure no malicious data is injected via that parameter!
Otherwise anyone will potentially be able to access any file on your server: they just have to pass some relative path in the image parameter...
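One hedged way to do that check, reusing the directory layout from the question (basename() plus a realpath() comparison is only one possible combination):

$base = realpath('../path/to/image/' . $_SESSION['ID'] . '/thumbnail');
$file = basename($_GET['image']);                    // "../../etc/passwd" collapses to "passwd"
$path = realpath($base . '/' . $file);

if ($base === false || $path === false || strpos($path, $base . '/') !== 0) {
    http_response_code(404);                         // resolved path is outside the allowed directory
    exit;
}

header('Content-Type: image/jpeg');
readfile($path);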
Yes, it's very bad. You're decoding a .jpg into an in-memory bitmap (which is "huge" compared to the original binary .jpg). You then recompress the bitmap into a JPEG.
So you're wasting a ton of:
a) memory
b) CPU time
and c) image quality, because JPEG is a lossy format, so every re-encode degrades the picture a little more.
why not just do:
<?php
header('Content-Type: image/jpeg');
readfile('/path/to/your/image.jpg');
instead?
To answer two particular points from your question:
Can i cause server overload or too many http requests or something?
Yes, of course.
Both through the sheer number of HTTP requests and through the image processing.
You have to reduce the number of images and implement some pagination to show them in smaller batches.
You may also implement some Conditional GET functionality to reduce bandwidth and load.
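For example, a minimal Conditional GET sketch (the path and the ETag scheme are assumptions; the point is the 304 short-circuit):

$path  = '/var/thumbs/' . basename($_GET['image']);   // placeholder location
$mtime = filemtime($path);
$etag  = '"' . md5($path . $mtime) . '"';

header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');
header('ETag: ' . $etag);

$since = $_SERVER['HTTP_IF_MODIFIED_SINCE'] ?? '';
$match = $_SERVER['HTTP_IF_NONE_MATCH'] ?? '';

if ($match === $etag || ($since !== '' && strtotime($since) >= $mtime)) {
    http_response_code(304);      // the browser already has the current copy, send nothing
    exit;
}

header('Content-Type: image/jpeg');
readfile($path);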
If things keep getting worse and you have some resources to spare, consider installing a content delivery proxy. nginx with the X-Accel-Redirect header is a common example.
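A rough sketch of that hand-off (the session check and the /protected/ internal location are assumptions and must match your nginx configuration): PHP only decides whether the user may see the file, then nginx streams it.

session_start();

if (empty($_SESSION['ID'])) {
    http_response_code(403);
    exit;
}

header('Content-Type: image/jpeg');
header('X-Accel-Redirect: /protected/' . $_SESSION['ID'] . '/' . basename($_GET['image']));

// matching nginx side (assumption):
//   location /protected/ { internal; alias /var/thumbs/; }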
I have problems with the server and i dont know if this is causing it :(
You shouldn't shoot in the dark then. Profile your site first.
I'm creating something that includes a file upload service of sorts, and I need to store data compressed with zlib's compress() function. I send it across the internet already compressed, but I need to know the uncompressed file size on the remote server. Is there any way I can figure out this information without uncompress()ing the data on the server first, just for efficiency? That's how I'm doing it now, but if there's a shortcut I'd love to take it.
By the way, why is it called uncompress? That sounds pretty terrible to me, I always thought it would be decompress...
I doubt it. From memory, I don't believe this is something the underlying zlib library provides (although it's been a good 7 or 8 years since I used it, and the up-to-date docs don't seem to indicate this feature has been added).
One possibility would be to transfer another file which contained the uncompressed size (e.g., transfer both file.zip and file.zip.size) but that seems fraught with danger, especially if you get the size wrong.
Another alternative is, if the server uncompressing is time-expensive but doesn't have to be done immediately, to do it in a lower-priority background task (like with nice under Linux). But again, there may be drawbacks if the size checker starts running behind (too many uploads coming in).
And I tend to think of decompression in terms of "explosive decompression", not a good term to use :-)
If you're uploading using the raw 'compress' format, then you won't have information on the size of the data that's being uploaded. Pax is correct in this regard.
You can store it as a 4 byte header at the start of the compression buffer - assuming that the file size doesn't exceed 4GB.
some C code as an example:
/* allocate room for the size header plus the worst-case compressed output */
uLongf compressedSize = compressBound(bufsize);
uint8_t *compressBuffer = calloc(sizeof (uLongf) + compressedSize, 1);

/* store the uncompressed size in front of the compressed data */
*((uLongf *)compressBuffer) = bufsize;
compress(compressBuffer + sizeof (uLongf), &compressedSize, sourceBuffer, bufsize);
Then you send the complete compressBuffer of the size compressedSize + sizeof (uLongf). When you receive it on the server side you can use the following code to get the data back:
// data is in compressBuffer, assume you already know compressed size.
uLongf originalSize = *((uLongf *)compressBuffer);
uint8_t *realCompressBuffer = compressBuffer + sizeof (uLongf);
If you don't trust the client to send the correct size, then you will need to perform some sort of check on the uncompressed data on the server side. The suggestion of uncompressing to /dev/null is a reasonable one.
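Since the question is about PHP's zlib functions, here is a hedged PHP equivalent of the same length-prefix idea, using a fixed 4-byte little-endian prefix (so it only covers data under 4 GB; the function names are made up for the example):

function compressWithSize(string $data): string
{
    // prepend the original size, then the zlib-compressed payload
    return pack('V', strlen($data)) . gzcompress($data);
}

function uncompressedSize(string $blob): int
{
    // read the declared size without touching the compressed payload
    return unpack('V', substr($blob, 0, 4))[1];
}

function decompressWithSize(string $blob): string
{
    return gzuncompress(substr($blob, 4));
}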
If you're uploading a .zip file, it contains a directory that tells you the size of each file when uncompressed. This information is built into the file format, though again it is subject to malicious clients.
The zlib format doesn't have a field for the original input size, so I doubt you will be able to do that without simulating a decompression of the data. The gzip format has an "input size" (ISIZE) field that you could use, but maybe you want to avoid changing the compression format or having the clients send the file size.
But even if you use a different format, if you don't trust the clients you would still need to run a more expensive check to make sure the uncompressed data is the size the client says it is. In that case, what you can do is make the uncompress-to-/dev/null process less expensive by making sure zlib doesn't write the output data anywhere, since you only want to know the uncompressed size.
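A sketch of that cheaper check, assuming the upload is in zlib/compress() format and sitting in a file: the data is streamed through inflate_add() and only the output length is kept, never the output itself.

function measureUncompressedSize(string $path): int
{
    $ctx   = inflate_init(ZLIB_ENCODING_DEFLATE);   // zlib format, as produced by gzcompress()
    $total = 0;
    $fh    = fopen($path, 'rb');

    while (!feof($fh)) {
        $chunk = fread($fh, 65536);
        if ($chunk === '' || $chunk === false) {
            break;
        }
        $total += strlen(inflate_add($ctx, $chunk));   // count the bytes, then discard them
    }
    fclose($fh);

    return $total;
}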
So I am working on something in PHP where I have to get my images from a SQL database, where they will be encoded in base64. The speed of displaying these images is critical, so I am trying to figure out whether it would be faster to turn the database data into an image file and then load it in the browser, or to just echo the raw base64 data and use:
<img src="data:image/jpeg;base64,/9j/4AAQ..." />
Which is supported in FireFox and other Gecko browsers.
So to recap: would it be faster to transfer an actual image file or the base64 code? Would it require fewer HTTP requests when using Ajax to load the images?
The images would be no more than 100 pixels total.
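For reference, the inline variant the question describes could look roughly like this (the table and column names are made up and $pdo is a placeholder connection):

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');   // placeholder credentials
$stmt = $pdo->prepare('SELECT image_base64 FROM thumbnails WHERE id = ?');
$stmt->execute([$_GET['id']]);
$base64 = $stmt->fetchColumn();

// the browser decodes the data: URI itself, no extra HTTP request for the image
echo '<img src="data:image/jpeg;base64,' . $base64 . '" alt="" />';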
Base64 encoding makes the file bigger and therefore slower to transfer.
By including the image in the page, it has to be downloaded every time. External images are normally only downloaded once and then cached by the browser.
It isn't compatible with all browsers
Well, I don't agree with any of you. There are cases where you have to load lots of images; not every page contains just 3 of them. I'm actually working on a site where you have to load more than 200 images. What happens when 100,000 users request those 200 images on a heavily loaded site? The disks of the server returning the images would collapse. Even worse, you have to make that many requests to the server instead of one with base64. For that many thumbnails I'd prefer the base64 representation, pre-saved in the database. I found the solution and a strong argument at http://www.stoimen.com/2009/04/23/when-you-should-use-base64-for-images/. The author is really in that situation, made some tests, and I was impressed enough to run my own tests as well; the reality is as he says. For that many images loaded in one page, a single response from the server is really helpful.
Why regenerate the image again and again if it will not be modified? Hypothetically, even if there are 1000 different possible images to be shown based on 1000 different conditions, I still think that 1000 images on disk are better. Remember, disk-based images can be cached by the browser and save bandwidth, etc.
It's a very fast and easy solution. Although the image size increases by about 33%, using base64 significantly reduces the number of HTTP requests.
Google images and Yahoo images are using base64 and serving images inline. Check source code and you'll see it.
Of course there are drawbacks to this approach, but I believe the benefits outweigh the costs.
One con I have found is on slow devices. For example, on the iPhone 3GS the images served by Google Images are very slow to render, since the images come gzipped from the server and must be uncompressed in the browser. So if the customer has a slow device, they will suffer a little when rendering the images.
To answer the initial question, I ran a test measuring a 400x300 px JPEG image at 96 ppi:
base64ImageData.Length: 177732
bitmap.Length: 129882
I have used base64 images once or twice for icons (10x10 pixels or so).
Base64 images pros:
compact - you have a single file. Also, if the file is served compressed, the base64 image compresses down to almost the size of the normal image.
the page is retrieved in a single request.
Base64 images cons:
to be realistic, you probably need to use a scripting engine (such as PHP) on every page that contains the image.
if the image is changed, all cached pages must be re-downloaded.
because the image is inline, you cannot use a CDN or a static-content web server.
Normal images pros:
if you use the SPDY protocol, at least in theory, the page + images + CSS will load with a single request too.
you can set an expiration header on the image, so the content will be cached by the browsers.
I don't think data: URIs work in IE7 or below.
When an image is requested you could save it to the filesystem and then serve that from then on. If the image data in the database changes, just delete the file. Serve it from another domain too, like img.domain.com. You can get all the benefits of Last-Modified or ETags for free from your web server without having to start up PHP, unless you need to.
If you're using apache:
# If the requested file doesn't exist yet, let PHP generate it:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^/?(image123)\.jpg$ makeimage.php?image=$1 [L]
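A hedged sketch of what that makeimage.php could do on a cache miss (table, column and connection details are placeholders): fetch the blob once, write it next to the requested name so Apache serves the static file on every later request, and stream it out this time.

$name = basename($_GET['image']);           // e.g. "image123"
$file = __DIR__ . '/' . $name . '.jpg';

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');   // placeholder DSN
$stmt = $pdo->prepare('SELECT data FROM images WHERE name = ?');
$stmt->execute([$name]);
$blob = $stmt->fetchColumn();

if ($blob === false) {
    http_response_code(404);
    exit;
}

file_put_contents($file, $blob);            // the next request bypasses PHP entirely

header('Content-Type: image/jpeg');
readfile($file);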
Generally, base64 encoding increases the byte size by about 1/3. Because of that, you have to move 1/3 more bytes from the database to the server, and then move those same extra bytes over the wire to the browser.
Of course, as the size of the image grows, the overhead mentioned will increase proportionately.
That being said, I think it is a better idea to store the files as their raw byte representations in the DB and transmit those.
To answer the OP's question: serve them as static files, directly from disk through the web server.
At only 100px, they are ideally suited to in-memory caching by the web server.
There is a plethora of information, caching strategies, configs and how-tos out there for just about every web server.
In fact, the best option in terms of user experience (the image speed you refer to) is to use a CDN-capable object store. Period.
The "DB as static storage" choice is simply expensive: in terms of all the overhead processing and the burden on the DB, as well as financially and in terms of technical debt.
A few things, from several answers
Google images and Yahoo images are using base64 and serving images
inline. Check source code and you'll see it.
No, they absolutely do NOT. The images are mostly served from a static-file web server, specifically gstatic.com:
e.g. https://ssl.gstatic.com/gb/images/p1_2446527d.png
compact - you have single file. also if file is compressed, base64
image is compressed almost to the size of normal image.
So actually there is no advantage at all, plus the extra processing needed to compress?
page is retrieved in single request.
Again, multiple parallel requests as opposed to a single larger load.
What happens when 100000 users request that 200 images on a very
loaded site. The disks of the server, returning the images should
collapse.
You will still be sending the same amount of data, but with a longer connection time, as well as stressing your database. Secondly, consider the odds of a run-of-the-mill site having 100,000 concurrent connections... and even if so, if you are running it all off a single server you are a foolish admin.
By storing the images, as binary blobs or base64, in the DB, all you are doing is adding huge overhead to the DB. Either you have masses and masses of RAM, or your query via the DB will come off the disk anyway.
And if you DID have such unlimited RAM, then serving the binary images off a ramdisk, ideally via a separate dedicated lightweight web server optimised for static files and caching, configured on a subdomain, would be the fastest, lightest load possible!
Forward planning? You can only scale up so far, and scaling a DB is expensive (relatively speaking). Again the disks you say will "sp
In such a case, where you are serving hundreds of images to 100,000 concurrent users, serving your images should be the job of a CDN object store.
If you want the fastest speed, then you should write them to disk when they are uploaded/modified and let the web server serve static files. rojoca's suggestions are good too, since they minimize the invocation of PHP. An additional benefit of serving from another domain is that (most) browsers will issue the requests in parallel.
Barring all that, when you query for the data, check when it was last modified, then write it to disk and serve it from there. You'll want to make sure you respect the If-Modified-Since header so you don't transfer data needlessly.
If you can't write to disk, or to some other cache, then it would be fastest to store the binary data in the database and stream it out. Adjusting buffer sizes will help at that point.
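If you do end up streaming straight from the database, a hedged sketch of that last option (connection details, table and column names are placeholders; some drivers return the LOB as a stream, others as a string):

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');   // placeholder DSN
$stmt = $pdo->prepare('SELECT data FROM images WHERE id = ?');
$stmt->execute([$_GET['id']]);
$stmt->bindColumn(1, $lob, PDO::PARAM_LOB);
$stmt->fetch(PDO::FETCH_BOUND);

header('Content-Type: image/jpeg');

if (is_resource($lob)) {
    fpassthru($lob);        // stream the blob in chunks instead of loading it all at once
} else {
    echo $lob;              // e.g. the MySQL driver hands the blob back as a string
}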