I have a random image generator for my site. The problem is, it takes a really long time. I was wondering if anybody could help me speed it up in any way. The site is http://viralaftermath.com/, and this is the script:
header('Content-type: image/jpeg');
$images = glob("images/" . '*.{jpg,jpeg,png,gif}', GLOB_BRACE);
echo file_get_contents($images[array_rand($images)]);
This is a pretty resource-intensive way to do this, as you are passing the image data through PHP and not specifying any caching headers, so the image has to be reloaded every single time you open the page.
A much better approach would be to run glob() in the PHP script that renders the HTML page embedding the image. Then randomize that list and emit an <img> tag pointing to the file name you picked at random.
When you are linking to a static image instead of the PHP script, you also likely benefit from the web server's caching defaults for static resources. (You could use PHP to send caching headers as well, but in this scenario it really makes the most sense to randomly point to static images.)
$images = glob("images/" . '*.{jpg,jpeg,png,gif}', GLOB_BRACE);

# Randomize order
shuffle($images);

# Create URL to the first image in the shuffled list
$url = "images/" . basename($images[0]);
echo "<img src='$url'>";
Profile your code and find the bottlenecks. I can only make guesses.
echo file_get_contents($file);
This will first read the complete file into memory and then send it to the output buffer. It would be much nicer if the file went directly to the output; readfile() is your friend. It would be even better to avoid output buffering completely; ob_end_flush() will help you there.
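A minimal sketch of that suggestion (assuming $images comes from the glob() call in the question):

header('Content-Type: image/jpeg');

// Close any active output buffers so the data is streamed straight to the client
while (ob_get_level() > 0) {
    ob_end_flush();
}

// readfile() copies the file to the output without building a PHP string first
readfile($images[array_rand($images)]);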
A next candidate is the image directory. If searching for one image takes a significant time, you have to optimize that. This can be achieved by an index (e.g. with a database).
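For instance, a simple form of index could be a cached list of file names that is rebuilt only occasionally, so the directory is not scanned on every request. A sketch under those assumptions (the index file name and the one-hour lifetime are arbitrary):

$indexFile = 'images_index.json';     // assumed location of the cached index

// Rebuild the index at most once per hour instead of globbing on every request
if (!is_file($indexFile) || time() - filemtime($indexFile) > 3600) {
    $images = glob("images/" . '*.{jpg,jpeg,png,gif}', GLOB_BRACE);
    file_put_contents($indexFile, json_encode($images));
} else {
    $images = json_decode(file_get_contents($indexFile), true);
}

$random = $images[array_rand($images)];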
So my problem stems from trying to generate large PDF files while being hit by the memory limit / execution timeouts in PHP. The data is too great in volume to simply extend these limits, so that solution is out of the question.
I have a background shell task running which handles all of this rendering and then alerts the user once the PDF has been completed.
In theory I would have a loop within this shell which would take in a chunk of data and render it to file, then take the next chunk and do the same. Once out of data to render, the file would be written and completed, ready to be served. This way PHP's memory limit would not be hit, as only a manageable chunk is ever loaded.
I am currently using the CakePDF (v3.5) plugin for CakePHP 3 (v3.5.13) but am struggling to find a solution which allows rendering some data and then adding more data to the same PDF.
Has anyone managed this before, or is it out of scope of the plugin? Would another solution be to create multiple PDF files and then merge them together after all the separate PDFs have been created?
This is more of a theoretical question if this would work and if anyone has managed it before. I don't have much code to show but if more detail is required then give me a shout and I will try and get something for you or some example code!
Thanks
I don't have direct experience with that version of CakePdf, but under CakePHP 2.x I use the wkhtmltopdf engine, which takes an .html output to produce the PDF.
If your shell generates such .html in chunks, it is easy to append.
Of course wkhtmltopdf is likely to put some load on the machine to produce the PDF, but since it's a binary, it happens outside of PHP's memory/time constraints.
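A rough sketch of that idea (renderChunk(), the $chunks list and the file names are placeholders, not CakePdf API):

$htmlFile = TMP . 'report.html';      // TMP is CakePHP's temp directory constant

// Build one HTML file by appending rendered chunks, so only one chunk is in memory at a time
file_put_contents($htmlFile, '<html><body>');
foreach ($chunks as $chunk) {         // $chunks: however you page your data (placeholder)
    file_put_contents($htmlFile, renderChunk($chunk), FILE_APPEND); // renderChunk() is a placeholder
}
file_put_contents($htmlFile, '</body></html>', FILE_APPEND);

// The wkhtmltopdf binary does the heavy lifting outside of PHP's memory/time limits
exec('wkhtmltopdf ' . escapeshellarg($htmlFile) . ' ' . escapeshellarg(TMP . 'report.pdf'));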
That's certainly out of the scope of the plugin; it's built around the idea of rendering a single view to a single file. The interface doesn't support chunked creation of a single file, and if I'm not mistaken, none of the supported engines support that either, at least not in a straightforward and efficient manner when it comes to large documents.
There are certainly lots of ways to do this: creating multiple PDFs and merging/concatenating them afterwards might be one of them; generating the source content in chunks and passing it to a PDF renderer that can handle lots of content efficiently might be another; and surely there are also libraries out there that explicitly support chunked creation of PDFs...
I thought I would post what I ended up doing for anyone in the future.
I used CakePDF to generate smaller PDFs, which I stored in a tmp directory. These are all under PHP's execution time and memory limits, as I don't believe altering those provides a good solution. In this step I also saved the names of all of the PDFs generated, for use in the next step.
The code for this looked something like:
while (!$is_last_pdf) {
    // Generate a PDF here with a portion of the data
    $CakePdf = new CakePdf();
    $CakePdf->template('page', 'default');
    $CakePdf->viewVars(compact('data', 'other_stuff'));
    // ... write the rendered PDF out to $file_name in the tmp directory ...
    // Save the file name to an array for the merge step
    $tmp_file_list[] = $file_name;
    // Update the $is_last_pdf flag: stop once there is no more data to render
    $is_last_pdf = !check_for_more_data();
}
From there, I used Ghostscript from within the Shell to merge all of the PDF files. The code for this looked something like this:
$output_path = 'output.pdf';
$file_list = '';

// Create a string of all the files to merge
foreach ($tmp_file_list as $file) {
    $file_list .= $file . ' ';
}

// Execute Ghostscript to merge all the files into the `output.pdf` file
exec('gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOUTPUTFILE=' . $output_path . ' ' . $file_list);
All of the code here was in the Shell file responsible for creating the PDF.
Hope this helps someone :)
I have a website with image upload/show functionality. All images are saved to the filesystem under a specific path.
I use the Yii2 framework in the project. There is no direct path to the images; all of them are requested through a specific URL. ImageController processes the URL and decides whether the image needs resizing. ImageModel does the job. The user gets the image content.
Here is the code snippet:
$file = ... // full path to image
...
$ext = pathinfo($file)['extension'];

if (file_exists($file)) {
    // return original
    return Imagine::getImagine()
        ->open($file)
        ->show($ext, []);
}

preg_match("/(.*)_(\d+)x(\d+)\.{$ext}/", $file, $matches);
if (is_array($matches) && count($matches)) {
    if (!file_exists("{$matches[1]}.{$ext}")) {
        throw new NotFoundHttpException("Image doesn't exist!");
    }
    $options = array(
        'resolution-units' => ImageInterface::RESOLUTION_PIXELSPERINCH,
        'resolution-x' => $matches[2],
        'resolution-y' => $matches[3],
        'jpeg_quality' => 100,
    );
    return Imagine::resize("{$matches[1]}.{$ext}", $matches[2], $matches[3])
        ->show($ext, $options);
} else {
    throw new NotFoundHttpException('Wrong URL params!');
}
We don't discuss data caching in this topic.
So, I wonder about the efficiency of this approach. Is it OK to return all images via PHP even if they haven't changed at all? Will it increase the server load?
Or maybe I should save images to another public directory and redirect the browser to it? How costly would so many redirects be on a single page (there can be plenty of images)? What about SEO?
I need an advice. What is the best practice to solve such tasks?
You should consider using sendFile() or xSendFile() for sending files - it should be much faster than loading the image with Imagine and displaying it via show(). But for that you need to have the final image saved on disk, so we're back to:
We don't discuss data caching in this topic.
Well, this is actually the first thing that you should care about. Sending an image via PHP will be significantly less efficient (but still pretty fast, although this may depend on your server configuration) than letting the webserver do it. Involving the framework will be much slower still (bootstrapping the framework takes time). But this is all irrelevant if you resize the image on every request - that will be the main bottleneck here.
As long as you don't have some requirement that makes it impossible (like needing to check whether the user has the right to see an image before displaying it), I would recommend saving images to a public directory and linking to them directly (without any redirection). It will save you much pain handling things the webserver already does for static files (cache headers, 304 responses etc.) and it will be the most efficient solution.
If this is not possible, create a simple PHP file which will only send the file to the user without bootstrapping the whole framework.
If you really need the whole framework, use sendFile() or xSendFile() for sending the file.
The most important things are:
Do not use Imagine for anything other than generating an image thumbnail (which should be generated only once and cached).
Do not link to a PHP page which only redirects to the real image served by the webserver. It will not reduce server load compared to serving the image via PHP (you have already paid the price of handling the request in PHP), and your website will be slower for clients (which may affect SEO) due to the additional request required to get the actual image.
If you need to serve images via PHP, make sure that you set cache headers and that they play well with the browser cache - you don't want clients to download the same images on every page refresh (see the sketch below).
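For example, a minimal Yii2 controller action using sendFile() to serve an already-generated thumbnail; the resolveThumbnailPath() helper and the one-day cache lifetime are assumptions, not part of the question's code:

public function actionThumb($name)
{
    // Assumed helper that resolves the cached thumbnail path for this name
    $path = $this->resolveThumbnailPath($name);
    if ($path === null || !is_file($path)) {
        throw new \yii\web\NotFoundHttpException('Image not found.');
    }

    $response = \Yii::$app->response;
    // Let browsers cache the thumbnail; the one-day lifetime is an arbitrary choice
    $response->headers->set('Cache-Control', 'public, max-age=86400');

    // sendFile() streams the file without running it through Imagine again
    return $response->sendFile($path, null, [
        'mimeType' => \yii\helpers\FileHelper::getMimeTypeByExtension($path),
        'inline' => true, // show in the browser instead of forcing a download
    ]);
}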
If I use a PHP file as the source of an image, where:
$file = $_GET["file"];
$file_get = file_get_contents("from/".$file);
$fopen = fopen("to/".$file, "w+");
fwrite($fopen, $file_get);
fclose($fopen);
header("Location: to/".$file);
And if I use many images of that kind on one page, like:
<img src="image.php?file=img.jpg">
<img src="image.php?file=img2.jpg">
<img src="image.php?file=img3.jpg">
...
I found that the code in image.php doesn't run asynchronously; the images are downloaded one by one. How can I avoid that?
I see some problems in your code. The first is that you have a big security hole: you use the $_GET input directly to pick the image.
The next one is: why do you fetch the content from one file and write it to another file just to redirect to it? Writing the file to another location on every request is not fast.
If you have the content, just echo it and set the correct header to show the image.
header('Content-type:image/png');
readfile($fullpath);
It's much easier and needs less I/O to show files. Otherwise you can use a script like PHPThumb, which generates smaller versions and caches the files.
http://phpthumb.sourceforge.net/
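If you stick with a plain image.php, a minimal sketch that avoids the copy/redirect and whitelists the file name could look like this (the from/ directory is taken from the question; the JPEG content type is hard-coded for brevity):

<?php
// image.php - stream an image from the from/ directory without copying or redirecting

// Reduce the user input to a bare file name to avoid path traversal (e.g. "../../etc/passwd")
$name = basename($_GET['file'] ?? '');
$path = 'from/' . $name;

if ($name === '' || !is_file($path)) {
    http_response_code(404);
    exit;
}

// The Content-Type should match the actual file; JPEG is hard-coded here for brevity
header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize($path));
readfile($path);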
I have a networked camera that generates a video snapshot upon hitting http://192.0.0.8/image/jpeg.cgi. The problem is that by accessing the root (i.e. 192.0.0.8) directly, users can access a live video feed, so I hope to hide the address altogether.
My proposed solution is to use PHP to retrieve the image and display it at http://intranet/monitor/view.php. Although users could create motion by hitting this new address repeatedly, I see that as unlikely.
I have tried using include() and readfile() in various ways, but do not really use PHP often enough to understand if I'm going in the right direction. My best attempt to date resulted in outputting the jpeg contents, but I did not save the code long enough to share with you.
Any advice would be appreciated.
If you want to limit requests per user then use this:
session_start();

$timelimit = 30; // Limit in seconds

if (!isset($_SESSION['last_request_time'])) {
    $_SESSION['last_request_time'] = time();
}

if (time() > $_SESSION['last_request_time'] + $timelimit) {
    // prepare and serve a new image
    $_SESSION['last_request_time'] = time();
} else {
    // serve an old (cached) image
}
If you want to limit the image refresh time globally, use the same script but save last_request_time in a place shared by all users (DB, file, cache).
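A minimal sketch of that shared variant, using the modification time of a cached snapshot file as the shared timestamp (the cache path is an assumption; the camera URL is from the question):

$timelimit = 30;                      // refresh interval in seconds
$cached = 'cache/snapshot.jpg';       // assumed path for the shared cached snapshot

// Refresh the cached snapshot when it is missing or older than the limit
if (!is_file($cached) || time() - filemtime($cached) > $timelimit) {
    $fresh = file_get_contents('http://192.0.0.8/image/jpeg.cgi'); // camera URL from the question
    if ($fresh !== false) {
        file_put_contents($cached, $fresh);
    }
}

if (!is_file($cached)) {
    http_response_code(503);          // camera unreachable and no cached copy yet
    exit;
}

header('Content-Type: image/jpeg');
readfile($cached);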
A succinct way to do this is as follows:
header('Content-Type: image/jpeg');
readfile('http://192.0.0.8/image/jpeg.cgi');
The content of the jpeg is then streamed back to the browser as a file, directly from http://intranet/monitor/view.php.
I would like to generate a dynamic image from a script, and then have it load to the browser without being persistent on the server.
However, I cannot call this by setting the image's src="script.php", since that would require running the script that just generated the page and its data all over again, just to get the final data that will generate the graph.
Is there a way to do this that is similar to setting image's src="script.php", but which is called from within another script, and just sends the image without saving it? I need access to the data that is used in the generation of the markup, in order to create this dynamic image.
Or, if not, what is the easiest way to destroy the image once the page is loaded? A quick AJAX call?
Is there any way to cache certain data for some limited time frame in order for it to be available to some other script?
Any ideas would be greatly appreciated, as I'm having a really hard time finding the right solution to this...
Thanks!
You can inline the image into an <img> tag if you need to.
Like
<?php
// Capture the GD output into a string instead of sending it to the browser
ob_start();
imagepng($im);                    // $im is your GD image resource
$final_image_data = ob_get_clean();

$base64_data = base64_encode($final_image_data);
echo "<img src=\"data:image/png;base64,{$base64_data}\" ... />";
?>
That should work on all modern browsers, and IE8. It doesn't work well with some email clients though (Outlook, for one).
Also, another solution I found is to store the image in a session variable, which is then served by a PHP script referenced in the image tag (a sketch of this is below). This allows a user-specific image to be served and then removed from memory by the script... It also avoids messy inline img src data...
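A minimal sketch of that approach (the session key and the serve_image.php script name are assumptions): the page stores the generated image bytes in the session, and a tiny serve_image.php emits and then removes them.

// page.php - after generating the image with GD ($im is the GD image resource)
session_start();
ob_start();
imagepng($im);
$_SESSION['generated_image'] = ob_get_clean();
echo '<img src="serve_image.php">';

// serve_image.php - streams the image once, then removes it from the session
session_start();
if (!isset($_SESSION['generated_image'])) {
    http_response_code(404);
    exit;
}
header('Content-Type: image/png');
echo $_SESSION['generated_image'];
unset($_SESSION['generated_image']); // forget the image once it has been served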
Hopefully that is helpful to someone.
Use a rewrite rule.
RewriteRule ^magicimage.jpg$ /myscript.php
Then simply echo your image data from GD instead of writing it to disk, which is as simple as not providing a filename to the appropriate image*() function.
myscript.php
<?php
$w = 200; $h = 100;                  // example dimensions
$im = imagecreatetruecolor($w, $h);
// ...do gd stuff...
header('Content-type: image/jpeg');
// this outputs the content directly to the browser
// without creating a temporary file or anything
imagejpeg($im);
imagedestroy($im);                   // free the image resource
And finally, utilize the above
display.php
<img src="magicimage.jpg">