I want to create a collage of photos with PHP. I wrote some test code on my localhost as follows:
<style type="text/css">
body { background: url('images/Winter.jpg'); }
#collage:after { content: ""; clear: both; display: block; }
</style>
<?php
$dir = "images";
if ($fp = opendir($dir))
{
    // The !== false check keeps the loop from ending early on entries like "0"
    while (false !== ($file = readdir($fp)))
    {
        if ('jpg' == pathinfo($file, PATHINFO_EXTENSION))
        {
            // Random tilt and height produce the collage effect
            $style = "style='float:left; -webkit-transform:rotate(" . mt_rand(2, 30) . "deg); border:solid 5px #eee;'";
            $ht = "height='" . mt_rand(100, 300) . "'";
            echo "<div class='img_div' $style>";
            echo "<img src='$dir/$file' $ht >";
            echo "<div style='background:#eee;font-size:20px;'>hi</div>";
            echo '</div>';
        }
    }
    closedir($fp); // only close the handle if opendir() succeeded
}
?>
It generates the output I want, but now I want the user to be able to download the collage as an image file. How do I do that?
If you want that, you will have to create a new blank image using something like PHP's GD library and position your images onto it, instead of printing and positioning them with HTML/CSS.
(I believe I read somewhere that in the future it will be possible to render HTML/CSS onto an HTML5 canvas; if that's true, you could extract an image that way, but for now I don't think it's possible.)
Short of using the new HTML5 canvas feature, which would let the browser do all the work of merging the mosaic into a single image for you (is it widely supported by browsers yet? I honestly don't know),
you can create that image server-side with PHP's GD library. That was, I think, the only way before HTML5, and it may remain the best way for a while (as I said, I don't know what today's HTML5 implementations are worth).
A good place to start is here: http://php.net/manual/en/ref.image.php (read the many visitor-contributed examples, both on that page and on the function-specific pages; they can be a valuable education).
Now, this method has one drawback (besides you having to do all the work yourself): it taxes your server's CPU and slows down its response. That's fine if you only have a few visitors at a time; otherwise, the collage should be pre-rendered once and for all, so the server doesn't redo the job on every visit.
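For example, a minimal GD sketch of that server-side approach might look like this (the canvas size, spacing, and output format are assumptions, and imagescale() needs PHP 5.5+):
<?php
// Composite the collage into one image server-side so the user can download it.
$dir = "images";
$canvas = imagecreatetruecolor(1200, 800);
$bg = imagecolorallocate($canvas, 238, 238, 238);
imagefill($canvas, 0, 0, $bg);
$x = 20; $y = 20;
foreach (glob("$dir/*.jpg") as $file) {
    $photo = imagecreatefromjpeg($file);
    // Scale to a random height, like the height attribute in the HTML version
    $h = mt_rand(100, 300);
    $w = (int) (imagesx($photo) * $h / imagesy($photo));
    $scaled = imagescale($photo, $w, $h);
    // Tilt by a small random angle (imagerotate() turns counter-clockwise)
    $rotated = imagerotate($scaled, -mt_rand(2, 30), $bg);
    imagecopy($canvas, $rotated, $x, $y, 0, 0, imagesx($rotated), imagesy($rotated));
    // Crude left-to-right layout with row wrapping
    $x += imagesx($rotated) + 20;
    if ($x > 1000) { $x = 20; $y += 320; }
    imagedestroy($photo); imagedestroy($scaled); imagedestroy($rotated);
}
// Offer the result as a download
header('Content-Type: image/png');
header('Content-Disposition: attachment; filename="collage.png"');
imagepng($canvas);
imagedestroy($canvas);
?>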
I am using LibChart. It works fine; however, I have an issue. I'm not sure it's directly linked to the library; it's probably more of a general PHP/image thing that I'm missing.
The thing is, the image containing the graph doesn't update unless I re-upload the PHP file that draws it. This is how the graph is made using LibChart:
$chart->render("generated/demo4.png");
That renders it into that file, and I display it using:
<img alt="Line chart" src="generated/demo4.png" style="border: 1px solid gray; float: right;"/>
It works great, but only the first time it is drawn. It won't redraw the image unless I re-upload the file that draws it. That's pretty bad, since it draws its data from a database, and when that data changes the change needs to be reflected in the graph.
What might be the issue? How can I redraw the image without re-uploading the file?
Which web browser are you using?
Some of them (IE, for example) cache images in memory to render the site faster. You can render the graph to a file, read it back into a variable, and output that variable as base64 data:
$path = "tmp/graph.png";
$chart->render($path); // write the chart to a file...
$type = pathinfo($path, PATHINFO_EXTENSION);
$data = file_get_contents($path); // ...then read it straight back in
// Inlining the image as a data URI means there is no URL for the browser to cache
return "<img src=\"data:image/" . $type . ";base64," . base64_encode($data) . "\" />";
It is a bit weird, but it worked for me. I am still looking for a better solution.
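Another common fix is cache-busting: append the file's modification time as a query string so the browser sees a new URL whenever the PNG changes. A minimal sketch, assuming the same $chart and file path as the question:
<?php
$path = "generated/demo4.png";
$chart->render($path); // re-render on every request
// The ?v=... part changes whenever the file does, defeating the browser cache
$src = $path . "?v=" . filemtime($path);
echo '<img alt="Line chart" src="' . $src . '" style="border: 1px solid gray; float: right;"/>';
?>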
I am new to web development. I need to create a portfolio website that shows a thumbnail of each project with a little description underneath it, with all content displayed dynamically. I have basic knowledge of PHP, WordPress, JavaScript, jQuery, Python, and HTML/CSS, and I'm doing this for learning purposes.
Please just point me in the right direction; I'll handle the rest. Some similar examples:
http://themes.themepunch.com/?theme=megafoliopro_jq
http://codecanyon.net/item/dzs-scroller-gallery-cool-jquery-media-gallery/full_screen_preview/457913?ref=ibrandstudio
I am new to this forum and hope someone will answer my question. Thanks a lot, mates.
Download CMS Made Simple and get either the Album or Gallery module. This is one of many ways to get the job done.
You could glob() a directory to generate your thumbnails using PHP. I use this method on my photography site:
<?php
$counter = 0; // Tracks how many thumbnails have been output
foreach (glob("images/photo-strip/thumb/*.jpg") as $pathToThumb) { // Grab each file matching the thumbnail pattern
    $th_filename = basename($pathToThumb); // Strip the thumbnail filename from the path
    $filename = str_replace('th_', '', $th_filename); // Strip th_ from the filename to get the full-size name
    $pathToFull = 'images/photo-strip/' . $filename; // Rebuild the path to the full-size image
    echo ("<section class=\"photo-box\"><a href=\"$pathToFull\"><img src=\"$pathToThumb\" /></a></section>"); // Echo the photo box, linking the thumb to the full image
    $counter++;
}
?>
You could expand on this code to generate your captions, perhaps styling them off a title="" attribute inside the <img> tag. How you match captions to files is up to you.
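As a rough sketch of that idea (deriving the caption from the filename is just an assumption; any mapping would do):
<?php
foreach (glob("images/photo-strip/thumb/*.jpg") as $pathToThumb) {
    $filename = str_replace('th_', '', basename($pathToThumb));
    $pathToFull = 'images/photo-strip/' . $filename;
    // "my_winter_trip.jpg" -> "my winter trip"
    $caption = str_replace(array('_', '-'), ' ', pathinfo($filename, PATHINFO_FILENAME));
    echo "<section class=\"photo-box\">";
    echo "<a href=\"$pathToFull\" title=\"$caption\"><img src=\"$pathToThumb\" alt=\"$caption\" /></a>";
    echo "<p class=\"caption\">$caption</p>"; // visible description under the thumbnail
    echo "</section>";
}
?>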
I've built a gallery using CI where the uploaded image keeps its original size, as long as it's within a 3000x5000 px range. Since I haven't cropped thumbnails, how can I resize the images when needed? Say I want to display a list of them at 150x150; how would I go about this?
I followed the guide at
http://ellislab.com/codeigniter/user-guide/libraries/image_lib.html
The problem is that when loading the library, it wants me to specify each image's complete config information.
So say I loaded the variable into a controller to be displayed in a view; the loading would look like this:
foreach ($gallery as $img)
{
    echo "<div>";
    echo "<img src='" . $this->img_lib->resize($img->imagepath, 150, 150) . "'>";
    echo "</div>";
}
PS: does the image get saved when it's resized? Because I don't want that.
A better solution: http://www.matmoo.com/digital-dribble/codeigniter/image_moo/
I know how to do it outside of CI. Essentially, the img src needs to point to a PHP script, OR be intercepted by .htaccess, which then redirects it to a PHP script.
You could try something like this:
foreach ($gallery as $img)
{
    echo "<div>";
    echo "<img src='/imageResize.php?path=" . $img->imagepath . "'>";
    echo "</div>";
}
File: imageResize.php
$path = $_REQUEST['path'];
$this->img_lib->resize($path, 150, 150); // illustrative only; $this->img_lib is not available in a standalone script
Depending on the folder structure of your site, it would be better to pass just the filename rather than the full path. Better still, use .htaccess (which I do in our CMS) to intercept the image request and resize/crop on the fly, e.g. <img src="/path/to/image.jpg?w=150&h=150" />
Does that help provide some direction?
As for your other question: a physical file isn't saved on the server unless you tell it to do so.
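Since $this->img_lib isn't available in a standalone script, imageResize.php itself would have to do the work with plain GD. A minimal sketch, assuming JPEG input and an uploads/ directory (a real script must validate the path more carefully):
<?php
// imageResize.php -- resize on the fly and stream the result; nothing is saved
$file = __DIR__ . '/uploads/' . basename($_REQUEST['path']); // crude sanitisation
list($w, $h) = getimagesize($file);
$src = imagecreatefromjpeg($file);
// Crop the largest centered square, then resample it down to 150x150
$side = min($w, $h);
$dst = imagecreatetruecolor(150, 150);
imagecopyresampled($dst, $src, 0, 0, (int)(($w - $side) / 2), (int)(($h - $side) / 2), 150, 150, $side, $side);
header('Content-Type: image/jpeg');
imagejpeg($dst, null, 85);
imagedestroy($src); imagedestroy($dst);
?>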
The answer from #SaRiD is on the right track. However, it doesn't necessarily have to be outside of CI. You can point the image source at a controller method that takes care of the resizing and serves the image.
Within that method, you also need to send the correct headers so the browser recognizes the image resource.
You state that you do not want to save the thumbnail. That obviously depends on the needs of the application. You could set it up to serve a previously cropped and saved thumbnail if one exists (from an earlier request), rather than creating and serving a thumbnail on every request; that will save you some CPU cycles.
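A sketch of that idea as a CI controller method (the class name, paths, and JPEG assumption are all illustrative):
<?php
class Gallery extends CI_Controller {

    public function thumb($filename)
    {
        $source = FCPATH . 'uploads/' . $filename;
        $cached = FCPATH . 'uploads/thumbs/' . $filename;

        // Resize only on the first request; later requests reuse the saved thumbnail
        if (!file_exists($cached)) {
            $config = array(
                'image_library'  => 'gd2',
                'source_image'   => $source,
                'new_image'      => $cached,
                'maintain_ratio' => TRUE,
                'width'          => 150,
                'height'         => 150,
            );
            $this->load->library('image_lib', $config);
            $this->image_lib->resize();
        }

        // Correct header so the browser treats the response as an image
        header('Content-Type: image/jpeg');
        readfile($cached);
    }
}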
I'm building a site that depends on bookmarklets. These bookmarklets pull the URL and a couple of other elements. However, I need to select one image from the page the user bookmarks. Currently I'm using the PHP Simple HTML DOM Parser: http://simplehtmldom.sourceforge.net/
It pulls the HTML as expected and returns the img tags as expected. However, I want to take this a step further and only return images with a minimum width of 40px. I know about getimagesize(), but from what I understand it is resource-heavy. Is there a better method to pre-process the images and achieve the result I'm looking for?
Thanks!
First check whether the image's HTML tag has a width attribute. If it's above 40, skip it. As Matthew mentioned, you'll get false positives where people have sized a large image down to 40px wide, but that's no big deal; the point of this step is to quickly weed out the first dozen or so images that are obviously too big.
Once the script catches an image that SAYS it's under 40px wide, check the HTTP headers to estimate a rough size from the file's byte count. This is faster than getimagesize() because you don't have to download the image to get the info.
function get_image_kb($path) {
    // Ask for an associative array so we don't rely on Content-Length being at a fixed index
    $headers = get_headers($path, 1);
    // Content-Length is the file size in bytes (despite this function's name)
    return isset($headers['Content-Length']) ? (int) $headers['Content-Length'] : 0;
}
$imageKb = get_image_kb('test1.jpg');
// Rough guess: a 40x80 JPEG weighs in at around 2,000 bytes
$cutoffSize = 2000;
if ($imageKb < $cutoffSize) {
    // this is the one!
}
else {
    // it was a phoney, keep scraping
}
Setting the cutoff at 2,000 bytes will also let through images that are 100x30, which isn't good.
However, at this point you've weeded out most of the huge 800 KB files that would really slow you down, and because we know the candidate is under 2 KB, it's not too taxing to test it with getimagesize() to get an accurate width.
You can tweak the process depending on how picky you are about the 40px mark; as usual, higher accuracy takes more time, and vice versa.
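Putting the two stages together might look like this (get_image_kb() is the helper above, and $src is a hypothetical image URL; note that getimagesize() on a URL does download the file):
$cutoffSize = 2000; // bytes, the rough guess from above
if (get_image_kb($src) < $cutoffSize) {
    list($width, $height) = getimagesize($src); // accurate dimensions, but downloads the file
    if ($width < 40) {
        // genuinely under 40px wide -- keep it
    }
}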
I've developed an image-scraping mechanism in PHP+JS that allows a user to share URLs and get a rendered preview (very much like Facebook's previewer when you share links). However, the whole process sometimes gets slow or fetches the wrong images, so in general I'd like to know how to improve it, especially its speed and accuracy: things like parsing the DOM faster or getting image sizes faster. Here's the process I'm using, for those who want to know more:
A. Get the HTML of the page using PHP. (I actually use one of CakePHP's classes, which in turn uses fwrite and fread to fetch the HTML. I wonder if cURL would be significantly better.)
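For what it's worth, a cURL version of step A might look like this (the timeouts and user agent string are assumptions):
function fetch_html($url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,   // return the body instead of printing it
        CURLOPT_FOLLOWLOCATION => true,   // follow redirects
        CURLOPT_CONNECTTIMEOUT => 5,
        CURLOPT_TIMEOUT        => 10,
        CURLOPT_USERAGENT      => 'Mozilla/5.0 (compatible; LinkPreview/1.0)',
    ));
    $html = curl_exec($ch);
    curl_close($ch);
    return $html;
}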
B. Parse the HTML using DOMDocument to get the img tags, while also filtering out any "image" that is not a png, jpg, or gif (you know, sometimes people place tracking scripts inside img tags).
$DOM = new DOMDocument();
@$DOM->loadHTML($html); // @ suppresses warnings from malformed HTML; $html is the string returned from step A
$images = $DOM->getElementsByTagName('img');
$imagesSRCs = array();
foreach ($images as $image) {
    $src = trim($image->getAttribute('src'));
    if (!preg_match('/\.(jpeg|jpg|png|gif)/i', $src)) {
        continue; // not an actual image file, skip it
    }
    $src = urldecode($src);
    $src = url_to_absolute($url, $src); // custom function; $url is the link shared
    $imagesSRCs[] = $src;
}
$imagesSRCs = array_unique($imagesSRCs); // eliminates copies of the same image
C. Send an array with all those image URLs to a page that processes them using JavaScript (specifically, jQuery). This processing consists mostly of discarding images smaller than 80 pixels (so I don't get blank gifs, hundreds of tiny icons, etc.). Because each image's size must be measured, I decided to use JS instead of PHP's getimagesize(), which was insanely slow. As the images get loaded by the browser, it does the following:
$('.fetchedThumb').load(function() {
    var smallestDim = Math.min(this.width, this.height); // declare with var to avoid an implicit global
    if (smallestDim < 80) {
        $(this).parent().parent().remove(); // removes container divs and below
    }
});
Rather than downloading the content like this, why not create a server-side component that uses something like wkhtmltoimage or PhantomJS to render an image of the page, and then just scale that image down to a preview size?
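A rough sketch of that approach, assuming wkhtmltoimage is installed and on the PATH:
<?php
$url = 'http://example.com/shared-link'; // the link the user shared
$out = '/tmp/preview-' . md5($url) . '.png';
// Render the whole page to a PNG...
shell_exec('wkhtmltoimage ' . escapeshellarg($url) . ' ' . escapeshellarg($out));
// ...then scale it down to a preview with GD (imagescale keeps the aspect ratio)
$full = imagecreatefrompng($out);
$thumb = imagescale($full, 300);
imagepng($thumb, '/tmp/preview-thumb.png');
imagedestroy($full); imagedestroy($thumb);
?>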
This is exactly why I made jQueryScrape.
It's a very lightweight jQuery plugin plus a PHP proxy that lets you scrape remote pages asynchronously, and it's blazing fast. The demo I linked above pulls tons of content from around 8 different sites, usually in less than 2 seconds.
The biggest bottleneck when scraping with PHP is that PHP will try to download all referenced content (meaning images) as soon as you try to parse anything server-side. To avoid this, the proxy in jQueryScrape actually breaks image tags on the server before sending the page to the client (by changing all img tags to span tags).
The jQuery plugin then provides a span2img method that converts those span tags back to images, so the downloading of images is left to the browser and happens as the content is rendered. At that point you can use the result as a normal jQuery object for parsing and rendering selections of the remote content. See the GitHub page for basic usage.