I'm encoding images to base64 and storing them in the database. I then print the images using PHP, but sometimes I get a corrupted image. If I put the same code in an HTML file, or if I refresh the page many times, then it works.
This is my corrupted image:
My HTML looks like this:
<img src="data:image/png;_encodeddata_" />
NOTE: _encodeddata_ is my base64-encoded image data (I cannot post that huge blob here)
It works fine, but sometimes it consistently shows a corrupted image for the same data. Is the problem with the browser or with base64?
I'm using image/png for all the icons. Would that cause any problems?
I think it's coming from the browser.
NB: Retrieving image data from the database on every page load can be slow.
Try writing an image file on your filesystem with the data and link to this file in your HTML. It will be faster and more robust.
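A minimal sketch of that approach, assuming a PDO connection in $pdo and an images table with id and data columns holding the base64 string (all names here are hypothetical):

// Decode the base64 column once and cache it as a real file.
$stmt = $pdo->prepare('SELECT data FROM images WHERE id = ?');
$stmt->execute([$imageId]);
$base64 = $stmt->fetchColumn();

$path = 'cache/image_' . $imageId . '.png'; // hypothetical cache folder
if (!file_exists($path)) {
    file_put_contents($path, base64_decode($base64));
}

The HTML then references the cached file directly:

<img src="cache/image_123.png" />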
Base64 encoding inflates the data, so storing images this way takes considerably more memory, and pulling that fairly large blob from the database takes time to load. Sometimes the browser does not wait long enough to decode it, which is why you see corrupted images.
You are better off storing your images in the local file system rather than in the database; it will speed up your process.
Use this code to store your data in the local file system:
function get_image($url, $saveto)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    $raw = curl_exec($ch);
    curl_close($ch);
    if (file_exists($saveto)) {
        unlink($saveto); // remove any stale copy first
    }
    $fp = fopen($saveto, 'x'); // 'x' creates the file, failing if it already exists
    fwrite($fp, $raw);
    fclose($fp);
}
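Usage would look something like this (both the URL and the target path are made-up examples):

get_image('http://example.com/icons/icon.png', 'cache/icon.png');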
I got this solution:
<img src='data:image/png;base64,_encodeddata_'/>
From http://www.kingpabel.com/php-base64_encode/
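For reference, a minimal sketch of emitting such a tag from PHP, assuming $imageBytes already holds the raw (decoded) image data:

// Build a data URI from raw PNG bytes and print the img tag.
$dataUri = 'data:image/png;base64,' . base64_encode($imageBytes);
echo '<img src="' . $dataUri . '" />';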
I'm trying to retrieve a remote image, but when I view the image (either output straight to browser or saved to disk and viewed) the quality is always lower.
Is there a way to do this without quality loss? Here are the methods I've used, but with the same result.
$imagePath is a URL like http://www.example.com/myimage.jpg and $filePath is where the image will be saved.
curl
$ch = curl_init($imagePath);
$fp = fopen($filePath, 'wb');
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
fclose($fp);
file_get_contents
$tmpImage = file_get_contents($imagePath);
file_put_contents($filePath, $tmpImage);
I'll get a screenshot to show the quality issues shortly.
If you look around the KIA logo you can see, on the left, the quality issues I'm having.
Update:
Upon some further testing with different images, it seems the quality issues are different for each image. Some have no issues at all.
Also, for the image the screenshots above are from, the long URL suggests it had already been resized and had its quality altered before it reached my script, so that could account for these issues too.
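One way to confirm that suspicion is to compare checksums of the remote and local copies; if they match, the download is byte-identical and any quality loss was already present at the source. A minimal sketch using the same variables as above:

// If the hashes match, the transfer itself cannot be the problem.
$remoteHash = md5(file_get_contents($imagePath));
$localHash = md5_file($filePath);
var_dump($remoteHash === $localHash); // true => byte-identical copy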
I want the server to begin downloading a big file and, while the file is downloading, output its content to the user. I tried this code:
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_TIMEOUT, 155000);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$response = curl_exec($ch); // get curl response
echo $response;
But this code takes a long time. I want to use curl instead of readfile.
See this answer: Manipulate a string that is 30 million characters long
Modifying the MyStream class should change it enough so that you can just echo the results to the browser. Assuming the browser is already downloading the file, it should just keep downloading it.
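In the same spirit, here is a minimal sketch that streams the download straight to the client with cURL's CURLOPT_WRITEFUNCTION callback instead of buffering the whole body; the URL and Content-Type are assumptions:

// Stream the remote file to the browser chunk by chunk.
$url = 'http://example.com/big-file.zip'; // hypothetical
header('Content-Type: application/octet-stream');

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) {
    echo $chunk; // pass the chunk straight through to the client
    flush();     // push it out immediately instead of buffering
    return strlen($chunk); // cURL expects the number of bytes handled
});
curl_exec($ch);
curl_close($ch);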
I'm saving and resizing tons of images from a remote server using cURL in PHP. The previous developer didn't implement any system to make the user-uploaded pictures "safe", so users were able to upload any file format with any file name, and the uploaded file names were saved to the database as-is. As a result, lots of files have non-URL-safe and ISO-8859-8 characters. For example:
gaght101125659_5 Gn-eŐs mtó gÁrlós.jpg
Following this answer, I wrote this code for fetching the pictures:
private function getRemoteUrl($file)
{
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $file);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
$data = curl_exec($ch);
curl_close($ch);
return $data;
}
$img = imagecreatefromstring($this->getRemoteUrl($url));//$url contains the plain url of the image.
This code works perfectly for images with a name like gaht123115516_2.jpg but for the example shown above it gives an error saying:
Warning: imagecreatefromstring() [function.imagecreatefromstring]: Data is not in a recognized format in /home/test/public_html/resizing.php on line 64
Of course, because of the fancy characters. So what should I do with the $file variable in getRemoteUrl($file) to make it work? Can this be done in PHP? If not, what other methods or programming languages should I try?
Use urlencode() on the filename; don't urlencode() the entire URL.
URL encode the specified filename.
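A minimal sketch of that, assuming the file name is the last path segment of the URL (the helper name is made up; rawurlencode() is used because, unlike urlencode(), it encodes spaces as %20 rather than +):

// Percent-encode only the last path segment, leaving the rest of the URL intact.
function encodeImageUrl($url)
{
    $parts = explode('/', $url);
    $filename = array_pop($parts);      // e.g. "gaght101125659_5 Gn-eŐs mtó gÁrlós.jpg"
    $parts[] = rawurlencode($filename); // accents and spaces become %XX escapes
    return implode('/', $parts);
}

$img = imagecreatefromstring($this->getRemoteUrl(encodeImageUrl($url)));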
I've written a script that searches through existing legal case dockets for things like "motion to intervene" and "motion to compel". If the regular expression returns true, it then looks to see whether there is a scanned image of the document online for public use. That image is a TIFF file, but not an ordinary TIFF file. Here is a link to an example of what I'm trying to copy to my own server:
http://www.oscn.net/applications/oscn/getimage.tif?submitted=true&casemasterid=2565129&db=OKLAHOMA&barcode=1012443256
Here is the error you get if you try to request only http://www.oscn.net/applications/oscn/getimage.tif
It is a TIFF file, but a dynamically generated one. I've tried fopen(), cURL, etc. without success. I've used these types of functions with JPG images from random sites just to check that my server allows this type of thing, and it worked.
I don't have PDFlib installed on the server (I checked PEAR and it's not available there either, though I'm not 100% sure that is where it would be). My host uses cPanel and the server runs Apache. I'm not sure where else to look for a solution to this problem.
I've seen some solutions that used PDFlib, but each of those grabbed a normal TIFF image, not one that was dynamically created. My thought, though, is that it shouldn't matter: if I can get the image data to stream, shouldn't I be able to use fopen() and write or buffer that data into my own .tif file?
Thanks for any input and Happy Thanksgiving!
UPDATE: The issue wasn't with cURL, it was with the URL I scraped and passed to cURL. When I printed the $url to the screen it looked right, but it wasn't: somewhere & had been turned into &amp;, which threw off cURL because it was fetching a URL that was invalid (at least according to the remote server where the TIF file lives).
For those of you finding this later, here is the script that works perfectly.
//*******************************************************************************
$url = 'http://www.oscn.net/applications/oscn/getimage.tif';
$url .= '?submitted=true&casemasterid=2565129&db=OKLAHOMA&barcode=1016063497';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url); // set the url
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // get the transfer as a string, rather than output it directly
print "Attempting to fetch file...\n";
$img = curl_exec($ch); // get the image
curl_close($ch);
// time() is used so that in testing a new file is created each run rather than
// always overwriting the old one. This will be changed for the final version.
if ($img) {
    $fh = fopen('oscn_docs/' . time() . '.tif', 'w'); // 'w' simply overwrites an existing file
    if ($fh) {
        $byteswritten = fwrite($fh, $img);
        fclose($fh);
    } else {
        print "Unable to open file.\n";
    }
} else {
    print "Unable to fetch file.\n";
}
print "Done.\n";
exit(0);
//*******************************************************************************
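Since the root cause was HTML entities in the scraped URL, here is a one-line sketch of normalizing it before handing it to cURL ($scrapedUrl is a hypothetical variable holding the raw scraped value):

// Turn &amp; back into & so cURL fetches the URL the server expects.
$url = html_entity_decode($scrapedUrl, ENT_QUOTES);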
The following code retrieves an image and saves it to a local folder. A jpg file is indeed saved to local disk, with around 40KB filesize (seems correct). When I put the local path in an img tag, the file does not display.
Firebug > Inspect Element shows a size of 0 X 0 and I'm unable to view the image when saved to my desktop.
file_put_contents, file_get_contents and getimagesize don't return FALSE. $url IS a valid image. The problem is just saving it locally; the file seems to be corrupt. How come?
$url = $image->request_url; //the image generated on the remote server
//print_r(getimagesize($url)); die;
$img = 'thumbalizr/cache/screenshot_' . $row['id'] . '.jpg'; //path to our local cache folder + unique filename
if( !$captured_file = file_get_contents($url) ) die('file could not be retrieved');
elseif( !file_put_contents($img, $captured_file, FILE_APPEND) ) die('file could not be written to local disk'); //write the image to our local cache
"Are you sure the path is correct? Have you tried an absolute path?" YES
"Have you checked that the image is downloaded correctly, perhaps with another utility (e.g. ftp, diff)?" I can download the img via ftp but it does not open on my local computer either.
"What do you get if you call the URL directly in the browser?" FF just prints out the URL instead of showing the image
"Why are you using FILE_APPEND? if the target already exists, this writes to the end, which will naturally give you a corrupt image" I removed FILE_APPEND, no difference
"source and final extension are the same?" Yes I tried with jpg, jpeg and png - no difference
"First of all, example code is wrong. Can't use $capture_file in file_put_content because that variable is not defied becouso of if else if block logic." - WRONG, that code does run!
"Can you look into the image file" - no! Although the file has a realistic file size and I can download it, it's impossible to open it.
First off, check the files you've been downloading in a text editor to see whether you're getting HTML error pages instead of binary image data.
Second, I would use cURL for this, as it provides better success/error information. Here's your example, modified to use it:
// path to our local cache folder + unique filename
$img_path = 'thumbalizr/cache/screenshot_' . $row['id'] . '.jpg';
$c = curl_init();
curl_setopt($c, CURLOPT_URL, $image->request_url);
curl_setopt($c, CURLOPT_HEADER, 0);
curl_setopt($c, CURLOPT_CONNECTTIMEOUT, 30);
curl_setopt($c, CURLOPT_AUTOREFERER, true);
curl_setopt($c, CURLOPT_BINARYTRANSFER, true);
curl_setopt($c, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($c, CURLOPT_FORBID_REUSE, true);
// curl can write the file out for you, but CURLOPT_FILE expects an open
// file handle, not a path string
$fp = fopen($img_path, 'wb');
curl_setopt($c, CURLOPT_FILE, $fp);
// You can get more elaborate success/error info from
// curl_getinfo() http://www.php.net/manual/en/function.curl-getinfo.php
if (!curl_exec($c)) {
    die('file could not be retrieved');
}