Our web app has a contenteditable div we use for answering questions. Clients can paste images straight into the div (a basic feature of contenteditable), which turns pasted images into base64 strings.
We noticed that Chrome on OS X handles the base64 encoding differently from other browsers: our sample image turned into roughly 220,000 characters in Safari, but Chrome produced almost a million characters of base64 data.
This in turn causes an issue where the POST data is clipped and only part of the image is saved. All other content in the POST data that comes after the image is also clipped. The request is otherwise fine; Laravel saves the clipped data like any other and doesn't throw any errors in any logs.
The php.ini settings should be fine (for example post_max_size = 64M and memory_limit = 1024M), so are there any settings in Laravel that could cause the clipping?
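For what it's worth, here is a minimal diagnostic sketch (the route and the "answer" field name are hypothetical, not from the original app) that logs how many characters actually reach Laravel, which helps tell client-side truncation apart from a server-side limit:

Route::post('/answers', function (\Illuminate\Http\Request $request) {
    // Compare the length of the individual field with the length of the raw request body.
    \Log::info('answer field length: ' . strlen($request->input('answer', '')));
    \Log::info('raw body length: ' . strlen($request->getContent()));
});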
Related
I am using this file uploader plugin, which uses JavaScript's FileReader API to read files and put them into input elements as base64 strings. Those files could be up to 5 MB, so the base64 strings can become quite long.
Anyway, at first everything seems to work correctly: I can select a file and inspect my hidden input's content, and the base64 string is equal to what I get by running the base64 command on my Linux machine: base64 file.pdf > file.b64.
The problem is that when I post the form, the string gets truncated after 524,261 characters, missing the last 50,000 characters or so, which means the file is corrupt.
I have tried changing some PHP settings (through the .htaccess file), but it's still not working, and honestly I can't figure out what the problem could be...
upload_max_filesize = 10M
post_max_size = 10M
So, the problem was actually Chrome (I suppose some other browsers could have the same issue too). I solved it by using a textarea instead of an input. Since the fileUploader plugin I was using doesn't support textareas instead of inputs for the file content, I will probably submit a pull request with the fix. Thank you gre_gor for pointing out the browser issue, and thank you all for your help.
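For reference, a minimal sketch of the markup change described above; the field name "file_b64" is hypothetical. Chrome reportedly caps the value of input elements at roughly 2^19 (524,288) characters, while textareas are not affected:

<!-- Hidden textarea instead of a hidden input for the base64 payload. -->
<textarea name="file_b64" style="display: none;"></textarea>
<!-- instead of: <input type="hidden" name="file_b64"> -->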
I'm currently experiencing a weird problem after converting a web application to ODBC with PostgreSQL (coming from MySQL with PHP's mysqli connector).
I noticed that images that are stored as bytea in the PostgreSQL database and run through PHP's base64_encode() are not displayed correctly: each image is cut off after a couple of lines. This happens with all bytea image data stored in our database, which we use for logos and signatures.
If you inspect the img tag with the browser's inspector you'll see (at least in Chrome) that a lot of the image data is missing.
What I do is a SELECT * FROM table and then in a for-loop encode the image as base64:
$clients[$i]['logo'] = base64_encode($clients[$i]['image']);
where $clients[$i]['image'] is the bytea from the database, and
$clients[$i]['logo'] is the base64 string that I display in a Smarty template like this: data:image/png;base64,{$client.logo}
I hope you can help.
The solution was the data length limit in the odbc.ini file. If the length is limited, values that are too long get cut off, and so do the resulting base64 strings. I just needed to increase the size.
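For reference, a sketch of the kind of odbc.ini change meant here; the exact key names depend on the ODBC driver (for psqlODBC they are typically MaxVarcharSize and MaxLongVarcharSize, in bytes), and the DSN name and values below are only examples:

[mydb]
Driver = PostgreSQL Unicode
Servername = localhost
Database = mydb
# Raise the per-column data length limits so long bytea/varchar values are not cut off.
MaxVarcharSize = 65536
MaxLongVarcharSize = 16777216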
I've noticed the following behaviour in Chrome and Firefox on Ubuntu Linux and Windows 8:
A greyscale image containing text is perfectly legible when viewed in a picture editor, but browsers (Chrome, Firefox) somehow render the greys much paler than they actually are.
Does anybody have an idea whether there is a way to stop or control this, perhaps via JavaScript, server headers, etc.? Is there some sort of encoding in the images themselves which is not clearly setting the correct grey tone to use?
Not sure when this started happening, but this image is from a website that has perhaps a million such images, and there never used to be a problem.
The reason turns out to be connected with missing metadata in the PNG files, which meant the images were not being assigned a correct default ICC colour profile. Older browsers didn't bother with colour profiles, which in this case meant they actually displayed the images "properly" in the past, but newer browsers display faint images.
With PNGs it is possible to set a single-byte property in the image metadata which instructs image renderers to use the sRGB ICC colour profile.
UPDATE:
Thanks to danack for pointing out the correct way of doing this without resorting to exec() as I did in my original fix:
$image->transformImageColorspace(Imagick::COLORSPACE_SRGB);
I should also note that, contrary to what this page says, the setImageColorspace() method does work as intended: its purpose is not to actually change the file's colorspace, but only to set the colorspace the image claims to be in ("sets the colorspace member of the Image structure", according to the ImageMagick API documentation). transformImageColorspace(), in contrast, actually makes the changes required for an image to display in that colorspace.
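For completeness, a minimal sketch of that fix, assuming the PHP Imagick extension; the file name is hypothetical:

$image = new Imagick('logo.png');
// Convert the pixel data to sRGB and tag the image accordingly, so browsers
// that honour colour profiles render the greys at their intended darkness.
$image->transformImageColorspace(Imagick::COLORSPACE_SRGB);
$image->writeImage('logo.png');
$image->clear();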
I'm facing a problem where I don't even have an idea how to get rid of it.
We moved our WordPress blog to a new server (CentOS 6, newest versions of PHP and MySQL).
We now noticed that a colleague had simply pasted many pictures into the WordPress editor, so the base64 data is inserted directly into the article. This worked on the old server, but not on the new one.
If it's a JPEG, it works like it used to. But if it's a PNG, all attributes are quoted with curly quotes (”) and even the normal elements that follow stop working.
This is how an element looks in the debugger:
<img class="”aligncenter”" alt="”"" src="”data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAl...”">
I checked my first ideas, like MIME types, but nothing helped. Then I updated PHP on the old server; there it still works.
The encoding of the files seems to be US-ASCII (same as on the old server).
The funny thing is that it works in the backend, but not in the frontend.
I have no idea. Any hints appreciated.
Best,
Kddc
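One thing worth ruling out (this is a guess, not a confirmed cause): WordPress's wptexturize filter is what converts straight quotes in post content into curly ones on output, which would match the ” characters above and would also explain why the backend shows the raw content correctly while the frontend is broken. Temporarily disabling it in the theme's functions.php shows whether a content filter is responsible:

// Diagnostic only: stop WordPress from "texturizing" quotes in post content.
remove_filter('the_content', 'wptexturize');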
I have a script that gets raw binary image data via a URL request. It then takes the data and puts it into MySQL.
Pretty simple, right? Well, I'm inserting some 8,000 decent-sized 600x400 JPEGs, and for some odd reason some of the images are getting cut off. Maybe the part of my script that iterates through each image it needs to fetch is going too fast?
When I do a straight request to the URL I can see all the raw image data, but on my end the data is cut off somewhere down the line.
Any ideas why?
Is something in the chain treating the binary data as a string, in particular a C style null-terminated string? That could cause it to get cut off at the first null byte ('\0').
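A quick way to test that hypothesis (the variable names below are hypothetical): if the length of the stored copy matches the position of the first NUL byte in the original data, something is treating the bytes as a C string.

// $original is the full image data; $stored is what ended up in MySQL.
var_dump(strpos($original, "\0"));   // position of the first NUL byte
var_dump(strlen($stored));           // length of the truncated copy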
Have you tried simply calling your script that pulls the binary image and dumping the result out? If you see the image correctly, then it's not the pulling part; it might be something to do with the inserting.
Are you setting the headers correctly?
ie:
header('Content-Length: '.strlen($imagedata));
header('Content-Type: image/png');
...
A string datatype would definitely not be optimal for storing images in a DB.
In fact, I've seen several recommendations that the image should go in a folder somewhere in your filesystem and that the DB should contain only the address/file path.
This is a link to a page about inserting images.
It contains the suggestion about the file path, and that a BLOB datatype is better if the images must go in the database.
If it's a blob, then treating it as a string won't work.
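If the data does go into a BLOB column, a prepared statement keeps the bytes from ever being handled as a quoted string. A minimal sketch with PDO (the DSN, credentials, table and column names are all hypothetical):

$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Raw bytes fetched from the remote URL.
$imagedata = file_get_contents('https://example.com/photo.jpg');

// Bind the binary data as a LOB so it is sent to MySQL untouched.
$stmt = $pdo->prepare('INSERT INTO images (data) VALUES (?)');
$stmt->bindParam(1, $imagedata, PDO::PARAM_LOB);
$stmt->execute();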
If you make repeated requests to the same URL, does the image eventually load?
If so, that points to a networking issue. Large packet support is enabled in your kernel (assuming Linux), which doesn't work correctly for a lot of Windows clients. I've seen a similar issue with large (1+ MB) JavaScript libraries served from a Linux machine.
http://en.wikipedia.org/wiki/TCP_window_scale_option
http://support.microsoft.com/kb/314053