I am using this file uploader plugin, which uses JavaScript's FileReader API to read files and put them into input elements as base64 strings. The files can be up to 5 MB, so the base64 strings can get quite long.
Anyway, at first everything seems to work correctly: I can select a file and inspect my hidden input's content, and the base64 string is identical to what I get by running the base64 command on my Linux machine: base64 file.pdf > file.b64.
The problem is that when I post the form, the string gets truncated after 524261 characters, missing the last 50,000 characters or so, which means the file ends up corrupted.
I have tried changing some PHP settings (through the .htaccess file), but it's still not working, and honestly I can't figure out what the problem could be:
upload_max_filesize = 10M
post_max_size = 10M
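To narrow down whether the truncation happens in the browser or on the PHP side, a quick server-side check helps (a minimal sketch; file_b64 stands in for whatever field name the plugin actually posts):

<?php
// Log how long the posted base64 string is once it reaches PHP.
// If this already reports 524261 characters, the value arrived truncated,
// so the cut happened before PHP parsed the request.
$b64 = isset($_POST['file_b64']) ? $_POST['file_b64'] : '';
error_log('received base64 length: ' . strlen($b64));

// Decode and compare against the original file size; strict mode returns
// false only if the string contains characters outside the base64 alphabet.
$raw = base64_decode($b64, true);
error_log('decoded size: ' . ($raw === false ? 'decode failed' : strlen($raw) . ' bytes'));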
So, the problem was actually Chrome (I suppose some other browsers could have the same issue too). I solved it by using a textarea instead of an input. Since the fileUploader plugin I was using doesn't support textareas in place of inputs for the file content, I will probably open a pull request with the fix. Thank you gre_gor for pointing out the browser issue, and thank you all for your help.
Related
Our web app has a contenteditable div that we use for answering questions. Clients can paste images straight into the div (a basic feature of contenteditable), which turns the pasted images into base64 strings.
We noticed that Chrome on OS X handles the base64 encoding differently than other browsers: our sample image turned into roughly 220,000 characters on Safari, but Chrome produced almost a million characters of base64 data.
This in turn causes an issue where the POST data is clipped and only part of the image is saved. All other content in the POST data that comes after the image is also clipped. The request is otherwise fine; Laravel saves the clipped data like any other and doesn't throw any errors in any logs.
The php.ini settings should be fine (for example post_max_size = 64M and memory_limit = 1024M). Is there some setting in Laravel that could cause the clipping?
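One quick way to see whether the body is already clipped by the time it reaches the application is to compare the declared Content-Length with what Laravel actually received (a throwaway diagnostic sketch; the route and the 'answer' field name are made up, and the exact routes file depends on the Laravel version):

// In the routes file - a temporary diagnostic route.
Route::post('/debug-post-size', function (\Illuminate\Http\Request $request) {
    // If "received" is smaller than "declared", the body was cut off before
    // Laravel saw it (web server or PHP limits); if they match, the clipping
    // happens somewhere after the request was parsed.
    $declared = (int) $request->header('Content-Length');
    $received = strlen($request->getContent());
    \Log::info("POST body: declared {$declared} bytes, received {$received} bytes");
    \Log::info('answer field length: ' . strlen((string) $request->input('answer', '')));
    return response()->json(['declared' => $declared, 'received' => $received]);
});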
I have a picture. For whatever reason, I need that picture to be sent to an environment that can only receive text, not images. Images and other files must be sent through their filter, and I want to get around this. I calculated that there would be 480,000 independent hex values being manipulated, but this is really the only option I have. Also, is it possible to compress and decompress the data so fewer pixels have to be sent? I will need to send the picture from a PHP web server [let's say mysite.com/image.php] and receive it in Lua, and my only connection to the server is over a web request. No FTP, not even loading image files. Just setting 480,000 variables to the different IDs.
Oh, one more thing: it needs to not crash my server when I run it. ;)
Convert your image to base64 (you can then pass it around as a plain string variable).
For example, I converted a PNG image.
The base64 image will look like this:
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAcAAAAHCAYAAADEUlfTAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAE9JREFUeNpiYMADGLEJKssrCACp+Uw4JPYD8QdGHBIP7j58EMgCFDAAcvqBOBGI64FYAMpmYIFqAilYD6Udgbo+IBvXAMT/gXg9sjUAAQYAG6IS47QjgzEAAAAASUVORK5CYII="
You can use it as an image source to display it.
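On the PHP side (the image.php from the question), producing that string is a one-liner, and compressing first can shorten the text (a rough sketch; the file name is an example, and the Lua side would need a zlib binding to inflate before decoding):

<?php
// image.php - serve the picture as plain text so it passes a text-only filter.
$raw = file_get_contents('picture.png');

// Optional: deflate before encoding so less text goes over the wire.
// The receiver must then base64-decode first and inflate the result.
$raw = gzcompress($raw, 9);

header('Content-Type: text/plain');
echo base64_encode($raw);

Keep in mind that base64 inflates the data by roughly a third, so compressing first usually more than pays for the extra step (though gzcompress gains little if the source is an already-compressed format like PNG or JPEG).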
Hope this helps!
I'm currently experiencing a weird problem after converting a web application to ODBC with PostgreSQL (coming from MySQL with PHP's mysqli connector).
I noticed that images stored as bytea in the PostgreSQL database and run through PHP's base64_encode function are not shown correctly: at some point the data is cut off after a couple of lines. This cut-off happens with all bytea image data stored in our database, which we use for logos and signatures.
If you inspect the img tag with the browser's inspector you'll see (at least in Chrome) that a lot of the image data is missing.
What I do is a SELECT * FROM table and then, in a for loop, encode the image as base64:
$clients[$i]['logo'] = base64_encode($clients[$i]['image']);
$clients[$i]['image'] is the bytea from the database, and
$clients[$i]['logo'] is the base64 string that I display in a Smarty template like this: data:image/png;base64,{$client.logo}
I hope you can help.
The solution was the data length setting in the odbc.ini file. If the length is limited, values that are too long get cut off, which truncates the base64 strings. I just needed to increase the size.
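For reference, PHP's ODBC extension also exposes a related per-result limit that may be worth checking: odbc_longreadlen() sets how many bytes are fetched from long columns, and odbc_binmode() controls how binary data is returned (a sketch with made-up DSN, table and column names):

<?php
// Fetch bytea images over ODBC without the driver cutting them off.
$conn = odbc_connect('pgsql_dsn', 'user', 'secret');
$result = odbc_exec($conn, 'SELECT id, image FROM clients');

// Return binary columns as-is and allow up to 10 MB per long column;
// otherwise long values are truncated at the driver's default limit.
odbc_binmode($result, ODBC_BINMODE_RETURN);
odbc_longreadlen($result, 10 * 1024 * 1024);

$clients = array();
while ($row = odbc_fetch_array($result)) {
    $row['logo'] = base64_encode($row['image']);
    $clients[] = $row;
}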
My problem is with a PDF created with TCPDF on ZF2 and opened in Acrobat Reader.
The file is created fine (except for its size, around 500 KB) and the content is fine, but when I try to close the file, Acrobat prompts me to save changes even though no changes were made. After saving and overwriting, the file size drops to around 40 KB. So the file size is reduced more than tenfold, yet there is no visible change in the contents or otherwise.
The closest I got to any related answer was this thread: http://forums.planetpdf.com/save-file-prompt-when-closing_topic36.html
As I understand it, the issue is related to "the xref table is malformed", but my experience with PDF is not enough to understand the root of my problem. A sample file is available here: https://dl.dropboxusercontent.com/u/29072870/test_pdf.pdf
Thanks in advance!
Only the first 7036 bytes of your file make up the actual PDF. Everything after that is HTML code. So you should check your PDF creation code; it seems to contain some HTML generation as well (a leftover from copy & paste? something added by the framework?).
Adobe Reader displays those leading 7 KB and eventually offers to save them as a repaired file encoded the way Reader prefers (expanding those 7 KB to your 40 KB).
PS: I just noticed that after the HTML code there are additionally about 80 KB of null bytes.
It looks like you received a whole byte buffer 0x80000 (= 524288 in decimal) bytes in size containing your PDF, some HTML, and some as-yet-unused space.
The problem is actually not quite solved yet :)
The issue got much stranger now. In Chrome everything works perfectly: the created PDF is solid, with no additional data. In Firefox, however, the PDF output looks fine, saving the file works fine, and opening it with Acrobat is fine, but closing it produces the same prompt to save even though no changes were made. Apparently the block of null bytes is still present at the end of the file. When using the "download as file" option of TCPDF's output, the result is correct, with no additional data after the EOF. It only happens when the PDF is output in the browser (Firefox) and saved from there. Could it be a Firefox issue? Can one check the file for this kind of excess data and remove it somehow?
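A PDF is supposed to end at its final %%EOF marker, so one pragmatic way to check a saved file for trailing junk and strip it is to cut everything after that marker (a rough sketch; file names are examples, and it assumes nothing legitimate follows the last %%EOF, which holds for a freshly generated TCPDF document):

<?php
// trim_pdf.php - drop anything (HTML, null bytes) after the final %%EOF.
$data = file_get_contents('test_pdf.pdf');

$pos = strrpos($data, '%%EOF');
if ($pos === false) {
    die("No %%EOF marker found - not a complete PDF?\n");
}

// Keep everything up to and including the marker, plus a trailing newline.
$clean = substr($data, 0, $pos + strlen('%%EOF')) . "\n";
file_put_contents('test_pdf_clean.pdf', $clean);

printf("Trimmed %d bytes of trailing data\n", strlen($data) - strlen($clean));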
I have a script that gets raw binary image data via a URL request. It then takes the data and puts it into MySQL.
Pretty simple, right? Well, I'm inserting some 8,000 decent-sized 600x400 JPEGs, and for some odd reason some of the images are getting cut off. Maybe the part of my script that iterates through each image it needs to fetch is going too fast?
When I do a straight request to the URL I can see all the raw image data, but on my end the data is cut off somewhere down the line.
Any idea why?
Is something in the chain treating the binary data as a string, in particular a C-style null-terminated string? That could cause it to get cut off at the first null byte ('\0').
Have you tried simply calling the script that pulls the binary image and dumping it out? If you see the image correctly, then it's not the pulling part; it might be something to do with the inserting.
Are you setting the headers correctly?
i.e.:
header('Content-Length: '.strlen($imagedata));
header('Content-Type: image/png');
...
A string datatype would definitely not be optimal for storing images in a DB.
In fact, I've seen several recommendations that the image should go in a folder somewhere in your filesystem and the DB should contain only the address/file path.
This is a link to a page about inserting images.
It contains the suggestion about the file path and notes that a blob datatype is better if the images must go in the database.
If it's a blob, then treating it as a string won't work.
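If the images do stay in the database, inserting them through a prepared statement keeps anything along the way from treating the bytes as a plain string (a minimal sketch; the DSN, table and column names are made up):

<?php
// Insert raw JPEG bytes into a BLOB column via a prepared statement.
$pdo = new PDO('mysql:host=localhost;dbname=images_db', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$jpeg = file_get_contents('http://example.com/photo.jpg');

$stmt = $pdo->prepare('INSERT INTO photos (name, data) VALUES (:name, :data)');
$stmt->bindValue(':name', 'photo.jpg', PDO::PARAM_STR);
$stmt->bindValue(':data', $jpeg, PDO::PARAM_LOB);   // bind as a LOB, not a string
$stmt->execute();

It is also worth checking the column type: a plain BLOB in MySQL holds at most 64 KB, so larger JPEGs can get truncated unless the column is MEDIUMBLOB or LONGBLOB.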
If you make repeated requests to the same URL, does the image eventually load?
If so, that points to a networking issue. Large packet support is enabled in your kernel (assuming Linux), which doesn't work correctly for a lot of Windows clients. I've seen a similar issue with large (1+ MB) JavaScript libraries served from a Linux machine.
http://en.wikipedia.org/wiki/TCP_window_scale_option
http://support.microsoft.com/kb/314053