Corrupted files when downloaded via encrypted link - php

Our products are eBooks, delivered in both .epub and .mobi formats via the http://www.tipsandtricks-hq.com/ eStore Plug-in for WordPress, which generates encrypted download links.
The problem began suddenly about 5 days ago, after working fine for about three months:
- all .epub files get corrupted when customers download their purchase via the download links, and Adobe Digital Editions gives an error when trying to import the ePub into the Library
- all .mobi files get corrupted when downloaded and then loaded onto a Kindle; the Kindle gives a similar error
- we tried turning off the Google URL shortener: same error
- we have tested the links with IE, Chrome, and Firefox: same error
- we tested without the encrypted links, by downloading the files via a direct link in the browser, and they work fine, no error
What we have learned:
- we have tested using FTP to download in both ASCII and Binary modes; with ASCII we get the same error as with the encrypted download links
- files transferred with FTP are the same size after using ASCII and Binary, but running a hash check shows that the contents differ
- we are using FileZilla to test via FTP on both PC and Mac; however, the error only occurs on the PC
- SO, we are assuming that this problem has to do with the file transfer type and PCs
- in /home/foo/bar/wp-content/plugins/wp-cart-for-digital-products/download.php we see header("Content-Transfer-Encoding: binary"); so we assume Binary is being forced when using the encrypted links
Could there possibly be some characters in the encrypted link string that are forcing ASCII? Here is an example of the encrypted link:
https://fu.com/bar/download.php?file=LRtro6WQMN12ip%2BEcL0TYS8sMZmSKOlkRedVCZyfACsqSllzCAjDp%2FZJyfQ2oq0ZP6vg1EMrR%2FOFC4B3wGDHl3N0u0sulcBhIfkOJ0C0UQh6
And here is a look at the HTTP headers:
Status: HTTP/1.1 200 OK
Date: Wed, 27 Feb 2013 14:55:47 GMT
Server: Apache
X-Powered-By: PHP/5.3.17
Set-Cookie: PHPSESSID=7d61c9dd6ecbd321bea8cffg4a25d5e8; path=/
Expires: 0
Cache-Control: public
Pragma: public
X-CF-Powered-By: WP 1.3.9
Content-Description: File Transfer
Content-Disposition: attachment; filename="Some-File-Name-Which-Was-Replaced.epub"
Content-Transfer-Encoding: binary
Content-Length: 5088032
Connection: close
Content-Type: application/epub+zip
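For context, here is roughly what a forced-download handler like download.php does (a hedged sketch, not the plugin's actual source; the file path is hypothetical). Note that any byte echoed before the header() calls, such as a stray BOM from an included file, gets prepended to the download:

<?php
// Hedged sketch of a typical forced-download handler; it would produce
// headers like the ones shown above. The path below is hypothetical.
$path = '/home/foo/bar/files/Some-File-Name-Which-Was-Replaced.epub';

header('Content-Description: File Transfer');
header('Content-Type: application/epub+zip');
header('Content-Disposition: attachment; filename="' . basename($path) . '"');
header('Content-Transfer-Encoding: binary');
header('Content-Length: ' . filesize($path));
readfile($path);
exit;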
What else could possibly be causing this? Could our server be forcing ASCII in .htaccess or Apache config settings?
Thank you very much

One of our programmers was using the wrong character encoding in his editor. PHP files need to be saved as UTF-8 with the BOM off, or stray bytes, blank spaces, or newlines can interfere with file integrity:
UTF-8 BOM signature in PHP files
Offending line of code in WordPress' functions.php: if ($bom != b"\xEF\xBB\xBF")
Our culprit: the character sequence ï»¿, which we decoded from the hex values EF BB BF.
That was the junk data appearing at the beginning of the corrupted .epub files.
functions.php was outputting this junk data every time WordPress ran it.
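A minimal sketch (ours, not part of WordPress or the plugin) of how to hunt down such files, by scanning the tree for PHP files whose first three bytes are the UTF-8 BOM:

<?php
// Minimal sketch: report PHP files that begin with the UTF-8 BOM (EF BB BF).
$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('/home/foo/bar/wp-content')
);
foreach ($it as $file) {
    if ($file->isFile() && $file->getExtension() === 'php') {
        // Read only the first three bytes of each file.
        $head = file_get_contents($file->getPathname(), false, null, 0, 3);
        if ($head === "\xEF\xBB\xBF") {
            echo "BOM found: " . $file->getPathname() . PHP_EOL;
        }
    }
}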
So glad we finally got to the root of the problem, was driving us mad! Ha haa!!!
Peace & Happy Coding to You

Related

Apache and Content-Range

I am trying to implement support for Content-Range in PHP-generated files. When a browser sends a Range request, my script returns the correct bytes and it works well.
But while testing how Content-Range looks when downloading a PDF from an Apache server, I realized that the first request from the web browser to my server does not contain a Range header, yet somehow the server still doesn't return the full file, only 32 kB.
In the screenshot you can see that Firefox sends 5 requests to Apache for my_pdf.pdf, and Apache each time responds with 32-192 kB. The whole PDF is 28 MB. Requests 2-5 do contain a Range header, but the first request (highlighted) does not. You can see on the right that Content-Length is 28 MB but that Apache returned only 32 kB.
So my question is: how did Apache know to return only 32 kB and not the whole 28 MB PDF file?
It didn't. If you look at the Content-Length header in the response, it shows the full file size of 29.3 million bytes.
The client probably closed the connection without reading the entire response.
The answer posted by @duskwuff is correct: Firefox terminates the transfer of the first request once it has received enough to process the PDF.
Below are a few details I discovered.
Firefox will terminate the transfer if your script returns these headers:
Accept-Ranges: bytes
Content-Length: 29293315
You can (but don't have to) also return this header:
header("Content-Range: bytes 0-29293314/29293315");
However, by default Apache tries to compress whatever PHP returns, and then adds this header:
Transfer-Encoding: chunked
When Firefox (and Chrome) see this, they won't close the connection. So I just disabled Apache compression and everything works. Now Firefox just makes a few requests, gets bits of the PDF instead of the whole file, and renders the first page just fine (because it didn't need the whole PDF to render just the first page).
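For reference, a minimal sketch of the PHP side (assuming a single-range request and a local file; my_pdf.pdf is from the question, the rest is illustrative):

<?php
// Minimal single-range handler: answers "Range: bytes=START-END" with
// 206 Partial Content, otherwise sends the whole file.
$path = 'my_pdf.pdf';
$size = filesize($path);

header('Accept-Ranges: bytes');
header('Content-Type: application/pdf');

if (isset($_SERVER['HTTP_RANGE'])
        && preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    $start = (int) $m[1];
    $end   = ($m[2] !== '') ? (int) $m[2] : $size - 1;

    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$size");
    header('Content-Length: ' . ($end - $start + 1));

    $fp = fopen($path, 'rb');
    fseek($fp, $start);
    echo fread($fp, $end - $start + 1);
    fclose($fp);
} else {
    header('Content-Length: ' . $size);
    readfile($path);
}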

Downloading Blobs from TrueVault with spaces in the filename

I'm using the TrueVault REST API to upload/download Blobs per the documentation at https://docs.truevault.com/Files
To download an existing Blob, I'm passing the Blob URL directly to the client's web browser (Firefox) via a PHP header redirect. The client is able to download the Blob content from TrueVault without issue, but I've noticed in Firefox that if the Blob being downloaded has spaces in its filename, the filename is truncated on download.
For instance, if I upload a Blob to TrueVault with the filename 'Test File.txt', it gets downloaded as just 'Test'. I've seen this behavior in other PHP apps, and the fix has been to put quotes around the filename in the response headers.
I've traced the Response Headers from TrueVault when downloading and I can see where the filename is being passed to the client without any quotes around the name. Since the client is downloading the Blob directly from TrueVault, there's nothing I can do in my code to affect this behavior. Anyone else seeing this behavior? Any suggestions?
Strict-Transport-Security: max-age=31536000
Server: gunicorn/18.0
Date: Wed, 29 Apr 2015 14:40:28 GMT
Content-Type: application/zip
Content-Length: 11377
Content-Disposition: attachment; filename=Test file with Spaces.docx
Connection: keep-alive
Cache-Control: no-cache
This issue will be addressed by 4/30/2015. Thanks for bringing this to our attention.
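For reference, the quoted form that avoids the truncation looks like this in PHP (a hedged example of the general fix, not TrueVault's server code):

<?php
// Quoting the filename keeps browsers from truncating it at the first space.
$filename = 'Test file with Spaces.docx';
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="' . $filename . '"');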

Content-Length header not received in Android 2.3 and above browser

I have a hybrid application in which a simple PHP page opens containing links to files, and my Android wrapper implements the file-download functionality.
For user convenience I show the length and progress of the download while the file is downloading; for that, my application server sets a Content-Length header to pass the size to the device. But the problem I am facing is surprising.
The file length works fine on Android 2.2, where I get the Content-Length header correctly, but on Android 2.3 and above I get the content length only for smaller files; for larger files I don't even get the header field.
con.getHeaderField("content-length");
returns null on Android 2.3 and above.
So is there a size limitation in the user agent above 2.3? If it works fine on 2.2, there is no problem at the server end; the problem is only in the device's user agent.
Update
I have tried it with files of different sizes, and it works fine up to 60 KB on Android 2.3 and above as well.
It sounds like the server may be chunking the response. Check for the presence of the following header:
Transfer-Encoding: chunked
If that exists, the response is chunked and you will not get a Content-Length header.
See http://en.wikipedia.org/wiki/Chunked_transfer_encoding for more details.
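If you control the server, here is a hedged PHP sketch of sending an explicit Content-Length instead of a chunked response (the file path is hypothetical, and whether compression or chunking applies depends on your server configuration):

<?php
// Hedged sketch: disable transparent compression and send an explicit
// Content-Length, so the response is not delivered chunked.
ini_set('zlib.output_compression', 'Off');

$path = '/var/www/files/large-download.bin';
header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));
readfile($path);
exit;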

How can I strip characters like these (â?²s) with PHP?

I am developing an application using CodeIgniter/MySQL. Last night I stored a title in my database:
"HTML5′s placeholder Attribute".
After storing it, when I retrieve it from the database for display, it shows some strange characters, like this:
"HTML5â?²s placeholder Attribute".
How can I avoid these strange characters?
You probably just need to make sure that both the database table you are storing data in is set to UTF-8 and that the HTML page displaying the data is also explicitly set to UTF-8 encoding.
Your example application URL (seekphp.com/look/phpquery-jquery-port-to-php/1758) shows (via Firebug for Firefox):
Response Headers
Date: Sat, 14 Jan 2012 06:26:31 GMT
Server: Apache/2.2.19 (Unix) mod_ssl/2.2.19 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.17
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html
but properly UTF-8-encoded output would show the last line as
Content-Type: text/html; charset=UTF-8
You can declare the encoding of your HTML by outputting a meta tag in the document HEAD section:
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
or you can have PHP set this in a header:
header ('Content-type: text/html; charset=utf-8');
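Beyond the page encoding, the whole pipeline has to agree on UTF-8: the table, the database connection, and the output. A hedged sketch (the credentials and table name are hypothetical):

<?php
// Hedged sketch: make table, connection, and output all UTF-8, or
// characters like the prime in "HTML5′s" get mangled in transit.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');
$db->set_charset('utf8');                           // connection charset
header('Content-Type: text/html; charset=utf-8');   // output charset
// The table should also be UTF-8, e.g.:
//   ALTER TABLE titles CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;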
I had a similar problem when I was copying data from a text editor and pasting it into phpMyAdmin. If you're doing this, your text editor might be using a different encoding. I suggest you copy the data into a simple text editor like Notepad and manually replace the apostrophes. Manually replace ‘ with ' and it should work fine.
I had an issue related to the UTF-8 charset. I'd been all around the web (well, not entirely) for quite a while, and the best advice was, and is, to set the header charset to "UTF-8".
However, I was developing my web application locally on my machine using XAMPP (and sometimes WAMP, so as to have a point of comparison when debugging my code). Everything was working great =). But as soon as I uploaded it online, the result was not all that jazzy (the kind of errors you would get if you had set the headers to a different charset like "iso-8859-1").
Every header in my code had UTF-8 as the default charset, but I still got the same "hieroglyphic thingies". Then you guys gave me the idea that the issue wasn't my code but the php.ini that was running it. It turns out my local machine was running PHP 5.5 and the cPanel where I had uploaded my web application was running native PHP 5.3.
Well, when I changed the default PHP version on my cPanel to PHP 5.5, believe you me =) it worked like a charm, just as if I were right there on my machine's localhost.
NOTE: Please, if you've got the same problem as I did, just make sure your PHP is version 5.5. I'm posting this because I feel for you guys. Cheers!

Download PDF in Chrome from TCPDF

When downloading a PDF file in Chrome 12.0.742.91 (either as an attachment or inline), the download is interrupted (at the beginning it shows 125 KB, but later 127518/0 B, and then it stops entirely).
The file download works correctly in Firefox and IE. The headers are correct, and Apache returns 200 OK.
Previously everything was OK, probably until a Chrome update a few days ago.
Just for further reference: The problem is related to the gzip handling. Disabling transparent gzip compression solved it.
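For reference, a hedged sketch of what that looks like around TCPDF output (the document content is illustrative; Output() with 'D' is TCPDF's force-download mode):

<?php
require_once 'tcpdf.php';

// Hedged sketch: turn transparent gzip off for this response so the byte
// count Chrome receives matches the Content-Length it was promised.
ini_set('zlib.output_compression', 'Off');
if (function_exists('apache_setenv')) {
    apache_setenv('no-gzip', '1');   // ask mod_deflate to skip this request
}

$pdf = new TCPDF();
$pdf->AddPage();
$pdf->Write(0, 'Hello World');
$pdf->Output('example.pdf', 'D');    // 'D' = send as attachment (download)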
Check your Content-Length header. It seems to be returning a size smaller than the file itself. I suspect Chrome is interrupting the download because it's receiving more bytes than it should. It would be easy to put a test case in place for this, however.
I had problems when the name of the served file contained weird characters, like some accents (á é í ó ú) or this degree symbol: º.
