gzcompress() randomly inserting extra data?

I've been researching this all morning and have decided that as a last-ditch effort, maybe someone on Stack Overflow has a "been-there, done-that" type of answer for me.
Background: Recently, I implemented compression on our (intranet-oriented) Apache (2.2) server using filters so that all text-based files (css, js, txt, html, etc.) are compressed via mod_deflate, mentioning nothing about PHP scripts. After plenty of research on how best to compress PHP output, I decided to use the gzcompress() flavor because the PHP documentation suggests that using the zlib library and gzip (using the deflate algorithm, blah blah blah) is preferred over ob_gzipwhatever().
So I copied someone else's method like so:
<?php
# Start each page by enabling output buffering and disabling automatic flushes
ob_start();
ob_implicit_flush(0);

(program logic)

print_gzipped_page();

function print_gzipped_page() {
    if (headers_sent())
        $encoding = false;
    elseif (strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'x-gzip') !== false)
        $encoding = 'x-gzip';
    elseif (strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip') !== false)
        $encoding = 'gzip';
    else
        $encoding = false;

    if ($encoding) {
        $contents = ob_get_contents(); # get contents of buffer
        ob_end_clean();                # turn off OB and flush buffer
        $size = strlen($contents);
        if ($size < 512) {             # too small to be worth a compression
            echo $contents;
            exit();
        } else {
            header("Content-Encoding: $encoding");
            header('Vary: Accept-Encoding');
            # 8-byte file header: g-zip file (1f 8b) compression type deflate (08), next 5 bytes are padding
            echo "\x1f\x8b\x08\x00\x00\x00\x00\x00";
            $contents = gzcompress($contents, 9);
            $contents = substr($contents, 0, $size); # faster than not using a substr, oddly
            echo $contents;
            exit();
        }
    } else {
        ob_end_flush();
        exit();
    }
}
Pretty standard stuff, right?
Problem: Between 10% and 33% of all our PHP page requests sent via Firefox go out fine and come back gzipped, except Firefox displays the compressed ASCII instead of decompressing it. AND, the weirdest part is that the content size sent back is always 30 or 31 bytes larger than the size of the page when correctly rendered. As in, when the script is displayed properly, Firebug shows a content size of 1044; when Firefox shows a huge screen of binary gibberish, Firebug shows a content size of 1074.
This happened to some of our users on legacy 32-bit Fedora 12s running Firefox 3.3s... Then it happened to a user with FF5, one with FF6, and some with the new 7.1! I've been meaning to upgrade them all to FF7.1, anyway, so I've been updating them as they have issues, but FF7.1 is still exhibiting the same behavior, just less frequently.
Diagnostics: I've been installing Firebug on a variety of computers to watch the headers, and that's where I'm getting confused:
Normal, functioning page response headers:
HTTP/1.1 200 OK
Date: Fri, 21 Oct 2011 18:40:15 GMT
Server: Apache/2.2.15 (Fedora)
X-Powered-By: PHP/5.3.2
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Encoding: gzip
Vary: Accept-Encoding
Content-Length: 1045
Keep-Alive: timeout=10, max=75
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
(Notice that Content-Length is generated automatically)
Same page when broken:
HTTP/1.1 200 OK
(everything else identical)
Content-Length: 1075
The sent headers always include Accept-Encoding: gzip, deflate
Things I've tried to fix the behavior:
Explicitly declare content length with uncompressed and compressed lengths
Not use the substr() of $contents
Remove checksum at the end of $contents
I don't really want to use gzencode because my testing showed it to be significantly slower (9%) than gzcompress, presumably because it's generating extra checksums and whatnot that (I assumed) the web browsers don't need or use.
I cannot duplicate the behavior on my 64-bit Fedora 14 box running Firefox 7.1. Not once in my testing before rolling the compression code live did this happen to me, neither in Chrome nor Firefox. (Edit: Immediately after posting this, one of the windows I'd left open that sends meta refreshes every 30 seconds finally broke after ~60 refreshes in Firefox) Our handful of Windows XP boxes are behaving the same as the Fedora 12s. Searching through Firefox's Bugzilla kicked up one or two bug requests that were somewhat similar to this situation, but that was for versions pre-dating 3.3 and was with all gzipped content, whereas our Apache gzipped css and js files are being downloaded and displayed without error each time.
The fact that the content-length is coming back 30/31 bytes larger each time leads me to think that something is breaking inside my script/gzcompress() that is mangling something in the response that Firefox chokes on. Naturally, if you play with altering the echo'd gzip header, Firefox throws a "Content Encoding Error," so I'm really leaning towards the problem being internal to gzcompress().
Am I doomed? Do I have to scrap this implementation and use the not-preferred ob_start("ob_gzhandler") method?
I guess my "applies to more than one situation" question would be: Are there known bugs in the zlib compression library in PHP that do something funky when receiving very specific input?
Edit: Nuts. I readgzfile()'d one of the broken, non-compressed pages that Firefox downloaded and, lo and behold!, it echo'd everything back perfectly. =( That means this must be... Nope, I've got nothing.

Okay, first of all: you don't seem to be setting the Content-Length header, which will cause issues; instead, you are making the gzip content match the content length that you were receiving in the first place. This is going to turn ugly. My suggestion is that you replace the lines
# 8-byte file header: g-zip file (1f 8b) compression type deflate (08), next 5 bytes are padding
echo "\x1f\x8b\x08\x00\x00\x00\x00\x00";
$contents = gzcompress($contents, 9);
$contents = substr($contents, 0,$size); # faster than not using a substr, oddly
echo $contents;
with
$compressed = gzcompress($contents, 9);
$compressed_length = strlen($compressed); /* contains no nulls, I believe */
header("Content-length: $compressed_length");
echo "\x1f\x8b\x08\x00\x00\x00\x00\x00", $compressed;
and see if it helps the situation.
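For reference, gzencode() builds a complete RFC 1952 gzip stream (10-byte header, deflate body, CRC32 and length trailer), so nothing has to be hand-assembled at all. A minimal sketch, reusing $contents and $encoding from the question's code:
header("Content-Encoding: $encoding");
header('Vary: Accept-Encoding');
$gz = gzencode($contents, 9);             # complete gzip stream, not bare zlib data
header('Content-Length: ' . strlen($gz));
echo $gz;
exit();
The ~9% slowdown the asker measured is presumably the price of that extra checksum work.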

Ding! Ding! Ding! After mulling over this problem all weekend, I finally stumbled across the answer after re-reading the PHP man pages for the umpteenth time... From the zlib PHP documentation, "Whether to transparently compress pages." Transparently! As in, nothing else is required to get PHP to compress its output once zlib.output_compression is set to "On". Yeah, embarrassing.
For reasons unknown, the code being called, explicitly, from the PHP script was compressing the already-compressed contents and the browser was simply unwrapping the one layer of compression and displaying the results. Curiously, the strlen() of the content didn't vary when output_compression was on or off, so the transparent compression must occur after the explicit compression, but it occasionally decided not to compress what was already compressed?
Regardless, everything is resolved by simply leaving PHP to its own devices. zlib doesn't need output buffering or anything to compress the output.
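For completeness, a minimal sketch of the ini side of that (assuming the zlib extension is loaded; the directive is PHP_INI_ALL, so it can live in php.ini or be set per-script before any output is sent). $page_html here is just a stand-in for whatever the page prints:
# In php.ini:  zlib.output_compression = On
# Or per-script, before anything is echoed:
ini_set('zlib.output_compression', 'On');
ini_set('zlib.output_compression_level', '6'); # optional; the default (-1) lets zlib decide
echo $page_html; # compressed transparently; PHP sets Content-Encoding itself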
Hope this helps others struggling with the wonderful world of HTTP compression.

Related

IE not caching video (works in Chrome)

On an HTML page constructed using PHP + jQuery + JavaScript (e.g. index.php), a video tag has a source that is another PHP page, with a GET field specifying which video to load (e.g. "getfile.php?file=111").
Buttons switch which video is playing; e.g. this JavaScript:
var video = document.getElementById('flyover');
var source = video.getElementsByTagName('source')[0];
source.setAttribute('src', "getfile.php?file=222");
getfile.php emits HTTP headers, then fpassthru of file contents.
...
header('Content-Type: video/mp4');
header('Content-Disposition: attachment; filename='.basename($file->FileName));
header('Content-Transfer-Encoding: binary');
$seconds_to_keep = ...
header ("Expires: " . gmdate("D, d M Y H:i:s", time() + $seconds_to_keep) . " GMT");
header('Cache-Control: public, max-age=' . $seconds_to_keep);
header('Content-Length: ' . filesize($filename));
fpassthru($fp);
exit;
Fiddler proxy used to confirm headers:
# Result Protocol Host URL Body Caching Content-Type
47 200 HTTP ... /getfile.php?file=2639 10,113 public, max-age=31536000; Expires: Thu, 06 Aug 2015 20:20:30 GMT video/mp4
Test actions:
Load page
Wait for video #1 to finish playing (And Fiddler updates Caching info from "-1" to "max-age / Expires" details)
Push button for video #2
Wait for video #2 to finish playing (And Fiddler updates Caching info)
Push button for video #1
On Chrome, the result is that video #1 immediately starts playing (and buffering bar shows halfway loaded, which is the most I ever see at video start). Fiddler does NOT show a new "getfile" request to server.
On IE 11, there is a delay while video #1 buffers (and buffering bar shows zero loaded at video start). Fiddler DOES show a new "getfile" request to server.
IE's cache setting is "automatic" (Temporary Internet Files / Check for newer versions of stored pages = "Automatically"). Cache size is 250 MB, videos are ~6 MB each, and the cache was emptied prior to the start of testing.
Confirmed that URL is exactly the same (according to fiddler, and using alert pop-up in javascript).
Q: What else could affect IE's failure to cache these videos?
UPDATE
IMAGES, obtained via the same URL, but with a different query-field fileid value and a different Content-Type header, ARE caching in IE: if I quit the browser, restart it, and go to the same page, Fiddler does not show any "/getfile.php?fileid=333" requests for those images. (It did show those requests the first time the page was loaded after a cache clear.)
The only change in php code executed (for images versus video) is a single if / else if statement, that controls what Content-Type header is emitted.
Perhaps it is IE 11's caching policy to not cache videos?
The logic does emit a Content-Length header with the file size, and the client internet options cache (250 MB) is much larger than the file size (6 MB), so it "should" be able to cache it. Free disk space is many GBs.
UPDATE #2
Restarting IE, after using Security tab to turn "Enable Protected Mode" off or on, does not change the above results.
Increasing disk space to the maximum (1024 MB) does not change the above results.
Setting IE's policy to "Check for newer versions of stored pages: Never" doesn't seem to "stick": when I close Internet Options and re-open it, the radio button has returned to "Automatically".
...
Repeating Chrome test after the IE tests confirms that caching is still working correctly within Chrome.
UPDATE #3
The PHP code on the server does NOT test for HTTP_IF_MODIFIED_SINCE; I'm not sending a Last-Modified header. I was assuming max-age would be sufficient. It is possible that IE would be smarter about caching video files if Last-Modified were present. If you have any experience with video over slow server connections and have succeeded using a specific set of headers, then an answer with the approach you used would be useful.
Give this a shot, from http://php.net/manual/en/function.header.php#85146:
$last_modified_time = filemtime($file);
$etag = md5_file($file);
header("Last-Modified: " . gmdate("D, d M Y H:i:s", $last_modified_time) . " GMT");
header("Etag: $etag");
if (@strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) == $last_modified_time ||
    trim($_SERVER['HTTP_IF_NONE_MATCH']) == $etag) {
    header("HTTP/1.1 304 Not Modified");
    exit;
}
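A hypothetical variant with isset() guards, since both $_SERVER keys are absent on a first visit and the @ above merely hides the resulting notice:
$ims = isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])
    ? strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) : false;
$inm = isset($_SERVER['HTTP_IF_NONE_MATCH'])
    ? trim($_SERVER['HTTP_IF_NONE_MATCH']) : false;
if ($ims === $last_modified_time || $inm === $etag) {
    header("HTTP/1.1 304 Not Modified");
    exit;
}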

TCPDF & mPDF error: Some data has already been output to browser, can't send PDF file

The Problem:
TCPDF & mPDF error: Some data has already been output to browser, can't send PDF file
I gave up on trying to fix the error with TCPDF and installed mPDF, only to get the same error when attempting to render the document to the browser. I can save the document just fine and have it displayed in the browser upon retrieval.
Additionally, this error only presented itself after switching from my dev server to my host server. Works fine on DEV server (DEV server = WAMPSERVER, PROD server = Hostgator Linux).
Troubleshooting:
I've read the many volumes of other discussions around the internet regarding this problem and I can find no white space related issue. I have condensed the request to the following:
<?php
ob_start();
$html = "Hello World";
include("../mpdf.php");
$mpdf = new mPDF();
$mpdf->WriteHTML($html);
$mpdf->Output();
ob_end_clean();
?>
Tried the same concept with TCPDF using the ob_clean() method before writeHtml. Same error in all cases (I can assure everyone this is no white-space-related issue; I even viewed the file in hex to make sure there were no odd characters being inserted by the editor).
Possible Clue:
I was finally able to get a clue as to what's going on when I moved the entire mPDF library (classes and folders) to the public_html folder, rather than leaving it inside my application folder (a Symfony project). Under this scenario, when I pointed my browser to the example pages, it rendered just fine with no errors at all (and it was super fast, btw). So, I know it works, and I know there is no white-space-related issue, or any other related issue, regarding the code or installation (on the mPDF/TCPDF side of things). Which leads me to believe either Symfony is inserting headers of some sort (which I tried removing using clearHttpHeaders()), or there is a PHP INI or config setting I am missing somewhere on the PROD server.
Does anyone have ANY idea of what's going on here??
Update: stream dump:
Request URL:http://www.example.com/mpdf
Request Method:GET
Status Code:200 OK
Request Headers
GET /mpdf HTTP/1.1
Host: www.example.com
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,/;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.57 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Cookie: __utma=44708724.1463191694.1383759419.1383759419.1383765151.2; __utmz=44708724.1383759419.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=9c7c802200b9d8eefe718447755add5f; __utma=1.813547483.1383767260.1385127878.1385130071.38; __utmb=1.7.10.1385130071; __utmc=1; __utmz=1.1383767260.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)
Response Headers
Cache-Control:no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Connection:Keep-Alive
Content-Type:text/html
Date:Fri, 22 Nov 2013 14:59:52 GMT
Expires:Thu, 19 Nov 1981 08:52:00 GMT
Keep-Alive:timeout=5, max=75
Pragma:no-cache
Server:Apache
Transfer-Encoding:chunked
Nothing is jumping out at me... any other thoughts?
Most probably it's a BOM marker; use your IDE to remove it. Another hot fix can be:
<?php
$html = "Hello World";
include("../mpdf.php");
ob_clean(); // cleaning the buffer before Output()
$mpdf=new mPDF();
$mpdf->WriteHTML($html);
$mpdf->Output();
?>
I have got the same error.
Solve this using ob_start(); and ob_end_clean();
PHP is an interpreted language, so each statement is executed one after another; therefore PHP tends to send HTML to browsers in chunks, reducing performance. With output buffering, the generated HTML is stored in a buffer (a string variable) and sent to the browser to render after the execution of the last statement in the PHP script.
But output buffering is not enabled by default. In order to enable it, one must use the ob_start() function before echoing any HTML content in a script.
[reference credit][1]
[PHP | ob_start() Function][2]
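A minimal sketch of the primitives involved:
ob_start();               # from here on, echoed output goes to the buffer, not the browser
echo "accidental output"; # captured instead of sent
ob_end_clean();           # discard the buffer so nothing reaches the browser
Applied to mPDF, that looks like: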
public function gen_pdf($html, $user_id, $paper = 'A4') {
    ob_start(); // enables output buffering
    $mpdf = new mPDF('UTF-8', $paper, '', '', 15, 15, 30, 20, 15, 5);
    $mpdf->mirrorMargins = 1; // use different odd/even headers and footers and mirror margins
    $header = '';
    $footer = '';
    $mpdf->SetHTMLHeader($header);
    $mpdf->SetHTMLFooter($footer);
    $mpdf->SetWatermarkText('Watermark', 0.1);
    $mpdf->showWatermarkText = true;
    $mpdf->WriteHTML($html);
    $fileName = date('Y_m_d_H_i_s');
    ob_end_clean(); // end output buffering, discarding anything buffered
    $mpdf->Output('Example_' . $fileName . '.pdf', 'I');
}
So that it will clear all buffered output before processing mPDF.
Best Luck...
[1]: https://www.geeksforgeeks.org/php-ob_start-function/
[2]: https://www.php.net/manual/en/function.ob-start.php
It could be some warning issued from PHP before the pdf->output. The warning text is sent to the client's browser and thus the file cannot be sent.
If your warning level is not the same for DEV and PROD, that could explain the difference of behavior.
In my case, with TCPDF, I had many warnings such as "date() it is not safe to rely on the system's timezone settings...", then the error "Some data has already been output to browser, can't send PDF".
Adding the function date_default_timezone_set() in my php source code solved the warnings and the error.
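For example (a sketch; substitute the zone your server should use):
date_default_timezone_set('UTC'); # call this before any date() call and before $pdf->Output()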
I had the same issue, and added this line before $pdf->Output():
error_reporting(E_ALL);
And then I found that I had a BOM in some files, and I saw a warning message sent to the browser.
Best Luck !!
Regards
Maybe it occurs because your resulting HTML output has some error in it when creating the TCPDF ...
OR
If the above does not work, try setting the charset to UTF-8 in the TCPDF class file; that may solve your issue...
Because this type of error was happening in my project a week ago ...
Remove any file you might have included at the start of the page. In my case it was a file that was connecting to the database. It worked for me. (Tip from @Nicolas400)
Try using ob_clean(); before include("../mpdf.php");.
I have got the same error:
Data has already been sent to output, unable to output PDF file
This means that before creating the PDF with mPDF, some data was stored in the buffer and sent to the browser. Therefore it is unable to create the PDF.
Just do this:
Add this PHP built-in function on the first line of the page where you are preparing data for the PDF:
ob_start();
And add this PHP built-in function before the mPDF code (before where you are calling mPDF):
ob_end_clean();
require_once __DIR__ . '/vendor/autoload.php';
$mpdf = new \Mpdf\Mpdf();
$mpdf->WriteHTML($html);
$mpdf->Output();
So that it will clear all buffered output before processing mPDF.

Ogg audio served from PHP-file unseekable on Chrome

I'm developing a basic audio player using the HTML5 audio element. All served files are .ogg, coming from another system as a binary string through an API and printed out by a simple PHP handler (which includes some basic security checking); there's really nothing I can do about that.
However, while everything works nicely on Firefox, Chrome has some issues regarding seeking, skipping or replaying (simply put: it can't be done). Getting audio duration from the audio element returns Infinity, and while currentTime returns correct value, it can't be manipulated.
I started fiddling with headers and response codes and nothing seems to work properly. Using response code 206 for partial content or declaring Accept-Ranges in header causes audio element to go disabled as it does when the source file doesn't exist.
Searching for proper headers didn't yield much results, since it always was about partial content and X-Content-Duration (which is a PITA to calculate because of VBR), which did absolutely nothing on Chrome.
Here are the relevant headers which work just fine on FF. Did I make a major mistake on something, or is there just some issue with Chrome?
HTTP/1.1 200 OK (changing this to 206 makes the file unplayable in Chrome)
Cache-Control: public, no-store
Content-Disposition: attachment; filename="redacted.ogg"
Content-Length: 123456
X-Content-Duration: 12 (this does nothing on Chrome, works on FF)
Content-Range: bytes 0-123455/123456
Accept-Ranges: bytes (...as does including this)
Content-Transfer-Encoding: binary
Content-Type: audio/ogg
Expires: 0
Pragma: public
Edit: same kind of behaviour on Opera, probably on all WebKit browsers. It also doesn't work in fread/file_get_contents types of situations.
Alright, dumb me, it really was due to the partial content. Chrome didn't really like the idea of taking the whole file at once (and caching it like FF does), so to enable seeking and whatnot it needed Accept-Ranges: bytes, a bit of incoming-header parsing to get the requested byte range, and a couple of lines which return that byte range with the correct partial-content status (206), not just "bytes 0-(filesize-1)/filesize" (the whole file), even if the first request asks for it (bytes=0-).
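A minimal sketch of that byte-range handling, assuming $path points at the .ogg file (the real handler keeps its API and security checks, and suffix ranges like "bytes=-500" are ignored here):
$size  = filesize($path);
$start = 0;
$end   = $size - 1;

if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    $start = (int)$m[1];
    if ($m[2] !== '') {
        $end = min((int)$m[2], $size - 1);
    }
    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$size");
}

header('Accept-Ranges: bytes');
header('Content-Type: audio/ogg');
header('Content-Length: ' . ($end - $start + 1));

$fp = fopen($path, 'rb');   # send only the requested slice of the file
fseek($fp, $start);
echo fread($fp, $end - $start + 1);
fclose($fp);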
Hope this helps someone else as well.

How to return an image to an https request with PHP

After many hours, I nailed down my problem concerning downloading an image with an https request.
When I access an image with the hard path (https://mysite/tmp/file.jpg), Apache returns it with success and I can view it in the browser, without any extra manipulation.
When I try to access it with another path (https://mysite/files/file.jpg), in order to control its access with PHP, I get a response from my PHP code, but I cannot view the image in the browser.
VirtualHost defined; mysite: set to /var/www/mysite
$app['controllers']->requireHttps();
Description of the environment:
mysite/tmp/file.jpg
mysite/files/.htaccess
//... no files; handled by the Silex router. Here is the .htaccess:
---
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule ^ ../web/index.php [L]
</IfModule>
---
https://mysite/tmp/file.jpg, served over https with 200 and viewed in browser; ok
https://mysite/files/file.jpg, served over https with 200 but not viewed in browser; ?
Here are the 3 methods tried:
Method 1: Silex sendFile() direct method >
$app->get('/files/{onefile}', function ($onefile) use ($app) {
    // Validate authorization; if ok, then
    return $app->sendFile('/var/www/mysite/tmp/' . $onefile);
});
Method 2: Silex streaming >
$app->get('/files/{onefile}', function ($onefile) use ($app) {
    // Validate authorization; if ok, then
    $file = '/var/www/mysite/tmp/' . $onefile; // same path as in the other methods
    $stream = function () use ($file) {
        readfile($file);
    };
    return $app->stream($stream, 200, array('Content-Type' => 'image/jpeg'));
});
Method 3: Symfony2 style >
$app->get('/files/{onefile}', function ($onefile) use ($app) {
    // Validate authorization; if ok, then
    $exactFile = "/var/www/mysite/tmp/" . $onefile;
    $response = new StreamedResponse();
    $response->setCallback(function () use ($exactFile) {
        $fp = fopen($exactFile, 'rb');
        fpassthru($fp);
    });
    $response->headers->set('Content-Type', 'image/jpg');
    $response->headers->set('Content-Length', filesize($exactFile));
    $response->headers->set('Connection', 'Keep-Alive');
    $response->headers->set('Accept-Ranges', 'bytes');
    $response->send();
});
This is what Chrome presents (screenshot omitted). This is the HTTP (HTTPS or not, same result) request:
Request URL:https://mysite/files/tmpphp0XkXn9.jpg
Request Method:GET
Status Code:200 OK
Request Headers:
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:keep-alive
Cookie:XDEBUG_SESSION=netbeans-xdebug; _MYCOOKIE=u1k1vafhaqik2d4jknko3c94j1
Host:mysite
Pragma:no-cache
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.110 Safari/537.36
Response Headers:
Accept-Ranges:bytes
Cache-Control:no-cache
Connection:Keep-Alive, Keep-Alive
Content-Length:39497
Content-Type:image/jpg
Date:Thu, 13 Jun 2013 13:44:55 GMT
Keep-Alive:timeout=15, max=99
Server:Apache/2.2.16 (Debian)
X-Powered-By:PHP/5.3.3-7+squeeze15
Other tests made to eliminate possible undesired behavior:
I checked the BOM and made sure the PHP code responding to the request is valid and does not have undesired byte-order marks (BOMs), using Emacs (set-buffer-file-coding-system utf-8). Furthermore, to avoid unknown BOM-tagged files, I executed the following command in the mysite directory (grep -rl $'\xEF\xBB\xBF' .). Nothing abnormal appeared.
UPDATE:
Looking at the files (images) received by the browser (save-as on each image), this is what the tool (Hex Fiend) helped me find, but I still do not understand why:
Comparing the two files
(one with success: mysite/tmp/file.jpg ; served directly by Apache)
(one with NO success: mysite/files/file.jpg; served by a PHP script).
At the beginning of the binary file, I get this difference: (hex screenshot omitted)
At the end of the binary file, I get this difference: (hex screenshot omitted)
Question:
How can I return an image (as a stream or with some other technique) with PHP code, and view it in the browser? All PHP code methods return the same output, so I suspect an environment problem. Is there an environment setting that could produce the error I am experiencing?
I am not happy with this solution (a patch), but the problem is fixed by adding the following call:
$app->get('/files/{onefile}', function ($onefile) use ($app) {
ob_end_clean(); // ob_clean() gave me the same result.
...
}
OK, it is fixed, but can somebody explain to me how to fix it more intelligently?
UPDATE
What happened?
This is my interpretation:
Extra newlines remained in the original raw PHP files, in unintentional positions! It turns out that when you have a PHP file
<?php
...
?>
where you left some newlines (or spaces) after the ?>, those newlines add up in the output buffer. Then streaming a file to the output takes these accumulated newlines and puts them where they land: at the head of the stream, or at its tail. Having a fixed size for the stream (equal to the image size) means the extra leading characters shift the bytes output to the browser accordingly. I found that exactly 5 extra characters (0A 0A 20 0A 0A, i.e. linefeed linefeed space linefeed linefeed) had been added to the image received by the browser. The browser then does not recognize the image structure, since it is shifted from offset 0 by 5 characters that make no sense in the binary image, and can only show a broken-image icon.
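A quick way to see the leak (note that PHP swallows exactly one line break immediately after a closing ?>, so it takes more than one, or a trailing space, to contaminate the output); helper.php is a hypothetical include ending in stray whitespace:
ob_start();
include 'helper.php';                 # hypothetical file with stray bytes after its ?>
var_dump(bin2hex(ob_get_contents())); # e.g. string(6) "0a0a20": the leaked bytes
ob_end_clean();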
Also look at: Another helping SO fix
Advice to PHP framework developers:
If you provide a sendFile() equivalent method, maybe you should throw an Exception when ob_get_contents() does not return an empty buffer, just before streaming to the output!
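Something along these lines; just a sketch of the idea, not any framework's actual API:
function sendFileChecked($path) {
    if (ob_get_level() > 0 && ob_get_contents() !== '') {
        throw new RuntimeException("Output buffer is not empty; refusing to stream $path");
    }
    header('Content-Type: image/jpeg');
    header('Content-Length: ' . filesize($path));
    readfile($path);
}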
For now:
This small Linux shell script can at least let you find where your code should be cleaned. After using it, I was able to remove the ob_end_clean()... after analyzing its output. It tells you which suspicious PHP files might contain an extra space or newline at the end. Just execute it and manually fix the files:
#!/bin/bash
# Check that every PHP file ends with exactly '?>' (hex 3f3e), i.e. no
# trailing newline or space after the closing tag.
for phpfiles in $(find . -name '*.php'); do
    lastbytes=$(tail -c 2 "$phpfiles" | hexdump -e '1/1 "%.2x"')
    if [ "$lastbytes" = "3f3e" ]; then
        echo "OK.................File $phpfiles"
    else
        echo "File $phpfiles: Suspicious. Check ($lastbytes) at the EOF; it should end with '?>' (in hex 3f3e)"
    fi
done
It can surely be improved, but it should at least help somebody!
Don't use the closing ?> ... it's against Symfony coding standards.
Against PSR-2, to be more concrete:
All PHP files MUST end with a single blank line.
The closing ?> tag MUST be omitted from files containing only PHP.
The reason for that is partly what you experienced.
This behavior is sometimes caused by the BOM as well.
Check that none of your files contains the byte-order mark!

PHPExcel: I need to create two workbooks on one submission [duplicate]

Use case: user clicks the link on a webpage - boom! a load of files sitting in his folder.
I tried to pack the files using a multipart/mixed message, but it seems to work only in Firefox.
This is what my response looks like:
HTTP/1.0 200 OK
Connection: close
Date: Wed, 24 Jun 2009 23:41:40 GMT
Content-Type: multipart/mixed;boundary=AMZ90RFX875LKMFasdf09DDFF3
Client-Date: Wed, 24 Jun 2009 23:41:40 GMT
Client-Peer: 127.0.0.1:3000
Client-Response-Num: 1
MIME-Version: 1.0
Status: 200
--AMZ90RFX875LKMFasdf09DDFF3
Content-type: image/jpeg
Content-transfer-encoding: binary
Content-disposition: attachment; filename="001.jpg"
<< here goes binary data >>
--AMZ90RFX875LKMFasdf09DDFF3
Content-type: image/jpeg
Content-transfer-encoding: binary
Content-disposition: attachment; filename="002.jpg"
<< here goes binary data >>
--AMZ90RFX875LKMFasdf09DDFF3--
Thank you
P.S. No, zipping files is not an option
Zipping is the only option that will have consistent results in all browsers. If it's not an option because you don't know zips can be generated dynamically, well, they can. If it's not an option because you have a grudge against zip files, well...
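For what it's worth, a sketch of building one on the fly with PHP's bundled ZipArchive class (file names are placeholders):
$tmp = tempnam(sys_get_temp_dir(), 'zip');
$zip = new ZipArchive();
$zip->open($tmp, ZipArchive::OVERWRITE);
$zip->addFile('001.jpg');                # placeholder file names
$zip->addFile('002.jpg');
$zip->close();

header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="files.zip"');
header('Content-Length: ' . filesize($tmp));
readfile($tmp);
unlink($tmp);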
MIME/multipart is for email messages and/or POST transmission to the HTTP server. It was never intended to be received and parsed on the client side of an HTTP transaction. Some browsers do implement it, some others don't.
As another alternative, you could have a JavaScript script opening windows downloading the individual files. Or a Java Applet (requires Java Runtimes on the machines, if it's an enterprise application, that shouldn't be a problem [as the NetAdmin can deploy it on the workstations]) that would download the files in a directory of the user's choice.
I remember doing this >10 years ago in the Netscape 4 days. It used boundaries like what you're doing and didn't work at all with the other browsers of that time.
While it does not answer your question, HTTP 1.1 supports request pipelining, so that at least the same TCP connection can be reused to download multiple images.
You can use base64 encoding to embed a (very small) image into an HTML document; however, from a browser/server standpoint, you're technically still sending only 1 document. Maybe this is what you intend to do?
Embed Images into HTML using Base64
EDIT: I just realized that most methods I found in my Google search only support Firefox, and not IE.
You could make a JSON with multiple data URLs.
E.g.:
{
"stamp.png": "data:image/png;base64,...",
"document.pdf": "data:application/pdf;base64,..."
}
(extending trinalbadger587's answer)
You could return an html with multiple clickable, downloadable, inplace data links:
<html>
<body>
<a download="yourCoolFilename.png" href="data:image/png;base64,...">PNG</a>
<a download="theFileGetsSavedWithThisName.pdf" href="data:application/pdf;base64,...">PDF</a>
</body>
</html>
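And a hypothetical PHP sketch that emits such a page (file names and MIME types are made up for illustration):
$files = array(
    'yourCoolFilename.png'             => 'image/png',
    'theFileGetsSavedWithThisName.pdf' => 'application/pdf',
);
echo "<html><body>\n";
foreach ($files as $name => $mime) {
    // inline the file contents as a base64 data URL on a downloadable link
    $data = base64_encode(file_get_contents($name));
    printf("<a download=\"%s\" href=\"data:%s;base64,%s\">%s</a>\n",
           htmlspecialchars($name), $mime, $data, htmlspecialchars($name));
}
echo "</body></html>\n";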
