PHPExcel: I need to create two workbooks on one submission [duplicate]

Use case: the user clicks a link on a webpage and, boom, a load of files lands in his folder.
I tried to pack the files into a multipart/mixed message, but it seems to work only in Firefox.
This is what my response looks like:
HTTP/1.0 200 OK
Connection: close
Date: Wed, 24 Jun 2009 23:41:40 GMT
Content-Type: multipart/mixed;boundary=AMZ90RFX875LKMFasdf09DDFF3
Client-Date: Wed, 24 Jun 2009 23:41:40 GMT
Client-Peer: 127.0.0.1:3000
Client-Response-Num: 1
MIME-Version: 1.0
Status: 200

--AMZ90RFX875LKMFasdf09DDFF3
Content-type: image/jpeg
Content-transfer-encoding: binary
Content-disposition: attachment; filename="001.jpg"

<< here goes binary data >>

--AMZ90RFX875LKMFasdf09DDFF3
Content-type: image/jpeg
Content-transfer-encoding: binary
Content-disposition: attachment; filename="002.jpg"

<< here goes binary data >>

--AMZ90RFX875LKMFasdf09DDFF3--
Thank you
P.S. No, zipping files is not an option

Zipping is the only option that will give consistent results across all browsers. If it's not an option because you don't know that zips can be generated dynamically, well, they can (see the sketch below). If it's not an option because you have a grudge against zip files, well..
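A minimal sketch of the dynamic-zip approach using PHP's ZipArchive; the file paths and the archive name are placeholders, not from the question:

<?php
// Build the archive in a temp file, then stream it as a single download.
$zip = new ZipArchive();
$tmp = tempnam(sys_get_temp_dir(), 'dl');
$zip->open($tmp, ZipArchive::OVERWRITE);
$zip->addFile('/path/to/001.jpg', '001.jpg'); // placeholder paths
$zip->addFile('/path/to/002.jpg', '002.jpg');
$zip->close();

header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="images.zip"');
header('Content-Length: ' . filesize($tmp));
readfile($tmp);
unlink($tmp);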
MIME/multipart is for email messages and/or POST transmission to the HTTP server. It was never intended to be received and parsed on the client side of an HTTP transaction. Some browsers implement it, some others don't.
As another alternative, you could have a JavaScript script open windows that download the individual files. Or a Java applet (requires Java runtimes on the machines; if it's an enterprise application, that shouldn't be a problem, as the NetAdmin can deploy it on the workstations) that would download the files into a directory of the user's choice.

I remember doing this more than ten years ago, in the Netscape 4 days. It used boundaries like what you're doing and didn't work at all with other browsers at that time.
While it does not answer your question, HTTP 1.1 supports request pipelining, so at least the same TCP connection can be reused to download multiple images.

You can use base64 encoding to embed a (very small) image into an HTML document; however, from a browser/server standpoint you're technically still sending only one document. Maybe this is what you intend to do?
Embed Images into HTML using Base64
EDIT: I just realized that most methods I found in my Google search only support Firefox, and not IE.

You could make a JSON object with multiple data URLs, e.g.:
{
  "stamp.png": "data:image/png;base64,...",
  "document.pdf": "data:application/pdf;base64,..."
}

(extending trinalbadger587's answer)
You could return an HTML page with multiple clickable, downloadable, in-place data links:
<html>
  <body>
    <a download="yourCoolFilename.png" href="data:image/png;base64,...">PNG</a>
    <a download="theFileGetsSavedWithThisName.pdf" href="data:application/pdf;base64,...">PDF</a>
  </body>
</html>
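If the files live on the server, a small PHP sketch (the file names and MIME types below are assumed, not from the answer) can generate that page:

<?php
// Emit one in-place download link per file as a data: URI.
$files = ['stamp.png' => 'image/png', 'document.pdf' => 'application/pdf'];
echo "<html><body>\n";
foreach ($files as $name => $mime) {
    $data = base64_encode(file_get_contents($name)); // assumes local files
    echo '<a download="' . htmlspecialchars($name) . '" href="data:' . $mime
       . ';base64,' . $data . '">' . htmlspecialchars($name) . "</a><br/>\n";
}
echo "</body></html>\n";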

Related

Using Content-ID and cid for embedded email images in Thunderbird

I'm generating emails in a PHP application which attach multiple files to an HTML email. Some of the files are Excel spreadsheets, some of the files are company logos which need to be embedded in the HTML so they load by default using Content-ID and cid identifiers to refer to the attached images.
As far as I can see, my syntax is correct, but the images don't ever load inline (they are attached successfully, however).
From: email@example.com
Reply-To: email@example.com
MIME-Version: 1.0
Content-type: multipart/mixed;boundary="d0f4ad49cc20d19bf96d4adf9322d567"
Message-Id: <20150421165500.0A5488021B@server>
Date: Tue, 21 Apr 2015 12:54:59 -0400 (EDT)
--d0f4ad49cc20d19bf96d4adf9322d567
Content-type: text/html; charset=utf-8
Content-transfer-encoding: 8bit

<html>
Html message goes here, followed by email.<br/>
<img src="cid:mylogo" />
</html>

--d0f4ad49cc20d19bf96d4adf9322d567
Content-type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet; name=excelsheet.xlsx
Content-Description: excelsheet.xlsx
Content-Disposition: attachment;
 filename="excelsheet.xlsx"; size=24712;
Content-transfer-encoding: base64

[base64 encoded string goes here.]

--b19e863e2cf66b40db1d138b7009010c
Content-Type: image/jpeg;
 name="mylogo.jpg"
Content-transfer-encoding: base64
Content-ID: <mylogo>
Content-Disposition: inline;
 filename="mylogo.jpg"; size=7579;

[base64 encoded string goes here.]

--b19e863e2cf66b40db1d138b7009010c--
Can anybody see an obvious reason why the image won't embed as expected?
EDIT
Note this behaviour isn't general to all email clients. So far only noted in Thunderbird.
I noticed two issues:
The MIME boundary is inconsistent: the first attachment uses d0f4ad49cc20d19bf96d4adf9322d567, and then b19e863e2cf66b40db1d138b7009010c is used. Thus, technically, the second attachment is "part" of the first attachment.
If you replace every b19e863e2cf66b40db1d138b7009010c with d0f4ad49cc20d19bf96d4adf9322d567, Thunderbird correctly identifies the image attachment.
Use multipart/related instead of multipart/mixed. (see RFC2387)
A multipart/related is used to indicate that each message part is a component of an aggregate whole. It is for compound objects consisting of several inter-related components - proper display cannot be achieved by individually displaying the constituent parts. The message consists of a root part (by default, the first) which reference other parts inline, which may in turn reference other parts. Message parts are commonly referenced by the "Content-ID" part header. (see Wikipedia entry for MIME multipart/related)
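Putting both fixes together, a minimal PHP sketch of the corrected message; the addresses, subject, and file name are illustrative, not from the question:

<?php
// One consistent boundary, multipart/related, and a Content-ID that
// the HTML part can reference via cid:.
$boundary = md5(uniqid('', true));
$logo     = chunk_split(base64_encode(file_get_contents('mylogo.jpg')));

$headers  = "From: email@example.com\r\n";
$headers .= "MIME-Version: 1.0\r\n";
$headers .= "Content-Type: multipart/related; boundary=\"$boundary\"\r\n";

$body  = "--$boundary\r\n";
$body .= "Content-Type: text/html; charset=utf-8\r\n\r\n";
$body .= "<html>Html message goes here.<br/><img src=\"cid:mylogo\" /></html>\r\n";
$body .= "--$boundary\r\n";
$body .= "Content-Type: image/jpeg; name=\"mylogo.jpg\"\r\n";
$body .= "Content-Transfer-Encoding: base64\r\n";
$body .= "Content-ID: <mylogo>\r\n";
$body .= "Content-Disposition: inline; filename=\"mylogo.jpg\"\r\n\r\n";
$body .= $logo . "\r\n";
$body .= "--$boundary--\r\n";

mail('user@example.com', 'Logo test', $body, $headers);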

Ogg audio served from PHP-file unseekable on Chrome

I'm developing a basic audio player using the HTML5 audio element. All served files are .ogg, coming from another system as a binary string through an API and printed out by a simple PHP handler (which includes some basic security checking), and there's really nothing I can do about that.
However, while everything works nicely in Firefox, Chrome has some issues with seeking, skipping and replaying (simply put: it can't be done). Getting the duration from the audio element returns Infinity, and while currentTime returns the correct value, it can't be manipulated.
I started fiddling with headers and response codes and nothing seems to work properly. Using response code 206 for partial content or declaring Accept-Ranges in the header causes the audio element to go disabled, as it does when the source file doesn't exist.
Searching for proper headers didn't yield many results, since it was always about partial content and X-Content-Duration (which is a PITA to calculate because of VBR), and that did absolutely nothing in Chrome.
Here are the relevant headers, which work just fine in FF. Did I make a major mistake somewhere, or is there just some issue with Chrome?
HTTP/1.1 200 OK (changing this to 206 makes the file unplayable in Chrome)
Cache-Control: public, no-store
Content-Disposition: attachment; filename="redacted.ogg"
Content-Length: 123456
X-Content-Duration: 12 (this does nothing on Chrome, works on FF)
Content-Range: bytes 0-123455/123456
Accept-Ranges: bytes (...as does including this)
Content-Transfer-Encoding: binary
Content-Type: audio/ogg
Expires: 0
Pragma: public
Edit: the same kind of behaviour shows up in Opera, and probably in all WebKit browsers. It also doesn't work in fread/file_get_contents types of situations.
Alright, dumb me, it really was due to the partial content. Chrome didn't like the idea of taking the whole file at once (and caching it like FF does), so to enable seeking and whatnot it needed Accept-Ranges: bytes, a bit of incoming-header parsing to get the requested byte range, and a couple of lines that return that byte range with the correct partial content status (206), not just "bytes 0-(filesize-1)/filesize" (the whole file), even if the first request asks for it (bytes=0-).
Hope this helps someone else as well.
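A minimal sketch of what that answer describes; the file path is a placeholder, and a production version should also validate the requested range:

<?php
$path  = '/path/to/redacted.ogg';
$size  = filesize($path);
$start = 0;
$end   = $size - 1;

// Honour an incoming Range header with a 206 instead of always
// returning the whole file with a 200.
if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d*)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    if ($m[1] !== '') $start = (int)$m[1];
    if ($m[2] !== '') $end   = (int)$m[2];
    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$size");
}

header('Accept-Ranges: bytes');
header('Content-Type: audio/ogg');
header('Content-Length: ' . ($end - $start + 1));

$fp = fopen($path, 'rb');
fseek($fp, $start);
echo fread($fp, $end - $start + 1);
fclose($fp);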

PHP function file_get_contents retrieves encoded information - header: "Content-Type: image/png"

Hi everyone, I am having a bit of a problem with the PHP function file_get_contents.
I have used it many times with no problems, but when I try to get some information from one particular site, the result I echo is pretty much encoded (example: ���IHDR�).
I looked at the header of the site and instead of saying
Content-Type: text/html;
it is saying
Content-Type: image/png
How do I decode that so I can get the source code (HTML) of the site? When I go to the website in a browser it looks like a regular website: text, images, nothing out of the ordinary.
When I look at the source code, nothing out of the ordinary there either. But when I do a file_get_contents, I do not get the source code like I used to on other websites.
Any ideas?
Note: I had the same problem in the past when the content was encoded in GZIP, and I was able to find a function to decode it, but with Content-Type: image/png I do not know how to proceed.
Why not create a basic test script to output the returned image? Though I suspect it's an image saying:
Stop scraping my site!!! Yada yada.
header('Content-Type: image/png');
echo file_get_contents('http://example.com');
The Content-Type header tells you which content type the requested file has; in your case it is a PNG image (image/png).
You can find descriptions of many content types (written in the so-called MIME type specification) online; this is a nice list: fileformat.info MIME types.
As you can imagine, it's not possible to display an image in text form (at least not before converting it to ASCII art), so you will not have much luck this time.
Check that the URI is really the one you wanted to request.
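As a quick diagnostic, file_get_contents over HTTP populates the $http_response_header variable, so you can confirm what the server actually sends back; the URL is a placeholder:

<?php
$data = file_get_contents('http://example.com/page');
foreach ($http_response_header as $h) {
    // Prints e.g. "Content-Type: image/png" for the site in question.
    if (stripos($h, 'Content-Type:') === 0) {
        echo $h, "\n";
    }
}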

gzcompress() randomly inserting extra data?

I've been researching this all morning and have decided that as a last-ditch effort, maybe someone on Stack Overflow has a "been-there, done-that" type of answer for me.
Background: Recently, I implemented compression on our (intranet-oriented) Apache (2.2) server using filters so that all text-based files (css, js, txt, html, etc.) are compressed via mod_deflate, mentioning nothing about PHP scripts. After plenty of research on how best to compress PHP output, I decided to use the gzcompress() flavor because the PHP documentation suggests that using the zlib library and gzip (using the deflate algorithm, blah blah blah) is preferred over ob_gzipwhatever().
So I copied someone else's method like so:
<?php # start each page by enabling output buffering and disabling automatic flushes
ob_start(); ob_implicit_flush(0);

(program logic)

print_gzipped_page();

function print_gzipped_page() {
    if (headers_sent())
        $encoding = false;
    elseif (strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'x-gzip') !== false)
        $encoding = 'x-gzip';
    elseif (strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip') !== false)
        $encoding = 'gzip';
    else
        $encoding = false;

    if ($encoding) {
        $contents = ob_get_contents(); # get contents of buffer
        ob_end_clean();                # turn off OB and flush buffer
        $size = strlen($contents);
        if ($size < 512) {             # too small to be worth a compression
            echo $contents;
            exit();
        } else {
            header("Content-Encoding: $encoding");
            header('Vary: Accept-Encoding');
            # 8-byte file header: g-zip file (1f 8b), compression type deflate (08), next 5 bytes are padding
            echo "\x1f\x8b\x08\x00\x00\x00\x00\x00";
            $contents = gzcompress($contents, 9);
            $contents = substr($contents, 0, $size); # faster than not using a substr, oddly
            echo $contents;
            exit();
        }
    } else {
        ob_end_flush();
        exit();
    }
}
Pretty standard stuff, right?
Problem: Between 10% and 33% of all our PHP page requests sent via Firefox go out fine and come back gzipped, only Firefox displays the compressed ASCII instead of decompressing it. And the weirdest part is that the content size sent back is always 30 or 31 bytes larger than the size of the correctly rendered page. As in, when the script is displayed properly, Firebug shows a content size of 1044; when Firefox shows a huge screen of binary gibberish, Firebug shows a content size of 1074.
This happened to some of our users on legacy 32-bit Fedora 12 boxes running Firefox 3.3... Then it happened to a user with FF5, one with FF6, and some with the new 7.1! I've been meaning to upgrade them all to FF 7.1 anyway, so I've been updating them as they have issues, but FF 7.1 is still exhibiting the same behavior, just less frequently.
Diagnostics: I've been installing Firebug on a variety of computers to watch the headers, and that's where I'm getting confused:
Normal, functioning page response headers:
HTTP/1.1 200 OK
Date: Fri, 21 Oct 2011 18:40:15 GMT
Server: Apache/2.2.15 (Fedora)
X-Powered-By: PHP/5.3.2
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Encoding: gzip
Vary: Accept-Encoding
Content-Length: 1045
Keep-Alive: timeout=10, max=75
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
(Notice that the Content-Length is generated automatically)
Same page when broken:
HTTP/1.1 200 OK
(everything else identical)
Content-Length: 1075
The sent headers always include Accept-Encoding: gzip, deflate
Things I've tried to fix the behavior:
Explicitly declaring the content length, with both the uncompressed and the compressed lengths
Not using the substr() of $contents
Removing the checksum at the end of $contents
I don't really want to use gzencode because my testing showed it to be significantly slower (9%) than gzcompress, presumably because it generates extra checksums and whatnot that (I assumed) the web browsers don't need or use.
I cannot duplicate the behavior on my 64-bit Fedora 14 box running Firefox 7.1. Not once in my testing before rolling the compression code live did this happen to me, neither in Chrome nor Firefox. (Edit: Immediately after posting this, one of the windows I'd left open that sends meta refreshes every 30 seconds finally broke after ~60 refreshes in Firefox.) Our handful of Windows XP boxes are behaving the same as the Fedora 12s. Searching through Firefox's Bugzilla kicked up one or two bug reports that were somewhat similar to this situation, but those were for versions pre-dating 3.3 and involved all gzipped content, whereas our Apache-gzipped css and js files are downloaded and displayed without error every time.
The fact that the content length comes back 30/31 bytes larger each time leads me to think that something inside my script/gzcompress() is mangling the response in a way Firefox chokes on. Naturally, if you play with altering the echoed gzip header, Firefox throws a "Content Encoding Error", so I'm really leaning towards the problem being internal to gzcompress().
Am I doomed? Do I have to scrap this implementation and use the not-preferred ob_start("ob_gzhandler") method?
I guess my "applies to more than one situation" question would be: are there known bugs in PHP's zlib compression library that do something funky when receiving very specific input?
Edit: Nuts. I readgzfile()'d one of the broken, non-compressed pages that Firefox downloaded and, lo and behold!, it echo'd everything back perfectly. =( That means this must be... Nope, I've got nothing.
Okay, first of all, you don't seem to be setting the Content-Length header, which will cause issues; instead, you are making the gzip content longer so that it matches the content length you were receiving in the first place. This is going to turn ugly. My suggestion is that you replace the lines
# 8-byte file header: g-zip file (1f 8b) compression type deflate (08), next 5 bytes are padding
echo "\x1f\x8b\x08\x00\x00\x00\x00\x00";
$contents = gzcompress($contents, 9);
$contents = substr($contents, 0,$size); # faster than not using a substr, oddly
echo $contents;
with
$compressed = gzcompress($contents, 9);
$compressed_length = strlen($compressed); /* contains no nulls, I believe */
header("Content-length: $compressed_length");
echo "\x1f\x8b\x08\x00\x00\x00\x00\x00", $compressed;
and see if it helps the situation.
Ding! Ding! Ding! After mulling over this problem all weekend, I finally stumbled across the answer after re-reading the PHP man pages for the umpteenth time. From the zlib PHP documentation: "Whether to transparently compress pages." Transparently! As in, nothing else is required to get PHP to compress its output once zlib.output_compression is set to On. Yeah, embarrassing.
For reasons unknown, the code being called explicitly from the PHP script was compressing the already-compressed contents, and the browser was simply unwrapping the one layer of compression and displaying the result. Curiously, the strlen() of the content didn't vary whether output_compression was on or off, so the transparent compression must occur after the explicit compression, but it occasionally decided not to compress what was already compressed?
Regardless, everything is resolved by simply leaving PHP to its own devices. zlib doesn't need output buffering or anything else to compress the output.
Hope this helps others struggling with the wonderful world of HTTP compression.
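For reference, a sketch of what the resolution boils down to; the setting can also go in php.ini as zlib.output_compression = On:

<?php
// Let PHP/zlib negotiate Accept-Encoding and apply Content-Encoding
// transparently; must run before any output is sent.
ini_set('zlib.output_compression', 'On');

echo 'Page content goes here; no manual gzip header, gzcompress() or substr() needed.';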

Email inline images in base64 through email with PHP CodeIgniter

I am generating images dynamically at run time with PHP and have them displayed in base64 in the browser.
This same page/HTML document needs to be emailed, but this method of showing images does not work in email.
I have found two CodeIgniter libraries to try to help me with this task, neither of which works:
http://thecodeabode.blogspot.com/2010/11/codeigniter-and-php-howto-embedding.html
http://codeigniter.com/wiki/Richmail/
The first one gives me:
Unable to send email using PHP mail(). Your server might not be configured to send mail using this method.
From: "SystemDDX"
Return-Path:
Reply-To: "internal-33e@google.com"
X-Sender: internal-33ek@google.com
X-Mailer: CodeIgniter
X-Priority: 3 (Normal)
Message-ID: <1d42c5c99deeb@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
The second one is extremely outdated.
The only method which I've managed to get working so far is using CodeIgniter's
$this->email->attach("C:\myfile.jpg")
and then referencing it in my HTML using <img src='cid:myfile.jpg' />
The problem is that my image is in base64, as I'm generating it dynamically, so if I can't figure out anything else: how can I turn the base64 back into a file with a specific name while keeping it in memory?
Please, any tips and help would be greatly appreciated.
The only method which I've managed to get working so far is ... <img src='cid:myfile.jpg' />
Good, because that's the way it should be done. As you have discovered, inlined base64 images are very poorly supported outside of bleeding-edge browsers.
The problem is my image is in base64 as I'm generating it dynamically so if I can't figure out anything else, how can I revert base 64 to a specific file name while keeping it in memory.
Well, how are you converting that image to base64 to begin with? You're certainly using base64_encode on the image's data, right?
Well then, don't do that!
Instead, write the image data to a new file on disk, and then reference that new image file when attaching it to the outgoing mail. Problem solved!
This assumes that the attachment method in your chosen email library only accepts real files -- read the docs, I'll bet that you can pass the image data directly and give it a "fake" file name at the same time.
You might want to consider using another modern mail library, like SwiftMailer. It can attach dynamic content smoothly and easily.
I'm generating sparkline graphics dynamically and using this to get the content:
function OutputToDataURI() {
    ob_start(NULL, 4096);
    $this->Output();
    header('Content-type: text/html');
    return "data:image/png;base64," . base64_encode(ob_get_clean());
}
In that case, this will get your raw image data:
public function GetImageData() {
    ob_start();
    $this->Output();
    return ob_get_clean();
}
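Combining the two answers, a minimal sketch of the temp-file route; $sparkline and the file name are placeholders, and GetImageData() is the helper above:

<?php
// Write the dynamically generated image to a real file on disk,
// then attach it so the HTML body can reference it via cid:.
$imageData = $sparkline->GetImageData();          // raw image bytes, per the helper above
$tmpPath   = sys_get_temp_dir() . '/myfile.jpg';  // placeholder file name
file_put_contents($tmpPath, $imageData);

$this->email->attach($tmpPath);
// In the HTML body: <img src='cid:myfile.jpg' />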
