Ogg audio served from a PHP file is unseekable in Chrome - php

I'm developing a basic audio player using the HTML5 audio element. All served files are .ogg, coming from another system as a binary string through an API and printed out by a simple PHP handler (which includes some basic security checking), and there's really nothing I can do about that.
However, while everything works nicely in Firefox, Chrome has issues with seeking, skipping and replaying (simply put: they can't be done). Reading the duration from the audio element returns Infinity, and while currentTime returns the correct value, it can't be manipulated.
I started fiddling with headers and response codes, and nothing seems to work properly. Using response code 206 for partial content, or declaring Accept-Ranges in the header, causes the audio element to become disabled, just as it does when the source file doesn't exist.
Searching for the proper headers didn't yield many results, since everything was about partial content and X-Content-Duration (which is a PITA to calculate because of VBR), and that header did absolutely nothing in Chrome.
Here are the relevant headers, which work just fine in FF. Did I make a major mistake somewhere, or is there just some issue with Chrome?
HTTP/1.1 200 OK (changing this to 206 makes the file unplayable in Chrome)
Cache-Control: public, no-store
Content-Disposition: attachment; filename="redacted.ogg"
Content-Length: 123456
X-Content-Duration: 12 (this does nothing on Chrome, works on FF)
Content-Range: bytes 0-123455/123456
Accept-Ranges: bytes (...as does including this)
Content-Transfer-Encoding: binary
Content-Type: audio/ogg
Expires: 0
Pragma: public
Edit: Opera shows some kind of similar behaviour, so this probably affects all WebKit browsers. It also doesn't work in fread/file_get_contents type situations.

Alright, dumb me, it really was due to the partial content. Chrome didn't like the idea of taking the whole file at once (and caching it like FF does), so to enable seeking and whatnot it needed Accept-Ranges: bytes, a bit of incoming-header parsing to get the requested byte range, and a couple of lines that return that byte range with the correct partial content status (206), not just "bytes 0-(filesize-1)/filesize" (the whole file), even if the first request asks for it (bytes=0-).
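A rough sketch in PHP of the range handling that fixed it (this is not the original handler; $data here is a hypothetical variable assumed to already hold the Ogg bytes returned by the API):
$filesize = strlen($data);
$start = 0;
$end = $filesize - 1;
header('Accept-Ranges: bytes');
header('Content-Type: audio/ogg');
if (isset($_SERVER['HTTP_RANGE']) && preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    // Serve only the requested slice with a 206 and a matching Content-Range.
    $start = (int)$m[1];
    if ($m[2] !== '') {
        $end = (int)$m[2];
    }
    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$filesize");
} else {
    // No Range header: return the whole file with a plain 200.
    header('HTTP/1.1 200 OK');
}
header('Content-Length: ' . ($end - $start + 1));
echo substr($data, $start, $end - $start + 1);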
Hope this helps someone else as well.

Related

PHPExcel: I need to create two workbooks on one submission [duplicate]

Use case: the user clicks a link on a webpage - boom! a load of files end up in his folder.
I tried to pack the files using a multipart/mixed message, but it seems to work only in Firefox.
This is what my response looks like:
HTTP/1.0 200 OK
Connection: close
Date: Wed, 24 Jun 2009 23:41:40 GMT
Content-Type: multipart/mixed;boundary=AMZ90RFX875LKMFasdf09DDFF3
Client-Date: Wed, 24 Jun 2009 23:41:40 GMT
Client-Peer: 127.0.0.1:3000
Client-Response-Num: 1
MIME-Version: 1.0
Status: 200
--AMZ90RFX875LKMFasdf09DDFF3
Content-type: image/jpeg
Content-transfer-encoding: binary
Content-disposition: attachment; filename="001.jpg"
<< here goes binary data >>
--AMZ90RFX875LKMFasdf09DDFF3
Content-type: image/jpeg
Content-transfer-encoding: binary
Content-disposition: attachment; filename="002.jpg"
<< here goes binary data >>
--AMZ90RFX875LKMFasdf09DDFF3
--AMZ90RFX875LKMFasdf09DDFF3--
Thank you
P.S. No, zipping files is not an option
Zipping is the only option that will have a consistent result across all browsers. If it's not an option because you don't know that zips can be generated dynamically, well, they can. If it's not an option because you have a grudge against zip files, well..
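For what it's worth, a minimal sketch of generating a zip on the fly with PHP's ZipArchive (the file paths and names below are placeholders):
// Build a temporary zip containing the files, then stream it out.
$zipPath = tempnam(sys_get_temp_dir(), 'dl');
$zip = new ZipArchive();
$zip->open($zipPath, ZipArchive::CREATE | ZipArchive::OVERWRITE);
$zip->addFile('/path/to/001.jpg', '001.jpg');
$zip->addFile('/path/to/002.jpg', '002.jpg');
$zip->close();
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="images.zip"');
header('Content-Length: ' . filesize($zipPath));
readfile($zipPath);
unlink($zipPath);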
MIME/multipart is for email messages and/or POST transmission to the HTTP server. It was never intended to be received and parsed on the client side of an HTTP transaction. Some browsers implement it, some others don't.
As another alternative, you could have a JavaScript script open windows that download the individual files. Or a Java applet (which requires a Java runtime on the machines; if it's an enterprise application, that shouldn't be a problem, as the NetAdmin can deploy it on the workstations) that would download the files to a directory of the user's choice.
I remember doing this >10 years ago in the Netscape 4 days. It used boundaries like what you're doing and didn't work at all with other browsers at the time.
While it does not answer your question, HTTP 1.1 supports request pipelining, so at least the same TCP connection can be reused to download multiple images.
You can use base64 encoding to embed a (very small) image into an HTML document; however, from a browser/server standpoint, you're technically still sending only 1 document. Maybe this is what you intend to do?
Embedd Images into HTML using Base64
EDIT: I just realized that most methods I found in my Google search only support Firefox, and not IE.
You could return a JSON object with multiple data URLs.
Eg:
{
"stamp.png": "data:image/png;base64,...",
"document.pdf": "data:application/pdf;base64,..."
}
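A hedged sketch of how such a response could be built server-side in PHP (the file names and MIME types are placeholders):
// Build a JSON map of filename => data URL.
$files = ['stamp.png' => 'image/png', 'document.pdf' => 'application/pdf'];
$out = [];
foreach ($files as $name => $mime) {
    $out[$name] = 'data:' . $mime . ';base64,' . base64_encode(file_get_contents($name));
}
header('Content-Type: application/json');
echo json_encode($out);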
(extending trinalbadger587's answer)
You could return an HTML page with multiple clickable, downloadable, in-place data links:
<html>
<body>
<a download="yourCoolFilename.png" href="data:image/png;base64,...">PNG</a>
<a download="theFileGetsSavedWithThisName.pdf" href="data:application/pdf;base64,...">PDF</a>
</body>
</html>

PHP function file_get_contents retrieves encoded information - header: "'Content-Type: image/png'"

Hi everyone, I am having a bit of a problem related to the PHP function file_get_contents.
I have used it many times with no problems, but when I try to get some information from a particular site, the result I echo is pretty much encoded (example: ���IHDR�).
I looked at the header of the site and instead of saying
Content-Type: text/html;
it is saying
Content-Type: image/png
How do I decode that so I can get the source code (HTML) of the site? When I go to the website in a browser, it looks like a regular website: text, images, nothing out of the ordinary.
When I look at the source code, nothing out of the ordinary there either. But when I do a file_get_contents I do not get the source code like I used to get on other websites.
Any ideas?
Note: I had the same problem in the past when the content was encoded in GZIP, and I was able to find a function to decode it, but with Content-Type: image/png I do not know how to proceed.
Why not create a basic test script to output the returned image? Though I suspect it's an image saying:
Stop scraping my site!!! Yada Yada
header('Content-Type: image/png');
echo file_get_contents('http://example.com');
The Content-Type header tells you which content-type the requested file has, in your case it is a PNG image (image/png).
You can find descriptions of many content types (written in a so-called MIME type specification) online; this is a nice list: fileformat.info MIME types.
As you can imagine, it's not possible to display an image in text form (at least not before converting it to ASCII art), so you will not have much luck this time.
Check that the URI is really the one you wanted to request.
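If in doubt, a quick way to see what the URL actually returns before fetching the body is something like this (the URL is a placeholder):
// Inspect the response headers only; the second argument returns an associative array.
$headers = get_headers('http://example.com/page', 1);
var_dump($headers['Content-Type']); // e.g. "text/html; charset=UTF-8" vs "image/png"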

Chrome cannot recognize url encoded filename while downloading

I tried to download a file with the filename GB2312%D5%D5%C6%AC.JPG from my site; everything goes well in IE9/Firefox, but not in Google Chrome.
In Google Chrome, this filename is replaced by my download page's name (maybe Chrome failed to decode the filename).
To find out if Chrome is trying to decode the filename, I tried using another string, GB2312%2C%2D%2E.txt, as the filename; Firefox/IE9 still work as expected, but Google Chrome will try to decode this filename (replacing %2D with '-').
How can I prevent Google Chrome from decoding the filename? It would be better if I could solve this problem on the server side (PHP).
The following lines are the response headers generated by my server.
Response Headers:
Cache-Control:must-revalidate, post-check=0, pre-check=0, private
Connection:keep-alive
Content-Description:File Transfer
Content-Disposition:attachment; filename="GB2312%D5%D5%C6%AC.JPG"
Content-Length:121969
Content-Transfer-Encoding:binary
Content-Type:application/force-download; charset=UTF-8
Date:Wed, 18 Apr 2012 03:32:30 GMT
Expires:0
Pragma:public
Server:nginx/1.1.5
X-Powered-By:PHP/5.3.8
I just had this very problem. This question was helpful.
The cause is, as you pointed out, that Chrome is trying to decode it and it's outside of ASCII. I would call it a bug, personally.
Basically the answer to get it working in Chrome is to use this:
Content-Disposition:attachment; filename*=UTF-8''GB2312%D5%D5%C6%AC.JPG
However, this will break it in IE8 and Safari. Ugh! So to get it working in those as well, try doing it like this example.
Content-Disposition:attachment; filename="GB2312%D5%D5%C6%AC.JPG"; filename*=UTF-8''GB2312%D5%D5%C6%AC.JPG
For me, I had to URL-encode the file name before putting it in the filename*=UTF-8 part, so my values for the first file name and the second one aren't the same. I used PHP and my code looks like this:
$encodedfilename = urlencode($downloadfilename);
header("Content-Disposition: attachment; filename=\"$downloadfilename\"; filename*=UTF-8''$encodedfilename");
So I'm not actually stopping Chrome from decoding, as you asked; instead, I encode the filename first and then let Chrome decode that parameter back to what it's supposed to be.

gzcompress() randomly inserting extra data?

I've been researching this all morning and have decided that as a last-ditch effort, maybe someone on Stack Overflow has a "been-there, done-that" type of answer for me.
Background
Recently, I implemented compression on our (intranet-oriented) Apache (2.2) server using filters so that all text-based files (css, js, txt, html, etc.) are compressed via mod_deflate, mentioning nothing about PHP scripts. After plenty of research on how best to compress PHP output, I decided to use the gzcompress() flavor because the PHP documentation suggests that using the zlib library and gzip (using the deflate algorithm, blah blah blah) is preferred over ob_gzipwhatever().
So I copied someone else's method like so:
<?php # start each page by enabling output buffering and disabling automatic flushes
ob_start();
ob_implicit_flush(0);

(program logic)

print_gzipped_page();

function print_gzipped_page() {
    if (headers_sent())
        $encoding = false;
    elseif (strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'x-gzip') !== false)
        $encoding = 'x-gzip';
    elseif (strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip') !== false)
        $encoding = 'gzip';
    else
        $encoding = false;

    if ($encoding) {
        $contents = ob_get_contents();  # get contents of buffer
        ob_end_clean();                 # turn off OB and flush buffer
        $size = strlen($contents);
        if ($size < 512) {              # too small to be worth a compression
            echo $contents;
            exit();
        } else {
            header("Content-Encoding: $encoding");
            header('Vary: Accept-Encoding');
            # 8-byte file header: g-zip file (1f 8b) compression type deflate (08), next 5 bytes are padding
            echo "\x1f\x8b\x08\x00\x00\x00\x00\x00";
            $contents = gzcompress($contents, 9);
            $contents = substr($contents, 0, $size);  # faster than not using a substr, oddly
            echo $contents;
            exit();
        }
    } else {
        ob_end_flush();
        exit();
    }
}
Pretty standard stuff, right?
Problem
Between 10% and 33% of all our PHP page requests sent via Firefox go out fine and come back gzipped, only Firefox displays the compressed ASCII instead of decompressing it. AND, the weirdest part is that the content size sent back is always 30 or 31 bytes larger than the size of the correctly rendered page. As in, when the script is displayed properly, Firebug shows a content size of 1044; when Firefox shows a huge screen of binary gibberish, Firebug shows a content size of 1074.
This happened to some of our users on legacy 32-bit Fedora 12s running Firefox 3.3s... Then it happened to a user with FF5, one with FF6, and some with the new 7.1! I've been meaning to upgrade them all to FF7.1, anyway, so I've been updating them as they have issues, but FF7.1 is still exhibiting the same behavior, just less frequently.
Diagnostics
I've been installing Firebug on a variety of computers to watch the headers, and that's where I'm getting confused:
Normal, functioning page response headers:
HTTP/1.1 200 OK
Date: Fri, 21 Oct 2011 18:40:15 GMT
Server: Apache/2.2.15 (Fedora)
X-Powered-By: PHP/5.3.2
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Encoding: gzip
Vary: Accept-Encoding
Content-Length: 1045
Keep-Alive: timeout=10, max=75
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
(Notice that content-length is generated automatically)
Same page when broken:
HTTP/1.1 200 OK
(everything else identical)
Content-Length: 1075
The sent headers always include Accept-Encoding: gzip, deflate
Things I've tried to fix the behavior:
Explicitly declare content length with uncompressed and compressed lengths
Not use the substr() of $contents
Remove checksum at the end of $contents
I don't really want to use gzencode because my testing showed it to be significantly slower (9%) than gzcompress, presumably because it generates extra checksums and whatnot that (I assumed) web browsers don't need or use.
I cannot duplicate the behavior on my 64-bit Fedora 14 box running Firefox 7.1. Not once in my testing before rolling the compression code live did this happen to me, neither in Chrome nor Firefox. (Edit: Immediately after posting this, one of the windows I'd left open that sends meta refreshes every 30 seconds finally broke after ~60 refreshes in Firefox) Our handful of Windows XP boxes are behaving the same as the Fedora 12s. Searching through Firefox's Bugzilla kicked up one or two bug requests that were somewhat similar to this situation, but that was for versions pre-dating 3.3 and was with all gzipped content, whereas our Apache gzipped css and js files are being downloaded and displayed without error each time.
The fact that the content-length is coming back 30/31 bytes larger each time leads me to think that something is breaking inside my script/gzcompress() that is mangling something in the response that Firefox chokes on. Naturally, if you play with altering the echo'd gzip header, Firefox throws a "Content Encoding Error," so I'm really leaning towards the problem being internal to gzcompress().
Am I doomed? Do I have to scrap this implementation and use the not-preferred ob_start("ob_gzhandler") method?
I guess my "applies to more than one situation" question would be: are there known bugs in the zlib compression library in PHP that do something funky when receiving very specific input?
Edit: Nuts. I readgzfile()'d one of the broken, non-compressed pages that Firefox downloaded and, lo and behold!, it echo'd everything back perfectly. =( That means this must be... Nope, I've got nothing.
Okay, first of all, you don't seem to be setting the Content-Length header, which will cause issues. Instead, you are making the gzip content longer so that it matches the content length you were receiving in the first place. This is going to turn ugly. My suggestion is that you replace the lines
# 8-byte file header: g-zip file (1f 8b) compression type deflate (08), next 5 bytes are padding
echo "\x1f\x8b\x08\x00\x00\x00\x00\x00";
$contents = gzcompress($contents, 9);
$contents = substr($contents, 0,$size); # faster than not using a substr, oddly
echo $contents;
with
$compressed = gzcompress($contents, 9);
$compressed_length = strlen($compressed); /* contains no nulls i believe */
header("Content-length: $compressed_length");
echo "\x1f\x8b\x08\x00\x00\x00\x00\x00", $compressed;
and see if it helps the situation.
Ding! Ding! Ding! After mulling over this problem all weekend, I finally stumbled across the answer after re-reading the PHP man pages for the umpteenth time... From the zlib PHP documentation, "Whether to transparently compress pages." Transparently! As in, nothing else is required to get PHP to compress its output once zlib.output_compression is set to "On". Yeah, embarrassing.
For reasons unknown, the code being called, explicitly, from the PHP script was compressing the already-compressed contents and the browser was simply unwrapping the one layer of compression and displaying the results. Curiously, the strlen() of the content didn't vary when output_compression was on or off, so the transparent compression must occur after the explicit compression, but it occasionally decided not to compress what was already compressed?
Regardless, everything is resolved by simply leaving PHP to its own devices. zlib doesn't need output buffering or anything to compress the output.
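For anyone taking the transparent route, here is a minimal sketch (assuming the zlib extension is loaded and nothing has been output yet; the same effect can be achieved with zlib.output_compression = On in php.ini):
<?php
// Let zlib compress the output transparently; no ob_start()/gzcompress() needed.
ini_set('zlib.output_compression', 'On');
// ...generate the page as usual.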
Hope this helps others struggling with the wonderful world of HTTP compression.

How to display an image returned as a bytes in browser without prompting download?

I have written the following PHP function but still get the prompt to download the file:
function navigateToBytes($contentType, $bytes) {
    header('Content-Type: ' . $contentType);
    //header('Content-Transfer-Encoding: binary'); // UPDATE: as pointed out this is not needed, though it does not solve the problem
    header('Content-Length: ' . strlen($bytes));
    ob_clean();
    flush();
    echo $bytes;
}
An example of calling the function:
navigateToBytes('image/jpeg', $bytes); // UPDATE: turns out this does work; using image/tiff for TIFF images is when the browser does not display the image
where $bytes are the bytes as read from the file.
Apologies all - it turns out I was having the problem because the images I was testing with were TIFFs (with the Content-Type correctly set to image/tiff); when I used a JPEG, the browser displayed the image!
Ultimately it is up to the browser to decide whether it can display the Content-Type you are sending.
For the record, the only header I needed to change was Content-Type; I should set Content-Length too, unless I set Transfer-Encoding: chunked.
Try the HTTP header "Content-Disposition: inline"; however, some browsers may try to save the user from seeing binary data. Here is a random blog article on that HTTP header:
http://dotanything.wordpress.com/2008/05/30/content-disposition-attachment-vs-inline/
That seems like correct behavior to me. The browser is a viewport for humans to view things in. Humans, by and large, don't want to view binary data. What do you think should happen?
Random Advice: If there's a site that's doing what you want to do, use curl to sniff the headers they're sending.
curl -I http://example.com/path/to/binary/file/that/displays/in/browser
and then use the exact same headers in your own script.
As a start, get rid of things that do not exist in HTTP (Content-Transfer-Encoding).
Then get an HTTP tracing tool, such as the Live HTTP headers plugin for Firefox, and compare "your" headers with those received for a working image.
If in doubt, post the HTTP trace here.
