I am trying to implement support for Content-Range in files generated by PHP. When a browser sends a Range request, my script returns the correct bytes and it works well.
But while testing how Content-Range looks when downloading a PDF from an Apache server, I realized that the first request from the web browser to the server does not contain a Range header, yet somehow the server still doesn't return the full file, only 32 kB.
In this screenshot you can see that Firefox sends 5 requests to Apache for my_pdf.pdf, and Apache each time responds with 32-192 kB. The whole PDF is 28 MB. Requests 2-5 do contain a Range header, but the first (highlighted) request does not. You can see on the right that Content-Length is 28 MB but that Apache returned only 32 kB.
So my question is: how did Apache know to return only 32 kB and not the whole 28 MB PDF file?
It didn't. If you look at the Content-Length header in the response, it shows the full file size of 29.3 million bytes.
The client probably closed the connection without reading the entire response.
The answer posted by @duskwuff is correct: Firefox terminates the transfer of the first request once it has received enough of the PDF to start processing it.
Below are a few details I discovered.
Firefox will terminate the transfer if your script returns these headers:
Accept-Ranges: bytes
Content-Length: 29293315
You can (but don't have to) also return this header:
header("Content-Range: bytes 0-29293314/29293315");
However, by default Apache tries to compress whatever PHP returns and then adds this header:
Transfer-Encoding: chunked
And when Firefox (and Chrome) see this, they won't close the connection. So I just disabled Apache compression and everything works. Now Firefox makes a few requests, gets pieces of the PDF instead of the whole file, and renders the first page just fine (because it doesn't need the whole PDF to render just the first page).
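For reference, here is a minimal sketch of the kind of Range-aware PHP script described above, with compression disabled so Apache does not fall back to chunked encoding. The file path and content type are assumptions, not the original poster's code:

<?php
// Minimal sketch of a Range-aware download script.
$path = 'my_pdf.pdf';
$size = filesize($path);
$start = 0;
$end = $size - 1;

// Keep Apache/PHP from compressing the response, which would force
// Transfer-Encoding: chunked and hide the Content-Length.
if (function_exists('apache_setenv')) {
    apache_setenv('no-gzip', '1');
}
ini_set('zlib.output_compression', 'Off');

header('Accept-Ranges: bytes');
header('Content-Type: application/pdf');

// Honour a Range header such as "bytes=1000-2000" or "bytes=1000-".
if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    $start = (int) $m[1];
    if ($m[2] !== '') {
        $end = (int) $m[2];
    }
    http_response_code(206);
    header("Content-Range: bytes $start-$end/$size");
}

header('Content-Length: ' . ($end - $start + 1));

// Stream only the requested byte range.
$fp = fopen($path, 'rb');
fseek($fp, $start);
echo fread($fp, $end - $start + 1);
fclose($fp);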
Related
We're using a normal PHP download script (with headers etc) to serve files to users.
The issue, however, is that with some browsers and large downloads the download script is requested multiple times. NGINX logs show these requests with a 206 status code (suggesting chunked streaming?), which is strange because we don't serve any streamable content.
Regardless, this means the download script is requested multiple times, and thus the MySQL query that increments the download counter for the file runs multiple times per download.
We tried using sessions, but since the download is served from an external server and domain, we have no way to clear those sessions after they're set.
We're using Laravel with NGINX + MySQL, any help would be appreciated. Thanks!
Looking at the spec and the headers of the request that would ultimately result in a 206 response, one header stood out as perfect for this.
The header in question is the Content-Range header which could look like the following:
Content-Range: bytes 21010-47021/47022
What this says is that the client wants bytes 21010-47021 out of 47022 total bytes. All you need to worry about is the first number and whether it is 0. If the header is set and the first number is 0, you can assume the download is just beginning and you should increment the counter; a sketch of this check follows.
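As a rough illustration, here is one way that check could look in plain PHP, assuming the byte range arrives on the request in the standard Range header (e.g. Range: bytes=21010-47021); the variable names are made up:

<?php
// Sketch: count a download only when the request asks for the start of the
// file (or sends no range at all); later 206 range requests are skipped.
$range = $_SERVER['HTTP_RANGE'] ?? '';   // e.g. "bytes=21010-47021"

$isFirstChunk = true;
if ($range !== '' && preg_match('/bytes=(\d+)-/', $range, $m)) {
    $isFirstChunk = ((int) $m[1] === 0);
}

if ($isFirstChunk) {
    // Run the MySQL increment here, e.g. (Laravel-style, assumed):
    // DB::table('files')->where('id', $fileId)->increment('downloads');
}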
I've got a function that takes a very long time to execute (downloading some external images, in my case) and I want to avoid the "maximum execution time exceeded" error.
Is there any way to avoid this (for example, by splitting the download of the images into separate PHP 'threads' or something like that)?
I cannot change the execution time limit or any other ini settings.
I'm not able to use cron jobs, as this would be used in a WordPress theme and I can't control the end user's platform.
One possibility is to make a PHP script that downloads one external image, and have that script called via Ajax. You can then build a user interface in JavaScript that calls this PHP script for each image, one by one, and show a progress bar based on how many images have been downloaded already.
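A minimal sketch of such a per-image endpoint, assuming the image URL is passed as a query parameter and saved into a local images/ directory (the file and parameter names here are made up):

<?php
// download_one.php?url=... - hypothetical endpoint that fetches a single image,
// so each request stays well under the execution time limit.
$url = $_GET['url'] ?? '';
if ($url === '' || !filter_var($url, FILTER_VALIDATE_URL)) {
    http_response_code(400);
    exit('missing or invalid url');
}

// Save under the remote file's basename inside a local images/ directory.
$dest = __DIR__ . '/images/' . basename(parse_url($url, PHP_URL_PATH));
$data = file_get_contents($url);

if ($data === false || file_put_contents($dest, $data) === false) {
    http_response_code(502);
    exit('download failed');
}

// The JavaScript caller can use this response to advance its progress bar.
echo json_encode(['saved' => basename($dest)]);

The JavaScript side then loops over the list of image URLs, calling this endpoint once per image and updating the progress bar after each response.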
Yes, you can. But you will have to host or proxy the images you want to download in chunks if the remote server does not understand partial transfers.
Then you will have to make your PHP script request the image from the server in chunks:
Request
GET /proxy/?url=http://example2.com/myimage.jpg HTTP/1.1
Host: www.example.com
Range: bytes=200-1000
Response
HTTP/1.1 206 Partial Content
Date: Tue, 17 Feb 2015 10:50:59 GMT
Accept-Ranges: bytes
Content-Range: bytes 200-1000/6401
Content-Type: image/jpeg
Content-Length: 801
You have many options for calling your PHP script enough times to get all the chunks: automatically refreshing the page, AJAX requests, and so on. A rough sketch of the chunk-fetching side follows.
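Here is one hedged way the requesting script could fetch a single chunk per run with cURL, reusing the example proxy URL above; the chunk size and local file name are assumptions:

<?php
// Fetch one byte range of the image from the proxy per run, appending it to
// a local file so the next run can resume where this one stopped.
$proxyUrl = 'http://www.example.com/proxy/?url=http://example2.com/myimage.jpg';
$local    = __DIR__ . '/myimage.jpg';
$chunk    = 8192;                                       // bytes per request

$offset = file_exists($local) ? filesize($local) : 0;   // resume point

$ch = curl_init($proxyUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_RANGE, $offset . '-' . ($offset + $chunk - 1));
$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($status === 206 && $body !== false) {
    file_put_contents($local, $body, FILE_APPEND);      // got a partial chunk
}
// A 416 response (range beyond the end of the file) means the download is complete.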
I have a client application that sends data to a PHP file (hosted on Apache). Usually this works without any problem, but on one client's site I get 206 Partial Content every time the client app sends data.
The data size is 10-30 kB, so it is not huge.
If you have any suggestions, like changing Apache settings or something similar, I would appreciate it.
Thanks.
It's not an issue. Any 2xx code means "success". You can view details at "Why does Firebug show a "206 Partial Content" response on a video loading request?"
I have a hybrid application in which a simple PHP page opens that contains some links to files, and from my Android wrapper I have implemented the file download functionality.
For the user's convenience I show the length and progress of the download while the file is downloading; for that, my application server sets a Content-Length header to pass the size to the device. But the problem I am facing is surprising.
The file length works fine on Android 2.2: I get the header correctly. On Android 2.3 and above, however, I get the Content-Length for smaller files, but for larger files I do not even get the header field.
con.getHeaderField("content-length");
returns null on Android 2.3 and above.
So is there any size limitation in the user agent on Android 2.3 and above? Since it works fine on 2.2, there should be no problem at the server end; the problem is only in the device's user agent.
Update
I have tried it with files of different sizes, and it works fine up to 60 KB on Android 2.3 and above as well.
It sounds like the server may be sending the response chunked. Check for the presence of the following header:
Transfer-Encoding: chunked
If that header exists, the response is chunked and you will not get a Content-Length header.
See http://en.wikipedia.org/wiki/Chunked_transfer_encoding for more details.
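If the serving side is a PHP script on Apache, one possible workaround (a sketch under the assumption that output compression is what triggers the chunked encoding; the file name is made up) is to disable compression for the download and send an explicit Content-Length:

<?php
// Serve the file with an explicit Content-Length and without output
// compression, so Apache has no reason to fall back to chunked encoding.
$path = __DIR__ . '/files/big_download.zip';

if (function_exists('apache_setenv')) {
    apache_setenv('no-gzip', '1');              // ask mod_deflate to skip this response
}
ini_set('zlib.output_compression', 'Off');      // and PHP's own output compression

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));
readfile($path);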
Here's a strange one:
I've got nginx reverse proxying requests to apache 2 with mod_php.
A user (using Firefox 3.1b3) reported that he's recently started getting sporadic "What should Firefox do with this file?" popups during normal navigation. We haven't had any other reports of this issue, and haven't been able to reproduce it ourselves.
I checked the Nginx and Apache logs. There is nothing in the error logs, and they both show a normal HTTP 200 for the request.
I had him send me the downloaded file, and it's generated HTML, as it should be -- except it has some trailing and leading bytes tacked on.
The opening byte sequence is the magic gzip header: 1F8B08
Here are the opening characters, C-escaped for convenience:
\x1F\x8B\x089608\r\n<!DOCTYPE HTML ...
and the file ends with:
...</html>\n\r\n0\r\n\r\n
When I fetch the same URL via wget, it starts with the DOCTYPE as expected; the mysterious opening and closing bytes are nowhere to be seen.
Has anyone ever seen anything similar to this? Could this be a FF 3.1b3 bug?
Never seen an issue exactly like that, but I did have an issue once with a transparent proxy that would claim to the web server that it could handle gzip compressed content when, in fact, it received the gzipped content from the server, stripped the gzip headers without decompressing it, and sent the result to the browser. The behavior we saw was what you describe: a save/open file dialog for what should have been a normal web page. In this case the browser in question was IE.
I'm not sure if your problem is related, but as an experiment you could look at the requests between the proxy and Apache and see if they are gzipped, or else turn off gzip compression for requests in Apache and see if that fixes the issue. If so, then you probably have a problem with gzip handling in your proxy.
wget doesn't request a compressed response. Try:
curl --compressed <URL>
You could also try adding a -v to print the response headers, and check that a sensible Content-Type is being returned.