I'm trying to read MP4 files with PHP; my initial code was:
$file = 'https://s3-sa-east-1.amazonaws.com/onlytestes/video.mp4';
header('Content-type: video/mp4');
readfile($file);
But that way I can't use the video's seek bar to skip ahead or go back until the video is 100% loaded.
Of course, when I access the file directly (video.mp4), everything works fine.
Get the headers from the Amazon request (e.g. with curl) and forward them. Scrubbing will never work with this approach, though.
UPDATE:
curl -I https://s3-sa-east-1.amazonaws.com/onlytestes/video.mp4
>>
HTTP/1.1 200 OK
x-amz-id-2: Ykt6rmYagUDTbKW+v2DR63Zb4ZmUJCM8ty7hO+Z/BU9DV5w1PTVEk+khHgMp+eoR7ExxzKy1Ius=
x-amz-request-id: 8F7A552FAB8D8B08
Date: Thu, 13 Jul 2017 08:12:26 GMT
Last-Modified: Wed, 12 Jul 2017 09:46:10 GMT
ETag: "adcafc77564f72b5e21574f4bfc4e927"
Accept-Ranges: bytes
Content-Type: video/mp4
Content-Length: 1386059   <-- the header to forward
Server: AmazonS3
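A minimal sketch of that approach, assuming the same bucket URL: ask S3 for its headers first (get_headers() with a HEAD context requires PHP 7.1+), then forward Content-Length before streaming the body. The browser then knows the total length up front, but readfile() still ignores Range requests, so scrubbing into not-yet-downloaded parts won't work.
<?php
$file = 'https://s3-sa-east-1.amazonaws.com/onlytestes/video.mp4';

// Fetch only the remote headers (HEAD request) so the body isn't downloaded twice.
$context = stream_context_create(['http' => ['method' => 'HEAD']]);
$remote  = get_headers($file, 1, $context);

header('Content-type: video/mp4');
if (isset($remote['Content-Length'])) {
    // The Content-Length S3 reported (1386059 above) is what the browser
    // needs in order to show the full length of the video.
    header('Content-Length: ' . $remote['Content-Length']);
}

readfile($file);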
I am currently outputting some image files via PHP's readfile() using the following code, but I notice via Firefox's and Chrome's dev tools that none of these files get cached.
ob_start();
outputfile($fp, $mime_type);

function outputfile($fp, $mime_type) { // MIME type passed in; it isn't in scope otherwise
    header("Content-Type: $mime_type");
    header("Content-Length: " . filesize($fp));
    header("Cache-Control: public, max-age=3600");
    header("Etag: " . md5_file($fp));
    $date = gmdate("D, j M Y H:i:s", filemtime($fp)) . " GMT";
    header("Last-Modified: $date");
    readfile($fp);
    exit; // tried ob_end_flush() too before exiting
}
The code outputs the file, and the dev tools show the following response headers...
Cache-Control: public, max-age=2678400
Connection: keep-alive
Content-Length: 155576
Content-Type: image/jpeg
Date: Mon, 21 May 2018 22:31:02 GMT
Last-Modified: Sat, 03 Mar 2018 19:34:05 GMT
Etag: 507f2520385c009a7385a1165032bd61
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Pragma: no-cache
Server: nginx
If I return control to Nginx to serve the file instead, it outputs the following headers:
Accept-Ranges: bytes
Connection: keep-alive
Content-Length: 155576
Content-Type: image/jpeg
Date: Mon, 21 May 2018 22:31:02 GMT
ETag: "5a9af8ad-4a5b"
Last-Modified: Sat, 03 Mar 2018 19:34:05 GMT
Server: nginx
Am I missing something that causes the browsers to not cache the image files?
I've tried adding all the necessary Cache-Control headers such as ETag and max-age, but the browsers just refuse to cache the data. I even tried copying all the headers from the server's output and using ob_start('ob_gzhandler') in case it was because the raw file data wasn't gzipped.
The browsers just won't cache any file data sent through PHP.
Expires: Thu, 19 Nov 1981 08:52:00 GMT could be the cause. Technically, if the Cache-Control header has a max-age directive, Expires should be ignored (ref: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Expires), but it is worth checking by removing that header.
Your PHP code is not setting it, so I assume it is coming from some common config or code that gets executed on every outgoing response. Have you put this in your Nginx config for all PHP requests?
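For a quick test from PHP itself, header_remove() (PHP 5.3+) can drop whatever was set earlier in the request before your own caching headers go out; a sketch:
// Sketch: strip previously-set anti-cache headers before sending our own.
header_remove('Expires');
header_remove('Pragma');
header('Cache-Control: public, max-age=3600');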
I think I've found the problem...
I was wondering whether any cookie-related code could affect readfile() and discovered that if I call session_start() before using the function, browsers refuse to cache the file data sent. If I remove session_start(), browser caching works as expected, respecting the Cache-Control headers sent.
I don't quite understand why this is the case, since I compared the output of readfile() with and without session_start() before it and the output seems to be the same.
For the record I'm using PHP 5.5.
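The likely explanation: session_start() applies PHP's default session cache limiter, which injects its own Pragma: no-cache, Cache-Control: no-store/no-cache and the telltale Expires: Thu, 19 Nov 1981 08:52:00 GMT on top of whatever the script set, so the response body is identical but the headers are not. A sketch of keeping the session while letting the script's own caching headers win (assuming the outputfile() helper from the question):
// Sketch: change the session cache limiter before starting the session so
// PHP stops sending its default no-cache headers for this response.
session_cache_limiter('public');   // or session_cache_limiter('') to send none
session_start();

outputfile($fp, $mime_type);       // the Cache-Control / Etag set here now survive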
I'm having an issue where JPGs are seemingly being corrupted when served.
Curiously, it's not all JPG images on my site, only about 5% of them. Here's what a corrupted one looks like:
The bottom half is cut off. This is what jpeginfo returns for that file:
FS0005-2yme9un7m1rme75z1ek074.jpg 250 x 250 24bit JFIF N 40099 Corrupt JPEG data: premature end of data segment Invalid JPEG file structure: two SOI markers [ERROR]
However, if I download the exact same image using wget, or just copy it directly off the server, it looks fine and appears not to be corrupted:
FS0005-2yme9un7m1rme75z1ek074.jpg 250 x 250 24bit JFIF N 40099 [OK]
This is what curl -I returns:
HTTP/1.1 200 OK
Date: Wed, 08 Jul 2015 11:05:15 GMT
Server: LiteSpeed
Accept-Ranges: bytes
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100
Last-Modified: Wed, 08 Jul 2015 08:58:42 GMT
Content-Type: image/jpeg
Content-Length: 40099
Access-Control-Allow-Origin: *
Cache-Control: public, max-age=604800
Expires: Wed, 15 Jul 2015 11:05:15 GMT
The server is Red Hat 4.4.7-4; the images were uploaded via WordPress and resized with bfi_thumb.
I'm trying to write a PHP script to automatically download a file from a website. Using file_get_contents I can see all the response headers from that site, but I don't know how to save the result as a file. The response headers are shown in the screenshot below. If I access that URL in a browser, the file is saved to my computer, but I can't get a PHP script to save it.
Is it possible to do this with a script?
[screenshot of the response headers]
Thanks for helping me. I did not describe my problem very well: I want to save the attachment named in that header. As in my header example, Content-Disposition: attachment; filename=savedrecs.txt, I want to save that file to my computer.
Make use of $http_response_header as the content and file_put_contents() as the function for writing:
<?php
file_get_contents('http://www.stackoverflow.com'); //<--- Pass your website here
file_put_contents('test.txt',implode(PHP_EOL,$http_response_header)); //<--- Passing the $http_response_header as the text
OUTPUT :
HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=UTF-8
Location: http://stackoverflow.com/
Date: Wed, 26 Feb 2014 09:25:37 GMT
Connection: close
Content-Length: 148
HTTP/1.1 200 OK
Cache-Control: public, max-age=27
Content-Type: text/html; charset=utf-8
Expires: Wed, 26 Feb 2014 09:26:05 GMT
Last-Modified: Wed, 26 Feb 2014 09:25:05 GMT
Vary: *
X-Frame-Options: SAMEORIGIN
Date: Wed, 26 Feb 2014 09:25:38 GMT
Connection: close
Content-Length: 212557
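Given the clarification above, though, the goal is the attachment body rather than the headers. A sketch of that, with a placeholder URL and a fallback filename, pulling the name out of Content-Disposition when the server sends one:
<?php
$url  = 'http://example.com/export';          // placeholder URL
$body = file_get_contents($url);              // the attachment itself

$filename = 'download.txt';                   // fallback name
foreach ($http_response_header as $h) {
    // e.g. "Content-Disposition: attachment; filename=savedrecs.txt"
    if (stripos($h, 'Content-Disposition:') === 0
        && preg_match('/filename=([^;\s"]+)/i', $h, $m)) {
        $filename = $m[1];
    }
}

file_put_contents($filename, $body);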
I'm trying to parse an xml feed using PHP:
http://trustbox.trustpilot.com/r/travelnation.co.uk.xml
Visiting this, it looks perfectly OK, but when I try
<?php
$file = file_get_contents("http://trustbox.trustpilot.com/r/netamity.com.xml");
print_r($file);
?>
I get
‹•SÁŽÓ0=/ÿ`ŒÄmœ- 븊àèJV«••L«ŽmÙN²ý{Æi·M
...
How is it getting garbled? Using SimpleXML it won't parse (unsurprisingly). I've tried setting UTF-8 headers, but I think the issue is in the file_get_contents call. Any ideas?
The content looks "weird" simply because it is compressed (see the HTTP response header Content-Encoding: gzip):
HTTP/1.1 200 OK
x-amz-id-2: 8wYarFnod0jtLJ3U8ZDN38102fjtG+EbwJjy0tY4YTZncrz9auEcQbzt1vyiSEhq
x-amz-request-id: A60F1E6CA5437776
Date: Sun, 24 Feb 2013 18:00:45 GMT
Content-Encoding: gzip
Last-Modified: Sun, 24 Feb 2013 05:19:11 GMT
ETag: "64eaa6f87768aeb3ae6741ba06318cb6"
Accept-Ranges: bytes
Content-Type: application/xhtml+xml
Content-Length: 52366
Server: AmazonS3
I guess what you need is to know how to read a file over HTTP; you could try this one on SO.
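One way to handle that in PHP, as a sketch: fetch as before, then gunzip the body when the response headers say it is gzip-encoded (gzdecode() needs PHP 5.4+) before handing it to SimpleXML.
<?php
$url  = 'http://trustbox.trustpilot.com/r/travelnation.co.uk.xml';
$data = file_get_contents($url);

// $http_response_header is populated by the http:// wrapper.
if (preg_grep('/^Content-Encoding:\s*gzip/i', $http_response_header)) {
    $data = gzdecode($data);   // PHP 5.4+
}

$xml = simplexml_load_string($data);
Alternatively, curl with CURLOPT_ENCODING set to an empty string asks for and transparently decompresses gzip responses.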
I am using wget in a PHP script to download images from URLs submitted by users. Is there some way to determine the size of an image before actually downloading it, and to restrict the download size to 1 MB? Also, can I check that the URL points to an image only, and not an entire website, without downloading it?
I don't want to end up filling my server with malware.
Before downloading the file you can check the headers (you'll have to download those, though). I use curl, not wget. Here's an example:
$ curl --head http://img.yandex.net/i/www/logo.png
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 16 Jun 2012 09:46:36 GMT
Content-Type: image/png
Content-Length: 3729
Last-Modified: Mon, 26 Apr 2010 08:00:35 GMT
Connection: keep-alive
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
Accept-Ranges: bytes
Content-Type and Content-Length should normally indicate that the image is OK.
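The same check from PHP, sketched with curl: send a HEAD-style request, then only download when the Content-Type starts with image/ and the reported Content-Length is at most 1 MB. Servers can omit or misreport Content-Length, so treat this as a first filter, not a guarantee.
<?php
// Sketch: inspect the headers before downloading; $url comes from the user.
function looks_like_small_image($url, $maxBytes = 1048576) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);           // headers only
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);

    $type = (string) curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
    $len  = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD); // -1 if unknown
    curl_close($ch);

    return strpos($type, 'image/') === 0 && $len > 0 && $len <= $maxBytes;
}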