Downloading using wget in PHP

I am using wget in a PHP script to download images from URLs submitted by users. Is there some way for me to determine the size of an image before actually downloading it, and to restrict the download to 1 MB? Also, can I check that the URL points to an image only, and not to an entire website, without downloading it?
I don't want to end up filling my server with malware.

Before downloading the file you can check its headers (you'll have to fetch those, though). I use curl, not wget. Here's an example:
$ curl --head http://img.yandex.net/i/www/logo.png
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 16 Jun 2012 09:46:36 GMT
Content-Type: image/png
Content-Length: 3729
Last-Modified: Mon, 26 Apr 2010 08:00:35 GMT
Connection: keep-alive
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
Accept-Ranges: bytes
Together, Content-Type and Content-Length should normally tell you that the target is an image and that it fits your size limit.
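In PHP the same check might look like the sketch below, using the cURL extension (the URL is the example above; the 1 MB cap and the abort behaviour are my assumptions). Content-Length can be absent or wrong, so the limit is worth enforcing again while actually downloading:
<?php
$url      = 'http://img.yandex.net/i/www/logo.png';
$maxBytes = 1024 * 1024; // the question's 1 MB limit

// HEAD request: fetch the headers without the body
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
$type   = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
$length = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
curl_close($ch);

if (strpos((string)$type, 'image/') !== 0) {
    die('URL does not point to an image');
}
if ($length < 0 || $length > $maxBytes) {
    die('Image is larger than 1 MB (or size unknown)');
}
// Safe to download the body now.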

Related

How to solve it? MP4 video progress bar (PHP)

I'm trying to serve MP4 files with PHP; my initial code was:
$file = 'https://s3-sa-east-1.amazonaws.com/onlytestes/video.mp4';
header('Content-type: video/mp4');
readfile($file);
But that way the video's seek bar didn't work: I couldn't skip ahead or go back until the video was 100% loaded.
Of course, when I access the file directly (video.mp4), everything is fine.
Get the headers from the Amazon request (with curl) and forward them, in particular Content-Length. Scrubbing (seeking) will still never work this way, though.
UPDATE:
curl -I https://s3-sa-east-1.amazonaws.com/onlytestes/video.mp4
>>
HTTP/1.1 200 OK
x-amz-id-2: Ykt6rmYagUDTbKW+v2DR63Zb4ZmUJCM8ty7hO+Z/BU9DV5w1PTVEk+khHgMp+eoR7ExxzKy1Ius=
x-amz-request-id: 8F7A552FAB8D8B08
Date: Thu, 13 Jul 2017 08:12:26 GMT
Last-Modified: Wed, 12 Jul 2017 09:46:10 GMT
ETag: "adcafc77564f72b5e21574f4bfc4e927"
Accept-Ranges: bytes
Content-Type: video/mp4
Content-Length: 1386059   <-- this is the header to forward
Server: AmazonS3
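A rough PHP sketch of that suggestion (the HEAD request and the exact headers forwarded are my assumptions; real scrubbing would additionally require honoring Range requests, which this sketch does not do):
<?php
$file = 'https://s3-sa-east-1.amazonaws.com/onlytestes/video.mp4';

// Ask S3 for the headers only, to learn the Content-Length
$ch = curl_init($file);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$length = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
curl_close($ch);

// Forward the headers the browser needs, then stream the body
header('Content-Type: video/mp4');
header('Content-Length: ' . $length); // the 1386059 above
readfile($file); // requires allow_url_fopen for remote URLs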

How does HTTP max-age work, and how do I expire the cache after some time?

I manage HTTP caching in my application, and it's not working as I think it should. Let's look at an actual example.
On the first serve of my PHP page, I send the following HTTP headers:
HTTP/1.1 200 OK
Date: Mon, 12 Dec 2016 16:39:33 GMT
Server: Apache/2.4.9 (Win64) PHP/5.5.12
Expires: Tue, 01 Jan 1980 19:53:00 GMT
Cache-Control: private, max-age=60, pre-check=60
Last-Modified: Mon, 12 Dec 2016 15:57:25 GMT
Etag: "a2883c859ce5c8153d65a4e904c40a79"
Content-Language: en
Content-Length: 326
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=UTF-8
My application manages ETag validation and sends a 304 if nothing has changed, so when you refresh the page in the browser (F5) you get (if nothing has changed server-side):
HTTP/1.1 304 Not Modified
Date: Mon, 12 Dec 2016 16:43:10 GMT
Server: Apache/2.4.9 (Win64) PHP/5.5.12
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100
Since I serve Cache-Control: private with max-age=60, I would expect that after one minute the browser would consider its cache obsolete and request a fresh copy (the equivalent of a Ctrl+F5 reload), but instead the cached copy is still considered valid several days past its max-age.
Have I misunderstood these HTTP mechanisms? Am I sending something wrong, or missing something?
If a cached response is within the max-age, then it is considered fresh.
If it exceeds the max-age, then it is considered stale.
If a browser needs a resource and it has a fresh copy in the cache, then it will use that without checking back with the server.
If the browser has a stale copy, it will validate it against the server (in this case, using ETags) to see whether it needs a new copy or whether the cached copy is still OK. So a stale copy is not re-downloaded unconditionally; the 304s you keep seeing days later are the browser revalidating its stale copy exactly as intended.
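To make the fresh/stale cycle concrete, here is a minimal PHP sketch of the serving side (the body and the md5-based ETag are placeholder choices): within 60 seconds the browser reuses its cached copy without any request; after that it revalidates with If-None-Match and receives the 304 shown above.
<?php
$body = '<html>...</html>';            // placeholder content
$etag = '"' . md5($body) . '"';

header('Cache-Control: private, max-age=60');
header('ETag: ' . $etag);

// The browser sends If-None-Match once its copy has gone stale
if (isset($_SERVER['HTTP_IF_NONE_MATCH'])
        && trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    http_response_code(304);           // stale copy is still OK: no body sent
    exit;
}

echo $body;                            // fresh download (200 OK)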

Web server corrupting JPG image

I'm having an issue where JPGs are seemingly being corrupted when served.
Curiously, it's not all the JPG images on my site, only about 5% of them. In a corrupted one, the bottom half of the image is cut off. This is what jpeginfo returns for that file:
FS0005-2yme9un7m1rme75z1ek074.jpg 250 x 250 24bit JFIF N 40099 Corrupt JPEG data: premature end of data segment Invalid JPEG file structure: two SOI markers [ERROR]
However, if I download the exact same image using wget, or just copy it directly off the server, it looks fine and appears not to be corrupted:
FS0005-2yme9un7m1rme75z1ek074.jpg 250 x 250 24bit JFIF N 40099 [OK]
This is what curl -I returns:
HTTP/1.1 200 OK
Date: Wed, 08 Jul 2015 11:05:15 GMT
Server: LiteSpeed
Accept-Ranges: bytes
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100
Last-Modified: Wed, 08 Jul 2015 08:58:42 GMT
Content-Type: image/jpeg
Content-Length: 40099
Access-Control-Allow-Origin: *
Cache-Control: public, max-age=604800
Expires: Wed, 15 Jul 2015 11:05:15 GMT
The server is Red Hat 4.4.7-4; the images were uploaded via WordPress and resized with bfi_thumb.
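One way to narrow this down, sketched here with a hypothetical URL and path, is to compare the bytes the server delivers against the file on disk:
<?php
// Both locations are placeholders for the image in the question
$served = file_get_contents('http://example.com/FS0005-2yme9un7m1rme75z1ek074.jpg');
$onDisk = file_get_contents('/var/www/html/FS0005-2yme9un7m1rme75z1ek074.jpg');

printf("served:  %d bytes, md5 %s\n", strlen($served), md5($served));
printf("on disk: %d bytes, md5 %s\n", strlen($onDisk), md5($onDisk));
// Equal sizes with different hashes would point at the server (or an
// intermediate cache) mangling the bytes rather than a bad upload.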

Browser not automatically decompressing gz file

I have the same website, with the same CSS file gzipped, served on two separate servers. On one server the browser properly decompresses the file and applies the styling; on the other, the browser does not decompress it. I thought this might be something to do with the headers, but every resource I've found says that Content-Type and Content-Encoding are the only two headers that matter for gzip decompression, and those are the same on both servers. Is there another response header that could be incorrect?
The working response headers for the .css.gz file:
HTTP/1.1 200 OK
Cache-Control: public, max-age=604800, must-revalidate
Accept-Ranges: bytes
Content-Type: text/css
Age: 353722
Date: Tue, 07 Apr 2015 21:44:23 GMT
Last-Modified: Tue, 29 Oct 2013 17:44:18 GMT
Expires: Fri, 10 Apr 2015 19:29:01 GMT
Content-Length: 33130
Connection: keep-alive
The response headers for the .css.gz file that don't seem to work:
HTTP/1.1 200 OK
Date: Wed, 08 Apr 2015 15:14:11 GMT
Content-Type: text/css
Last-Modified: Tue, 07 Apr 2015 22:42:25 GMT
Transfer-Encoding: chunked
Connection: close
Content-Encoding: gzip
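For reference, a minimal PHP sketch of serving a pre-compressed file with that header pair (the path is a placeholder, and the question's servers presumably serve the file statically, so this only illustrates the header combination):
<?php
$gz = '/var/www/static/style.css.gz'; // placeholder path

header('Content-Type: text/css');
header('Content-Encoding: gzip');           // tell the browser to decompress
header('Content-Length: ' . filesize($gz)); // avoids chunked transfer
readfile($gz);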

Download a txt file from the response headers in PHP

I'm trying to write a PHP script to automatically download a file from a website. I use file_get_contents and get all the response headers from that site, but I don't know how to save the download as a file. The response headers are shown in the screenshot below. If I access that URL in a browser, the file is saved to my computer, but I can't manage to save it with a PHP script.
Is it possible to do this with a script?
[screenshot of the response headers]
Thanks for helping me; I did not describe my problem very well. I want to save the attachment named in the header, as in my example Content-Disposition: attachment; filename=savedrecs.txt. I want to save that file to my computer.
You can do it by using $http_response_header as the content and file_put_contents as the function for writing:
<?php
file_get_contents('http://www.stackoverflow.com'); //<--- Pass your website here
file_put_contents('test.txt',implode(PHP_EOL,$http_response_header)); //<--- Passing the $http_response_header as the text
OUTPUT:
HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=UTF-8
Location: http://stackoverflow.com/
Date: Wed, 26 Feb 2014 09:25:37 GMT
Connection: close
Content-Length: 148
HTTP/1.1 200 OK
Cache-Control: public, max-age=27
Content-Type: text/html; charset=utf-8
Expires: Wed, 26 Feb 2014 09:26:05 GMT
Last-Modified: Wed, 26 Feb 2014 09:25:05 GMT
Vary: *
X-Frame-Options: SAMEORIGIN
Date: Wed, 26 Feb 2014 09:25:38 GMT
Connection: close
Content-Length: 212557
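If the goal is to save the attachment itself rather than the headers, a sketch along these lines should work (the URL is a placeholder for the site in question, and the Content-Disposition parsing is a simplifying assumption):
<?php
$url  = 'http://example.com/export'; // placeholder for the real URL
$body = file_get_contents($url);     // the attachment's bytes

// Recover the filename from the Content-Disposition header, if present
$filename = 'download.txt';
foreach ($http_response_header as $h) {
    if (preg_match('/Content-Disposition:.*filename=([^;]+)/i', $h, $m)) {
        $filename = trim($m[1], " \t\"");
    }
}

file_put_contents($filename, $body); // e.g. saves savedrecs.txt locally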
