I'm using a PHP script to upload a file to Amazon S3:
$s3->putObjectFile($url,$bucket,$filename, S3::ACL_PRIVATE,array(),array("Content-Type" => "application/json"));
Immediately after the upload I need to read some information from the file, so I request it through CloudFront, but I see that the file only becomes available after a few minutes.
Do you know why, and whether there is a way to sort this out?
Is there a way to have the file available immediately?
Thank you
It was immediately available for me with this test...
$ date;aws s3 cp foo s3://BUCKET/foo;date;curl -I http://dxxx.cloudfront.net/foo;date
Thu Aug 6 04:30:01 UTC 2015
upload: ./foo to s3://BUCKET/foo
Thu Aug 6 04:30:02 UTC 2015
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 24
Connection: keep-alive
Date: Thu, 06 Aug 2015 04:30:03 GMT
x-amz-version-id: null
Last-Modified: Thu, 06 Aug 2015 04:30:03 GMT
ETag: "5089fc210a3df99a6a04fe64c929d8e7"
Accept-Ranges: bytes
Server: AmazonS3
X-Cache: Miss from cloudfront
Via: 1.1 fed2ed9d8e7c2b37559eac3b2a2386fc.cloudfront.net (CloudFront)
X-Amz-Cf-Id: gEaAkLCqstyAdakXkBvMmSJVrfKjKTD5K9WoyZBHvVZufWuuhclNLQ==
Thu Aug 6 04:30:02 UTC 2015
Perhaps your upload happens asynchronously and hadn't completed yet when you requested the file?
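If the upload really is synchronous, it can help to rule out the other failure modes before blaming CloudFront. A minimal sketch, assuming $s3, $url, $bucket and $filename are set up exactly as in the question, that the S3 class also provides getObjectInfo() (the commonly used S3.php class does), and with a placeholder CloudFront domain: it checks the return value of putObjectFile(), confirms the object is visible in S3, and only then issues a HEAD request through CloudFront.
<?php
// Sketch only: assumes $s3, $url, $bucket and $filename from the question;
// the CloudFront domain below is a placeholder.
$ok = $s3->putObjectFile($url, $bucket, $filename, S3::ACL_PRIVATE,
    array(), array("Content-Type" => "application/json"));
if ($ok === false) {
    die("Upload to S3 failed\n");
}

// Ask S3 directly whether the object exists (getObjectInfo() returns false
// when the object cannot be found).
if ($s3->getObjectInfo($bucket, $filename) === false) {
    die("Object not visible in S3 yet\n");
}

// Only now request the file through CloudFront, with a simple HEAD request.
$ch = curl_init("http://dxxx.cloudfront.net/" . $filename);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
echo "CloudFront status: " . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
curl_close($ch);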
Related
I'm having an issue where JPGs are seemingly being corrupted when served.
Curiously, it's not all the JPG images on my site, only about 5% of them. In a corrupted one, the bottom half of the image is cut off. This is what jpeginfo returns for that file:
FS0005-2yme9un7m1rme75z1ek074.jpg 250 x 250 24bit JFIF N 40099 Corrupt JPEG data: premature end of data segment Invalid JPEG file structure: two SOI markers [ERROR]
However, if I download the exact same image using wget, or just copy it directly off the server, it looks fine and appears not to be corrupted:
FS0005-2yme9un7m1rme75z1ek074.jpg 250 x 250 24bit JFIF N 40099 [OK]
This is what curl -I returns:
HTTP/1.1 200 OK
Date: Wed, 08 Jul 2015 11:05:15 GMT
Server: LiteSpeed
Accept-Ranges: bytes
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100
Last-Modified: Wed, 08 Jul 2015 08:58:42 GMT
Content-Type: image/jpeg
Content-Length: 40099
Access-Control-Allow-Origin: *
Cache-Control: public, max-age=604800
Expires: Wed, 15 Jul 2015 11:05:15 GMT
The server is Red Hat 4.4.7-4; the images were uploaded via WordPress and resized with bfi_thumb.
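One way to narrow this down is to compare what the web server actually sends against the file on disk: if the bytes received are fewer than the declared Content-Length, or the hashes differ, the truncation happens in the serving layer rather than in the file itself. A rough sketch, with the URL and local path as placeholders:
<?php
// Sketch only: the URL and local path below are placeholders.
$url  = 'http://example.com/wp-content/uploads/FS0005-2yme9un7m1rme75z1ek074.jpg';
$path = '/var/www/html/wp-content/uploads/FS0005-2yme9un7m1rme75z1ek074.jpg';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$served   = curl_exec($ch);
$declared = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
curl_close($ch);

printf("declared: %d  received: %d  on disk: %d\n",
    $declared, strlen($served), filesize($path));
printf("served md5: %s\nlocal  md5: %s\n", md5($served), md5_file($path));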
I have the same website with the same css file being gzipped and served on two separate servers. Viewing the site on one server, the browser properly decompresses it, and uses the styling. But on the other, the browser does not decompress the file. I thought perhaps this is something to do with the headers, but all the resources I've found seem to think the Content-Type and Content-Encoding are the only two headers that matter for decompressing gzip, and those are the same on both servers. Is there another response header that is incorrect?
The working response headers for the .css.gz file:
HTTP/1.1 200 OK
Cache-Control: public, max-age=604800, must-revalidate
Accept-Ranges: bytes
Content-Type: text/css
Age: 353722
Date: Tue, 07 Apr 2015 21:44:23 GMT
Last-Modified: Tue, 29 Oct 2013 17:44:18 GMT
Expires: Fri, 10 Apr 2015 19:29:01 GMT
Content-Length: 33130
Connection: keep-alive
The response headers for the .css.gz file that don't seem to work:
HTTP/1.1 200 OK
Date: Wed, 08 Apr 2015 15:14:11 GMT
Content-Type: text/css
Last-Modified: Tue, 07 Apr 2015 22:42:25 GMT
Transfer-Encoding: chunked
Connection: close
Content-Encoding: gzip
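Judging from the two dumps above, the responses differ in more than Content-Type and Content-Encoding: the working one carries a Content-Length and no Content-Encoding header at all, while the non-working one is chunked and marked Content-Encoding: gzip. A small comparison script like the following (both URLs are placeholders) captures exactly what each server sends for the same request, including whether the body really starts with the gzip magic bytes:
<?php
// Sketch only: both URLs are placeholders for the two servers in question.
$urls = array(
    'working'     => 'http://server-a.example.com/css/style.css.gz',
    'not working' => 'http://server-b.example.com/css/style.css.gz',
);

foreach ($urls as $label => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HEADER, true);  // keep the response headers
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept-Encoding: gzip'));
    $response   = curl_exec($ch);
    $headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
    curl_close($ch);

    $headers = substr($response, 0, $headerSize);
    $body    = substr($response, $headerSize);

    echo "== $label ==\n" . $headers;
    // A gzip stream always starts with the magic bytes 0x1f 0x8b.
    echo "body starts with gzip magic: "
        . (substr($body, 0, 2) === "\x1f\x8b" ? "yes" : "no") . "\n\n";
}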
When an AJAX request is made, the response is taken either from the browser cache or from the server, depending on the headers that were set.
But I want to detect whether the response data was taken from the browser cache or from the server.
I am using the jQuery ajax function.
What I have done is inspect the response headers. From the server I get:
Date: Tue, 17 Jun 2014 10:35:31 GMT
Server: Apache/2.2.15 (CentOS)
Connection: close
Content-Type: text/html; charset=UTF-8
X-Powered-By: PHP/5.5.8
Transfer-Encoding: chunked
Expires: Tue, 17 Jun 2014 10:36:32 GMT
and from the browser cache, before expiry, I get this:
Date: Tue, 17 Jun 2014 10:35:31 GMT
Server: Apache/2.2.15 (CentOS)
Content-Type: text/html; charset=UTF-8
X-Powered-By: PHP/5.5.8
Expires: Tue, 17 Jun 2014 10:36:32 GMT
Basically I look for Transfer-Encoding and use it to decide whether the response came from the cache or from the server. But I doubt whether this is compatible across browsers.
Is there any other method by which I can identify whether the response data is from the cache or from the server?
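One cross-browser alternative is to have the server stamp every generated response with a unique token header; if two consecutive requests return the same token, the second one was answered from the cache. A minimal sketch of the server side, with a hypothetical X-Response-Token header name and short Expires-based caching:
<?php
// Sketch only: the X-Response-Token header name is arbitrary.
// PHP generates a fresh token for every response it actually produces, so a
// cached copy still carries the token of the request that filled the cache.
header('Content-Type: text/html; charset=UTF-8');
header('Cache-Control: public, max-age=60');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 60) . ' GMT');
header('X-Response-Token: ' . uniqid('', true));

echo "payload";
On the client side the token can be read with jqXHR.getResponseHeader('X-Response-Token') and compared with the value from the previous call; identical tokens mean the browser answered from its cache.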
I am using wget in a PHP script to download images from URLs submitted by users. Is there some way for me to determine the size of an image before actually downloading it, and to restrict the download to 1 MB? Also, can I check that the URL points to an image, and not to an entire website, without downloading it?
I don't want to end up filling my server with malware.
Before downloading the file you can check its headers (you still have to fetch those, but not the body). I use curl rather than wget. Here's an example:
$ curl --head http://img.yandex.net/i/www/logo.png
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 16 Jun 2012 09:46:36 GMT
Content-Type: image/png
Content-Length: 3729
Last-Modified: Mon, 26 Apr 2010 08:00:35 GMT
Connection: keep-alive
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
Accept-Ranges: bytes
Content-Type and Content-Length should normally indicate whether the image is OK.
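The same check can be done from within PHP with the curl extension instead of wget. The sketch below (the URL is a placeholder) first issues a HEAD request and inspects Content-Type and Content-Length, then downloads the body while aborting as soon as more than 1 MB has arrived, since Content-Length can be missing or forged:
<?php
// Sketch only: $url is a placeholder for the user-submitted URL.
$url   = 'http://img.yandex.net/i/www/logo.png';
$limit = 1024 * 1024; // 1 MB

// 1) HEAD request: inspect Content-Type and Content-Length without a body.
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$type   = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
$length = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
curl_close($ch);

if (strpos((string)$type, 'image/') !== 0 || $length > $limit) {
    die("Refusing download: type=$type length=$length\n");
}

// 2) GET request, but abort if the body grows past the limit anyway.
$data = '';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) use (&$data, $limit) {
    $data .= $chunk;
    // Returning a value different from the chunk length aborts the transfer.
    return strlen($data) > $limit ? 0 : strlen($chunk);
});
curl_exec($ch);
curl_close($ch);

echo "Downloaded " . strlen($data) . " bytes\n";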
I'm trying to figure out why the Content-Length header set by PHP gets overwritten.
This is demo.php
<?php
header("Content-Length: 21474836470");die;
?>
A request to fetch the headers:
curl -I http://someserver.com/demo.php
HTTP/1.1 200 OK
Date: Tue, 19 Jul 2011 13:44:11 GMT
Server: Apache/2.2.16 (Debian)
X-Powered-By: PHP/5.3.3-7+squeeze3
Content-Length: 2147483647
Cache-Control: must-revalidate
Content-Type: text/html; charset=UTF-8
See Content-Length? It maxes out at 2147483647 bytes, that is, 2 GB.
Now if I modify demo.php like so:
<?php
header("Dummy-header: 21474836470");die;
?>
the header is not overwritten.
HTTP/1.1 200 OK
Date: Tue, 19 Jul 2011 13:49:11 GMT
Server: Apache/2.2.16 (Debian)
X-Powered-By: PHP/5.3.3-7+squeeze3
Dummy-header: 21474836470
Cache-Control: must-revalidate
Content-Type: text/html; charset=UTF-8
Here are the modules loaded:
root@pat:/etc/apache2# ls /etc/apache2/mods-enabled/
alias.conf authz_host.load dav_fs.load expires.load php5.conf reqtimeout.load status.conf
alias.load authz_user.load dav.load headers.load php5.load rewrite.load status.load
auth_basic.load autoindex.conf dav_lock.load mime.conf proxy.conf setenvif.conf
authn_file.load autoindex.load dir.conf mime.load proxy_http.load setenvif.load
authz_default.load cgi.load dir.load negotiation.conf proxy.load ssl.conf
authz_groupfile.load dav_fs.conf env.load negotiation.load reqtimeout.conf ssl.load
Here is a phpinfo(): http://pastehtml.com/view/b0z02p8zc.html
Apache does support files over 2 GB, as I don't have any problem accessing a large file directly:
curl -I http://www.someserver.com/somehugefile.zip (5.3 Gig)
HTTP/1.1 200 OK
Date: Tue, 19 Jul 2011 14:00:25 GMT
Server: Apache/2.2.16 (Debian)
Last-Modified: Fri, 15 Jul 2011 08:50:22 GMT
ETag: "301911-1548e4b11-4a817bd63ef80"
Accept-Ranges: bytes
Content-Length: 5713578769
Cache-Control: must-revalidate
Content-Type: application/zip
Here is a uname -a
Linux pat.someserver.com 2.6.38.2-grsec-xxxx-grs-ipv6-32 #1 SMP Fri Apr 15 17:41:28 UTC 2011 i686 GNU/Linux
Hope somebody can help!
Cheers
Seems like PHP casts Content-Length to an int (32-bit on this system).
Yes, it's certainly a 32-bit thing. Well, I don't want to tweak PHP or recompile anything, so for the time being I will check the file size and, if it's over 2 GB, simply not send the Content-Length header.
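A minimal sketch of that workaround, assuming the script serves a file from a known path (the path and the readfile() delivery are placeholders; note that on a 32-bit build filesize() itself may return false or a bogus value for files over 2 GB, which the guard below treats as "don't send the header"):
<?php
// Sketch only: $path is a placeholder for the large file being served.
$path = '/var/www/somehugefile.zip';
$max  = 2147483647; // PHP_INT_MAX on a 32-bit build

$size = @filesize($path);

// Only send Content-Length when the value is trustworthy and fits in a
// 32-bit signed int; otherwise let the server handle the transfer length.
if ($size !== false && $size > 0 && $size <= $max) {
    header('Content-Length: ' . $size);
}

header('Content-Type: application/zip');
readfile($path);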
Thank you all for your input.