Upload to AWS S3 limits - php

I'm working with Amazon S3 and the AWS SDK for PHP. Is there a file size limit for uploading files? Is there a limit on simultaneous file uploads?
I get a lot of these errors when I try to send 20 files of 200 MB each to my bucket:
RequestTimeTooSkewedException: AWS Error Code: RequestTimeTooSkewed, Status Code: 403, AWS Request ID: 0CE24AEDE4162AC9, AWS Error Type: client, AWS Error Message: The difference between the request time and the current time is too large.
RequestTimeoutException: AWS Error Code: RequestTimeout, Status Code: 400, AWS Request ID: 913367E51F2BC5AD, AWS Error Type: client, AWS Error Message: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
Or is the problem in my code or in PHP?

It seems like your network connection may not be fast enough to upload a file of that size in one shot. RequestTimeTooSkewedException errors can happen due to out-of-sync clocks, as helloV mentioned. However, if the upload takes too long, the time that the request is signed and the time that the request is completed may be more than 15 minutes apart, which would also cause this error. I suspect this may be happening, because the second error, RequestTimeoutException, is most likely happening because your connection to S3 is too slow.
You either need a better connection or you should consider using the multipart upload API. There are helpers for this: http://docs.aws.amazon.com/aws-sdk-php/guide/latest/service-s3.html#uploading-large-files-using-multipart-uploads
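For reference, a minimal sketch of a multipart upload with the SDK's helper (this assumes SDK v3's Aws\S3\MultipartUploader, which may differ from the helper shown in the linked guide; the bucket, key, region, and file path below are placeholders):
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

$uploader = new MultipartUploader($s3, '/path/to/200mb-file.bin', [
    'bucket' => 'my-bucket',
    'key'    => 'backups/200mb-file.bin',
]);

try {
    // Each part is signed and sent separately, so a slow link is less
    // likely to hit the 15-minute signature window or the idle-socket timeout.
    $result = $uploader->upload();
    echo 'Uploaded to ' . $result['ObjectURL'] . PHP_EOL;
} catch (MultipartUploadException $e) {
    echo 'Upload failed: ' . $e->getMessage() . PHP_EOL;
}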

As of now the size limit is 5 TB per object, so your 200 MB files are well within the max size.
The problem is your local box's system clock is out of sync. Sync up with an NTP server or set it manually and the problem will go away.
For the second error, it is possible you are specifying a file size that is greater than the actual size. If your file is 200 MB, you may be passing a value greater than 200 MB to the API.

Is it giving this error?
<Error>
  <Code>RequestTimeTooSkewed</Code>
  <Message>The difference between the request time and the current time is too large.</Message>
  <RequestTime>Wed, 08 Apr 2015 11:50:58 GMT</RequestTime>
  <ServerTime>2015-04-07T11:51:03Z</ServerTime>
  <MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds>
  <RequestId>255BFB2CF3361F0D</RequestId>
  <HostId>DlpwnpVVH9aQku8qD8sAaO6tFCBlrfr9P2Sl3jbBd7FKXbzsJ1SJAwqROsNf+2qdSSKElK3mbus=</HostId>
</Error>
In the error you can see that the RequestTime and ServerTime are different; they must be the same. So please check your local PC's time and date.

Related

Apache resetting connection (?) on large file uploads [duplicate]

This question already has answers here: net::ERR_CONNECTION_RESET when large file takes longer than a minute (7 answers). Closed 3 years ago.
I have a site that used to be able to upload large files (large being > 10 or 20 MB) but no longer can. I've been debugging this for hours at this point.
All php values are set ludicrously high:
post_max_size = 512M
upload_max_filesize = 512M
memory_limit = 1024M
max_execution_time = 600
max_input_time = 600
I've also set TimeOut 600 in httpd.conf.
Essentially, if I add a large file to an upload field, it never uploads. I can watch the "Uploading (1%)..." indicator in the lower left in Chrome showing the file start uploading. It will count up, sometimes even reaching 100%, then start over again at 0 and start counting up again, eventually failing with an ERR_CONNECTION_RESET message.
The eventual failure seems to happen after a random amount of time, sometimes 24 seconds, sometimes 3 minutes.
I tried a 170mb file and it will always get to 16 or 17% before it restarts. That always takes something like 22 seconds. Then, it will restart at 0 and count up to 16 or 17% again, then restart again. It ultimately fails with the ERR_CONNECTION_RESET message sometimes after restarting once, sometimes after restarting 4 or 5 times.
I also tried a 30mb file. This one will always reach right around 100% before restarting.
df -h shows plenty of file space remaining, and I was able to upload files fine via SFTP confirming that there is indeed sufficient hard disk space.
Files also upload fine using the exact same application on my development server, so I can rule out any application issues.
Smaller files also upload fine on the production server; I've tried files as large as 3 or 5 MB with no issue.
I'm able to execute code like:
echo "start";
sleep(60);
echo "stop";
without any hiccup on production, so it isn't timing out all requests, only the uploads.
I've tried multiple browsers, and this is happening from multiple client locations.
There is never an error in any log I can find in /var/log/httpd.
I'm not running mod security. Nowhere in my application are any of the php settings overwritten. It's a pretty standard installation of apache and php.
The production server is Amazon Linux running Apache/2.4.39 and I've tried it on php 7.1 and php 7.2 and got the same result, both using mod_php.
I am well into the "banging head against wall" stage of this issue. Does anyone have any ideas what I can do to debug this?
Finally got this to work. Thanks to net::ERR_CONNECTION_RESET when large file takes longer than a minute
I had to add RequestReadTimeout header=0 body=0 to my httpd.conf file
It couldn't be within a vhost definition; at least, I tried that hours ago and got nothing. But I circled back and tried it again in httpd.conf and it worked.
TG.
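For reference, this is roughly how that directive sits in httpd.conf (a sketch of the mod_reqtimeout setting quoted above, placed at the global level rather than inside a <VirtualHost>):
# Disable mod_reqtimeout's header/body read timeouts globally,
# as described in the answer above.
<IfModule reqtimeout_module>
    RequestReadTimeout header=0 body=0
</IfModule>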

Exception Received Retry Command Error:Unexpected response ():

We have a cron job that runs a PHP script that processes XML files, including processing images (pulling them from a web address, resizing them, and then uploading them to Cloud Files).
We are finding that after 220 or so images we get an error: Exception Received Retry Command Error:Unexpected response ():
We have coded the script to try 5 times to upload the image (unfortunately it still fails) and then move on to the next image.
Unfortunately it fails on the next image too, and so on.
The container we are uploading to is not full, and we only do one image at a time, so we are below the 100/sec restrictions. The files are not large, for example: http://images.realestateview.com.au/pics/543/10157543ao.jpg
We then tried to run the script again on our server with the image that failed, and it worked successfully along with other images.
No idea why this is happening. Rackspace advise it is an issue with the script or the cron, but we are not convinced.
Happy to post script if it helps.
Are you doing 5 retries with any backoff time or just as fast as possible? If not currently, add exponential backoff to the retry attempts.
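A rough sketch of exponential backoff around the upload call might look like this (uploadWithBackoff() and the $uploadImage callable are hypothetical stand-ins for whatever the cron script actually calls, not part of the original code):
<?php
// Hedged sketch: retry a single upload with exponential backoff.
function uploadWithBackoff(callable $uploadImage, $maxAttempts = 5)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            $uploadImage();                 // placeholder for the real upload call
            return true;
        } catch (Exception $e) {
            if ($attempt === $maxAttempts) {
                return false;               // give up after the last attempt
            }
            sleep(pow(2, $attempt - 1));    // wait 1s, 2s, 4s, 8s ...
        }
    }
    return false;
}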

Uploading Images with PHP Script Failing

I'm fairly new to PHP, but I'm having a recurring issue via multiple different scripts and servers when uploading images via ShareX to my server with a custom script, specifically this one.
I've migrated servers (I was on a shared host, now I'm on a VPS), and have since changed to using this script, but I'm still having the issue and I don't know what exactly the problem is.
The issue (it does not occur 100% of the time, but it does most of the time; sometimes it works after retrying) is that uploading images over a certain size, about 250-500 KB, times out or fails. After 60 seconds, I get a 502 (Bad Gateway) error in ShareX.
I've looked up common solutions to similar problems ("large" files timing out in PHP), and have checked the following variables in my PHP.ini file.
max_execution_time = 60
max_input_time = 60
memory_limit = 128M
post_max_size = 8M
When uploads are successful, it takes a few seconds in total to upload and get the link of the uploaded image returned, but when it fails, it's always 60 seconds and then the error. There is no middle ground: either it succeeds almost instantly or it times out after 60 seconds.
I don't know exactly how to go about finding what the error (if any) is. When it happens, ShareX reports a (502) Bad Gateway error, the 'Response:' is just the source code of the page (the script is set up to redirect you to this page if it detects you aren't uploading anything or if the upload fails), and the 'Stack Trace' is the following:
StackTrace:
at System.Net.HttpWebRequest.GetResponse()
at ShareX.UploadersLib.Uploader.UploadData(Stream dataStream, String url, String fileName, String fileFormName, Dictionary`2 arguments, NameValueCollection headers, CookieCollection cookies, ResponseType responseType, HttpMethod method, String requestContentType, String metadata)
Edit: My server is behind Cloudflare, and I read that Cloudflare might cause problems. However, I've checked the settings and the maximum upload size is set at 100 MB on Cloudflare, and pausing it doesn't seem to help.
Edit: I removed the limit on post_max_size (which was 8M) and it seems to have partly fixed the issue. I can now upload things up to about 3 MB, but after that it always fails with a custom error message from the script.
When increasing file POST limits, you may need to change at least 2 settings:
upload_max_filesize = 30M
post_max_size = 32M
I don't think it has anything to do with Cloudflare. See if you can check the Apache error log if the above settings don't work.
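If the values don't seem to take effect, a quick sanity check is to dump what PHP is actually running with (a minimal sketch; run it through the same SAPI the uploads use, since CLI and web configs can differ):
<?php
// Print the effective limits, in case a different php.ini or a
// per-directory override is in play.
foreach (array('upload_max_filesize', 'post_max_size', 'memory_limit', 'max_execution_time') as $key) {
    echo $key . ' = ' . ini_get($key) . PHP_EOL;
}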

PHP script on Amazon EC2 giving response 324 on browser

We have a script which downloads a CSV file. When we run this script on the command line on the EC2 console it runs fine; it downloads the file and sends a success message to the user.
But if we run through a browser then we get:
error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
When we checked on the backend for the downloaded file, it's there, but the success message sent after the download is not received by the browser.
We are using cURL to download from a remote location with authentication. The user group and ownership of the folder is "ec2-user", and the folder has full rights, i.e. 777.
To summarize: the file is downloaded, but at the browser end we are not getting any data or the success message which we print.
P.S.: The problem occurs when the downloaded file size is 8-9 MB; with a smaller file, say 1 MB, it works. So either script execution time, download file size, or some EC2 instance config is blocking it from giving the browser a response. The same script works perfectly fine on our GoDaddy Linux VPS. We have already increased the max execution time for the script.
Sadly, this is a known problem without a good solution. There's a very long thread on the amazon forum here: https://forums.aws.amazon.com/thread.jspa?threadID=33427. The solution offered there is to send a keep-alive message to keep the connection from dying after 60 seconds. Not a great solution, but I don't think there's a better one unless Amazon fixes the problem, which doesn't seem likely given that the thread has been open for 3 years.
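For what it's worth, here is a rough sketch of that keep-alive idea using cURL's progress callback; the URL, authentication, and output path are placeholders rather than the asker's actual script:
<?php
// Hedged sketch: periodically send a byte to the browser and flush while
// the remote CSV downloads, so the connection is not treated as idle.
$lastPing = time();

$ch = curl_init('https://example.com/export.csv');   // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_NOPROGRESS, false);          // required for the callback to fire
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, function () use (&$lastPing) {
    if (time() - $lastPing >= 15) {                   // roughly every 15 seconds
        echo ' ';                                     // harmless whitespace keep-alive
        flush();
        $lastPing = time();
    }
    return 0;                                         // non-zero would abort the transfer
});

$csv = curl_exec($ch);
curl_close($ch);

file_put_contents('/tmp/report.csv', $csv);           // placeholder path
echo 'Download complete';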

RequestTimeout uploading to S3 using PHP

I am having trouble uploading files to S3 from one of our servers. We use S3 to store our backups, and all of our servers are running Ubuntu 8.04 with PHP 5.2.4 and libcurl 7.18.0. Whenever I try to upload a file, Amazon returns a RequestTimeout error. I know there is a bug in our current version of libcurl preventing uploads of over 200 MB. For that reason we split our backups into smaller files.
We have servers hosted on Amazon's EC2 and servers hosted on customers' "private clouds" (a VMware ESX box behind their company firewall). The specific server that I am having trouble with is hosted on a customer's private cloud.
We use the Amazon S3 PHP Class from http://undesigned.org.za/2007/10/22/amazon-s3-php-class. I have tried 200MB, 100MB and 50MB files, all with the same results. We use the following to upload the files:
$s3 = new S3($access_key, $secret_key, false);
$success = $s3->putObjectFile($local_path, $bucket_name, $remote_name, S3::ACL_PRIVATE);
I have tried setting curl_setopt($curl, CURLOPT_NOPROGRESS, false); to view the progress bar while it uploads the file. The first time I ran it with this option set it worked. However, every subsequent time it has failed. It seems to upload the file at around 3Mb/s for 5-10 seconds then drops to 0. After 20 seconds sitting at 0, Amazon returns the "RequestTimeout - Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed." error.
I have tried updating the S3 class to the latest version from GitHub but it made no difference. I also found the Amazon S3 Stream Wrapper class and gave that a try using the following code:
include 'gs3.php';
define('S3_KEY', 'ACCESSKEYGOESHERE');
define('S3_PRIVATE', 'SECRETKEYGOESHERE');
$local = fopen('/path/to/backup_id.tar.gz.0000', 'r');
$remote = fopen('s3://bucket-name/customer/backup_id.tar.gz.0000', 'w+r');
$count = 0;
while (!feof($local))
{
    $result = fwrite($remote, fread($local, (1024 * 1024)));
    if ($result === false)
    {
        fwrite(STDOUT, $count++.': Unable to write!'."\n");
    }
    else
    {
        fwrite(STDOUT, $count++.': Wrote '.$result.' bytes'."\n");
    }
}
fclose($local);
fclose($remote);
This code reads the file one MB at a time in order to stream it to S3. For a 50MB file, I get "1: Wrote 1048576 bytes" 49 times (the first number changes each time of course) but on the last iteration of the loop I get an error that says "Notice: fputs(): send of 8192 bytes failed with errno=11 Resource temporarily unavailable in /path/to/http.php on line 230".
My first thought was that this is a networking issue. We called up the customer and explained the issue and asked them to take a look at their firewall to see if they were dropping anything. According to their network administrator the traffic is flowing just fine.
I am at a loss as to what I can do next. I have been running the backups manually and using SCP to transfer them to another machine and upload them. This is obviously not ideal and any help would be greatly appreciated.
Update - 06/23/2011
I have tried many of the options below but they all produced the same result. I have found that even trying to scp a file from the server in question to another server stalls immediately and eventually times out. However, I can use scp to download that same file from another machine. This makes me even more convinced that this is a networking issue on the client's end; any further suggestions would be greatly appreciated.
This problem exists because you are trying to upload the same file again. Example:
$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
$s3->putObjectFile('file.jpg','bucket-name','newname-file.jpg');
To fix it, just copy the file and give it a new name, then upload it normally.
Example:
$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
// now copy file.jpg to newname-file.jpg and upload it under the new name
copy('file.jpg', 'newname-file.jpg');
$s3->putObjectFile('newname-file.jpg','bucket-name','newname-file.jpg');
I solved this problem in another way. My bug was that the filesize() function returned a stale cached size value, so just use clearstatcache().
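A minimal sketch of that fix (the path is a placeholder), clearing the stat cache right before the size is read:
<?php
// Drop PHP's cached stat entry so filesize() reflects the file as it
// currently exists on disk before the upload.
$path = '/path/to/backup.tar.gz';   // placeholder

clearstatcache(true, $path);
$size = filesize($path);

echo 'Uploading ' . $size . ' bytes' . PHP_EOL;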
I have experienced this exact same issue several times.
I have many scripts right now which are uploading files to S3 constantly.
The best solution that I can offer is to use the Zend libraries (either the stream wrapper or direct S3 API).
http://framework.zend.com/manual/en/zend.service.amazon.s3.html
Since the latest release of Zend framework, I haven't seen any issues with timeouts. But, if you find that you are still having problems, a simple tweak will do the trick.
Simply open the file Zend/Http/Client.php and modify the 'timeout' value in the $config array. At the time of writing this it existed on line 114. Before the latest release I was running at 120 seconds, but now things are running smooth with a 10 second timeout.
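For reference, the relevant entry looks roughly like this in Zend Framework 1's Zend/Http/Client.php (a sketch; the surrounding defaults are abbreviated and the exact line number varies by release):
// Inside the Zend_Http_Client class
protected $config = array(
    // ... other defaults abbreviated ...
    'timeout' => 10,   // raise or lower this value (in seconds) as needed
    // ...
);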
Hope this helps!
There are quite a few solutions available. I had this exact problem, but I didn't want to write code and figure out the problem.
Initially I was searching for a possibility to mount S3 bucket in the Linux machine, found something interesting:
s3fs - http://code.google.com/p/s3fs/wiki/InstallationNotes - this did work for me. It uses a FUSE filesystem + rsync to sync the files with S3. It keeps a copy of all filenames in the local system and makes them look like files/folders.
This saves a bunch of time, with no headache of writing code for transferring the files.
Now, when I was trying to see if there were other options, I found a Ruby script which works from the CLI and can help you manage your S3 account.
s3cmd - http://s3tools.org/s3cmd - this looks pretty clear.
[UPDATE]
Found one more CLI tool - s3sync
s3sync - https://forums.aws.amazon.com/thread.jspa?threadID=11975&start=0&tstart=0 - found in the Amazon AWS community.
I don't see the two as very different; if you are not worried about disk space, I would choose s3fs over s3cmd. A disk makes you feel more comfortable, plus you can see the files on the disk.
Hope it helps.
You should take a look at the AWS PHP SDK. This is the AWS PHP library formerly known as Tarzan and CloudFusion.
http://aws.amazon.com/sdkforphp/
The S3 class included with this is rock solid. We use it to upload multi GB files all of the time.
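As a point of comparison, here is roughly what a simple upload looks like with the current AWS SDK for PHP (v3) rather than the CloudFusion-era release referred to above; the bucket, key, region, and path are placeholders:
<?php
// Hedged sketch using the modern SDK; for multi-GB files the multipart
// upload helper is usually the better fit.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

$s3->putObject([
    'Bucket'     => 'bucket-name',
    'Key'        => 'customer/backup_id.tar.gz.0000',
    'SourceFile' => '/path/to/backup_id.tar.gz.0000',
]);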
