We have a script which downloads a CSV file. When we run this script from the command line on the EC2 instance it runs fine; it downloads the file and sends a success message to the user.
But if we run through a browser then we get:
error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
When we checked on the back end, the downloaded file is there, but the success message sent after the download is never received by the browser.
We are using cURL to download from a remote location with authentication. The group and ownership of the folder is "ec2-user", and the folder has full rights, i.e. 777.
To summarize: the file is downloaded, but at the browser end we never receive the data or the success message that we print.
P.S.: The problem occurs when the downloaded file is 8-9 MB; with a smaller file, say 1 MB, it works. So either the script execution time, the downloaded file size, or some EC2 instance configuration is preventing a response from reaching the browser. The same script works perfectly fine on our GoDaddy Linux VPS. We have already increased the max execution time for the script.
Sadly, this is a known problem without a good solution. There's a very long thread on the amazon forum here: https://forums.aws.amazon.com/thread.jspa?threadID=33427. The solution offered there is to send a keep-alive message to keep the connection from dying after 60 seconds. Not a great solution, but I don't think there's a better one unless Amazon fixes the problem, which doesn't seem likely given that the thread has been open for 3 years.
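One way to send such a keep-alive from PHP is to emit a little output and flush it every so often while the long transfer runs, for example from cURL's progress callback. This is only a minimal sketch, assuming the download happens via cURL in the same request and that output is not being buffered away by gzip or a proxy; the URL, paths, and the 30-second interval are placeholders:

set_time_limit(0);
$lastPing = time();

$fp = fopen('/tmp/report.csv', 'w');                // hypothetical local target
$ch = curl_init('https://example.com/big.csv');     // hypothetical remote file
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_NOPROGRESS, false);        // required for the progress callback to fire
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, function () use (&$lastPing) {
    if (time() - $lastPing >= 30) {                 // well under the 60-second idle limit
        echo ' ';                                   // harmless byte to keep the connection open
        flush();                                    // you may also need ob_flush() depending on buffering
        $lastPing = time();
    }
    return 0;                                       // returning non-zero aborts the transfer
});
curl_exec($ch);
curl_close($ch);
fclose($fp);

echo "Download complete";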
Related
I am running Windows Server and it hosts my PHP files.
I am using "file_get_contents()" to call another PHP script and return the results. (I have also tried cURL with the same result)
This works fine. However, if I execute my script and then re-execute it almost straight away, I get an error:
"Warning: file_get_contents(http://...x.php): failed to open stream: HTTP request failed!"
So this works fine if I leave a minute or two between calling this PHP file via the browser. But after a successful attempt, if I retry too quickly, then it fails. I have even changed the URL in the line "$html = file_get_contents($url, false, $context);" to an empty file that simply prints out a line, and the HTTP stream still doesn't open.
What could be preventing me from opening a new HTTP stream?
I suspect my server is blocking further outgoing streams but cannot find out where this would be configured in IIS.
Any help on this problem would be much appreciated.
**EDIT:** In the script, I am calling a Java program that takes around 1.5 minutes, and it is after this that I then call the PHP script that fails.
Also, when it fails, the page hangs for quite some time. During this time, if I open another connection to the initial PHP page, the previous (still hanging) page then completes. It seems like a connection timeout somewhere.
I have set the timeouts appropriately in IIS Manager and in PHP.
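On the PHP side, a timeout can also be set directly on the stream context passed to file_get_contents(). A minimal sketch (the URL and the 300-second value here are placeholders, not my real configuration):

$context = stream_context_create(array(
    'http' => array(
        'method'  => 'GET',
        'timeout' => 300,   // seconds to wait on the socket before giving up
    ),
));

$html = file_get_contents('http://example.com/x.php', false, $context);
if ($html === false) {
    // file_get_contents() returns false on failure (e.g. the warning above)
    error_log('HTTP request failed');
}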
We have a cron job that runs a PHP script that processes XML files, including processing images (pulling them from a web address, resizing them and then uploading them to CloudFiles).
We are finding that after 220 or so images we get an error: Exception Received Retry Command Error:Unexpected response ():
We have coded the script to try 5 times to upload the image (unfortunately it still fails) and then move on to the NEXT IMAGE.
Unfortunately it then fails on the next image, and so on.
The container we are uploading to is not full, and we only do 1 image at a time, so we are below the 100/sec restrictions. The files are not large, for example: http://images.realestateview.com.au/pics/543/10157543ao.jpg
We then tried running the script again on our server with the image that failed, and it worked successfully along with other images.
No idea why this is happening. RackSpace advise it is an issue with the script or the cron, but we are not convinced.
Happy to post script if it helps.
Are you doing the 5 retries with any backoff time, or just as fast as possible? If you are not currently, add exponential backoff to the retry attempts.
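A rough sketch of what that could look like; the upload_to_cloudfiles() helper and the generic Exception are placeholders for whatever your script actually calls and catches:

// Retry an upload up to 5 times, backing off 1s, 2s, 4s, 8s between attempts.
function upload_with_backoff($image_path, $max_attempts = 5)
{
    for ($attempt = 1; $attempt <= $max_attempts; $attempt++) {
        try {
            upload_to_cloudfiles($image_path);   // hypothetical upload helper
            return true;
        } catch (Exception $e) {
            if ($attempt === $max_attempts) {
                return false;                    // give up; the caller moves on to the next image
            }
            sleep(pow(2, $attempt - 1));         // exponential backoff
        }
    }
    return false;
}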
I have a PHP script that downloads files via direct links from a remote server that I own. Sometimes they are large files (~500-600 MB) and sometimes small files (~50-100 MB).
Some code from the script:
$links[0]="file_1";
$links[0]="file_2";
$links[0]="file_3";
for($i=0;$i<count($links);$i++){
$file_link=download_file($links[$i]); //this function downloads the file with curl and returns the path to the downloaded file in local server
echo "Download complete";
rename($file_link,"some other_path/..."); //this moves the downloaded file to some other location
echo "Downloaded file moved";
echo "Download complete";
}
My problem is that if I download a large file and run the script from a web browser, it takes up to 5-10 minutes to complete; the script echoes up to "Download complete" and then dies completely. I always find that the file that was being downloaded before the script died was 100% downloaded.
On the other hand, if I download small files (like 50-100 MB) from the web browser, or run the script from the command shell, this problem does not occur at all and the script completes fully.
I am using my own VPS for this and do not have any time limit set on the server. There is no fatal error or memory overload problem.
I also tried ssh2_sftp to copy files from the remote server, but the same problem occurs when I run from the web browser. It always downloads the file, executes the next line and then dies! Very strange!
What should I do to get over this problem?
To make sure you can download larger files, you will have to make sure that:
- there is enough memory available for PHP, and
- the maximum execution time limit is set high enough.
Judging from what you said about ssh2_sftp (I assume you are running it via PHP), your problem is the second one. Check your error logs to confirm that this is really your error. If so, simply increase the maximum execution time in your settings/php.ini and that should fix it.
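For example, a minimal sketch (whether you raise the limit at runtime or in php.ini, and what value is sensible, depends on your setup):

// Lift the execution time limit for this long-running download script.
set_time_limit(0);                  // 0 means no limit
// or equivalently at runtime:
ini_set('max_execution_time', 0);
// or in php.ini:  max_execution_time = 0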
Note: I would encourage you not to let PHP handle these large files. Call some program (via system() or exec()) that will do the download for you as PHP still has garbage collection issues.
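A minimal sketch of that approach, handing the transfer to the curl command-line tool (the URL, target path, and the choice of curl rather than wget are illustrative assumptions):

// Let an external program do the heavy transfer instead of PHP.
$url    = 'https://example.com/big_file.bin';   // hypothetical source
$target = '/tmp/big_file.bin';                  // hypothetical destination

// escapeshellarg() keeps the URL and path safe to pass on the command line.
$cmd = sprintf('curl -sS -o %s %s', escapeshellarg($target), escapeshellarg($url));

exec($cmd, $output, $exitCode);

if ($exitCode === 0) {
    echo "Download complete";
} else {
    echo "Download failed with exit code $exitCode";
}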
I am having trouble uploading files to S3 from on one of our servers. We use S3 to store our backups and all of our servers are running Ubuntu 8.04 with PHP 5.2.4 and libcurl 7.18.0. Whenever I try to upload a file Amazon returns a RequestTimeout error. I know there is a bug in our current version of libcurl preventing uploads of over 200MB. For that reason we split our backups into smaller files.
We have servers hosted on Amazon's EC2 and servers hosted on customers' "private clouds" (a VMware ESX box behind their company firewall). The specific server that I am having trouble with is hosted on a customer's private cloud.
We use the Amazon S3 PHP Class from http://undesigned.org.za/2007/10/22/amazon-s3-php-class. I have tried 200MB, 100MB and 50MB files, all with the same results. We use the following to upload the files:
$s3 = new S3($access_key, $secret_key, false);
$success = $s3->putObjectFile($local_path, $bucket_name,
                              $remote_name, S3::ACL_PRIVATE);
I have tried setting curl_setopt($curl, CURLOPT_NOPROGRESS, false); to view the progress bar while it uploads the file. The first time I ran it with this option set it worked. However, every subsequent time it has failed. It seems to upload the file at around 3Mb/s for 5-10 seconds then drops to 0. After 20 seconds sitting at 0, Amazon returns the "RequestTimeout - Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed." error.
I have tried updating the S3 class to the latest version from GitHub but it made no difference. I also found the Amazon S3 Stream Wrapper class and gave that a try using the following code:
include 'gs3.php';
define('S3_KEY', 'ACCESSKEYGOESHERE');
define('S3_PRIVATE','SECRETKEYGOESHERE');

$local = fopen('/path/to/backup_id.tar.gz.0000', 'r');
$remote = fopen('s3://bucket-name/customer/backup_id.tar.gz.0000', 'w+r');

$count = 0;
while (!feof($local))
{
    $result = fwrite($remote, fread($local, (1024 * 1024)));
    if ($result === false)
    {
        fwrite(STDOUT, $count++.': Unable to write!'."\n");
    }
    else
    {
        fwrite(STDOUT, $count++.': Wrote '.$result.' bytes'."\n");
    }
}

fclose($local);
fclose($remote);
This code reads the file one MB at a time in order to stream it to S3. For a 50MB file, I get "1: Wrote 1048576 bytes" 49 times (the first number changes each time of course) but on the last iteration of the loop I get an error that says "Notice: fputs(): send of 8192 bytes failed with errno=11 Resource temporarily unavailable in /path/to/http.php on line 230".
My first thought was that this is a networking issue. We called up the customer and explained the issue and asked them to take a look at their firewall to see if they were dropping anything. According to their network administrator the traffic is flowing just fine.
I am at a loss as to what I can do next. I have been running the backups manually and using SCP to transfer them to another machine and upload them. This is obviously not ideal and any help would be greatly appreciated.
Update - 06/23/2011
I have tried many of the options below but they all gave the same result. I have found that even trying to scp a file from the server in question to another server stalls immediately and eventually times out. However, I can use scp to download that same file from another machine. This makes me even more convinced that this is a networking issue on the client's end; any further suggestions would be greatly appreciated.
This problem exists because you are trying to upload the same file again. Example:
$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
$s3->putObjectFile('file.jpg','bucket-name','newname-file.jpg');
To fix it, just copy the file, give it a new name, and then upload it normally.
Example:
$s3 = new S3('XXX','YYYY', false);
$s3->putObjectFile('file.jpg','bucket-name','file.jpg');
// now rename file.jpg to newname-file.jpg
$s3->putObjectFile('newname-file.jpg','bucket-name','newname-file.jpg');
I solved this problem in another way. My bug was that the filesize() function returned an invalid cached size value, so just use clearstatcache().
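For example (a minimal sketch; the file path is a placeholder):

clearstatcache();                                  // drop PHP's cached stat info
// on PHP 5.3+ you can also clear just one path: clearstatcache(true, $path);
$size = filesize('/path/to/backup.tar.gz.0000');   // now reports the current size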
I have experienced this exact same issue several times.
I have many scripts right now which are uploading files to S3 constantly.
The best solution that I can offer is to use the Zend libraries (either the stream wrapper or direct S3 API).
http://framework.zend.com/manual/en/zend.service.amazon.s3.html
Since the latest release of Zend framework, I haven't seen any issues with timeouts. But, if you find that you are still having problems, a simple tweak will do the trick.
Simply open the file Zend/Http/Client.php and modify the 'timeout' value in the $config array. At the time of writing this it is on line 114. Before the latest release I was running with 120 seconds, but now things are running smoothly with a 10-second timeout.
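If you would rather not edit the library file, the same option can usually be passed as per-client configuration instead. A sketch against Zend Framework 1's Zend_Http_Client (the URL is a placeholder, and how you wire this client into the S3 service depends on your code):

require_once 'Zend/Http/Client.php';

// Set the HTTP timeout on the client instead of editing Zend/Http/Client.php.
$client = new Zend_Http_Client('https://s3.amazonaws.com/', array(
    'timeout' => 10,   // seconds; mirrors the value suggested above
));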
Hope this helps!
There are quite a few solutions available. I had this exact problem, but I didn't want to write code and figure out the problem.
Initially I was searching for a way to mount an S3 bucket on the Linux machine, and found something interesting:
s3fs - http://code.google.com/p/s3fs/wiki/InstallationNotes - this did work for me. It uses a FUSE file system + rsync to sync the files to S3. It keeps a copy of all filenames in the local system and makes them look like files/folders.
This saves a bunch of our time + no headache of writing code for transferring the files.
Then, when I was trying to see if there were other options, I found a command-line script that can help you manage your S3 account.
s3cmd - http://s3tools.org/s3cmd - this looks pretty clear.
[UPDATE]
Found one more CLI tool - s3sync
s3sync - https://forums.aws.amazon.com/thread.jspa?threadID=11975&start=0&tstart=0 - found in the Amazon AWS community.
I don't see much difference between the two; if you are not worried about disk space then I would choose s3fs over s3cmd. A disk makes you feel more comfortable + you can see the files on the disk.
Hope it helps.
You should take a look at the AWS PHP SDK. This is the AWS PHP library formerly known as tarzan and cloudfusion.
http://aws.amazon.com/sdkforphp/
The S3 class included with this is rock solid. We use it to upload multi-GB files all of the time.
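For example, a rough sketch of an upload with that SDK (the class and method names are from memory of the SDK 1.x docs, so double-check them against the version you install; the bucket name and file paths are placeholders):

require_once 'sdk.class.php';

// Upload a local backup file to S3 with the AWS SDK for PHP (1.x).
$s3 = new AmazonS3();   // credentials come from the SDK's config file

$response = $s3->create_object('my-backup-bucket', 'backups/backup_id.tar.gz.0000', array(
    'fileUpload' => '/path/to/backup_id.tar.gz.0000',   // stream the file from disk
    'acl'        => AmazonS3::ACL_PRIVATE,
));

if ($response->isOK()) {
    echo "Upload complete\n";
}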
A very strange thing is happening. I am running a script on a new server (it works on my current server and laptop).
The strange thing is that I only get it to (sort of) work when I increase the memory limit to 1024M (!). It is extracting a large zip file and going through the files, so I thought that was normal. But instead of the script terminating or ending with errors, I get an error from my browser:
The server at www.localhost.com is
taking too long to respond.
Localhost.com? The web server is just localhost:9090 and I can see Apache is still running. Maybe Apache crashes momentarily and it can't find the server? But there is nothing about Apache crashing in the log files.
This isn't a server issue; it's more to do with my PHP script and memory usage, I think, so no need to move this to Server Fault.
What could be the problem? How can I narrow down the cause? I am at a loss here!
The server is a Windows server running Apache 2.2 with PHP version 5.3.2. My laptop and the other working server are running PHP versions 5.3.0 and 5.3.1.
Thanks all for any help
Ensure that:
ini_set('display_errors', 'On');
ini_set('error_reporting', E_ALL);
ini_set('max_execution_time', 180);
ini_set('memory_limit', '1024M');
I'd pop this in the top of the script and see what comes out. It should show you errors and the like.
The other thing: have you checked fopen() and the path of the file it's loading?
Abs said:
check that the files being zipped up can be zipped by PHP (permissions especially, on a Windows OS with multiple users)
I kept getting this problem too, and none of these sites really helped until I started looking at the same issue for people using Internet Explorer. The way I fixed it was to open up the system hosts file, located at C:\Windows\System32\drivers\etc\hosts, and uncomment the line that mentions ::1, which is needed for IPv6. After that it worked fine.
Somehow your system's munged up and isn't treating localhost as the local 127.0.0.1 address. Is your hosts file properly configured? This is most likely why you're getting the "too long to respond" error:
marc@panic:~$ host www.localhost.com
www.localhost.com has address 64.99.64.32
marc@panic:~$ wget www.localhost.com
--2010-08-03 22:41:05-- http://www.localhost.com/
Resolving www.localhost.com... 64.99.64.32
Connecting to www.localhost.com|64.99.64.32|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.
www.localhost.com is a fully valid hostname as far as the DNS system is concerned.
I am not a PHP guru by any means, but are you writing the extracted files to a temporary local storage location that is within the scope of the application? Because if you are not, then I think what is happening is that the application is storing the zip file and the extracted files in memory and then attempting to read them. So if it is a large zip and/or the extracted files are large, that would introduce a huge amount of overhead on top of the overhead introduced by your read and processing actions.
So if you are not already, I would extract the files and write them to disk in their own folder, dispose of the zip file at that point, and then iterate over the files in your newly created directory and perform whatever actions you need on them.
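A minimal sketch of that approach using PHP's ZipArchive (the paths are placeholders and process_file() is a hypothetical per-file handler):

// Extract the archive to disk first, free the zip handle,
// then process the extracted files one by one from the directory.
$zipPath    = '/path/to/upload.zip';        // hypothetical archive
$extractDir = '/path/to/extracted';         // hypothetical working folder

$zip = new ZipArchive();
if ($zip->open($zipPath) === true) {
    $zip->extractTo($extractDir);
    $zip->close();                          // release the archive before processing
}

foreach (new DirectoryIterator($extractDir) as $file) {
    if ($file->isFile()) {
        process_file($file->getPathname()); // hypothetical per-file handler
    }
}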