I am reading a file line by line directly from the AWS S3 server using SplFileObject.
$url = "s3://{$bucketName}/{$fileName}";
$fileHandle = new \SplFileObject($url, 'r');
$fileHandle->setFlags(\SplFileObject::READ_AHEAD);
while (!$fileHandle->eof()) {
    $line = $fileHandle->current();
    // process the line
    $fileHandle->next();
}
That works perfectly fine in 99% of cases, except when the loop is running and there is a temporary network glitch. Unable to fetch the next line from S3 for x seconds, the script exits prematurely.
The problem is that you can never tell whether the script completed its job or exited because of a timeout.
My questions are:
1- Is there a way to explicitly set a timeout on SplFileObject while accessing a remote file, so that when the loop exits I can tell whether it timed out or really reached EOF?
I checked stream_set_blocking and stream_set_timeout, but neither seems to work with SplFileObject.
2- Which timeout setting is this script currently following? Is it socket_timeout? stream_timeout? A cURL timeout? Or simply the PHP script timeout (which seems highly unlikely, as the script runs on the command line)?
Any hints?
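One way to tell the two cases apart is to drop down to fopen(), which exposes the stream's metadata. Below is a minimal sketch, assuming the s3:// wrapper registered by the AWS SDK honors default_socket_timeout and populates the timed_out flag (which is only guaranteed for socket-based streams), so treat it as a starting point rather than a confirmed fix:
ini_set('default_socket_timeout', 10); // seconds; used by PHP's socket-based stream wrappers
$url = "s3://{$bucketName}/{$fileName}";
$handle = fopen($url, 'r');
while (($line = fgets($handle)) !== false) {
    // process the line
}
// Distinguish a timeout from a genuine end of file.
$meta = stream_get_meta_data($handle);
if ($meta['timed_out']) {
    // the loop ended because a read timed out
} elseif (!feof($handle)) {
    // the loop ended early for some other reason (e.g. a network error)
} else {
    // the whole file really was read
}
fclose($handle);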
Related
I am running Windows Server and it hosts my PHP files.
I am using file_get_contents() to call another PHP script and return the results. (I have also tried cURL, with the same result.)
This works fine. However, if I execute my script and then re-execute it almost straight away, I get an error:
"Warning: file_get_contents(http://...x.php): failed to open stream: HTTP request failed!"
So this works fine if I leave a minute or two between calling this PHP file via the browser, but after a successful attempt, if I retry too quickly, it fails. I have even changed the URL in the line "$html = file_get_contents($url, false, $context);" to an empty file that simply prints out a line, and the HTTP stream still doesn't open.
What could be preventing me from opening a new HTTP stream?
I suspect my server is blocking further outgoing streams but cannot find out where this would be configured in IIS.
Any help on this problem would be much appreciated.
**EDIT:** In the script, I am calling a Java program that takes around 1.5 minutes, and it is after this that I call the PHP script that fails.
Also, when it fails, the page hangs for quite some time. During this time, if I open another connection to the initial PHP page, the previous (still hanging) page then completes. It seems like a connection timeout somewhere.
I have set the timeout appropriately in IIS Manager and in PHP.
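One thing that may help narrow this down: give the $context an explicit timeout and let error responses through, so the request fails fast with diagnostics instead of hanging. A sketch, with $url standing in for the x.php endpoint:
$context = stream_context_create(array(
    'http' => array(
        'timeout'       => 30,   // give up after 30 seconds instead of hanging
        'ignore_errors' => true, // return the body even for HTTP error statuses
    ),
));
$html = file_get_contents($url, false, $context);
var_dump($http_response_header); // status line and headers of the last request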
In a PHP project, I need to download CSV files from an FTP server. I'm using PHP's ftp_* functions to do this.
I'm working on two separate computers: one can download the FTP files with no problem; the other initiates the FTP connection and opens and creates a file on my disk, but after a few seconds (it looks like a timeout), the script ends with this error:
PHP Warning: ftp_get(): Opening BINARY mode data connection for...
I've already tried using passive mode. The connection is closed at the end of my script, and the strange thing is that this works on another computer and on my server.
So here are my questions:
1) Do you have any idea why this is happening?
2) Are there settings in php.ini or Apache needed to enable PHP FTP properly?
Thank you.
Cyril
Maybe you exceeded the maximum execution time.
Try to increase it:
http://php.net/manual/en/function.set-time-limit.php
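For example, a short sketch combining a lifted time limit with passive mode (the host and credentials are placeholders):
set_time_limit(0); // remove the execution-time cap for this script
$conn = ftp_connect('ftp.example.com');
ftp_login($conn, $ftpUser, $ftpPass);
ftp_pasv($conn, true); // passive mode often avoids firewall/NAT issues
ftp_get($conn, 'local.csv', 'remote.csv', FTP_BINARY);
ftp_close($conn);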
I have a PHP script that downloads files via direct links from a remote server that I own: sometimes large files (~500-600 MB) and sometimes small files (~50-100 MB).
Some code from the script:
$links[0]="file_1";
$links[1]="file_2";
$links[2]="file_3";
for ($i = 0; $i < count($links); $i++) {
    $file_link = download_file($links[$i]); // this function downloads the file with curl and returns the path to the downloaded file on the local server
    echo "Download complete";
    rename($file_link, "some other_path/..."); // this moves the downloaded file to some other location
    echo "Downloaded file moved";
}
My problem is that if I download a large file and run the script from a web browser, it takes up to 5-10 minutes to complete; the script echoes up to "Download complete" and then dies completely. I always find that the file being downloaded when the script dies is 100% downloaded.
On the other hand, if I download small files of 50-100 MB from the web browser, or run the script from the command shell, this problem does not occur at all and the script completes fully.
I am using my own VPS for this and do not have any time limit on the server. There is no fatal error or memory overload problem.
I also used ssh2_sftp to copy files from the remote server, but the same problem occurs when I run it from a web browser. It always downloads the file, executes the next line, and then dies! Very strange!
What should I do to get over this problem?
To make sure you can download larger files, you will have to make sure that:
- there is enough memory available for PHP
- the maximum execution time limit is set high enough
Judging from what you said about ssh2_sftp (I assume you are running it via PHP), your problem is the second one. Check your error logs to confirm that this truly is the error. If so, simply increase the maximum execution time in your settings/php.ini and that should fix it.
Note: I would encourage you not to let PHP handle these large files. Call some program (via system() or exec()) that will do the download for you, as PHP still has garbage collection issues.
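A sketch of that approach, assuming wget is available on the server (any external downloader works the same way; the target path is a placeholder):
$url  = escapeshellarg($links[$i]);
$dest = escapeshellarg('/path/to/downloads/file_1'); // placeholder target path
exec("wget -O {$dest} {$url}", $output, $status);
if ($status !== 0) {
    // wget exited with an error; inspect $output for the reason
}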
I need to read a large file to find some labels and create a dynamic form. I cannot use file() or file_get_contents() because of the file size.
If I read the file line by line with the following code
set_time_limit(0);
$handle = fopen($file, 'r');
if ($handle) {
    while (!feof($handle)) {
        $line = fgets($handle);
        if ($line) {
            // do something.
        }
    }
    fclose($handle);
}
echo 'Read complete';
I get the following error in Chrome:
Error 101 (net::ERR_CONNECTION_RESET)
This error occurs after several minutes, so I don't think the max_input_time setting (set to 60) is the problem.
What web server software do you use? Apache, nginx? You should set the maximum accepted file upload somewhere higher than 500 MB. Furthermore, the maximum upload size in php.ini should be bigger than 500 MB too, and I think PHP must be allowed to spawn processes larger than 500 MB (check this in your PHP config).
Set the memory limit with ini_set("memory_limit", "600M"); you also need to remove the timeout limit with set_time_limit(0);
Generally, long-running processes should not be run while the user waits for them to complete.
I'd recommend using a background-job-oriented tool that can handle this type of work and can be queried about the status of the job (running/finished/error).
My first guess is that something in the middle breaks the connection because of a timeout. Whether it's a timeout in the web server (which PHP cannot know about) or some firewall doesn't really matter: PHP gets a signal to close the connection and the script stops running. You could circumvent this behaviour by using ignore_user_abort(true); this, along with set_time_limit(0), should do the trick.
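A minimal sketch of those two calls at the top of the long-running script:
ignore_user_abort(true); // keep running even if the client disconnects
set_time_limit(0);       // remove the execution-time cap
// ... the long-running download loop goes here ...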
The caveat is that whatever caused the connection abort will still do it, even though the script will still finish its job. One very annoying side effect is that this script could be executed multiple times in parallel, with none of the instances ever appearing to complete.
Again, I recommend using some background task to do it and an interface for the end-user (browser) to verify the status of that task. You could also implement a basic one yourself via cron jobs and database/text files that hold the status.
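A crude version of that status mechanism, sketched with a plain text file (the path and status strings are placeholders):
// Worker script, e.g. started from cron: record progress for the browser page to poll.
file_put_contents('/tmp/download-job.status', 'running');
try {
    // ... do the long download work here ...
    file_put_contents('/tmp/download-job.status', 'finished');
} catch (Exception $e) {
    file_put_contents('/tmp/download-job.status', 'error: ' . $e->getMessage());
}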
I am trying to run a PHP script via a cron job, and sometimes (about half the time) I get the following warning:
PHP Warning: file_get_contents(http://url.com): failed to open stream: HTTP request failed! in /path/myfile.php on line 285
The program continues to run after that, which makes me think it is not a timeout problem or a memory issue (the timeout is set to 10 minutes and memory to 128M), but the variable I am storing the results of that function call in is empty. The weird part is that I am making several other calls to this same website with other URL parameters and they never have a problem. The only difference with this call is that the file it downloads is about 70 MB, while the others are all around 300 KB.
Also, I never get this warning if I SSH into the web server and run the PHP script manually, only when it is run from cron.
I have also tried using cURL instead of file_get_contents but then I run out of memory.
Thanks, any help here would be appreciated.
Perhaps the remote server on URL.com is sometimes timing out or returning an error for that particular (large) request?
I don't think you should be trying to store 70 MB in a variable.
You can configure cURL to download directly to a file. Something like:
$file = fopen('my.file', 'w');
$c = curl_init('http://url.com/whatever');
curl_setopt($c, CURLOPT_FILE, $file); // stream the response body straight to the file
if (curl_exec($c) === false) {
    echo curl_error($c); // a specific error message instead of a generic warning
}
curl_close($c);
fclose($file);
If nothing else, curl should provide you with much better errors about what's going wrong.
From another answer: double-check that this issue isn't occurring some of the time with the URL parameters you're using:
Note: If you're opening a URI with special characters, such as spaces, you need to encode the URI with urlencode() - http://docs.php.net/file_get_contents
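For instance, a sketch encoding just the query value (the parameter name and filename are made up):
// Encode only the value, not the whole URL, or the slashes would be escaped too.
$url  = 'http://url.com/fetch?file=' . urlencode('my report 2013.csv');
$data = file_get_contents($url);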