I have written a PHP script that uploads images from a directory to a remote server using ftp_put. I have scheduled it as a cron job using Windows Task Scheduler and wget. Initially it works great, but after a while (I do not know exactly when) the process freezes: Task Scheduler says the job is still running, but no photos are being uploaded anymore.
Initially I thought the problem was due to max_execution_time, but I have set that to 24 hours using set_time_limit(3600*24); and I have set max_input_time to 600 seconds (10 minutes).
Why doesn't it complete the task?
Here is the code:
if ($conn) {
    if (is_dir($imagesPath)) {
        if ($files = opendir($imagesPath)) {
            while (($file = readdir($files)) !== false) {
                if ($file != "." && $file != ".." && preg_match("/\.(jpg|gif|png|bmp)$/i", $file) && date("Ymd", filemtime($imagesPath.'/'.$file)) >= date("Ymd", strtotime(date("Y-m-d")." -".$days." day"))) {
                    if (ftp_put($conn, $remotePath.$file, $imagesPath.'/'.$file, FTP_BINARY)) {
                        //echo $file;
                        $counter++;
                    }
                    else {
                        echo '<br>'.$imagesPath.'/'.$file;
                    }
                }
            }
            closedir($files);
            echo $counter.' Files Uploaded on '.date("Y-m-d");
        }
        else {
            echo 'Unable to read '.$imagesPath;
        }
    }
    else {
        echo $imagesPath.' Does not exist';
    }
    ftp_close($conn);
}
else {
    echo "Failed to connect";
}
/* End */
exit;
Added:
/* Settings */
// Set Max Execution time
set_time_limit(3600*24);
at the top of the script.
Thanks.
I am working on something similar and I found the following to help:
1) Make sure every step is conditional, so that if something fails (an image, the FTP connection, etc.) the procedure keeps running (iterating).
2) Set a small sleep() at the end of the file (or between difficult steps). I am using it for images as well, and it helps when the connection to the image or the 'image write' time lags.
3) Set the scheduler to execute the script often (mine runs twice a day), but it could be set to hourly, as long as you check the box: do not start a new instance if the script is already running.
4) Depending on the setup of your server, check for other tasks which may interrupt the script at run time. It is not always a PHP issue. As long as you have a properly scheduled task to re-execute the script, you should be fine.
5) For easy debugging, instead of the echo statements (which most likely show up in your cmd window), use simple file logging (e.g. appending to $myLogFile with file_put_contents()) in addition to your "Does not exist" and "Failed to connect" messages, and include more details, so that when it fails you can go to your log file and see when and why it failed; see the sketch after this list.
6) You may try to use an infinite loop like while (true) { ...your code... } instead of set_time_limit().
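A minimal logging sketch for point 5 ($myLogFile is a hypothetical path; adjust it to your setup):

// Append a timestamped line to the log; FILE_APPEND avoids truncating the
// log on every call, and LOCK_EX guards against interleaved writes.
function log_message($myLogFile, $message) {
    file_put_contents($myLogFile, date('Y-m-d H:i:s').' '.$message.PHP_EOL, FILE_APPEND | LOCK_EX);
}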
I hope this helps :)
I have one php process locking a file with flock.
I then start another process that deletes the file.
On Windows, I get the correct behaviour where the locked file cannot be deleted; however on Ubuntu (both WSL and a full Ubuntu installation) I am always able to delete the file.
I have read other similar questions, but they all seem to have tested it incorrectly.
I am pretty sure I am testing this correctly.
UPDATED: More thorough testing.
After reading Why is unlink successful on an open file? and about the delete queue in the file system, I realized I needed to also test read and write.
Testing method.
Open a terminal and run my test file:
php test/flock_file_locking_with_splfileobject.php 👈 locks the file and waits for 30 seconds
Open a second terminal and run the following:
php test/flock_file_locking_with_splfileobject.php read
php test/flock_file_locking_with_splfileobject.php write
php test/flock_file_locking_with_splfileobject.php delete
Here's the contents of test/flock_file_locking_with_splfileobject.php
See comments in the code describing the output I get.
<?php
$me = array_shift($argv);
$command = array_shift($argv);
$fileName = 'cats';
switch ($command) {
    case 'delete':
        // Attempt to delete the file.
        if (!unlink($fileName)) {
            echo "Failed to delete file!";
            exit(1);
        }
        echo "File was deleted:";
        var_dump(file_exists($fileName) === false);
        // On Linux I get 'File was deleted:bool(true)'
        // On Windows I get 'PHP Warning: unlink(cats): Resource temporarily unavailable'
        break;
    case 'write':
        $written = file_put_contents($fileName, 'some datah!');
        echo "Written bytes:";
        var_dump($written);
        // On Linux I get 'Written bytes:int(11)'
        // On Windows I get 'Warning: file_put_contents(): Only 0 of 11 bytes written, possibly out of free disk space'
        break;
    case 'read':
        $read = file_get_contents($fileName);
        echo "Read: ";
        var_dump($read);
        // On Linux I get 'Read: string(21) "file should be locked"',
        // or 'Read: string(11) "some datah!"' if I run the write command first.
        // On Windows I get no error but also no data: 'Read: string(0) ""'
        break;
    default:
        $file = new \SplFileObject($fileName, 'a+'); // File gets created by a+.
        if (!$file->flock(LOCK_EX)) {
            echo "Unable to lock the file.\n";
            exit(1);
        }
        $file->fwrite('file should be locked');
        echo "File should now be locked, try running the delete/write/read commands in another terminal. ",
            "I'll wait 30 seconds and try to write to the file...\n";
        sleep(30);
        if (!$file->fwrite('file is now unlocked')) {
            echo "Unable to write to file.\n";
            exit(1);
        }
        // On either system this fwrite succeeds regardless of what's going on.
        break;
}
echo "\n";
Windows behaves as it should, but on Linux my second process can do anything it likes to the file while the first process apparently holds a successful file lock.
Obviously, I can't trust flock in production if it only works on Windows.
Is there any way to actually get an exclusive file lock?
Any ideas?
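(For reference: flock() on Linux is advisory, meaning it blocks only processes that also ask for the lock, so a second process that never calls flock() is free to read, write, or unlink the file. A delete that cooperates with the lock would look roughly like this sketch, reusing the 'cats' file from above:)

// Request the same advisory lock before deleting; LOCK_NB makes the call
// fail immediately instead of blocking when another process holds the lock.
$f = fopen('cats', 'r+');
if (!flock($f, LOCK_EX | LOCK_NB)) {
    echo "File is locked by another process; not deleting.\n";
    exit(1);
}
flock($f, LOCK_UN);
fclose($f);
unlink('cats');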
Following the good advice on this link:
How to keep checking for a file until it exists, then provide a link to it
The loop will never end if the file is never created.
In a perfect system this should not happen, but if it does, how would one exit from that loop?
I have a similar case:
/* More code above */
// Writing to the file
$csvfile = $foldername.$date.$version.".csv";
$csv = fopen($csvfile, 'w+');
foreach ($_POST['lists'] as $pref) {
    fputcsv($csv, $pref, ";");
}
// Close and wait for IO creation
fclose($csv);
sleep(1);
// Running the Java
$exec = shell_exec("/usr/bin/java -jar $app $csvfile");
sleep(3);
$xmlfile = preg_replace('/\\.[^.\\s]{3,4}$/', '.xml', $csvfile);
if (file_exists("$csvfile") && file_exists("$xmlfile")) {
    header("Location:index.php?msg");
    exit;
}
else if (!file_exists("$csvfile")) {
    header("Location:index.php?msgf=".basename($csvfile)." creation failed!");
    exit;
}
else if (!file_exists("$xmlfile")) {
    header("Location:index.php?msgf=".basename($xmlfile)." creation failed!");
    exit;
}
//exit;
} // Just the end
?>
(Yes, it's a bad idea to pass variables in the URL... I've got that covered.)
I use sleep(N); because I know the Java takes only a short time to create the file, and the same goes for the CSV on the PHP side.
How can I improve the check on the file, so it waits only the necessary time before reporting the status OK, or NOT OK if the file was not created?
After reading your comments, I think asking for "the best loop" isn't the right way to frame the question.
The linked script just gives a good approach for when the script expects a file. That script will wait until the file is created, or forever (though the creator ensures the file does get created).
Better than that, you could allow a particular period in which to check whether the file exists or not.
If after the shell_exec the Java application didn't create the file (which I think is almost impossible, but it's just a thought), you could use code like the one below:
$cycles = 0;
while (!($isFileCreated = file_exists($filename)) && $cycles < 1000) {
    $cycles++;
    usleep(1);
}
if (!$isFileCreated) {
    //some action
    //throw new RuntimeException("File doesn't exist");
}
//another action
The script above will wait until the file is created or until a particular number of cycles is reached (it's better to speak of cycles than microseconds, because I can't ensure that each cycle will execute in exactly one microsecond). The number of cycles can be changed if you need more time.
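If you prefer a real time limit over counting cycles, a time-based variant could look like this (a sketch; $filename and the 5-second deadline are illustrative):

$deadline = microtime(true) + 5.0; // Wait at most 5 seconds.
while (!($isFileCreated = file_exists($filename)) && microtime(true) < $deadline) {
    usleep(100000);                   // Poll every 100 ms.
    clearstatcache(false, $filename); // file_exists() results are cached per file.
}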
My environment is: Windows, MSSQL and PHP 5.4.
My scenario:
I'm writing a small shell script that creates a full backup of the desired database into a temp folder and then moves it to a new location.
The backup goes fine and the file is created in my temp folder. Then I rename() it to the second folder; sometimes this goes OK, sometimes it cannot find the source file.
Of course, at this point I know that I could skip the temporary location altogether, but the actual problem of not finding the file bothers me. Why is it so random, and might it also affect other file functions I've written before this one? Also, I need to be able to control how and when the files move to the destination.
The base code is as simple as it should be (although this is a simplified version of my actual code, since I doubt anyone would be interested in my error handling/logging conditions):
$query = "use test; backup database test to disk '//server01/temp/backups/file.bak', COMPRESSION;";
if($SQLClass->query($query)) {
$source="////server01//temp//backups//file.bak";
$destination="////server02//storage//backups//file.bak";
if(!rename($source , $destination)) {
//handleError is just a class function of mine that logs and outputs errors.
$this->handleError("Moving {$source} to {$destination} failed.");
}
}
else {
die('backup failed');
}
What I have tried is:
I added a file_exists() check before it, and it can't find the source file either whenever rename() can't.
As the file can't be found, copy() and unlink() will not work either.
Tried clearstatcache().
Tried sleep(10) after the SQL backup completes.
None of these helped at all. Google and I seem to be out of ideas on what to do or try next. Of course I could do some shell_exec-ing, but that wouldn't remove my worries about my earlier products.
I only noticed this problem when I tried to run the command multiple times in a row. Is there some sort of cache for filenames that clearstatcache() won't touch? It seems related to some sort of ghost-file phenomenon, where PHP is late to refresh the file system contents, or some such.
I would appreciate any ideas on what to try next, and if you read this far, thank you :).
You may try calling the system's copy command.
I once had a problem like yours (on a Linux box) when I had to copy files between two NFS shares. It just failed from time to time for no visible reason. After I switched to cp (the analog of Windows' copy), the problem went away.
Surely it is not perfect, but it worked for me.
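On Windows that might look like the following (a sketch; the UNC paths are the ones from the question):

// Shell out to the system copy command instead of PHP's rename()/copy().
$cmd = 'copy /Y "\\\\server01\\temp\\backups\\file.bak" "\\\\server02\\storage\\backups\\file.bak"';
exec($cmd, $output, $exitCode);
if ($exitCode !== 0) {
    // copy returns a non-zero exit code on failure; log or retry here.
}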
It might be cache-related, or the SQL Server process may not have released the file yet.
The server may dump the backup into another temporary file first, and only then move it to your temp folder.
While the file is being moved, it might be inaccessible to other processes.
First, I would glob() all the files inside the temp dir when the error appears. Maybe you'll notice it's still not finished.
Also, have you tried implementing something like 10 retry iterations with some delay?
$source = "////server01//temp//backups//file.bak";
$destination = "////server02//storage//backups//file.bak";
$notMoved = 0;
while (!rename($source, $destination)) {
    // handleError is just a class function of mine that logs and outputs errors.
    if (++$notMoved >= 10) {
        $this->handleError("Moving {$source} to {$destination} failed.");
        break;
    }
    sleep(20);
}
To bypass the issue:
Don't dump and then move.
Move first, then dump :-)
(Of course, your backup store would then be one run behind.)
$source="////server01//temp//backups//file.bak";
$destination="////server02//storage//backups//file.bak";
if(!rename($source , $destination)) {
//handleError is just a class function of mine that logs and outputs errors.
$this->handleError("Moving {$source} to {$destination} failed.");
}
$query = "use test; backup database test to disk '//server01/temp/backups/file.bak', COMPRESSION;";
if($SQLClass->query($query)) {
//done :-)
}
else {
die('backup failed');
}
Try
$source = "\\server01\temp\backups\file.bak";
$destination = "\\server02\storage\backups\file.bak";
$content = file_get_content($source);
file_put_contents($destination, $content);
I have found this script
Quick and easy flood protection?
and I have turned it into a function.
Works great for the most part. From time to time I see an error:
[<a href='function.unlink'>function.unlink</a>]: No such file or directory
in line:
else if ($diff>3600) { unlink($path); } // If first request was more than 1 hour, new ip file
Apparently some IP files are, for some reason, getting deleted?
I have tried to find the logic error, but I'm not good at that at all. Maybe somebody could help.
The function:
function ht_request_limiter() {
    if (!isset($_SERVER['REMOTE_ADDR'])) { return; } // Maybe it's impossible, however we check it first
    if (empty($_SERVER['REMOTE_ADDR'])) { return; }  // Maybe it's impossible, however we check it first
    $path = '/home/czivbaby/valuemarket.gr/ip-sec/'; // I use a function to validate a path first and return if false...
    $path = $path.$_SERVER['REMOTE_ADDR'].'.txt';    // Real file path (filename = <ip>.txt)
    $now = time();                                   // Current timestamp
    if (!file_exists($path)) { // If first request or new request after 1 hour / 24 hour ban, new file with <timestamp>|<counter>
        if ($handle = fopen($path, 'w+')) {
            if (fwrite($handle, $now.'|0')) { chmod($path, 0700); } // Chmod to prevent access via web
            fclose($handle);
        }
    }
    else if (($content = file_get_contents($path)) !== false) { // Load existing file
        $content = explode('|', $content);    // Create parameter set: [0] -> timestamp, [1] -> counter
        $diff = (int)$now - (int)$content[0]; // Time difference in seconds from first request to now
        if ($content[1] == 'ban') { // If [1] = ban we check if it was less than 24 hours ago and die if so
            if ($diff > 86400) { unlink($path); } // 24 hours in seconds... if more, delete ip file
            else {
                header("HTTP/1.1 503 Service Unavailable");
                exit("Your IP is banned for 24 hours, because of too many requests.");
            }
        }
        else if ($diff > 3600) { unlink($path); } // If first request was more than 1 hour ago, new ip file
        else {
            $current = ((int)$content[1]) + 1; // Counter + 1
            if ($current > 200) { // We check rpm (requests per minute) after 200 requests to get a good ~value
                $rpm = ($current / ($diff / 60));
                if ($rpm > 10) { // If there were more than 10 rpm -> ban (if you make a request every 5 secs. you will be banned after ~17 minutes)
                    if ($handle = fopen($path, 'w+')) {
                        fwrite($handle, $content[0].'|ban');
                        fclose($handle);
                        // Maybe you'd like to log the ip once -> die after next request
                    }
                    return;
                }
            }
            if ($handle = fopen($path, 'w+')) { // else write counter
                fwrite($handle, $content[0].'|'.$current);
                fclose($handle);
            }
        }
    }
}
Your server is processing two (or more) requests at the same time from the same client, and the script does not seem to handle this (completely normal) situation correctly. Web browsers download multiple objects from a server in parallel in order to speed up browsing. It's quite likely that, every now and then, a browser does two requests which then end up executing in parallel so that two copies of that script end up at the same unlink() call at roughly the same time. One succeeds in deleting the file, and the other one gives the error message.
Even if your server has a single CPU, the operating system will be happily providing multitasking by context switching between multiple PHP processes which are executing the same PHP script at the same time for the same client IP address.
The script should probably use file locking (http://php.net/manual/en/function.flock.php) to lock the file while working on it. Or simply suppress the unlink() error (by placing an @ in front of the unlink), but other concurrency problems are likely to come up.
The script should:
Open the file for reading and writing using $f = fopen($filename, 'r+');
Lock the opened file using the file handle. The flock($f, LOCK_EX) call will block and wait if some other process already has a lock.
Read file contents.
Decide what to do (increment counter, refuse to service).
fseek($f, 0, SEEK_SET) to beginning of file, ftruncate($f, 0) to make it empty and rewrite the file contents if necessary or unlink() the file if necessary.
Close the file handle with fclose($f), which also releases the lock on it and lets another process continue with step 3.
The pattern is the same in all programming languages.
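A minimal sketch of those steps, reusing the <timestamp>|<counter> format of the script above (using 'c+' rather than 'r+' so the file is created when missing, without truncating it):

$f = fopen($path, 'c+');            // Open read/write, create if missing, don't truncate.
if ($f === false) { return; }
flock($f, LOCK_EX);                 // Blocks until any concurrent request releases the lock.
$content = stream_get_contents($f); // Read the current "<timestamp>|<counter>" payload.
// ... decide here: increment the counter, ban, or reset ...
ftruncate($f, 0);                   // Empty the file while still holding the lock.
fseek($f, 0, SEEK_SET);
fwrite($f, time().'|0');            // Rewrite the contents.
fclose($f);                         // Closing also releases the lock.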
I have a script that rewrites a file every few hours. This file is inserted into end users' HTML via a PHP include.
How can I check whether my script is, at this exact moment, working on (i.e. rewriting) the file while it is being served to a user for display? Is it even an issue? What happens if they access the file at the same time, what are the odds of that, and will the user just have to wait until the script has finished its work?
Thanks in advance!
More on the subject...
Is this a way forward using file_put_contents and LOCK_EX?
When the script saves its data every now and then:
file_put_contents("text", $content, LOCK_EX);
And when the user opens the page:
if (file_exists("text")) {
function include_file() {
$file = fopen("text", "r");
if (flock($file, LOCK_EX)) {
include_file();
}
else {
echo file_get_contents("text");
}
}
} else {
echo 'no such file';
}
Could anyone advise me on the syntax? Is this a proper way to call include_file() after the condition, and how can I limit the number of such calls?
I guess this solution is also good, except for the same recursive call to include_file(). Would it even work?
function include_file() {
    $time = time();
    $file = filectime("text");
    if ($file + 1 < $time) {
        echo "good to read";
    } else {
        echo "have to wait";
        include_file();
    }
}
To check whether the file is currently being written, you can use the filectime() function to get the time the file was last changed.
You can store the current timestamp in a variable at the top of your script, and whenever you need to access the file, compare that timestamp with the filectime() of the file. If the file change time is newer, you have hit the scenario where you have to wait for the file to be written, and you can log that to a database or another file.
To prevent this scenario from happening, you can change the script which writes the file so that it first creates a temporary file, and once it's done you replace (move or rename) the temporary file with the original file. This action takes far less time than writing the file and makes the scenario a very rare possibility.
Even if the read and replace operations occur simultaneously, the time the reading script has to wait will be very short.
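A sketch of that write-then-rename approach (file names are illustrative; keep the temporary file in the same directory so the rename stays on one filesystem):

// Write the new content to a temp file, then swap it in with rename().
// On the same filesystem the swap is effectively one step, so readers see
// either the old file or the new one, never a half-written file.
file_put_contents("text.tmp", $content, LOCK_EX);
rename("text.tmp", "text");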
Depending on the size of the file, this might be a concurrency issue. But you can solve it quite easily: before starting to write the file, create a kind of "lock file", i.e. if your file is named "incfile.php", create an "incfile.php.lock". Once you're done with writing, remove this file.
On the include side, you can check for the existence of "incfile.php.lock" and wait until it has disappeared; this needs some looping and sleeping in the unlikely case of a concurrent access. A sketch of both sides follows.
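A minimal sketch of the lock-file idea (file names follow the example above; the retry limit is an assumption):

// Writer side: create the lock file, write the real file, remove the lock.
touch('incfile.php.lock');
file_put_contents('incfile.php', $content);
unlink('incfile.php.lock');

// Reader side: wait briefly for the lock file to disappear before including.
$tries = 0;
while (file_exists('incfile.php.lock') && $tries++ < 50) {
    usleep(100000); // 100 ms per try, so up to ~5 seconds in total.
}
include 'incfile.php';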
Basically, you should consider another solution: write the data that is rendered into that file to a database (locks etc. are available) and render it in a module which then gets included in your page. Solutions like yours are hard to maintain in the long run...
This question is old, but I add this answer because the other answers have no code.
function write_to_file(string $fp, string $string) : bool {
    $timestamp_before_fwrite = time();
    $stream = fopen($fp, "w");
    fwrite($stream, $string);
    fclose($stream);
    clearstatcache(true, $fp); // filemtime() results are cached per file.
    $file_last_changed = filemtime($fp);
    if ($file_last_changed < $timestamp_before_fwrite) {
        // File not changed code
        return false;
    }
    return true;
}
This is the function I use to write to a file. It first gets the current timestamp before making changes to the file, and then compares that timestamp to the last time the file was changed.
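Usage is then a one-liner (a sketch; the path and message are illustrative):

if (!write_to_file('/tmp/out.txt', 'hello')) {
    error_log('write_to_file: the file does not appear to have been updated');
}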