Can I create temporary files with php that expire and get deleted after a predefined time set during file creation, and not when the script finishes execution? My host doesn't allow cron jobs, so php only, if possible.
Without access to cron you only have one option -- manage file cleanup on your own.
This is, in fact, what the PHP SESSION handler does -- it creates a file of session data. Then, when a session starts, there is a small chance that PHP will go through and remove expired files. (IIRC, the default is a 1/100 chance that it will, controlled by session.gc_probability and session.gc_divisor.)
Your best bet is to create a directory to store your temp files in and then use a similar process.
This might give you some ideas: Session Configuration
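For reference, these are the php.ini settings that drive that session cleanup lottery (stock defaults shown); you can borrow the same probabilistic idea for your own temp directory:

session.gc_probability = 1
session.gc_divisor = 100
session.gc_maxlifetime = 1440 ; seconds of idle time before session data may be deleted

A cleanup pass over your own temp directory could then look like this: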
$days = 1;                             // delete files older than this many days
$deletetime = time() - $days * 86400;  // only do this calculation once, outside the loop
$dir = '/path/to/dir';
if ($handle = opendir($dir)) {
    while (false !== ($file = readdir($handle))) {
        // readdir() returns bare filenames, so build the full path for both checks
        if ((filetype("$dir/$file") == 'file') && (filemtime("$dir/$file") < $deletetime)) {
            unlink("$dir/$file");
        }
    }
    closedir($handle);
}
Reference: webhostingtalk
Also, here is a similar question asked before: Auto delete files after x-time
Is this what you are looking for?
<?php
$fh = fopen('test.html', 'a');         // open (or create) the file for appending
fwrite($fh, '<h1>Hello world!</h1>');  // write some content
fclose($fh);
unlink('test.html');                   // delete the file again
?>
http://www.php.net/manual/en/function.unlink.php
Related
I am having an issue with automatically deleting files from a specific folder on my server.
I need to run an automatic delete every 31 mins on a folder which stores incoming documents. These files will always be in *.pdf format.
I have found a similar issue on this site.
How to delete files from directory based on creation date in php?
However, my issue is with *.pdf files and I have never used PHP before. Ideally I was looking for a .bat file, but if that's not possible it's no problem.
<?php
if ($handle = opendir('/path/to/files')) {
    while (false !== ($file = readdir($handle))) {
        // build the full path; readdir() only returns the bare filename
        $path = '/path/to/files/' . $file;
        $filelastmodified = filemtime($path);
        // delete *.pdf files last modified more than 24 hours ago
        if ((time() - $filelastmodified) > 24*3600 && strtolower(substr($file, -4)) == ".pdf") {
            unlink($path);
        }
    }
    closedir($handle);
}
?>
This added condition checks whether the filename ends with ".pdf". You could run this as a cron job; a sample crontab entry follows.
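For instance (the script path is a placeholder; note that cron cannot express a true 31-minute cadence, so */31 fires at minute 0 and minute 31 of each hour):

*/31 * * * * php /path/to/cleanup.php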
You might as well use shell commands instead. find with -name, -exec and -mtime does the same and saves the need for a PHP parser.
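Something along these lines should work (untested; -mmin counts minutes, so +1440 matches files last modified more than 24 hours ago, mirroring the PHP version above):

find /path/to/files -name '*.pdf' -mmin +1440 -exec rm {} \;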
My data.txt is generated each time someone visits my site. I would like to delete this file at a specific time, let's say 1:00 AM daily.
I found this script but I'm struggling to update the code :/
Any advice?
<?php
$path = dirname(__FILE__).'/files';
if ($handle = opendir($path)) {
    while (false !== ($file = readdir($handle))) {
        // delete .txt files older than one day (86400 = 60*60*24 seconds)
        if ((time() - filectime($path.'/'.$file)) > 86400) {
            if (preg_match('/\.txt$/i', $file)) {
                unlink($path.'/'.$file);
            }
        }
    }
    closedir($handle);
}
?>
The script you posted deletes .txt files in the given folder if they are older than a day, but the problem is that this test only happens once: when you run the script.
What you need to do is run this script periodically. If you are running this on Linux you should probably add a cron job that executes this script periodically, say once an hour or once daily.
If you're running this on Windows, there is a Task Scheduler that you could use to accomplish the same thing.
Use a task scheduler, such as cron, for this purpose. You can remove your file simply via a shell command:
rm /path/to/data.txt
So there's no need to write a PHP script for that.
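For the 1:00 AM daily schedule from the question, the crontab entry would be:

0 1 * * * rm /path/to/data.txt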
I have found this script
Quick and easy flood protection?
and I have turned it into a function.
Works great for the most part. From time to time I see an error:
unlink() [function.unlink]: No such file or directory
in line:
else if ($diff>3600) { unlink($path); } // If first request was more than 1 hour, new ip file
Apparently some IP files are getting deleted for some reason?
I have tried to find the logic error, but I'm not good at that at all. Maybe somebody could help.
The function:
function ht_request_limiter() {
    if (!isset($_SERVER['REMOTE_ADDR'])) { return; } // Maybe it's impossible, however we check it first
    if (empty($_SERVER['REMOTE_ADDR'])) { return; }  // Maybe it's impossible, however we check it first
    $path = '/home/czivbaby/valuemarket.gr/ip-sec/'; // I use a function to validate a path first and return if false...
    $path = $path.$_SERVER['REMOTE_ADDR'].'.txt';    // Real file path (filename = <ip>.txt)
    $now = time();                                   // Current timestamp
    if (!file_exists($path)) { // If first request or new request after 1 hour / 24 hour ban, new file with <timestamp>|<counter>
        if ($handle = fopen($path, 'w+')) {
            if (fwrite($handle, $now.'|0')) { chmod($path, 0700); } // Chmod to prevent access via web
            fclose($handle);
        }
    }
    else if (($content = file_get_contents($path)) !== false) { // Load existing file
        $content = explode('|', $content);    // Create parameter set: [0] -> timestamp, [1] -> counter
        $diff = (int)$now - (int)$content[0]; // Time difference in seconds from first request to now
        if ($content[1] == 'ban') { // If [1] = ban we check if it was less than 24 hours and die if so
            if ($diff > 86400) { unlink($path); } // 24 hours in seconds; if more, delete the ip file
            else {
                header("HTTP/1.1 503 Service Unavailable");
                exit("Your IP is banned for 24 hours, because of too many requests.");
            }
        }
        else if ($diff > 3600) { unlink($path); } // If first request was more than 1 hour, new ip file
        else {
            $current = ((int)$content[1]) + 1; // Counter + 1
            if ($current > 200) { // We check rpm (requests per minute) only after 200 requests to get a good ~value
                $rpm = ($current / ($diff / 60));
                if ($rpm > 10) { // If there were more than 10 rpm -> ban (with a request every 5 secs. you will be banned after ~17 minutes)
                    if ($handle = fopen($path, 'w+')) {
                        fwrite($handle, $content[0].'|ban');
                        fclose($handle);
                        // Maybe you like to log the ip once -> die after next request
                    }
                    return;
                }
            }
            if ($handle = fopen($path, 'w+')) { // else write counter
                fwrite($handle, $content[0].'|'.$current);
                fclose($handle);
            }
        }
    }
}
Your server is processing two (or more) requests at the same time from the same client, and the script does not seem to handle this (completely normal) situation correctly. Web browsers download multiple objects from a server in parallel in order to speed up browsing. It's quite likely that, every now and then, a browser does two requests which then end up executing in parallel so that two copies of that script end up at the same unlink() call at roughly the same time. One succeeds in deleting the file, and the other one gives the error message.
Even if your server has a single CPU, the operating system will be happily providing multitasking by context switching between multiple PHP processes which are executing the same PHP script at the same time for the same client IP address.
The script should probably use file locking (http://php.net/manual/en/function.flock.php) to lock the file while working on it. Or simply suppress the unlink() error (by placing an @ in front of the unlink call), but other concurrency problems are likely to come up.
The script should:
Open the file for reading and writing using $f = fopen($filename, 'r+');
Lock the opened file using the file handle. The flock($f, LOCK_EX) call will block and wait if some other process already has a lock.
Read file contents.
Decide what to do (increment counter, refuse to service).
fseek($f, 0, SEEK_SET) to the beginning of the file, ftruncate($f, 0) to make it empty, then rewrite the file contents or unlink() the file, as necessary.
Close the file handle with fclose($f), which also releases the lock and lets a waiting process acquire it and continue with step 3.
The pattern is the same in all programming languages; a PHP sketch follows.
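Here is a minimal, untested sketch of those steps ($newContent stands in for whatever the decision step produces; mode 'c+' is used instead of 'r+' so the file is created if it doesn't exist yet):

$f = fopen($path, 'c+'); // open for reading and writing, creating the file if needed
if ($f === false) { return; }
if (flock($f, LOCK_EX)) { // blocks until this process holds the exclusive lock
    $content = stream_get_contents($f); // read the current <timestamp>|<counter>
    // ... decide what to do: increment the counter, ban, or reset ...
    fseek($f, 0, SEEK_SET);  // back to the beginning of the file
    ftruncate($f, 0);        // make it empty
    fwrite($f, $newContent); // rewrite the contents (or unlink($path) instead)
    flock($f, LOCK_UN);      // release the lock; fclose() releases it as well
}
fclose($f);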
Possible Duplicate:
Automatically delete files from webserver
I'm logging some XML content to a text file using file_put_contents() in case I need it for debugging.
How do I delete this file after a certain amount of time?
I've got a restrictive .htaccess in the log folder, but I'd rather not leave the info (it will have customers' addresses, etc.) up on the web for long.
Well, while agreeing with everyone else that you're using the wrong tool for the job, your question is pretty straightforward. So here you go:
write a PHP script that will run from the command line
use a tool like cron or Windows Scheduled Tasks
have it invoke the script every minute/five minutes/etc.
Your script would be pretty straightforward:
<?php
$dh = opendir(PATH_TO_DIRECTORY);
while (($file = readdir($dh)) !== false) {
    if (!preg_match('/^[.]/', $file)) { // skip dotfiles such as "." and ".."
        $path = PATH_TO_DIRECTORY . DIRECTORY_SEPARATOR . $file;
        if (filemtime($path) < strtotime('-1 hour')) { // check how long it's been around
            unlink($path); // remove it
        }
    }
}
closedir($dh);
You could also use find if you're working in Linux, but I see that @Rawkode posted that while I was writing this, so I'll leave you with his elegant answer for that solution.
You should be using the PHP error_log() function which will respect the php.ini settings.
error_log('Send to PHP error log');
error_log('Sent to my custom log', 3, "/var/tmp/my-errors.log");
You can use a built-in function, filectime, to track the inode change time of your log files (on Unix this is the closest thing to a creation date) and then delete those that are old enough to be deleted.
This code searches through the logs directory and deletes logs that are 2 weeks old.
$logs = opendir('logs');
while (($log = readdir($logs)) !== false) {
    if ($log == '.' || $log == '..')
        continue;
    // 14 * 24 * 60 * 60 seconds = 2 weeks
    if (filectime('logs/'.$log) <= time() - 14 * 24 * 60 * 60) {
        unlink('logs/'.$log);
    }
}
closedir($logs);
You should be handling your logging better, but to answer your question I'd use the *nix find command.
find /path/to/files* -mtime +5 -delete
This will delete all files that haven't been modified in five days. Adjust to your own needs.
There are two ways you can work it out:
if you write the log to only one file, you can empty the file using something like this:
<?php file_put_contents($logpath, "");
if you generate many files, you can write a cleanup function like this:
<?php
$logpath = "/tmp/path/to/log/";
$dirh = opendir($logpath);
while (($file = readdir($dirh)) !== false) { // read from the handle we opened above
    if (in_array($file, array('.', '..'))) continue;
    unlink($logpath . $file);
}
closedir($dirh);
You can set up a scheduled server task (best known as a cron job under *nix systems) to run on a regular basis and delete the leftover log files. The task would execute a PHP script that does the job.
See Cron Job Overview
I have a very simple question: what is the best way to download a file in PHP, but only if a local version has been downloaded more than 5 minutes ago?
In my actual case I would like to get data from a remotely hosted csv file, for which I currently use
$file = file_get_contents($url);
without any local copy or caching. What is the simplest way to convert this into a cached version, where the end result doesn't change ($file stays the same), but it uses a local copy if it has been fetched recently (say, within 5 minutes)?
Use a local cache file, and just check the existence and modification time on the file before you use it. For example, if $cache_file is a local cache filename:
if (file_exists($cache_file) && (filemtime($cache_file) > (time() - 60 * 5))) {
    // Cache file is less than five minutes old.
    // Don't bother refreshing, just use the file as-is.
    $file = file_get_contents($cache_file);
} else {
    // Our cache is out-of-date, so load the data from our remote server,
    // and also save it over our cache for next time.
    $file = file_get_contents($url);
    file_put_contents($cache_file, $file, LOCK_EX);
}
(Untested, but based on code I use at the moment.)
Either way through this code, $file ends up as the data you need, and it'll either use the cache if it's fresh, or grab the data from the remote server and refresh the cache if not.
EDIT: I understand a bit more about file locking since I wrote the above. It might be worth having a read of this answer if you're concerned about the file locking here.
If you're concerned about locking and concurrent access, I'd say the cleanest solution would be to file_put_contents to a temporary file, then rename() it over $cache_file, which should be an atomic operation, i.e. the $cache_file will either be the old contents or the full new contents, never halfway written.
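A minimal sketch of that write-then-rename approach (the temp-file naming scheme is just one choice):

$tmp = $cache_file . '.' . getmypid() . '.tmp'; // unique-ish temp name on the same filesystem
file_put_contents($tmp, $file);
rename($tmp, $cache_file); // atomically replaces the old cache file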
Try phpFastCache; it supports file caching, and you don't need to write your own cache class. It's easy to use on shared hosting and VPS.
Here is example:
<?php
// change "files" to memcached, wincache, xcache, apc, or sqlite if available
$cache = phpFastCache("files");
$content = $cache->get($url);
if ($content == null) {
    $content = file_get_contents($url);
    // 300 seconds = 5 minutes
    $cache->set($url, $content, 300);
}
// use your $content here
echo $content;
Here is a simple version which also passes a Windows User-Agent string to the remote host, so you don't look like a trouble-maker without proper headers.
<?php
function getCacheContent($cachefile, $remotepath, $cachetime = 120) {
    // Generate the cache version if it doesn't exist or it's too old!
    if (!file_exists($cachefile) OR (filemtime($cachefile) < (time() - $cachetime))) {
        $options = array(
            'method' => "GET",
            'header' => "Accept-language: en\r\n" .
                        "User-Agent: Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)\r\n"
        );
        $context = stream_context_create(array('http' => $options));
        $contents = file_get_contents($remotepath, false, $context);
        file_put_contents($cachefile, $contents, LOCK_EX);
        return $contents;
    }
    return file_get_contents($cachefile);
}
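For the five-minute case from the question, a call might look like this (the cache path is a placeholder):

$file = getCacheContent('/tmp/remote-data.csv.cache', $url, 300); // 300 seconds = 5 minutes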
If you are using a database system of any type, you could cache this file there. Create a table for cached information, and give it at minimum the following fields:
An identifier; something you can use to retrieve the file the next time you need it. Probably something like a file name.
A timestamp from the last time you downloaded the file from the URL.
Either a path to the file, where it's stored in your local file system, or use a BLOB type field to just store the contents of the file itself in the database. I would recommend just storing the path, personally. If the file was very large, you definitely wouldn't want to put it in the database.
Now, when you run the script the next time, first check the database for the identifier and pull the timestamp. If the difference between the current time and the stored timestamp is greater than 5 minutes, pull from the URL and update the database. Otherwise, load the file from the database.
If you don't have a database setup, you could do the same thing just using files, wherein one file, or field in a file, would contain the timestamp from when you last downloaded the file.
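Here is a rough, untested sketch of that approach using PDO; the table cached_files(identifier, fetched_at, path) and all names are invented for illustration:

// assumes $pdo is an already-connected PDO instance
function getCachedFile(PDO $pdo, $identifier, $url, $localPath, $maxAge = 300) {
    $stmt = $pdo->prepare('SELECT fetched_at, path FROM cached_files WHERE identifier = ?');
    $stmt->execute(array($identifier));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    if ($row && (time() - (int)$row['fetched_at']) <= $maxAge) {
        return file_get_contents($row['path']); // cache is fresh: use the stored file
    }
    $data = file_get_contents($url); // stale or missing: re-download
    file_put_contents($localPath, $data, LOCK_EX); // store the file on disk, not in the DB
    $sql = $row
        ? 'UPDATE cached_files SET fetched_at = ?, path = ? WHERE identifier = ?'
        : 'INSERT INTO cached_files (fetched_at, path, identifier) VALUES (?, ?, ?)';
    $pdo->prepare($sql)->execute(array(time(), $localPath, $identifier));
    return $data;
}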
First, you might want to look at the design pattern: lazy loading.
The implementation should change to always load the file from the local cache.
If the local cache does not exist, or its file time is older than 5 minutes, fetch the file from the server.
The pseudo code is something like the following:
$time = filemtime($local_cache);
if ($time === false || (time() - $time) > 300) { // 300 seconds = 5 minutes
    fetch_localcache($url); # You have to implement this yourself
}
$file = fopen($local_cache, 'r');
A good practice for this is to derive the cache key from the file contents:
$cacheKey = md5_file('file.php');
You can save a copy of your file on the first hit, then check the timestamp of the last modification of the local file with filemtime() on subsequent hits.
You would wrap it into a cache-like method:
function getFile($url, $cache_file, $maxAge = 300) {
    // based on @Peter M's pseudo code below
    if (file_exists($cache_file)) {
        if ((time() - filemtime($cache_file)) > $maxAge) { // older than 5 minutes?
            $file = file_get_contents($url);               // refresh from the remote server
            file_put_contents($cache_file, $file, LOCK_EX);
        } else {
            $file = file_get_contents($cache_file);        // serve the local copy
        }
    } else {
        $file = file_get_contents($url);                   // first hit: download and cache
        file_put_contents($cache_file, $file, LOCK_EX);
    }
    return $file;
}
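For example (hypothetical cache path):

$file = getFile($url, '/tmp/data.csv.cache');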
I think you want some (pseudo code) logic like:
if ($file exists) {
    if ($file time stamp older than 5 minutes) {
        $file = file_get_contents($url)
    }
} else {
    $file = file_get_contents($url)
}
use $file