Hey so I'm trying to clean up my code a bit, and I just need to know: How important is the fopen function in PHP? By this I mean...well I've been told that you always need to fclose a file when you're done with it. This leads me to believe that if a file stays open too long then it gets corrupt in some way?
I don't know, but the thing is I've got an if statement that opens the file(s) and writes to it (them) if the condition is true. Would it be just as efficient to open all the files for writing/reading at the beginning of the script, and then just include the instruction to actually write to them if the conditional is true?
And while we're on the topic... if I want to read line by line from a file I'll simply use the $array = file("filename") shortcut I learned here. Is there a shortcut for writing to a file as well, without having to go through all the fopen stuff? Can I take a file and make it an array, line by line, and by changing that array, change the file? Is there any way to do that?
Thanks!
if a file stays open too long then it gets corrupt in some way?
I think PHP is smart enough to garbage collect your open file handles when you are finished using them. I don't think the file will be corrupted if you don't close it, unless you write to it unintentionally.
Would it be just as efficient to open all the files for writing/reading at the beginning of the script
I'm not sure you should worry about efficiency unless you need to. If your code is working and is maintainable, I wouldn't change where you open files.
Is there a shortcut for writing to a file as well, without having to go through all the fopen stuff?
You can use file_put_contents() as a shortcut to write to files.
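For example, writing and appending in one call each (the filename is just for illustration):

```php
<?php
// Write (creates or truncates the file) in a single call -- no fopen/fclose needed.
file_put_contents('/tmp/demo.txt', "first line\n");

// Pass FILE_APPEND to add to the file instead of overwriting it.
file_put_contents('/tmp/demo.txt', "second line\n", FILE_APPEND);

// file() is the read-side counterpart: one array element per line.
$lines = file('/tmp/demo.txt', FILE_IGNORE_NEW_LINES);
```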
The number of files that a process can have open at a given time is limited by the operating system. If you open many files and never close them, eventually you'll run out of your quota and won't be able to open any more.
On the other hand, if you open the file for writing, until you close the file you have no guarantee that what you have written is safely on the disk.
The simple explanation is: until you fclose() a file, you have no guarantee that what you fwrite() to it is actually there. The operating system may keep that content in a buffer while waiting for access to the hard disk. If your script ends without closing the file, that data can simply be lost.
Now, this doesn't actually happen in the majority of cases, but if you want to be sure, fclose().
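A minimal sketch of the explicit approach (the path is illustrative): fflush() pushes PHP's write buffer to the operating system, and fclose() releases the handle.

```php
<?php
$fh = fopen('/tmp/important.log', 'w');
fwrite($fh, "critical data\n");

// Push PHP's internal write buffer to the operating system...
fflush($fh);

// ...and release the handle, so nothing is left sitting in a
// buffer when the script ends.
fclose($fh);
```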
Can I take a file and make it an array, line by line, and by changing that array, change the file?
You could make your own array class (implementing the ArrayAccess interface) that loads every line of the file, and modify its offsetSet and offsetUnset methods to rewrite the file every time they are called.
But I doubt rewriting the whole file on every change will perform well.
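A rough sketch of that idea, assuming PHP 8 (the class name and the rewrite-on-every-change behaviour are my own choices, not a standard library feature):

```php
<?php
// A file exposed as an array of lines. Every change rewrites the
// whole file -- which is exactly the performance concern noted above.
class FileLines implements ArrayAccess
{
    private string $path;
    private array $lines;

    public function __construct(string $path)
    {
        $this->path  = $path;
        $this->lines = file_exists($path)
            ? file($path, FILE_IGNORE_NEW_LINES)
            : [];
    }

    public function offsetExists(mixed $offset): bool
    {
        return isset($this->lines[$offset]);
    }

    public function offsetGet(mixed $offset): mixed
    {
        return $this->lines[$offset] ?? null;
    }

    public function offsetSet(mixed $offset, mixed $value): void
    {
        if ($offset === null) {
            $this->lines[] = $value;   // $f[] = 'line' appends
        } else {
            $this->lines[$offset] = $value;
        }
        $this->save();
    }

    public function offsetUnset(mixed $offset): void
    {
        unset($this->lines[$offset]);
        $this->lines = array_values($this->lines);
        $this->save();
    }

    private function save(): void
    {
        file_put_contents($this->path, implode("\n", $this->lines) . "\n");
    }
}
```

Then `$f = new FileLines('/tmp/notes.txt'); $f[0] = 'new first line';` changes the file on disk immediately.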
This leads me to believe that if a file stays open too long then it gets corrupt in some way?
No, it doesn't corrupt the file. It just uses up resources (opening or keeping a file handle open does take some time, memory and overhead) and you risk making other scripts that want to open the same file wait. The actual file handle will automatically be closed once your script ends, but it's a good idea to explicitly close it as soon as you're done with it. That goes for everything really: if you don't need it anymore, clean it up.
Would it be just as efficient to open all the files for writing/reading at the beginning of the script, and then just include the instruction to actually write to them if the conditional is true?
No, see above. Opening file handles isn't free, so don't do it unless you need to.
if I want to read line by line from a file I'll simply use the $array = file("filename") shortcut I learned here
That's nice, but be aware that this reads the entire file into memory at once. For small files that hardly matters, but for larger files it means that a) your script will stop while the entire file is being read from the disk and that b) you need to have enough memory available to store the file in. PHP scripts are usually rather memory constrained, since there are typically many instances running in parallel.
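For larger files, reading one line at a time keeps memory use flat regardless of file size (the filename is illustrative; the sample-data line just makes the sketch self-contained):

```php
<?php
// Sample data just for the demo.
file_put_contents('/tmp/big.log', "alpha\nbeta\ngamma\n");

$fh = fopen('/tmp/big.log', 'r');
$count = 0;

// fgets() returns one line per call, so only that line is in memory.
while (($line = fgets($fh)) !== false) {
    $count++;   // process $line here
}

fclose($fh);
```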
Is there a shortcut for writing to a file as well, without having to go through all the fopen stuff?
file_put_contents
Related
I'm working on an S3 upload script that creates and writes to a log file during the process. It writes using file_put_contents() with the FILE_APPEND flag.
What I'd like to be able to do, if possible, is be able to download a copy of that file during the process to see where it's at, particularly if the process seems to be taking longer than it should be.
I realize it may not be possible... but perhaps there's some clever way to get the current contents of that file by doing something in between the file_put_contents() calls in the script...?
My mailserver writes to a file every minute, this is fine and I'm happy for it to do that.
However on my WebServer, I want to check if that file is currently being written to and if it isn't, show the user a download link.
Is there any way I can do this?
For example: run a loop that will keep looking until the file is no longer being written to then, show a download link to the file?
I've read about flock() but I don't think this will help as another process / os is actually creating the file!
Your writing script/app/process should create a lock file (an empty file such as filename.lock) before it starts writing to the main file, and remove it when done. It's a regular locking approach, and your script just needs to check whether filename.lock is present. If it is, the file is being written to.
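A minimal sketch of that convention (paths are illustrative; both the writer and the checker have to agree on the lock file's name):

```php
<?php
// Writer side: create the lock file, write, then remove the lock.
touch('/tmp/mail.log.lock');
file_put_contents('/tmp/mail.log', "new entry\n", FILE_APPEND);
unlink('/tmp/mail.log.lock');

// Reader side (the web server): only offer the download when no lock exists.
if (!file_exists('/tmp/mail.log.lock')) {
    echo "show download link\n";
}
```

Note this is purely advisory and has a race window: the writer can recreate the lock between your check and the user's download.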
You can only acquire a read or write lock if no-one else is currently writing. You shouldn't have to do this.
Also, when the user downloads the file, it could be that the file has changed in the meantime. Are you sure you've got the right mental image of what you want?
I want to update a file while other processes may be reading it. PHP's flock() function allows exactly that.
However, as I see it, flock() only takes a file handle, which generally comes from fopen. If I want to do this efficiently, I don't want to keep the file open while writing it, because the data is coming over the network and the write operation may span a few seconds (say 2-3 seconds).
So I was hoping I could write the data to a temp file and then move it. In that case readers of the file will only be disturbed when I am renaming it.
Now, writing data to the temp file will not require me to use flock. However, how can I move the temp file to the actual file correctly, using locking?
I also wonder if I would actually need locking in the first place... won't the move operation be very quick? Would it hurt simultaneous file reads? I expect there will be hundreds of reads but just one update, and that update will happen once every hour.
Rename is atomic in POSIX systems, so you don't need flock. Readers that have already opened the file will be undisturbed. (Justification: An open file handle points to the inode, not to the directory entry. Rename changes just the directory entry.)
However, readers must close and reopen the file to get the new content. If readers keep the file open, they will be able to reread the old content.
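A sketch of the write-then-rename pattern (filenames illustrative). The temp file must live on the same filesystem as the target, otherwise rename() degrades to a copy and loses its atomicity:

```php
<?php
$final = '/tmp/data.json';

// Write to a temp file in the SAME directory, so rename() is a
// same-filesystem move and therefore atomic on POSIX.
$tmp = $final . '.tmp.' . getmypid();
file_put_contents($tmp, '{"status": "updated"}');

// Atomically replace the old file. Readers see either the old
// content or the new content, never a half-written file.
rename($tmp, $final);
```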
I am looking for a solution: I need to delete log files, but there is a possibility that they are being accessed at the moment the delete call is made. By being accessed, I mean a process is either reading from or writing to the file. In such cases, I need to skip the file instead of deleting it. My server is Linux and PHP is running on Apache.
What I am looking for is something similar to (in pseudo-code):
<?php
$path = "path_to_log_file";
$log_file = "app.log";
if(!being_accessed($log_file))
{
    unlink($path.$log_file);
}
?>
Now my question is: how can I define being_accessed? I know there might not be a language function to do this directly in PHP. I am thinking about using a combination of checks like last_access_time (maybe?) and flock (but flock is only useful when the accessing application has actually flock-ed the file).
Any suggestions/insights welcome...
In general you will not be able to find that out without administrative rights (i.e. being able to run tools like lsof to see if your file is listed). But if your scripts are running on a Linux/Unix server (which is the case for most hosts) then you do not need to bother, because the filesystem takes care of this.

For example, say you have a 1 GB file and someone is downloading it. It is safe for you to delete the file (with unlink() or any other way) even if that downloader has just started, and it will not interfere with the download. The filesystem knows the file is already open (some process holds a handle), so it only marks it invisible to others: if you list the folder contents you will no longer see the file, but if the file is big enough you can check the available disk space (e.g. with df) and see that it is still occupied. Whoever kept a handle can still use it. Once all processes close their handles, the file is physically removed from the media and the disk space is freed.

So just unlink when needed. If you are bothered by the warning unlink() may raise (which may be the case on Windows), just prepend the call with an @ mark (@unlink()) to suppress any warning it may throw at runtime.
You'd simply change your code this way (if you are doing it repetitively):
<?php
$path = "path_to_log_file";
$log_file = "app.log";
@unlink($path.$log_file);
Notice the @ to avoid getting an error in case the file is not deletable, and the lack of a closing tag (closing tags are a common source of errors and should be avoided).
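On a POSIX system you can observe the behaviour described above directly (path illustrative):

```php
<?php
file_put_contents('/tmp/victim.txt', "still readable\n");

$fh = fopen('/tmp/victim.txt', 'r');   // some process holds a handle

unlink('/tmp/victim.txt');             // the directory entry is gone...

// ...but the data survives until the last handle is closed, because
// the handle points at the inode, not at the file name.
$line = fgets($fh);
fclose($fh);
```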
I have a PHP script that moves files out of a specific folder on the server (an IBM AS400). The problem I am running into is that sometimes the script runs while a file is still in the process of being moved into the folder.
Poor logic on my part assumed that if a file was "in use" PHP wouldn't attempt to move it, but it does, which results in a corrupted file.
I thought I could do this:
$oldModifyTime = filemtime('thefile.pdf');
sleep(2);
if ($oldModifyTime === filemtime('thefile.pdf')) {
    rename('thefile.pdf', '/folder2/thefile.pdf');
}
But the filemtime functions come up with the same value even while the file is being written. I have also tried fileatime with the same results.
If I do Right Click->Properties in Windows the Modified Date and Access Date are constantly changing as the file is being written.
Any ideas how to determine if a file is finished transferring before doing anything to it?
From the PHP manual entry for filemtime():
Note: The results of this function are cached. See clearstatcache() for more details.
I would also suggest that 2 seconds is a bit short to detect whether the file transfer is complete due to network congestion, buffering, etc.
Transfer it as a temporary name or to a different folder, then rename/copy it to the correct folder after the transfer is complete.
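If you do keep the modification-time check, remember to clear PHP's stat cache between the two calls. A sketch under those assumptions (the paths and the setup lines are illustrative; in reality the file arrives over the network):

```php
<?php
// Demo setup -- stands in for the file arriving over the network.
@mkdir('/tmp/folder2');
file_put_contents('/tmp/thefile.pdf', 'data');

$file = '/tmp/thefile.pdf';
$before = filemtime($file);
sleep(2);

// filemtime() results are cached by PHP; without this line the
// second call just repeats the first call's cached value.
clearstatcache(true, $file);
$after = filemtime($file);

if ($before === $after) {
    // Unchanged for 2 seconds -- assume the transfer has finished.
    rename($file, '/tmp/folder2/thefile.pdf');
}
```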