I want to update a file while other processes may be reading it. PHP's flock() function allows exactly that.
However, as far as I can see, flock() only takes a file handle, which generally comes from fopen(). If I want to do this efficiently, I don't want to keep the file open while writing, because the data is coming over the network and the write operation may span a few seconds (say 2-3 seconds).
So I was hoping I could write the data to a temp file and then move it into place. In that case, readers of the file will only be disturbed while I am renaming it.
Writing the data to the temp file will not require me to use flock(). But how can I move the temp file over the actual file correctly using locking?
I also wonder whether I actually need locking in the first place: wouldn't the move operation be very quick? Would it hurt simultaneous reads? I expect there will be hundreds of reads but just one update, and that update will happen once every hour.
Rename is atomic on POSIX systems, so you don't need flock(). Readers that have already opened the file will be undisturbed. (Justification: an open file handle points to the inode, not to the directory entry; rename changes just the directory entry.)
However, readers must close and reopen the file to get the new content. If readers keep the file open, they will continue to read the old content.
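A minimal sketch of that temp-file-plus-rename approach (the paths and variable names are placeholders, and $newContent stands in for the data fetched over the network):

    <?php
    // Write the new content to a temp file on the SAME filesystem as the target;
    // rename() across filesystems is not atomic (it falls back to copy + delete).
    $target = '/var/www/data/feed.xml';           // hypothetical target path
    $tmp    = tempnam(dirname($target), 'feed');  // temp file next to the target

    file_put_contents($tmp, $newContent);         // the slow network write happens here
    chmod($tmp, 0644);                            // tempnam() creates files as 0600

    // Atomically swap the new file in; readers never see a partial file.
    rename($tmp, $target);

Readers that open the file after the rename() see the new content; any reader that already has it open keeps reading the old inode until it reopens the file.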
As described in the question itself, I want to replace the very file from which a zip archive is being opened, while that archive is overwriting files with new versions.
If my question still isn't clear, here is what I want to do: fetch a zip file from a server, unzip it using the ZipArchive class, and overwrite every file in the zip at the destination location. The problem is that the PHP file doing all of this will itself be overwritten.
So will PHP generate an error, or will the process go the way we want?
On Linux, files are not usually locked (see https://unix.stackexchange.com/questions/147392/what-is-advisory-locking-on-files-that-unix-systems-typically-employs), so you can do whatever you want with that file. PHP works with the file in memory, so you can overwrite it during its execution.
But if you run the script multiple times while the first run is still in progress, it might load an incomplete version and throw an error, so it might be wise to make sure that can't happen (using locks) or to take a more atomic approach.
Windows locks files, so I assume you won't be able to extract files the same way there.
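A minimal sketch of the lock-guard idea mentioned above, using ZipArchive with hypothetical paths (/tmp/update.zip and /tmp/updater.lock):

    <?php
    // Prevent two overlapping runs of the updater with an advisory lock,
    // then extract the archive over the destination (including this script).
    $lock = fopen('/tmp/updater.lock', 'c');
    if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
        exit("Another update is already running.\n");
    }

    $zip = new ZipArchive();
    if ($zip->open('/tmp/update.zip') === true) {
        $zip->extractTo(__DIR__);   // overwrites files here, possibly this script too
        $zip->close();
    }

    flock($lock, LOCK_UN);
    fclose($lock);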
I have a cron script that compresses images. It basically iterates over folders and then compresses the files in each folder. My problem is that some images are getting processed halfway. My theory is that users are uploading an image, and before the upload has finished writing the file, the compressor tries to compress it, thus compressing a half-uploaded image and leaving half an image to be displayed.
Is there a way in PHP to confirm that a file has finished uploading, so that I only do the compression once I know the file has been fully written?
Or alternatively, is there a way to check if a file is being used by another process?
Or alternatively, would it be reliable enough to look at when the file was written to disk and not process it until 10 minutes have gone by?
PHP doesn't trigger your action until the files are fully uploaded, but it is possible for your cron job to start interacting with files before they're fully saved.
When saving something from $_FILES, save it to a version with a . prefix on it to tag it as incomplete. Make sure your cron job skips any such files.
Then, once the save operation is complete, rename the file without the . prefix to make it eligible for processing.
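A minimal sketch of that pattern; the uploads/ directory and the 'image' field name are assumptions, not taken from the question:

    <?php
    // Upload handler: stash the file under a dot-prefixed name first so the
    // cron compressor (which skips dot-files) never sees it half-written.
    $uploadDir  = __DIR__ . '/uploads';
    $finalName  = basename($_FILES['image']['name']);
    $hiddenPath = $uploadDir . '/.' . $finalName;   // e.g. ".photo.jpg" while incomplete
    $finalPath  = $uploadDir . '/' . $finalName;

    if (move_uploaded_file($_FILES['image']['tmp_name'], $hiddenPath)) {
        // The file is now fully on disk; expose it to the cron job atomically.
        rename($hiddenPath, $finalPath);
    }

On the cron side, glob("$uploadDir/*") ignores dot-prefixed files by default, so incomplete uploads are skipped automatically.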
There are two ways to handle this scenario.
Flags
Set a flag on each file before you modify/write it.
Our app handles lots of files: we set a flag before taking a file into processing and remove the flag once it's done. Since the job runs on cron, a flag is the most reliable way to decide which files to process.
Usually you can add an extra column to the table row for each file, or you can keep an array holding all the files that are currently being handled.
filemtime()
As you mentioned, you can check whether the file's mtime is more than 10 minutes older than the current time and only then compress it. But if some other process has the file open at the same moment, it causes the problem again.
So it's better to go with flags; filemtime() is only safe if other processes rarely modify the files.
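For reference, a minimal sketch of the filemtime() approach, with a placeholder path and a 10-minute threshold; compressImage() is a hypothetical routine standing in for your compressor:

    <?php
    // Only compress files whose last modification is at least 10 minutes old,
    // on the assumption that anything newer might still be uploading.
    $threshold = 10 * 60; // seconds

    foreach (glob('/var/www/uploads/*.jpg') as $path) {
        if (time() - filemtime($path) < $threshold) {
            continue; // possibly still being written - skip this run
        }
        compressImage($path); // hypothetical compression routine
    }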
You can use flock() to ensure the file is not in use; see here for an example. Alternatively, you can check whether an image is broken or corrupted; see here.
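A minimal sketch of such a flock() check; it only works if the process writing the file also uses flock(), since these locks are advisory. The path and compressImage() are placeholders:

    <?php
    // Try a non-blocking shared lock; it fails if a cooperating writer
    // currently holds an exclusive lock on the file.
    $path = '/var/www/uploads/photo.jpg';
    $fp = fopen($path, 'r');
    if ($fp !== false && flock($fp, LOCK_SH | LOCK_NB)) {
        compressImage($path);   // hypothetical compression routine
        flock($fp, LOCK_UN);
    }
    if ($fp !== false) {
        fclose($fp);
    }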
I want to be able to read an uploaded file, but I don't want the file to be saved on the server because of security concerns... Is it possible just to directly read a file into a variable?
If not, how does the temp file thing work? How secure is it to save a temp file on the server, and when is it deleted?
You're going to have to save the upload to a file; otherwise any large file will cause an error, because it will overload the available memory pretty easily.
Temp is VERY insecure; generally anybody on the system can read/write/delete your temp file.
The best way to go about this is just do a normal file upload, and in your submission script either read the file and process it or move it to a more permanent location. You can now issue a delete command to the tmp copy.
Since you may not have delete permission and/or the file could be auto-deleted, it's best to issue the command this way (the @ operator suppresses any errors, because you don't really care if the file has already been deleted or not; this is a "just in case" scenario):
@unlink($filename);
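A minimal sketch of that flow; the 'upload' field name and processContents() are placeholders:

    <?php
    // Read the uploaded file straight from PHP's temp location into a variable,
    // process it, then remove the temp copy (which PHP would clean up anyway).
    $tmpPath = $_FILES['upload']['tmp_name'];

    if (is_uploaded_file($tmpPath)) {
        $contents = file_get_contents($tmpPath); // fine for small files only
        processContents($contents);              // hypothetical processing step
        @unlink($tmpPath);                       // best-effort cleanup
    }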
So far as I'm aware it has to be stored somewhere in order to interact with it, but it's deleted as soon as your script finishes executing. See PHP: When does the temporary uploaded files get deleted?
My mail server writes to a file every minute; this is fine and I'm happy for it to do that.
However, on my web server I want to check whether that file is currently being written to, and if it isn't, show the user a download link.
Is there any way I can do this?
For example: run a loop that keeps checking until the file is no longer being written to, then show a download link to the file?
I've read about flock(), but I don't think it will help, as another process/OS is actually creating the file!
Your writing script/app/process should create a lock file (an empty file like filename.lock) before it starts writing to the main file, and remove it when done. It's a regular locking approach, but your script just needs to check whether filename.lock is present. If it is, the file is being written to.
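A minimal sketch of the reader's side of that convention; the file names are assumptions:

    <?php
    // The mail server is assumed to create mail.log.lock before it writes
    // and to delete it afterwards; only offer the download in between.
    $file = '/var/mail/mail.log';
    $lock = $file . '.lock';

    if (file_exists($file) && !file_exists($lock)) {
        echo '<a href="/download/mail.log">Download</a>';
    } else {
        echo 'The file is being updated, please try again shortly.';
    }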
You can only acquire a read or write lock if no one else is currently writing, so you shouldn't have to do this.
Also, by the time the user downloads the file, it could already have changed in the meantime. Are you sure you've got the right mental image of what you want?
Hey, so I'm trying to clean up my code a bit, and I just need to know: how important is the fopen function in PHP? By this I mean... well, I've been told that you always need to fclose() a file when you're done with it. This leads me to believe that if a file stays open too long, it gets corrupted in some way?
I don't know, but the thing is, I've got an if statement that, if its condition is true, opens the file(s) and writes to it (them). Would it be just as efficient to open all the files for writing/reading at the beginning of the script, and then just include the instruction to actually write to them if the conditional is true?
And while we're on the topic: if I want to read a file line by line, I'll simply use the $array = file("filename") shortcut I learned here. Is there a shortcut for writing to a file as well, without having to go through all the fopen stuff? Can I take a file and make it an array, line by line, and by changing that array, change the file? Is there any way to do that?
Thanks!
if a file stays open too long, it gets corrupted in some way?
I think PHP is smart enough to garbage-collect your open files when you are finished using them. I don't think the file will be corrupted if you don't close it, unless you write to it unintentionally.
Would it be just as efficient to open all the files for writing/reading at the beginning of the script
I'm not sure you should worry about efficiency unless you need to. If your code is working and is maintainable, I wouldn't change where you open files.
Is there a shortcut for writing to a file as well, without having to go through all the fopen stuff?
You can use file_put_contents(..) as a shortcut to write to files.
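For example, a one-call counterpart to the file() shortcut from the question; the filename is a placeholder:

    <?php
    // Read all lines into an array, change one, and write the file back,
    // without touching fopen()/fclose() directly.
    $lines = file('notes.txt', FILE_IGNORE_NEW_LINES);
    $lines[0] = 'Updated first line';
    file_put_contents('notes.txt', implode(PHP_EOL, $lines) . PHP_EOL);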
The number of files that a process can have open at a given time is limited by the operating system. If you open many files and never close them eventually you'll run out of your quota and you can't open any more.
On the other hand, if you open the file for writing, until you close the file you have no guarantee that what you have written is safely on the disk.
The simple explanation is: until you fclose() the file, you have no guarantee that what you fwrite() to it is actually there. The operating system can keep that content in a buffer while it waits for access to the hard disk. If you finish your script without closing the file, that data can simply be lost.
Now, this doesn't actually happen in the majority of cases, but if you want to be sure, fclose().
Can I take a file and make it an array, line by line, and by changing that array, change the file?
You could make your own array class (implementing the ArrayAccess interface) that loads every line of the file, then modify its offsetSet and offsetUnset methods to rewrite the file every time you call them.
But I doubt it would perform well to rewrite the whole file whenever you make a change.
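A minimal sketch of such a class (PHP 8 syntax), assuming small files and no concurrent writers; the class name and file name are made up for illustration:

    <?php
    // Illustrative only: every change rewrites the whole file, as noted above.
    class FileLines implements ArrayAccess
    {
        private string $path;
        private array $lines;

        public function __construct(string $path)
        {
            $this->path  = $path;
            $this->lines = file($path, FILE_IGNORE_NEW_LINES) ?: [];
        }

        public function offsetExists(mixed $offset): bool
        {
            return isset($this->lines[$offset]);
        }

        public function offsetGet(mixed $offset): mixed
        {
            return $this->lines[$offset] ?? null;
        }

        public function offsetSet(mixed $offset, mixed $value): void
        {
            if ($offset === null) {
                $this->lines[] = $value;        // supports $f[] = 'appended line';
            } else {
                $this->lines[$offset] = $value;
            }
            $this->flush();
        }

        public function offsetUnset(mixed $offset): void
        {
            unset($this->lines[$offset]);
            $this->flush();
        }

        private function flush(): void
        {
            file_put_contents($this->path, implode(PHP_EOL, $this->lines) . PHP_EOL);
        }
    }

    // Usage: $f = new FileLines('notes.txt'); $f[0] = 'new first line';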
This leads me to believe that if a file stays open too long, it gets corrupted in some way?
No, it doesn't corrupt the file. It just uses up resources (opening or keeping a file handle open does take some time, memory and overhead) and you risk making other scripts that want to open the same file wait. The actual file handle will automatically be closed once your script ends, but it's a good idea to explicitly close it as soon as you're done with it. That goes for everything really: if you don't need it anymore, clean it up.
Would it be just as efficient to open all the files for writing/reading at the beginning of the script, and then just include the instruction to actually write to them if the conditional is true?
No, see above. Opening file handles isn't free, so don't do it unless you need to.
if I want to read a file line by line, I'll simply use the $array = file("filename") shortcut I learned here
That's nice, but be aware that this reads the entire file into memory at once. For small files that hardly matters, but for larger files it means that a) your script will stop while the entire file is being read from the disk and that b) you need to have enough memory available to store the file in. PHP scripts are usually rather memory constrained, since there are typically many instances running in parallel.
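If memory is a concern, a line-by-line loop avoids loading the whole file at once; a minimal sketch with a placeholder filename:

    <?php
    // Stream the file one line at a time instead of slurping it with file().
    $fp = fopen('big.log', 'r');
    if ($fp !== false) {
        while (($line = fgets($fp)) !== false) {
            // process $line here; only one line is held in memory at a time
        }
        fclose($fp);
    }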
Is there a shortcut for writing to a file as well, without having to go through all the fopen stuff?
file_put_contents