I've just published a client website. Its primary purpose is distributing content from other sources, so it regularly pulls in text, videos, images, and audio from external feeds.
It also has an option for the client to manually add content to be distributed.
All of this is done in PHP and makes a fair bit of use of copy() to fetch files from other servers, move_uploaded_file() to handle the manual uploads, and the SimpleImage image manipulation class to make multiple copies, crop, and so on.
Now to the problem: somewhere in all of this, some temp files are not being deleted. This locks up the server pretty quickly, because once /tmp is full it causes things like MySQL errors and stops pages from loading.
I've spent a lot of time googling, which keeps leading me to one claim: "temp files are deleted when the script finishes executing". That is clearly not the case here.
Is there anything I can do to make sure any temporary files created by the scripts are deleted?
I've spoken to my server guy, who suggested running a cron job that clears /tmp every 24 hours. I don't know whether that's a good solution, but it's certainly not THE solution, as I believe the files should be getting deleted anyway. What could be stopping the files from being deleted?
Regardless of anything else you come up with, the cron idea is still a good one, as you want to make sure that /tmp is getting cleaned up. Have the cron job delete anything older than 24 hours, rather than wiping everything every 24 hours, assuming that frees enough space.
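A minimal sketch of such a cleanup script, run from cron, assuming the PHP process has permission to delete the files in question:

    <?php
    // cleanup_tmp.php - run from cron, e.g. hourly:
    //   0 * * * * php /path/to/cleanup_tmp.php
    // Deletes regular files in /tmp that are older than 24 hours.
    $cutoff = time() - 86400;
    foreach (new DirectoryIterator('/tmp') as $entry) {
        if ($entry->isFile() && $entry->getMTime() < $cutoff) {
            @unlink($entry->getPathname()); // file may vanish between stat and unlink
        }
    }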
As for temp files being deleted when the script is done: as far as I know, that only happens when tmpfile() was used to create the temp file in the first place. Files created in /tmp by other means (and there are many other means) will not just go away because the script has finished.
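To illustrate the difference:

    <?php
    // tmpfile() returns a handle to a file that is removed automatically
    // when the handle is closed or the script ends:
    $fh = tmpfile();
    fwrite($fh, 'scratch data');
    fclose($fh); // the file is gone now

    // tempnam() just creates a named file; deleting it is your job:
    $path = tempnam(sys_get_temp_dir(), 'img_');
    file_put_contents($path, 'scratch data');
    // ... use $path ...
    unlink($path); // without this, the file sits in /tmp forever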
Related
I am currently wondering how I can back up a folder that contains 8000+ images without the script timing out. In all, the folder contains around 1.5 GB of data, which we need to back up ourselves every so often.
I have tried the zip functionality provided in PHP, but it simply times out the request due to the huge number of files that need to be backed up. It does, however, work with smaller numbers of files.
I am currently running this script via an HTTP request; would running it from a cron job avoid the timeout?
Does anyone have any recommendations?
I would not use PHP for that.
If you are on Linux, I would set up a cron job to run a program like rsync periodically.
A nice introduction to rsync.
Edit: if you do want/need to go the PHP way, you could also consider just copying instead of zipping. zip normally doesn't gain much on images, and if you already have a database, you can check the current directory against it and do a differential backup (copy only the new files). That way only your initial backup would take a long time.
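A rough sketch of that differential copy, simplified to check the backup directory itself instead of a database (the paths are made up):

    <?php
    // Copy only files that are not yet present in the backup directory.
    $src = '/var/www/images'; // example source
    $dst = '/backups/images'; // example destination

    foreach (glob($src . '/*') as $file) {
        $target = $dst . '/' . basename($file);
        if (!file_exists($target)) {
            copy($file, $target);
        }
    }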
You could post the code so we can optimize it. Other than that, you should edit your php.ini (configuration file) and remove or increase the timeout (the longest time your script is allowed to run on your server).
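For example (the values here are illustrative, not recommendations):

    <?php
    // Per-script override, at the top of the backup script:
    set_time_limit(0);               // 0 = no execution time limit
    ini_set('memory_limit', '512M'); // zipping thousands of files eats memory too

    // Or globally, in php.ini:
    //   max_execution_time = 300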
I have a site with thousands of posts. Each post has 0-4 pictures. The pictures are in a folder system. Now I need to move all the pictures into another folder system with a different hierarchy logic.
I could easily write PHP code for this, but I doubt a script with that much to do would run to completion. So how can I run a PHP script like this for long enough, or what are the other possibilities?
You can write it in PHP, because on the command line PHP has no execution time limit; it will run endlessly if the script takes that long.
Contact your hoster for shell access.
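A skeleton of what that could look like, run from the shell rather than over HTTP (the directory layout and the hierarchy rule below are invented for the example):

    <?php
    // migrate_images.php - run as: php migrate_images.php
    // On the CLI, max_execution_time defaults to 0 (unlimited).
    $old = '/var/www/uploads/old'; // hypothetical paths
    $new = '/var/www/uploads/new';

    foreach (glob($old . '/*/*.jpg') as $file) {
        // Stand-in hierarchy rule: bucket files by year/month of mtime.
        $target = $new . '/' . date('Y/m', filemtime($file));
        if (!is_dir($target)) {
            mkdir($target, 0755, true); // create the new hierarchy on demand
        }
        rename($file, $target . '/' . basename($file));
    }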
I'm practicing file uploads with PHP, and I was uploading a file numerous times, so I wanted to make sure I wasn't taking up a lot of space on the server. I had a while loop go over every file in PHP's tmp directory, and there were 103,988 entries.
Is this more than normal? I had assumed the tmp directory was for files that are automatically deleted after a certain amount of time. Am I supposed to be managing this folder somehow?
Part of the reason I ask is that I'm writing an app that takes a user's file, changes some things, and serves it back to them. I want the file to be deleted once they leave, but I'm not sure of the best way to do it. Should I put all the files in one folder and use cron to delete anything older than a certain age?
The general rule is that you should clean up after yourself whenever possible.
If you aren't sure that you can remove temporary files every time, it is a good idea to have a cron job doing this for you once in a while.
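For the "delete once they leave" part, one common pattern is to unlink the file right after streaming it, with the cron sweep as a safety net (the file name below is a placeholder):

    <?php
    // Serve the processed file, then remove it immediately.
    $path = '/tmp/processed_example.dat'; // placeholder name

    header('Content-Type: application/octet-stream');
    header('Content-Length: ' . filesize($path));
    readfile($path);
    unlink($path); // gone as soon as it has been sent

    // Anything left behind by aborted requests is caught by the cron sweep.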
Files are being pushed to my server via FTP. I process them with PHP code in a Drupal module. The OS is Ubuntu and the FTP server is vsftpd.
At regular intervals I check for new files, process them with SimpleXML, and move them to a "Done" folder. How do I avoid processing a partially uploaded file?
vsftpd has lock_upload_files set to YES by default. I thought of attempting to move the files first, expecting the move to fail for a file that is still uploading. That doesn't seem to happen, at least on the command line: if I start uploading a large file and move it, it just keeps growing in the new location. I guess the directory entry is not locked.
Should I try fopen() with mode 'a' or 'r+' just to see if it succeeds before attempting to load the file into SimpleXML, or is there a better way to do this? I guess I could just detect SimpleXML failing to load, but... that seems messy.
I don't have control of the sender. They won't do an upload and rename.
Thanks
The lock_upload_files configuration option of vsftpd locks files with the fcntl() function. This places an advisory lock on each upload that is in progress. Other programs are not required to honour advisory locks, and mv, for example, does not: advisory locks are just advice for programs that choose to care about them.
You need another command line tool like lockrun which respects advisory locks.
Note: lockrun must be compiled with the WAIT_AND_LOCK(fd) macro using the lockf() function rather than flock(), in order to work with locks set via fcntl() under Linux. When lockrun is compiled to use lockf(), it will cooperate with the locks set by vsftpd.
With these pieces (lockrun, mv, lock_upload_files) you can build a shell script or similar that moves files one by one, checking whether each file is locked beforehand and holding an advisory lock on it while it is moved. If a file is still locked by vsftpd, lockrun can skip the call to mv, so in-progress uploads are left alone.
If locking doesn't work, I don't know of a solution as clean/simple as you'd like. You could make an educated guess by not processing files whose last modified time (which you can get with filemtime()) is within the past x minutes.
If you want a higher degree of confidence than that, you could check and store each file's size (using filesize()) in a simple database, and every x minutes check new size against its old size. If the size hasn't changed in x minutes, you can assume nothing more is being sent.
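Both heuristics together might look something like this (the path and intervals are arbitrary examples):

    <?php
    $file = '/var/ftp/incoming/feed.xml'; // example path

    // Heuristic 1: skip anything modified in the last 5 minutes.
    if (time() - filemtime($file) < 300) {
        exit; // possibly still uploading; try again on the next run
    }

    // Heuristic 2: require the size to be stable between two checks.
    $size1 = filesize($file);
    sleep(10);
    clearstatcache(true, $file); // PHP caches stat() results
    $size2 = filesize($file);

    if ($size1 === $size2) {
        // Size is stable; reasonably safe to hand to SimpleXML.
    }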
The lsof Linux command lists open files on your system. I suggest executing it with shell_exec() from PHP and parsing the output to see which files are still being held open by your FTP server.
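Something along these lines, assuming lsof is installed (the path is an example):

    <?php
    $file = '/var/ftp/incoming/feed.xml'; // example path
    $out  = shell_exec('lsof ' . escapeshellarg($file));

    // lsof prints nothing (and exits non-zero) when no process has the
    // file open, so empty output means the upload is no longer active.
    if ($out === null || trim($out) === '') {
        // no open handles: go ahead and process
    }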
Picking up on the previous answer, you could copy the file and then compare the sizes of the copy and the original at a fixed interval.
If the sizes match, the upload is done: delete the copy and work with the file.
If the sizes do not match, copy the file again and repeat.
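As a loop, that idea looks roughly like this (paths and interval are placeholders):

    <?php
    $orig = '/var/ftp/incoming/feed.xml'; // example paths
    $copy = '/tmp/feed.xml.check';

    do {
        copy($orig, $copy);
        sleep(30);        // the fixed interval
        clearstatcache(); // don't compare cached sizes
    } while (filesize($orig) !== filesize($copy));

    // Sizes match: the upload has finished. Process $orig.
    unlink($copy);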
Here's another idea: create a super (but hopefully not root) FTP user that can access some or all of the upload directories. Instead of your PHP code reading uploaded files right off the disk, make it connect to the local FTP server and download the files. This way vsftpd handles the locking for you (assuming you leave lock_upload_files enabled). You'll only be able to download a file once vsftpd releases the exclusive/write lock, i.e. once writing is complete.
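Sketched with PHP's built-in FTP functions (the host, credentials, and paths are placeholders):

    <?php
    $conn = ftp_connect('127.0.0.1');     // the local FTP server
    ftp_login($conn, 'reader', 'secret'); // placeholder credentials
    ftp_pasv($conn, true);

    // ftp_get() takes the local name first, then the remote name. Per the
    // reasoning above, it should only succeed once vsftpd releases its lock.
    if (ftp_get($conn, '/tmp/feed.xml', 'incoming/feed.xml', FTP_BINARY)) {
        // fully downloaded: safe to hand to SimpleXML
    }
    ftp_close($conn);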
You mentioned trying flock in your comment (and how it fails). It does indeed seem painful to try to match whatever locking vsftpd is doing, but dio_fcntl might be worth a shot.
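A very rough, untested sketch of the dio_fcntl() route, assuming the PECL dio extension is installed (on Linux, flock(2) locks and fcntl() locks are generally independent, which would explain why plain flock() failed):

    <?php
    $fd = dio_open('/var/ftp/incoming/feed.xml', O_RDONLY); // example path

    // Try to take a non-blocking read lock over the whole file.
    $lock = array('type' => F_RDLCK, 'whence' => 0, 'start' => 0, 'length' => 0);
    if (dio_fcntl($fd, F_SETLK, $lock) !== -1) {
        // Lock acquired, so vsftpd is not writing: safe to process.
        dio_fcntl($fd, F_SETLK, array('type' => F_UNLCK, 'whence' => 0, 'start' => 0, 'length' => 0));
    }
    dio_close($fd);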
I guess you solved your problem years ago, but still:
If you use some pattern to find the files you need, you could ask the uploading party to use a different name and rename the file only once the upload has completed.
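For instance, if uploads arrived as feed.xml.part and were renamed to feed.xml on completion (names invented here), the processing script would only ever match the finished files:

    <?php
    // Only the final names match; in-progress *.part files are invisible.
    foreach (glob('/var/ftp/incoming/*.xml') as $ready) {
        // process $ready
    }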
You should check out the HiddenStores directive in ProFTPD; more info here:
http://www.proftpd.org/docs/directives/linked/config_ref_HiddenStores.html
I am using symlinks generated in PHP. They are generated when someone requests a download, and I want them to expire at the end of each day.
The problem is: what if someone starts downloading a symlink one minute before the end of the day, and I then delete the symlink while they are downloading it?
My question is: to your knowledge, will the individual who started the download right before I delete the symlink still be able to finish it? I am not worried about resumable downloads, but will the deletion make the download stop or break in some way?
Yes, you can do this.
On UNIX-like systems (including Linux), you don't delete files; you delete filenames. If you delete a filename that some process currently has open, the name is gone but the data remains on disk until the file is closed.
This goes even more for symlinks: deleting a symlink doesn't touch the file's data at all, and any process that has the file open refers to it by a file handle, not by a filename.
So as long as you delete the symlink after your script opens the file, the download will complete without any trouble.
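A small demonstration of the principle (the paths are invented):

    <?php
    // The open handle refers to the inode, not the name, so the data
    // stays readable after the symlink is removed.
    symlink('/var/www/files/big.zip', '/var/www/dl/abc123'); // example paths
    $fh = fopen('/var/www/dl/abc123', 'rb');
    unlink('/var/www/dl/abc123'); // the name is gone; the data is not

    while (!feof($fh)) {
        echo fread($fh, 8192); // still streams the whole file
    }
    fclose($fh);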
As long as the webserver keeps the file open until the download is complete (which I would expect it to), this will work fine. On Linux you could even remove the last hard link to a file and the webserver would still be able to read from it, as long as it keeps the file open.
My experience says otherwise.
If there is an interruption in the connection, or if the user chooses to pause the download, the browser will not always be able to resume (because the link is gone).
I've tried IE 9.x, Chrome 23.x, and Firefox 17.x (on an Apache shared server).
What I did was put the symlinks in unique folders and delete the folders based on time. In my case, I remove the folder (and the symlink inside it) after 30 minutes, a reasonable window for my download files.
(I can't take credit for this idea; I saw it somewhere else.)
I will say that very short interruptions or pauses can sometimes be recovered.