I need to stream a log file that lives on a remote FTP server.
I am not sure how to stream this; possibly with Ajax?
There are a few things on Google, but I cannot seem to find anything that can access a remote FTP server and stream the file.
Maybe with Ajax polling on an interval, then scrolling to the bottom of the page.
Note that the log file is being updated constantly, and people will also be sending commands to the server, which updates the log file further. Will refreshing and downloading the log each time be slow? Some log files can be very large.
Stop using the filesystem and implement a publish-subscribe pattern. For a reference implementation, see Loggly or Papertrail.
I would think you would need some sort of intermediate script to keep track of the last lines read from the logfile and then respond to the AJAX call with any updates to the file since that point.
My pseudo-code solution would look like this (a rough PHP sketch follows the steps):
Read the local cache file for the last line number that was processed
Count the number of lines in the file (using Linux wc -l or similar)
Get the last X lines from the file, where X is the difference (Linux tail -n X or similar)
Update the local cache file with the last line number read.
Return the content to the caller.
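A minimal PHP sketch of those steps; the log path, cache path, and the shell-outs to wc/tail are placeholders rather than anything given in the question:
<?php
// A minimal sketch of the steps above.
$logFile   = '/var/log/app.log';
$cacheFile = '/tmp/lastline.cache';

$lastLine   = (int) @file_get_contents($cacheFile);                              // 1. last processed line
$totalLines = (int) trim(shell_exec('wc -l < ' . escapeshellarg($logFile)));     // 2. current line count

$newLines = '';
if ($totalLines > $lastLine) {
    $diff     = $totalLines - $lastLine;
    $newLines = shell_exec('tail -n ' . $diff . ' ' . escapeshellarg($logFile)); // 3. only the new lines
    file_put_contents($cacheFile, (string) $totalLines);                         // 4. remember where we stopped
}

header('Content-Type: text/plain');
echo $newLines;                                                                   // 5. hand back to the AJAX caller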
So, as described in the question itself, I want to replace the very file from which a zip archive is opened, while that archive is overwriting files with new versions.
If my question is still not clear, what I want to do is this: I want to get a zip file from a server, unzip it using the ZipArchive class, and then overwrite every file in the zip at the destination location. The problem is that the PHP file by which this happens will itself be overwritten.
So will PHP generate an error, or will the process go the way we want?
On Linux, files are not usually locked (see https://unix.stackexchange.com/questions/147392/what-is-advisory-locking-on-files-that-unix-systems-typically-employs), so you can do whatever you want with that file. PHP works with the file in memory, so you can overwrite it during its execution.
But if you run the script multiple times while the first run is still in progress, it might load an incomplete version and then throw some error, so it might be wise to make sure that won't happen (using locks) or to try some more atomic approach.
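A rough sketch of that lock idea; the lock file path and update.zip are placeholders, not anything from the question:
<?php
// Guard the self-overwriting extraction with an advisory lock so two runs
// cannot overlap. On Linux the running script is already in memory, so
// ZipArchive::extractTo() overwriting this very file is not a problem.
$lock = fopen('/tmp/update.lock', 'c');
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    exit('Another update is already running.');
}

$zip = new ZipArchive();
if ($zip->open('update.zip') === true) {
    $zip->extractTo(__DIR__);   // overwrites existing files, including this one
    $zip->close();
}

flock($lock, LOCK_UN);
fclose($lock);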
Windows locks files, so I assume you won't be able to extract files the same way there.
I have the problem described below:
There will be multiple servers like a.com, b.com, b1.com and so on.
The user will always log on to a.com and request a file which could be present on any of the b.com, b1.com, etc. servers. All files are present on servers whose names start with "b".
The application will connect to b1.com and find the file.
Now I want a way to download the file from b1.com without the user knowing that it is actually coming from b1.com. Is there a way for the file to be downloaded directly from the "b" server to the user's desktop/PC? Or is there any way that, while the file is being downloaded from "b" to "a", I can start the transfer from "a" to the user?
I don't want to first download the complete file from "b" to "a" and then from "a" to the user, as that doubles the transfer time, which will hurt performance when the file is large.
Any solutions in mind for this? I am using PHP on the server side. Any other solution is also welcome.
Use fsockopen or fopen to connect to your "b" servers, send headers to your client indicating the file type to trigger a download, then read the file 8 KB at a time with fread in a loop, flushing each chunk to the user (ob_flush()/flush()), and iterate until the file has finished downloading.
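A minimal sketch of that relay approach, assuming the "b" server exposes the file over HTTP (and allow_url_fopen is enabled); the URL and file name here are made up for illustration:
<?php
$remoteUrl = 'http://b1.example.com/files/report.pdf';

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="report.pdf"');

$remote = fopen($remoteUrl, 'rb');
if ($remote === false) {
    http_response_code(502);
    exit('Could not reach the file server.');
}

// Relay 8 KB at a time so server "a" never has to hold the whole file.
while (!feof($remote)) {
    echo fread($remote, 8192);
    flush();   // push each chunk straight to the client
}
fclose($remote);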
Or, to avoid all that, you can look into using Apache's mod_proxy module.
I have a PHP script that moves files out of a specific folder on the server (an IBM AS/400). The problem I am running into is that sometimes the script runs while a file is still in the process of being moved into the folder.
Poor logic on my part assumed that if a file was "in use", PHP wouldn't attempt to move it, but it does, which results in a corrupted file.
I thought I could do this:
$oldModifyTime = filemtime('thefile.pdf');
sleep(2);
if ($oldModifyTime === filemtime('thefile.pdf')) {
    rename('thefile.pdf', '/folder2/thefile.pdf');
}
But the filemtime function comes up with the same value even while the file is being written. I have also tried fileatime, with the same results.
If I do Right Click -> Properties in Windows, the Modified Date and Access Date are constantly changing as the file is being written.
Any ideas how to determine if a file is finished transferring before doing anything to it?
From the PHP manual entry for filemtime():
Note: The results of this function are cached. See clearstatcache() for more details.
I would also suggest that 2 seconds is a bit short to detect whether the file transfer is complete, given network congestion, buffering, etc.
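A rough sketch of the fix, reusing 'thefile.pdf' from the question; the 10-second wait is an arbitrary choice to tune to your transfer speeds:
<?php
$oldModifyTime = filemtime('thefile.pdf');
sleep(10);

clearstatcache();   // throw away PHP's cached stat() results before re-checking
if ($oldModifyTime === filemtime('thefile.pdf')) {
    // Unchanged during the wait, so the transfer has (probably) finished.
    rename('thefile.pdf', '/folder2/thefile.pdf');
}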
Transfer it under a temporary name or to a different folder, then rename/copy it to the correct folder after the transfer is complete.
I have a PHP script that opens a local directory in order to copy and process some files. But these files may be incomplete, because they are being uploaded by a slow FTP process, and I do not want to copy or process any files which have not been completely uploaded yet.
Is it possible in PHP to find out if a file is still being copied (that is, read from) or written to?
I need my script to process only those files that have been completely uploaded.
The FTP process now uploads files in parallel, and it takes more than one second for each file's size to change, so this trick is not working for me any more. Can you suggest any other method?
Do you have script control over the FTP process? If so, have the script that's doing the uploading upload a [FILENAME].complete file (blank text file) after the primary upload completes, so the processing script knows that the file is complete if there's a matching *.complete file there also.
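A sketch of the marker-file check on the processing side; the /incoming directory and the processFile() helper are hypothetical names, and the uploader is assumed to drop "name.ext.complete" alongside each finished upload:
<?php
foreach (glob('/incoming/*') as $path) {
    if (substr($path, -9) === '.complete') {
        continue;                        // skip the marker files themselves
    }
    if (!file_exists($path . '.complete')) {
        continue;                        // no matching marker yet: still uploading
    }
    processFile($path);                  // your existing processing routine
    unlink($path . '.complete');         // tidy up the marker afterwards
}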
+1 to @MidnightLightning for his excellent suggestion. If you don't have control over the process, you have a couple of options:
If you know what the final size of the file should be, then use filesize() to compare the current size to the known size. Keep checking until they match.
If you don't know what the final size should be, it gets a little trickier. You could use filesize() to check the size of the file, wait a second or two, and check it again. If the size hasn't changed, the upload should be complete. The problem with the second method is that if your file upload stalls for whatever reason, it could give you a false positive. So the wait time is key.
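A sketch of that size-polling idea; the function name, $path, and the 5-second wait are assumptions, and as noted above a stalled upload can still fool this check:
<?php
function uploadLooksComplete(string $path, int $waitSeconds = 5): bool
{
    clearstatcache(true, $path);
    $before = filesize($path);
    sleep($waitSeconds);
    clearstatcache(true, $path);        // drop the cached stat before re-reading
    return $before === filesize($path); // unchanged size: probably finished
}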
You don't specify what kind of OS you're on, but if it's a Unix-type box, you should have fuser and/or lsof available. fuser will report on who's using a particular file, and lsof will list all open files (including sockets, fifos, .so's, etc...). Either of those could most likely be used to monitor your directory.
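For example, a sketch using fuser on a Unix host; the path and processFile() are placeholders, and the PHP user must be allowed to run fuser:
<?php
function fileInUse(string $path): bool
{
    exec('fuser ' . escapeshellarg($path) . ' 2>/dev/null', $output, $exitCode);
    return $exitCode === 0;   // fuser exits 0 when some process has the file open
}

if (!fileInUse('/incoming/upload.dat')) {
    processFile('/incoming/upload.dat');   // nothing has it open, safe to process
}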
On the Windows end, there are a few free tools from Sysinternals that do the same thing; handle might do the trick.
I've got a UNIX question... I have a PHP script that I run in the terminal that parses a bunch of images and puts them in the database. This script has already been written.
Due to restrictions on my hosting (Bluehost), I cannot run a process for more than about five minutes. I'm wondering how to set it up so that the script quits every 5 minutes, maybe sleeps for a bit, and then reopens itself where it left off.
I was planning on using a cronjob to rerun the command but how do I resume where I left off?
Thanks,
Matt Mueller
Some ways I can see to do it (that spring immediately to mind - there may well be others):
As you add the images to your database, move them to a different location on the disk. Then your script won't see them the next time it runs.
As you add the images to your database, rename them from xx.jpg to xx.jpg-processed (for example). Then have your script ignore the processed ones (a rough sketch of this appears after these suggestions).
Maintain a list of files that you've already processed and don't reprocess them.
Store the file name of the actual source file in the database along with the image, and query the database to prevent re-import.
You can have a table in the database or an external file that tells you where you left off.
Change your hosting provider....
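A rough sketch of the rename-as-processed option from the list above; the image directory, the ~4.5-minute budget, and the importImage() helper are assumptions:
<?php
$start = time();
foreach (glob('/home/user/images/*.jpg') as $path) {
    if (time() - $start > 270) {
        break;                            // stay under the ~5 minute host limit; cron re-runs later
    }
    importImage($path);                   // your existing parse-and-insert routine
    rename($path, $path . '-processed');  // the next run's glob('*.jpg') no longer matches it
}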
Write a script to dump all the image file names into a text file.
Have the script read the text file and remove each file name from it as soon as that image has been processed.
Any time the process is killed, it should be safe to restart (a rough sketch of this approach follows).
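A rough sketch of that checklist idea; list.txt, the time budget, and importImage() are assumptions based on the question:
<?php
$listFile = '/home/user/list.txt';
$pending  = file($listFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

$start = time();
while ($pending && (time() - $start) < 270) {   // keep under the ~5 minute limit
    $path = array_shift($pending);
    importImage($path);                          // your existing import routine
    // Rewrite the list minus the finished file, so a kill/restart loses nothing.
    file_put_contents($listFile, implode("\n", $pending));
}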
Try CryoPID.
CryoPID allows you to capture the state of a running process in Linux and save it to a file. This file can then be used to resume the process later on, either after a reboot or even on another machine.