I have a small program that runs via cron every 30 seconds to a minute and generates a new text dump:
program > dump.txt
However, I have another PHP web program that reads the text dump (read-only) whenever someone visits the webpage. The problem is that if someone accesses the website at the very second the cron job is running, the webpage may read only half the file, because I believe Linux does not lock the file when > is used.
I was thinking of doing:
echo "###START###" > dump.txt
program >> dump.txt
echo "###END###" >> dump.txt
Then, when the PHP webpage reads the dump into memory, I could use a regex to check that both the start and end flags are present, and if not, retry until it reads the file with both flags.
Will this ensure the file's integrity? If not, how can I ensure that dump.txt is intact when I read it?
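Roughly what I had in mind on the PHP side (just a sketch; the retry count and sleep time are arbitrary):

<?php
// Sketch: keep re-reading dump.txt until both flags are present.
$data = '';
for ($attempt = 0; $attempt < 10; $attempt++) {
    $data = (string) file_get_contents('dump.txt');
    if (preg_match('/^###START###.*###END###\s*$/s', $data)) {
        break;              // both flags present, assume the dump is complete
    }
    usleep(100000);         // wait 100 ms before retrying
}
?>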
Rather than create the file in the directory, why not create it somewhere else? Then after the data's been written, just move it into the webroot and overwrite the previous set.
Example:
sh create_some_data.sh > /home/cronuser/my_data.html
mv /home/cronuser/my_data.html /var/www/
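This works because mv is an atomic rename as long as the source and destination are on the same filesystem, so readers see either the old file or the new one, never a half-written file. If the dump were produced by a PHP script instead of a shell command, the equivalent pattern would be to write to a temporary file and rename() it into place (paths here are only examples):

<?php
// Sketch: write the new dump next to the real one, then atomically replace it.
$newDump = shell_exec('program');            // whatever produces the dump
if ($newDump !== null) {
    $tmp = '/var/www/dump.txt.tmp';          // temp file on the same filesystem
    file_put_contents($tmp, $newDump);
    rename($tmp, '/var/www/dump.txt');       // readers never see a partial file
}
?>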
I have a self-hosted local website (on Windows 10) with a little chat application. The chat history is saved to a log.html file, and I want to clear it out with a batch script.
I know that on the Ubuntu shell it is as simple as > log.html, but on Windows that doesn't work.
I also found nul > log.html, but it says access denied.
I also don't want to use a PowerShell script, as I would have to change the execution policy and it takes nearly a minute. So, my question is:
Is there a way that I can empty log.html with a batch script that doesn't stay open for longer than 20 seconds?
Or, I don't mind if there is a way to use something PHP-related to clear it daily. I'm using IIS on Windows 10 v1803 if that helps.
I think what you want is:
TYPE NUL > log.html
…or as possible alternatives:
BREAK>log.html
CLS 2>log.html
CD.>log.html
Technically they're not emptying the file; they're writing a new file which overwrites the existing one.
This will delete the file and re-create it, and instantly close, so it's pretty much what you're wanting. Replace "Desktop" with the path to your file, and place this .bat in the same folder as your log.html:
@echo off
rem Change to the folder that contains log.html (replace "Desktop" with the real path)
cd "Desktop"
rem Delete the old log, then recreate it as an empty file
del "log.html"
echo. 2>log.html
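For the PHP-related route mentioned in the question, a scheduled PHP script run daily (for example via Task Scheduler) could simply truncate the file; the path below is only an example:

<?php
// Truncate log.html to zero bytes; adjust the path to wherever IIS serves it from.
file_put_contents('C:\\inetpub\\wwwroot\\chat\\log.html', '');
?>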
I'm writing a document management system. One of the features that has been requested is that users be able to have the system copy certain files to a location that both the users and PHP have access to. From there the user will be able to edit the files directly. What needs to happen is that the edited file must be automatically saved to the document management system as a new version. My question is with regards to this last point.
I have a page that allows the user to view all versions of a specific file. I was thinking it would be cool if, whenever this page is accessed by anyone, PHP checked whether there is an associated file that was edited and is now closed, and simply moved it to the right place and gave it version info. Would this be possible? For example, if a user is editing a file using MS Word, would PHP be able to tell that the file is in use? If so, how?
Another alternative is to just grab all files that were edited periodically (during the witching hour, for example) and also have a handy 'synchronise' button that users can click to signal that they are done.
Here's some stuff I've tried:
flock: I thought it might return false for files that are in use. Mistaken.
fstat: doesn't return anything useful as far as I can tell.
unlink: I thought I might make a copy of the file and then try to unlink the original (the user-edited one). It turns out unlink works on files that are open.
any ideas?
Also, it needs to work on Windows and Linux...
Here's some clarification for those who need it: if Andrew were to click the 'edit' button corresponding to a Word file, then the Word file would be copied to some location. Andrew would then edit it using MS Word, save his changes (possibly more than once) and close it. That is all I want Andrew to do. I want my code to see that the file is closed and then do stuff with it.
You can create a file "filename.lock" (or "filename.lck") for each open file.
You delete "filename.lock" (or "filename.lck") when the file is unlocked.
You can then check whether a file is locked by testing whether "filename.lock" (or "filename.lck") exists.
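A rough sketch of that convention in PHP (the helper names and example file are mine, not part of any library):

<?php
// Sketch of the lock-file convention described above.
function lockFile($path)   { touch($path . '.lock'); }         // mark the file as in use
function unlockFile($path) { @unlink($path . '.lock'); }       // mark the file as free
function isLocked($path)   { return file_exists($path . '.lock'); }

if (!isLocked('report.docx')) {
    // safe to pull the edited copy back into the document management system
}
?>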
If you're running under a Unix OS, you can implement a strategy like this:
Write a bash script along the lines of
lsof | grep /absolute/path/to/file.txt
You can also parameterize that.
Call that script from PHP:
<?php
// exec() exposes the script's exit status; shell_exec() would only return its output.
exec("myScriptPath", $output, $status);
echo $status;
?>
Remember that the script will exit with status 0 if someone has the file open (grep found a match) and 1 otherwise.
I have an interesting situation where I have a Perl watcher script (using Linux::Inotify2) watching for files to be dropped in a certain directory, then handing them off to a PHP script for processing. The watched directory and the files in it are not owned by the user the watcher script is running under, but the entire directory tree the files are dumped in is rwxr-xr-x and the files are world readable.
Here's my dilemma. The PHP script cannot open a file handle on the file passed to it when called from the Perl script using system(), exec() or backticks. However, the PHP script can open a file handle on the same file when the script is run manually from the command line using the same effective user.
Anyone have any ideas why this would be the case?
Your fopen() calls probably rely on relative paths that break when the working directory changes.
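One way to rule that out (a sketch; the drop directory is an assumption) is to build an absolute path instead of relying on whichever working directory the watcher happens to launch PHP from:

<?php
// Use an absolute path so the open does not depend on the current working directory.
$file = $argv[1] ?? '';                       // file name handed over by the Perl watcher
if ($file === '' || $file[0] !== '/') {
    $file = '/data/incoming/' . $file;        // assumed drop directory, adjust as needed
}
$fh = fopen($file, 'rb');
if ($fh === false) {
    fwrite(STDERR, "could not open $file\n");
    exit(1);
}
?>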
A Perl script (which I do not control) appends lines to the end of a text file periodically.
I need my PHP script (which will run as a cron job) to read the lines from that file, process them, and then remove them from the file. But, it seems like the only way to remove a line from a file with PHP is to read the file into a variable, remove the one line, truncate the file, and then rewrite the file.
But what happens if:
1. PHP reads the file.
2. The Perl script appends a new line.
3. The PHP script writes the modified buffer back over the file.
In that case the new line would be lost because it would be overwritten when the PHP script finishes and updates the file.
Is there a way to lock a file using PHP in a way that Perl will respect? It looks like the flock() function is PHP specific.
Do you have any freedom to change the design? Is removing the processed lines from the file an essential part of your processing?
If you have that freedom, how about letting the Perl-produced file grow? Presumably the authors of the Perl script have some kind of housekeeping in mind already. Maintain your own "log" of what you have processed; when your script starts up, it seeks to the point recorded in your "log" and reads on from there. Process a record, update the log.
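A sketch of that offset-tracking idea (file names are examples, and it ignores the corner case of a partially written last line):

<?php
// Sketch: remember how far into the Perl-written file we have processed,
// and on each run continue from that byte offset.
$logFile    = '/var/data/perl_output.txt';
$offsetFile = '/var/data/perl_output.offset';

$offset = is_file($offsetFile) ? (int) file_get_contents($offsetFile) : 0;

$fh = fopen($logFile, 'rb');
fseek($fh, $offset);
while (($line = fgets($fh)) !== false) {
    // ... process $line ...
    file_put_contents($offsetFile, (string) ftell($fh));   // record how far we got
}
fclose($fh);
?>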
If the Perl script, which you cannot control, already implements file locking via flock, you are fine. If it doesn't (and I'm afraid that we have to assume that), you are out of luck.
Another possibility would be, instead of having the Perl script write to a file, to let it write to a named pipe and have your PHP script read directly from the other end and write the results to a real file.
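For illustration, the PHP end of that pipe might look like this (the pipe and output paths are assumptions, and the pipe would have to be created beforehand, e.g. with mkfifo):

<?php
// Sketch: read lines from a named pipe and append them to a regular file.
$pipe = fopen('/var/data/perl_pipe', 'rb');      // blocks until the writer connects
$out  = fopen('/var/data/processed.txt', 'ab');

while (($line = fgets($pipe)) !== false) {
    // ... process $line ...
    fwrite($out, $line);
}

fclose($pipe);
fclose($out);
?>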
Maybe, instead of working on the same file, you could let your PHP script work on a copy? I imagine it could work with three files:
File 1: the file written to by the Perl script
File 2: a copy of file 1
File 3: a processed version of file 2
Then when your PHP script starts, it checks whether file 1 is newer than file 2, and if so makes a new copy, processes it (possibly skipping the number of lines already processed previously) and writes the result to file 3.
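A sketch of that three-file flow (the file names and the line-skipping logic are assumptions):

<?php
// Sketch: copy the Perl-written file when it is newer than our last copy,
// then process only the lines we have not seen before.
$file1 = '/var/data/perl_output.txt';   // written by the Perl script
$file2 = '/var/data/copy.txt';          // our snapshot of file 1
$file3 = '/var/data/processed.txt';     // processed output

if (!is_file($file2) || filemtime($file1) > filemtime($file2)) {
    $alreadyDone = is_file($file2) ? count(file($file2)) : 0;   // lines handled last time
    copy($file1, $file2);                                       // take a fresh snapshot

    $new = array_slice(file($file2), $alreadyDone);             // only the unseen lines
    // ... process $new ...
    file_put_contents($file3, implode('', $new), FILE_APPEND);
}
?>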
I've got a UNIX question... I have a PHP script that I run in the terminal that parses a bunch of images and puts them in the database. This script has already been written.
Due to restrictions on my hosting (Bluehost) I cannot run a process for more than five-ish minutes. I'm wondering how to set it up so that the script quits every 5 minutes, maybe sleeps for a bit, and then reopens itself where it left off.
I was planning on using a cron job to rerun the command, but how do I resume where I left off?
Thanks,
Matt Mueller
Some ways I can see to do it (that spring immediately to mind - there may well be others):
As you add the images to your database, move them to a different location on the disk. Then your script won't see them the next time it runs.
As you add the images to your database, rename them from xx.jpg to xx.jpg-processed (for example). Then have your script ignore the processed ones (sketched after this list).
Maintain a list of files that you've already processed and don't reprocess them.
Store the file name of the actual source file in the database along with the image, and query the database to prevent re-import.
You can have a table in the database or an external file that tells you where you left off.
Change your hosting provider....
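A sketch of the rename idea from the list above (the directory and suffix are only examples):

<?php
// Sketch: process each image, then rename it so the next run skips it.
foreach (glob('/home/user/images/*.jpg') as $image) {
    // ... parse $image and insert it into the database ...
    rename($image, $image . '-processed');     // the next run's glob() won't match it
}
?>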
Write a script to dump all the image file names into a text file.
Then have your processing script read the text file and remove each file name from it as soon as that image has been processed.
Any time the process is killed, it should be safe to restart.
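A sketch of that queue-file approach (the queue path and the 4-minute budget are assumptions):

<?php
// Sketch: work through a queue file of image names, removing each entry
// once it is processed, and stop well before the host's ~5 minute limit.
$queueFile = '/home/user/image_queue.txt';      // one file name per line
$deadline  = time() + 4 * 60;                   // leave headroom under the cap

while (time() < $deadline) {
    $queue = file($queueFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    if (!$queue) {
        break;                                  // nothing left to do
    }
    $image = array_shift($queue);
    // ... parse $image and insert it into the database ...
    file_put_contents($queueFile, implode("\n", $queue) . "\n");   // drop the finished name
}
?>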
Try CryoPID.
CryoPID allows you to capture the state of a running process in Linux and save it to a file. This file can then be used to resume the process later on, either after a reboot or even on another machine.