How to clear contents of file with batch script - php

I have a self-hosted local website (on W10) with a little chat application. The chat history is saved to a log.html file, and I want to clear it out with a batch script.
I know that on the Ubuntu shell it is as simple as > log.html, but on Windows that doesn't work.
I also found nul > log.html, but it gives "access denied".
I also don't want to use a PowerShell script, as I'd have to change the execution policy and it takes nearly a minute. So, my question is:
Is there a way that i can empty log.html with a batch script that doesn't stay open for longer than 20 seconds?
Or, I don't mind if there is a way to use something PHP-related to clear it daily. I'm using IIS on Windows 10 v1803, if that helps.

I think what you want is:
TYPE NUL > log.html
…or as possible alternatives:
BREAK>log.html
CLS 2>log.html
CD.>log.html
Technically they're not emptying the file; they're writing a new, empty file that overwrites the existing one.

This will delete the file and re-create it, and the script closes instantly, so it's pretty much what you want. Replace "Desktop" with the path to the file, and place this .bat in the same folder as your log.html:
@echo off
cd "Desktop"
del "log.html"
echo. 2>log.html
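Since you mentioned not minding a PHP-based route: a minimal sketch that truncates the log from PHP (the path is an assumption; adjust it to your IIS site layout). You could run it daily via Windows Task Scheduler, which avoids the PowerShell execution-policy issue entirely:
<?php
// Writing an empty string truncates the file to zero bytes.
// NOTE: the path below is a placeholder for your actual site root.
file_put_contents('C:/inetpub/wwwroot/chat/log.html', '');
?>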

Related

php - What will happen if I overwrite the file itself when it is executing (using ZipArchive)

As described in the question title, I want to replace the very file from which a zip archive is opened, while that archive is overwriting files with new versions.
If my question is still not clear, here is what I want to do: fetch a zip file from a server, unzip it using the ZipArchive class, and overwrite every file from the zip at the destination location. The problem is that the PHP file doing all this will itself be overwritten in the process.
So will PHP raise an error, or will the process go through as we want?
On Linux, files are not usually locked (see https://unix.stackexchange.com/questions/147392/what-is-advisory-locking-on-files-that-unix-systems-typically-employs), so you can do whatever you want with that file. PHP works with the file in memory, so you can overwrite it during its execution.
But if you run the script multiple times while the first run is still in progress, it might load an incomplete version and throw errors, so it might be wise to make sure that can't happen (using locks) or to take a more atomic approach.
Windows locks files, so I assume you won't be able to extract files the same way there.
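To guard against a second run starting while the first is still extracting, here is a minimal sketch using PHP's advisory flock (the lock-file, zip, and destination paths are assumptions):
<?php
// Take an exclusive, non-blocking lock before extracting; if another run
// already holds it, bail out instead of working with a half-written file.
$lock = fopen('/tmp/update.lock', 'c');
if (!flock($lock, LOCK_EX | LOCK_NB)) {
    exit("Another update is already in progress.\n");
}
$zip = new ZipArchive();
if ($zip->open('update.zip') === true) {
    $zip->extractTo('/path/to/destination');
    $zip->close();
}
flock($lock, LOCK_UN);
fclose($lock);
?>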

reading a file created from a cron job

I have a small program that runs and generates a new text dump every 30sec-1min via cron.
program > dump.txt
However, I have another PHP web program that accesses the text dump read-only whenever someone visits the webpage. The problem is that if someone accesses the website at the very second the cron job is running, the page may read only half the file, because I believe Linux does not lock the file when > is used.
I was thinking of doing:
echo "###START###" > dump.txt
program >> dump.txt
echo "###END###" >> dump.txt
Then, when the PHP webpage reads the dump into memory, I could do a regex check that both the start and end flags are present, and if not, try again until it reads the file with both flags.
Will this ensure the file's integrity? If not, how can I ensure that when I read dump.txt it is intact?
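For reference, the marker check described above could look something like this minimal sketch (the retry count and sleep interval are arbitrary assumptions):
<?php
// Re-read the dump until both markers are present, i.e. until we caught
// a complete write rather than a half-finished one.
for ($i = 0; $i < 10; $i++) {
    $data = file_get_contents('dump.txt');
    if (preg_match('/^###START###(.*)###END###\s*$/s', $data, $m)) {
        $dump = $m[1]; // the program output between the markers
        break;
    }
    usleep(100000); // wait 100 ms before retrying
}
?>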
Rather than creating the file in the web directory, why not create it somewhere else? Then, after the data has been written, just move it into the webroot, overwriting the previous set. (Note that mv is only atomic when source and destination are on the same filesystem; across filesystems it degrades to a copy followed by a delete.)
Example:
sh create_some_data.sh > /home/cronuser/my_data.html
mv /home/cronuser/my_data.html /var/www/

php: check if a file is open

I'm writing a document management system. One of the requested features is that users be able to have the system copy certain files to a location that both the users and PHP have access to. From there the user will be able to edit the files directly. What needs to happen is that the edited file must be automatically saved to the document management system as a new version. My question is with regards to this last point.
I have a page that lists all versions of a specific file. I was thinking it would be cool if, whenever anyone accesses this page, PHP checks whether there is an associated file that was edited and is now closed, and simply moves it to the right place and gives it version info. Would this be possible? For example, if a user is editing a file using MS Word, would PHP be able to know whether that file is in use? If yes, how?
Another alternative is to just grab all edited files periodically (during the witching hour, for example) and also have a handy 'synchronise' button that users can click to signal that they are done.
Here's some stuff I've tried:
flock: I thought it might return false for files that are in use. Mistaken.
fstat: doesn't return anything useful as far as I can tell.
unlink: I thought I might make a copy of the file and then try to unlink the original (the user-edited one). It turns out unlink works on files that are open.
Any ideas?
Also, it needs to work on Windows and Linux...
Here's some clarification for those who need it: if Andrew were to click the 'edit' button corresponding to a Word file, the file would be copied to some location. Andrew would then edit it using MS Word, save his changes (possibly more than once) and close it. That is all I want Andrew to do. I want my code to see that the file is closed and then do stuff with it.
You can create a file "filename.lock" (or "filename.lck") for each file that is opened.
You delete the "filename.lock" (or "filename.lck") file when the file is unlocked again.
And you can check whether a file is locked by testing whether "filename.lock" (or "filename.lck") exists.
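A minimal sketch of that lock-file convention (the document path is a placeholder):
<?php
// The presence of "<name>.lock" signals that the document is in use.
$file = 'documents/report.doc'; // placeholder path
$lock = $file . '.lock';
if (!file_exists($lock)) {
    touch($lock);   // mark the file as open
    // ... hand the file to the user for editing ...
    unlink($lock);  // mark it as closed again
}
?>
Note this is purely a convention: nothing stops a process that ignores the lock file, and the check-then-touch pair is not atomic, so it only works when everyone plays along.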
If you're running under a Unix OS, you can implement a strategy like this:
Write a bash script like this: lsof | grep /absolute/path/to/file.txt
You can also parameterize the path.
Call that script from PHP:
<?php
// exec() exposes the exit status as well as the output
// (shell_exec only returns the output).
exec("myScriptPath", $output, $status);
echo $status;
?>
Remember that the script's exit status (grep's, as the last command in the pipeline) will be 0 if someone has the file open, and 1 (non-zero) if no one does.

How to efficiently monitor a directory for changes on linux?

I am working with Magento, and there is a function that merges CSS and Javascript into one big file.
Regardless of the pros and cons of that, there is the following problem:
The final file gets cached at multiple levels that include but are not limited to:
Amazon CloudFront
Proxy servers
Clients browser cache
Magento uses an MD5 sum of the concatenated CSS filenames to generate a new filename for the merged CSS file, so that every page with a distinct set of CSS files gets its own properly merged file.
To work around the caching issue, I also included the file modification timestamps in that hash, so that a new hash is generated every time a CSS file is modified.
So we get the full benefit of caching without revalidation, but if something changes, it is visible instantly, because the resource link has changed.
So far so good:
The only problem is that the filenames used to generate the hash are only those that would normally be referenced directly in the HTML head block; they don't include the CSS imports inside those files.
So changes to files that are imported inside CSS files don't result in a new hash.
Now, I really don't want to recursively parse out all the imports and scan them or anything like that.
I thought instead about a directory-based solution. Is there anything to efficiently monitor the "last change inside a directory" at the filesystem level?
We are using ext4.
Or maybe there is another way, perhaps with the find command, that does the whole job based on inode indexes?
Something like that?
I have seen a lot of programs that instantly "see" changes without scanning whole filesystems. I believe there are also "file manipulation watch" daemons of some sort available under Linux.
The problem is that the css directory is pretty huge.
Can anyone point me in the right direction?
I suggest you use a PHP-independent daemon to update the modification time of your main CSS file whenever one of the dependent CSS files is modified. You can use dnotify for that, something like:
dnotify -a -r -b -s /path/to/imported/css/files/ -e touch /path/to/main/css/file;
It will execute touch on the main CSS file each time one of the files in the watched folder is modified (-a -r -b -s = any access / recursive directory lookup / run in background / no output). Or you can run any other action and test for it from PHP.
If you use the command
ls -ltr `find . -type f `
It will give you a long listing of all files with the newest at the bottom.
Try having a look at the inotify packages, which will allow you to be notified each time a modification occurs in a directory:
InotifyTools
php-inotify
I've never used it, but apparently there is inotify support for PHP.
(inotify would be the most efficient way to get notifications under Linux)
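A minimal sketch with the PECL inotify extension (assuming it is installed; the paths are placeholders). It blocks until something in the watched directory is written, then touches the main CSS file so its mtime, and therefore the hash, changes:
<?php
// Watch the CSS directory and bump the main file's mtime on any write.
$fd = inotify_init();
inotify_add_watch($fd, '/path/to/imported/css/files', IN_CLOSE_WRITE | IN_MOVED_TO);
while (true) {
    inotify_read($fd);               // blocks until at least one event occurs
    touch('/path/to/main/css/file'); // invalidates the hash on the next request
}
?>
Note that inotify watches are not recursive: a real daemon would add one watch per subdirectory, which is what inotifywait -r from inotify-tools does for you.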

Best way to replace file on server

I have to write a script in PHP which will dynamically replace some files on the server from time to time. An easy thing, but the problem is that I want to avoid the situation where a user requests a file while it is being replaced. They could get an incomplete file, or even an error.
The best solution I can think of is to block access to my site during the replacement, e.g. with a .htaccess that redirects all requests to a page with information about a short break. But a .htaccess file normally already exists, so the server might read a half-written .htaccess file there, too.
Is there any way to solve this?
Edit: Thank you so much for all the answers, guys. You are brilliant.
@ircmaxell Your idea sounds great to me. I read what the folks at PHP.net wrote, and I'm not sure I understand it all correctly.
So, tell me: if I do all the steps you wrote and add apc.file_update_protection to my php.ini, there will be no way for a user to get an incomplete file at any time? There will always be one correct file? Are you 100% sure?
It is very important to me, because these replacements will happen very often and there is a big chance of a request coming in during the rename.
Here's something that's easy and will work on any local filesystem on Linux:
Upload (or write) the file to a temporary filename.
Move the file over the existing one (using the mv command in FTP, on the command line, etc., or the rename() function in PHP).
When you execute the mv, it basically deletes the old file pointer and writes the new one. Since it's done at the filesystem level, it's an atomic operation, so the client can't get a half-old, half-new file...
APC recommends doing this to prevent these very issues from cropping up...
Also note that you could use rsync to do it as well (since it basically does this behind the scenes)...
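In PHP terms, a minimal sketch of that write-then-rename pattern (paths and content are placeholders):
<?php
// Write the new content to a temp file in the SAME directory, then
// rename() it over the target; on the same filesystem the rename is
// atomic, so readers see either the old file or the new one, never a mix.
$target = '/var/www/data/file.dat';      // placeholder target path
$newContent = '...the new file body...'; // placeholder content
$tmp = tempnam(dirname($target), 'upload_');
file_put_contents($tmp, $newContent);
rename($tmp, $target);
?>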
Doesn't this work already? I never tested for this specifically, but I've done what you're doing and that problem never showed up.
It seems like an easy thing for an operating system to do:
Upload / write to a temporary file.
When writing is done, block access to the original file (make requests for the file wait).
Delete the original file, rename the temporary one, and remove any locks.
I'm fairly sure this is what an OS should do for copying. If you're writing the file contents yourself in PHP, you'll just have to do the same steps yourself...
Try Rails-less Capistrano, or the method it uses:
In a directory you have two things:
A folder containing folders, where each subfolder is a release.
A soft link to the current release folder.
When you upload the new file, do the upload into a new release folder. Check that no one is currently using the current release (this might be a little tricky; assuming you don't have a crazy number of users, you could probably do it with a DB entry), and then rewrite the soft link to point to the newest release.
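The link swap itself can be made atomic the same way as a file rename (a sketch; the release paths are made up):
<?php
// Point a temporary link at the new release, then rename() it over the
// "current" link; rename() atomically replaces the old symlink.
@unlink('/var/www/current.tmp'); // clear any stale temp link
symlink('/var/www/releases/2024-01-15', '/var/www/current.tmp');
rename('/var/www/current.tmp', '/var/www/current');
?>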
Maybe try it like this:
delete the file and save its path
ln -nfs movedfilepath pathtosorrypage.html
upload file to some temporary folder on the server
remove symlink
mv newfile movedfilepath
Option 1: If you have a lot of users and the replacement is done infrequently, you can set up maintenance on the site (block access): have no one log in after a certain time, and finally cut off everyone who is still logged in when you're about to do the replacement.
Option 2: If the file is replaced frequently (in which case you shouldn't do maintenance every day), have it done by code. Keep two copies of the same file (in the same folder if you want). Then, in code, when you're about to replace the file, have it serve the copy while you replace the original. You can do it with a simple if.
For example (the 30-second window is arbitrary):
<?php
// Allow 30 seconds around the replacement time for another script to
// bring the new image into 'myImage.jpg'; serve the old copy meanwhile.
if (abs(time() - $replaceTime) <= 15) {
    echo '<img src="/myFiles/myOldImage.jpg" />';
} else {
    echo '<img src="/myFiles/myImage.jpg" />';
}
?>
No need to update any database or manually move/copy/rename a file.
After replaceTime + 15 seconds have passed:
copy("myImage.jpg", "myOldImage.jpg");
// Now you have the copy ready for the next replacement
