I have two WordPress sites on two different servers. Site B creates a .CSV file with the latest comments. Site A reads that file and gets the information and performs some functions on it. These are two independent processes that are on separate servers.
I create the CSV in 'append' mode so that I can compile the new comments without fear of skipping any while running the function on the other side:
$fp = fopen('new_comments.csv', 'a');
However, once I get the .CSV on Site A, I have no way to write to the .CSV and tell it that I have read the contents.
I suppose I could overwrite the .CSV data in 'w' mode and only run it once right before I run the other function, but is there any other way to make sure that once I read from the .CSV on Site A, I can refresh the .CSV on Site B?
Use the filesystem function filemtime to get the last modification time of the file. You can then check on the second server whether the file has already been read.
Similarly, you could compare the size of the file.
(You could also allow a grace period between the two servers' checks.)
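For example, a minimal sketch of the filemtime approach on Site A (the last_read.txt marker file is my own assumption; it's just one way to persist the last-read time):

<?php
// Sketch: remember when Site A last read the CSV, and only
// process it again if it has been modified since then.
$csv    = 'new_comments.csv';
$marker = 'last_read.txt'; // hypothetical marker holding a Unix timestamp

$lastRead = file_exists($marker) ? (int) file_get_contents($marker) : 0;

if (filemtime($csv) > $lastRead) {
    $rows = array_map('str_getcsv', file($csv));
    // ... process $rows ...
    file_put_contents($marker, (string) time()); // record that we've read it
}
?>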
Related
I have 1000+ .txt files whose file names are usernames. I'm currently reading them in a loop. Here is my code:
for ($i = 0; $i < 1240; $i++) {
    $node = $users_array[$i];
    $read_file = "Uploads/" . $node . "/" . $node . ".txt";
    if (file_exists($read_file)) {
        if (filesize($read_file) > 0) {
            $myfile = fopen($read_file, "r");
            $file_str = fread($myfile, filesize($read_file));
            fclose($myfile);
        }
    }
}
When the loop runs, it takes too much time and the server times out.
I don't know why it takes that long, because the files don't hold much data. Reading all the text from a .txt file should be fast, right?
Well, you are doing read operations on an HDD/SSD, which is not as fast as memory, so you should expect a long running time depending on how big the text files are. You can try the following:
If you are running the script from the browser, run it from the command line instead; that way you will not hit a web server timeout, and the script can finish as long as PHP has no execution time limit set (if it does, you may need to increase it).
In your script above, store the result of filesize($read_file) in a variable so that you do not call it twice; that may improve the running time a little.
If you still can't finish the job, consider running it in batches of 100 or 500 files.
Keep an eye on memory usage; maybe that is why the script dies.
If you need the content of the file as a string, you can try file_get_contents and perhaps skip the filesize check altogether.
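Putting those suggestions together, a minimal sketch (the batch bounds are assumptions; $users_array comes from your existing code):

// Process one batch of users per run to stay under the time limit.
$start = 0; // hypothetical batch offset, e.g. passed in as a CLI argument
$end   = min($start + 500, count($users_array));
for ($i = $start; $i < $end; $i++) {
    $node      = $users_array[$i];
    $read_file = "Uploads/" . $node . "/" . $node . ".txt";
    if (is_file($read_file)) {
        // file_get_contents replaces fopen/fread/fclose and the filesize calls
        $file_str = file_get_contents($read_file);
        // ... process $file_str ...
    }
}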
It sounds like your problem is having 1000+ files in a single directory. On a traditional Unix file system, finding a single file by name requires scanning through the directory entries one by one. If you have a list of 1000 files and try to read all of them, each lookup scans about 500 entries on average, so you end up traversing about 500,000 directory entries in total, and it will be slow. It's an O(n^2) algorithm and it only gets worse as you add files.
Newer file systems have options to enable more efficient directory access (for example https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout#Hash_Tree_Directories) but if you can't/don't want to change file system options you'll have to split your files into directories.
For example, you could take the first two letters of the user name and use that as the directory. That's not great because you'll get an uneven distribution; it would be better to use a hash, but then it becomes difficult to find entries by hand.
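A minimal sketch of hash-based sharding (the two-level layout is an assumption):

// Derive a stable subdirectory from the username, e.g. Uploads/0a/f3/alice.txt
function shard_path($node) {
    $hash = md5($node);
    return "Uploads/" . substr($hash, 0, 2) . "/" . substr($hash, 2, 2)
         . "/" . $node . ".txt";
}
// Two hex characters per level gives 256 buckets per level, so 1000+
// files spread out to only a handful per directory.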
Alternatively, you could iterate over the directory entries (with opendir and readdir) and check whether the file names match your users, leaving the problems the huge directory creates to be dealt with later.
Alternatively, look into using a database for your storage layer.
So I have a gaming server that automatically sends the number of online players to a .php file, which updates a .txt file with the current player count every minute.
However, when I try to print the contents of the .txt file on my website, it doesn't read the .txt file at all. The .txt contains just one number.
Example:
players.txt contains one number; that number is 11 (the players currently online).
<h5> Come play with <?php echo file_get_contents("players.txt");?> other players</h5>
The outcome is "Come play with other players".
It reads the file but finds it empty. PHP would raise a warning if the file did not exist or could not be opened because of permissions or some other reason.
Try this:
1. Echo a random string, e.g. <?php echo '12' ?>
2. Create another file, e.g. players1.txt, put a random number in it, and get its contents the same way: <?php echo file_get_contents('players1.txt'); ?>
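If both of those work, here is a minimal diagnostic sketch (the absolute-path check is an assumption about the likely cause, since a relative path depends on the working directory):

<?php
// Rule out working-directory surprises by using an absolute path.
$path = __DIR__ . '/players.txt';
var_dump(is_readable($path));       // false => permissions/ownership problem
var_dump(file_get_contents($path)); // expected: string(2) "11"
?>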
I'm not a great web developer but a .txt file shouldn't be this hard to read, especially when it's 1 number.
That's subjective: you know it's a .txt file and you know it should contain a number, but the issue you're encountering is probably logical rather than subjective. It could just as well be an image file or a .dat file. If a process is writing to the file, the file could be locked so that other processes cannot read it, or the reader may be denied access due to ownership, regardless of the file type or contents.
What does your PHP error log say?
Check the file has the correct group/owner permissions and update them as required.
Have you been clearing your static PHP cache to keep any retrieved values up to date?
Are you writing to the file with correct locking?
Using a database is far, far more efficient and performant than using a filesystem to track this sort of data.
If you're using Unity, be aware that its local-file and online security provisions are absolutely appalling. Not saying this is an issue for you specifically, but as a general proviso, limit accessibility as much as possible. Always clean your data:
// Casting to int guarantees only a number can ever reach the file.
$valueToPlaceIntoFile = (int) $_POST['playerCounter'];
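On the locking point, a minimal sketch of a safe write and read (file_put_contents with the LOCK_EX flag takes an exclusive lock for the duration of the write):

<?php
// Writer: the script your gaming server hits every minute.
$valueToPlaceIntoFile = (int) $_POST['playerCounter'];
file_put_contents('players.txt', $valueToPlaceIntoFile, LOCK_EX);

// Reader: your web page.
echo file_get_contents('players.txt');
?>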
I will be sending new files over from one computer to another computer. How do I make PHP automatically detect new/updated files in the folders and enter the information inside the files into a MySQL database?
1. Get all the files you already know about from the database.
2. Loop through the directory with readdir (http://www.php.net/manual/de/function.readdir.php).
3. If the file is known, do nothing.
4. If the file is not known, add it to the database.
5. At the end, delete from the database all files that are no longer in the directory.
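A minimal sketch of that loop (the table/column names, folder path, and open $mysqli connection are assumptions):

<?php
// Assumed schema: table `files` with columns `name` (unique) and `content`.
$dirPath = '/path/to/watched/folder'; // hypothetical watched folder

$known = array();
$res = $mysqli->query("SELECT name FROM files");
while ($row = $res->fetch_assoc()) {
    $known[$row['name']] = true;
}

$seen = array();
$dir = opendir($dirPath);
while (($entry = readdir($dir)) !== false) {
    if ($entry === '.' || $entry === '..') continue;
    $seen[$entry] = true;
    if (!isset($known[$entry])) {
        $content = file_get_contents($dirPath . '/' . $entry);
        $stmt = $mysqli->prepare("INSERT INTO files (name, content) VALUES (?, ?)");
        $stmt->bind_param('ss', $entry, $content);
        $stmt->execute();
    }
}
closedir($dir);

// Finally, delete database rows for files that have disappeared.
foreach (array_keys($known) as $name) {
    if (!isset($seen[$name])) {
        $stmt = $mysqli->prepare("DELETE FROM files WHERE name = ?");
        $stmt->bind_param('s', $name);
        $stmt->execute();
    }
}
?>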
I would pick a set-up where new files and already-processed files are kept in separate directories.
But if you have no choice, you could check the modification date and match it against the time of your last directory iteration (use filemtime for this).
Don't forget to do some database checking when you process a file, though.
Save the timestamp of the last check, and the next time you check, look at the file info and compare the creation date. Better yet, since you store the file contents in a database, check the time each file was modified using filemtime().
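A minimal sketch of that comparison (the last_check.txt marker file and the folder path are assumptions):

<?php
$marker    = 'last_check.txt'; // hypothetical file holding the last check time
$lastCheck = file_exists($marker) ? (int) file_get_contents($marker) : 0;

foreach (glob('/path/to/watched/folder/*') as $path) {
    if (filemtime($path) > $lastCheck) {
        // New or modified since the last run: insert/update it in MySQL here.
    }
}
file_put_contents($marker, (string) time());
?>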
You can't, not fully automatically. PHP works as a preprocessor, and it even has an execution time limit (set in the configuration). If you need to do this with PHP, make a PHP script that outputs a web page that uses a meta refresh to itself. Inside the script, loop over the files and query the database for each file name and its modification time: if both match, there is nothing to do; if the file name exists but the time differs, it's an update; otherwise it's a new file.
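A minimal sketch of that self-refreshing page (the folder path and refresh interval are assumptions):

<?php
// Each pass scans the folder; the database check makes already-known files cheap.
foreach (glob('/path/to/watched/folder/*') as $path) {
    // ... query the DB for basename($path) and filemtime($path),
    //     then skip, update, or insert accordingly ...
}
// Reload after 60 seconds so the script keeps polling for new files.
echo '<meta http-equiv="refresh" content="60">';
?>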
I am in the middle of making a script to upload files via PHP. What I would like to know is how to display the files already uploaded, and, when clicking on them, open them for download. Should I store the names and paths in a database, or just list the contents of a directory with PHP?
Check out handling file uploads in PHP. A few points:
Ideally you want to allow the user to upload multiple files at the same time. Just create extra file inputs dynamically with JavaScript for this;
When you get an upload, make sure you check that it is an upload with is_uploaded_file;
Use move_uploaded_file() to copy the file to wherever you're going to store it;
Don't rely on what the client tells you the MIME type is;
Sending them back to the client can be done trivially with a PHP script but you need to know the right MIME type;
Try and verify that what you get is what you expect (e.g. if it is a PDF file, use a library to verify that it is), particularly if you use the file for anything or send it to anyone else; and
I would recommend you store the file name of the file from the client's computer and display that to them regardless of what you store it as. The user is just more likely to recognise this than anything else.
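A minimal sketch tying those points together (the uploads directory, the input name, and the DB step are assumptions):

<?php
if (isset($_FILES['upload']) && is_uploaded_file($_FILES['upload']['tmp_name'])) {
    $original = basename($_FILES['upload']['name']); // client-supplied name, for display only
    $stored   = uniqid('', true);                    // opaque name we actually store under
    $target   = '/var/uploads/' . $stored;           // hypothetical storage directory

    if (move_uploaded_file($_FILES['upload']['tmp_name'], $target)) {
        // Record $original and $stored in the database here, and verify the
        // content (e.g. with a PDF library) before trusting or serving it.
    }
}
?>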
Storing paths in the database might be okay, depending on your specific application, but consider storing the filenames in the database and construct your paths to those files in PHP in a single place. That way, if you end up moving all uploaded files later, there is only one place in your code you need to change path generation, and you can avoid doing a large amount of data transformation on your "path" field in the database.
For example, for the file 1234.txt, you might store it in:
/your_web_directory/uploaded_files/1/2/3/1234.txt
You can use a configuration file or, if you prefer, a global somewhere to define the path where your uploads are stored (/your_web_directory/uploaded_files/), and then split characters off the filename (from the database) to figure out which subdirectory the file actually resides in.
As for displaying your files, you can simply load your list of files from the database and use a path-generating function to get a download path for each one based on its filename. If you want to paginate the list of files, try using something like LIMIT 0, 50 in MySQL, passing a new offset with each successive page of upload results.
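A minimal sketch of that path generation (the three-level depth matches the example above; adjust as needed):

<?php
// Builds /your_web_directory/uploaded_files/1/2/3/1234.txt from "1234.txt".
function upload_path($filename) {
    $base = '/your_web_directory/uploaded_files/';
    $dirs = implode('/', str_split(substr($filename, 0, 3))); // "1/2/3"
    return $base . $dirs . '/' . $filename;
}

echo upload_path('1234.txt'); // /your_web_directory/uploaded_files/1/2/3/1234.txt
?>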
Maybe you should just use a plain text file as your index, in this sense:
myfile.txt
My Uploaded File||my_upload_dir/my_uploaded_file.pdf
Other Uploaded File||my_upload_dir/other_uploaded.html
and go through them like this:
<?php
$file  = "myfile.txt";
// FILE_IGNORE_NEW_LINES strips the trailing newline from each line.
$lines = file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$files = array();
for ($i = 0; $i < count($lines); $i++) {
    // Each line looks like: Display Name||path/to/file
    $parts    = explode('||', $lines[$i]); // explode() needs the delimiter
    $name     = $parts[0];
    $filename = $parts[1];
    $files[$i][0] = $name;
    $files[$i][1] = $filename;
}
print_r($files);
?>
hope this helps. :)
What I always did (past tense, I haven't written an upload script for ages) is, I'd link up an upload script (any upload script) to a simple database.
This offers some advantages;
You do not offer your users direct insight into your file system (what if there is a leak in your 'browse' script and you expose your whole hard drive?).
You can store extra information and meta-data in an easy and efficient way
You can actually query for files / meta-data instead of just looping through all the files
You can enable a 'safe-delete', where you delete the row, but keep the file (for example)
You can enable logging way more easily
Showing files in pages is easier
You can 'mask' files. Using a database enables you to store a 'masked' filename, and a 'real' filename.
Obviously, there are some disadvantages as well;
It is a little harder to migrate, since your file system and database have to be in sync
If an operation fails (on one of both ends) you have either a 'corrupt' database or file system
As mentioned before (but we cannot mention it enough, I'm afraid): _Keep your uploading safe!_
The MIME type / extension issue is one that has been going on for ages. I think most of the web is solid nowadays, but there used to be a time when developers would check either the MIME type or the extension, but never both (why bother?). This left websites very, very leaky.
If not written properly, upload scripts are a big hole in your security. A great example of that is a website I 'hacked' a while back (at their request, of course). They supported uploading images to a photo album, but they only checked the file extension. So I uploaded a GIF with a directory scanner inside. This allowed me to scan through their whole system (since it wasn't a dedicated server, I could see a little more than that).
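A minimal sketch of checking both the extension and the actual content (finfo inspects the file bytes rather than trusting what the client claims; the allowed lists are assumptions):

<?php
$allowedExt  = array('gif', 'jpg', 'jpeg', 'png');
$allowedMime = array('image/gif', 'image/jpeg', 'image/png');

$ext   = strtolower(pathinfo($_FILES['upload']['name'], PATHINFO_EXTENSION));
$finfo = finfo_open(FILEINFO_MIME_TYPE);
$mime  = finfo_file($finfo, $_FILES['upload']['tmp_name']);
finfo_close($finfo);

// Reject unless BOTH the extension and the detected MIME type are acceptable.
if (!in_array($ext, $allowedExt) || !in_array($mime, $allowedMime)) {
    die('Invalid upload.');
}
?>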
Hope I helped ;)
I am working on a database program using PHP to keep track of the products we manage at my workplace.
For this project, I need to be able to select an .XLS file which contains new product data.
New data consists of the following fields:
Type CHAR(3),
Line CHAR(2),
Number INT,
Measure INT,
Comments VARCHAR(255),
Variation CHAR(1) i.e.('Y' || 'N')
These files are created in Excel, or Google Docs; I have found a wonderful excel_reader which allows me to extract the values from this file.
This is an action that will happen routinely as new products are created, so I do not want the files to be stored in my server directory (after a while there would be dozens!).
I would rather that the file simply be read, because the import script I'm writing transfers the file's data into an array.
What I really want to happen is to have the user select the file's location (on their local computer) through an HTML form, and then have the script save that file's contents to a MySQL database without ever sending the file to the Server.
I would greatly appreciate any advice you can offer me, I'm not even sure that my plan is a valid way to handle this situation.
It will have to be stored, at least temporarily. Once you have what you need from it (presumably after moving it out of the temp directory with move_uploaded_file, to the folder from which you will read it), remove it using unlink.
As a last point, I would be a little worried about immediate deletion of uploaded files. What if something goes wrong with the script while the file is being parsed and data stored in the database? It would probably be a good idea to have a cron job that periodically deletes the files, to be on the safe side, instead of deleting them immediately.
Since your PHP script is running on the server, the Excel file will have to be saved to the server to be read. Once you've read the file and stored it in the database, just delete it.
I found the answer to my question here. It is a nice tutorial on uploading, moving, reading, and deleting files using PHP.
Thank you to all who contributed.
I was struggling to do the same (for .xlsx, though).
The solution is to use $_FILES['file']['tmp_name'],
where 'file' is the input name.
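A minimal sketch (parse_xls() is a placeholder for whatever excel_reader entry point you actually use):

<?php
// 'file' is the name of the <input type="file" name="file"> field.
if (isset($_FILES['file']) && is_uploaded_file($_FILES['file']['tmp_name'])) {
    // Read straight from PHP's temp file; nothing is kept in your own directories.
    $rows = parse_xls($_FILES['file']['tmp_name']); // placeholder parser call
    // ... insert $rows into MySQL here ...
}
// PHP removes the temp file automatically when the request finishes.
?>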
Regards :)