I have copied a number of PDF files from one directory to another. Unfortunately, some files were not copied, and I want to identify those files.
For this purpose I want to write a PHP program which finds those files. What I have so far is a program which compares file names, but I also want to check the file size. How can I accomplish this?
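A minimal sketch of one way to do this, comparing both presence and size. The directory paths in the example call are placeholders, and the function name is my own:

```php
<?php
// Compare two directories: report files that are missing from the
// destination, or that exist but have a different size (incomplete copy).
function findMissingOrMismatched(string $srcDir, string $dstDir): array
{
    $problems = [];
    foreach (scandir($srcDir) as $name) {
        if ($name === '.' || $name === '..') {
            continue;
        }
        $src = $srcDir . DIRECTORY_SEPARATOR . $name;
        $dst = $dstDir . DIRECTORY_SEPARATOR . $name;
        if (!is_file($src)) {
            continue;
        }
        if (!file_exists($dst)) {
            $problems[$name] = 'missing';
        } elseif (filesize($src) !== filesize($dst)) {
            $problems[$name] = 'size mismatch';
        }
    }
    return $problems;
}

// Example usage (paths are placeholders):
// print_r(findMissingOrMismatched('/path/to/source', '/path/to/destination'));
```

Note that a matching size doesn't guarantee identical contents; for a stricter check you could compare hashes with md5_file() instead.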
As we all know, WordPress stores your uploaded files (for me, it's just JPG files) in a folder named "uploads" under "wp-content". Files are separated into folders based on year and month.
Now I want to copy every file from every folder into a single folder on another server (for some purposes). I want to know: does WordPress rename duplicate files? Is it possible that my files will be overwritten on the new server?
If yes, how can I avoid this? Is there a way to make WordPress rename files before storing them?
You can scan your uploaded file folder, and you have two options:
1.- Set a random name for each file
2.- Set a naming convention that includes the path and file name, for example: my_path_my_filename.jpg
By the way, your files won't be overwritten, since they are on another server.
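For option 2, here is a sketch of flattening the year/month folder structure into collision-free names. The uploads path and function name are my own placeholders:

```php
<?php
// Flatten "uploads/2023/05/photo.jpg" into "2023_05_photo.jpg" so files
// from different month folders can live in one directory without clashing.
function flattenedName(string $uploadsDir, string $filePath): string
{
    $relative = ltrim(substr($filePath, strlen($uploadsDir)), '/\\');
    return str_replace(['/', '\\'], '_', $relative);
}

// flattenedName('/var/www/wp-content/uploads',
//               '/var/www/wp-content/uploads/2023/05/photo.jpg')
// gives '2023_05_photo.jpg'
```

Because the year/month path is part of the new name, two files that were both called photo.jpg but uploaded in different months can no longer overwrite each other.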
This question seems to be about export/import...
Check the exported XML (WordPress eXtended RSS file format); you can find all media URLs in the <wp:attachment_url> tags. Use any XML parser.
Example without parser, at terminal:
grep wp:attachment_url exportedSite.xml
will list all URLs. Each parsed URL can be downloaded by curl or wget.
If you want to restore the XML backup, change (only) the URLs in the wp:attachment_url tags to the new repository URLs.
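A parser-based sketch in PHP for extracting those URLs. The "wp" namespace prefix follows the WXR convention; the function name and sample input are my own:

```php
<?php
// Extract all <wp:attachment_url> values from a WordPress WXR export string.
function attachmentUrls(string $xml): array
{
    $doc = new SimpleXMLElement($xml);
    // The export declares the "wp" namespace; look up its URI rather than
    // hard-coding it, since it differs between WXR versions.
    $namespaces = $doc->getNamespaces(true);
    if (isset($namespaces['wp'])) {
        $doc->registerXPathNamespace('wp', $namespaces['wp']);
    }
    $urls = [];
    foreach ($doc->xpath('//wp:attachment_url') as $node) {
        $urls[] = (string) $node;
    }
    return $urls;
}
```

Each returned URL can then be fetched with curl, wget, or file_get_contents().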
I have a recursive function which generates about 200 .txt files. I am running this application on a local server.
Basically, the front end just has a file upload field where you choose a .csv file, and the application then generates all the .txt files from it. Rather than saving them on the WAMP server, is it possible to save them in a specific location?
For example, suppose I put another field in my front end called 'fileLocation', and the user types in the pathname.
Obviously I'd have to check whether it's a directory etc., but is it possible to save all the files to:
/Volumes/computer/Users/username/Desktop/test/
I'm not sure how to proceed with this.
No, it is not possible to access the user's computer files this way from a localhost server. You could zip all the files and make the browser download them, as described here.
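A sketch of the zip-and-download approach using ZipArchive. The file list, zip path, and function name are placeholders:

```php
<?php
// Bundle the generated .txt files into a single zip archive.
function buildZip(array $files, string $zipPath): bool
{
    $zip = new ZipArchive();
    if ($zip->open($zipPath, ZipArchive::CREATE | ZipArchive::OVERWRITE) !== true) {
        return false;
    }
    foreach ($files as $file) {
        // Store under the bare filename, without the server directory prefix.
        $zip->addFile($file, basename($file));
    }
    return $zip->close();
}

// Then send the archive to the browser so the user can save it anywhere:
// header('Content-Type: application/zip');
// header('Content-Disposition: attachment; filename="generated.zip"');
// readfile($zipPath);
```

The user's browser decides where the download lands, which is the only way a web page can place files on the visitor's machine.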
Is there any way to upload multiple files using a single file? Basically, I want to upload multiple PDF files at once, using one single file that contains the path to each of the PDF files, and store the information in a MySQL database.
PS: I don't want to merge all the files into one huge PDF. I want each PDF file to be uploaded to a server directory at once, and then store the file info in the database, e.g. path, file info, filename, for later use.
In order for a file to be uploaded, the user has to select that file manually. It's a security measure (otherwise websites could examine arbitrary files on your computer without your knowledge, which would be bad).
No, because it would break the JavaScript sandbox model (i.e. it would be a security problem).
For security reasons, this is not possible with JavaScript; it would mean a website could read other files on the user's machine.
Why not just pack them up into a zip file and then unzip on the server side?
Not sure if this is possible or not?
I need the contents of a directory on one server in order to make a photo gallery on another server.
Let's say I have a Folder on server1 named "folderName1" and the contents in the folder are images, like:
2005-14-05-this-that.jpg
2005-14-06-this-that.jpg
2005-14-07-this-that.jpg
2005-14-08-this-that.jpg
2005-14-09-this-that.jpg....
In order to use this gallery script, I need a text file with this information in it. Some folders have thousands of photos in them, and it takes too long to write them all down.
I'm wondering if there is a shortcut to get all the contents of a folder and spit them out into a text file?
Thanks!!
http://php.net/manual/en/function.readdir.php
Place a script on server1 (perhaps in each directory that has photos) called 'imagelist.php'. This script loops over all files using the function linked above and echoes every image on its own line.
Then server2 can request this file using file_get_contents(), loop over every line, and use the filenames to create a gallery.
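A sketch of the server1 side of this. The image-extension filter, function name, and the server2 URL in the comment are assumptions:

```php
<?php
// imagelist.php on server1: list every image in a directory.
function listImages(string $dir): array
{
    $images = [];
    foreach (scandir($dir) as $file) {
        if (preg_match('/\.(jpe?g|png|gif)$/i', $file)) {
            $images[] = $file;
        }
    }
    return $images;
}

// Echo one filename per line for server2 to consume.
echo implode("\n", listImages(__DIR__));

// On server2 (URL is hypothetical):
// $list  = file_get_contents('http://server1.example/gallery/imagelist.php');
// $files = array_filter(array_map('trim', explode("\n", $list)));
```

You could also redirect the script's output into a file once (php imagelist.php > list.txt) if you only need the text file, not a live listing.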
If the server containing the images is under your control, you can have a PHP script list all the image names using the readdir() function. Then you call this script from the other server and read/parse all the file names.
If you don't control the server hosting the files, then this is not really possible unless directory listing is enabled on that images directory.
This may be a simple question or a pretty complex one; I'll let you be the deciders.
Using PHP to open a zip file, extract the files to a directory, and close the zip file is not a complicated class to write.
But let's say that the file is not a zip, yet it can still be read by WinRAR; examples of these files are SFX archives (self-extracting .exe files) and the like.
What do all these files have in common that allows WinRAR to browse their contents?
Another example is antivirus software, which individually scans files within an EXE.
So here is an example:
$handle = fopen("an_unknown_file.abc", "rb");
while (!feof($handle))
{
//What generic code could I use to determine whether the file can be extracted?
}
fclose($handle);
Regards.
Zip's specifications allow the actual "zip" file portion to be embedded ANYWHERE within a file. It doesn't necessarily have to start at position '0' in the file. This is how self-extracting zips work. It's a small .exe stub program which has a larger .zip file appended to the end of it.
Finding a zip is mostly a matter of scanning for a zip file's "magic number" within a file, then doing a few heuristics to determine if it's really a zip file, or just something random that happens to contain a zip's magic number.
A .docx file is really just a .zip that contains various XML files representing a Word file's contents. Just like a .jar is a zip file that contains various different chunks of Java code.
WinRAR's got a bunch of extra code within it to scan through a file and look for any identifiable "this is a compressed archive" type signatures, one of which happens to be the zip signature.
There's nothing too magical about it. It's just a matter of scanning through a file and looking for signatures.
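A sketch of that signature-scanning idea in PHP: look for the zip local-file-header magic number ("PK\x03\x04") anywhere in the file. The function name is my own, and the follow-up heuristics mentioned above are omitted:

```php
<?php
// Return the offset of the first zip local-file-header signature
// ("PK\x03\x04") found in the file, or -1 if none is present.
function findZipSignature(string $path): int
{
    $data = file_get_contents($path);
    $pos  = strpos($data, "PK\x03\x04");
    return $pos === false ? -1 : $pos;
}
```

This is why an SFX .exe with a zip appended still matches: the signature simply appears past the end of the executable stub. A real extractor would then validate the headers at that offset before trusting the match.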
Not sure what exactly your question is, but I think you are confusing something here... A file extension is just a convenient way for humans and computers to relate a file to its type and to the programs that work with it. WinRAR (or any other program) reads what the file contains, and if it can understand it, it works with it. The only important thing is that the file format (the data in the file) is valid and that the program you are using can work with that format.
So, if a file is in any format that WinRAR can work with (.rar, .zip, .gz, etc.), its extension could be .txt or .whatever and WinRAR will still be able to work with it. The extension is just for convenience.