Any idea how pic upload sites save their pics? - PHP

It's more of an architecture-related question; sorry if I'm asking on the wrong stack.
Do they put them all in one big pile in a folder?
like
$uid.md5(random).$name saved in one folder:
folder/5231.124wdadace123214.arandomname.jpg
folder/42.15125dawdaowdaw232.arandom2name.png
folder/etc
or
$uid/md5(random).$name:
5231(uid)/12421adwawda2321.arandomname.jpg
42/15125awdawdwadwa232.arandom2name.png
etc/2323awdwadwadaw.logo.png
What I'm thinking is that the second one is better, because on Windows I have a lot of pics in one folder, and yes, it takes time to open it.
Do you guys have any idea how they keep the files?

I wrote a function for my sites that converts user ids into a two-level subdirectory hierarchy, limiting each level to 1,000 subdirectories.
function get_image_dir($gid) {
    // zero-pad the id to six digits, then split it into two 3-digit levels
    $d = str_split(str_pad($gid, 6, "0", STR_PAD_LEFT), 3);
    $wdir = "/images/members/" . $d[0] . "/" . $d[1] . "/" . $gid;
    return $wdir;
}
(I actually add a third level with the raw user id to handle the rollover at 1,000,000.)
/images/members/000/001/1
/images/members/000/002/2
...
/images/members/999/999/999999
/images/members/000/000/1000000
/images/members/000/001/1000001
Within those subdirectories, I further segregate based on albums (organized by members) and various resizings (for different places on the site).
The final structure looks something like:
/images/members/000/001/1/album1/original
/images/members/000/001/1/album1/50x50
/images/members/000/001/1/album1/75x75
/images/members/000/001/1/album1/400x300
The str_split(str_pad()) in the function probably isn't optimal, but for now it works.
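To make the full layout concrete, here is a hedged sketch of a helper combining the pieces above (the function name and arguments are my own, not part of the original code):
function get_image_path($gid, $album, $size, $filename) {
    // reuse get_image_dir() from above, then append the album and size levels
    return get_image_dir($gid) . "/" . $album . "/" . $size . "/" . $filename;
}
// get_image_path(1, "album1", "50x50", "photo.jpg")
// => /images/members/000/001/1/album1/50x50/photo.jpg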

This depends mainly on the filesystem. For a modern filesystem like NTFS or ext3, keeping huge numbers of files in the same directory is not a problem, but some older filesystems could not handle it.
However, it may still be a good idea to partition the files into subdirectories according to some scheme, just to keep them manageable with various tools (which may have their own issues with enormous directories), such as backup. By the way, opening a directory in Windows Explorer counts as such a case.

It depends on how many images you're expecting to have. If we're talking about thousands, keeping the pictures in different folders makes it easier for the computer to scan a directory for a file.

The way I do it is using folders that contain ranges of id numbers:
/img/0-100/1
/img/101-200/102
This gives you an easy way of looking up your images, and the folders stay quite small.
There's no extension because I save that in the database.
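A minimal sketch of that range-bucket scheme (the helper is hypothetical; the boundaries follow the example paths above):
function range_folder($id) {
    // buckets of 100 ids: 0-100, 101-200, 201-300, ...
    $high = (int)ceil($id / 100) * 100;
    $low  = max(0, $high - 100 + ($high > 100 ? 1 : 0));
    return sprintf('/img/%d-%d/%d', $low, $high, $id);
}
echo range_folder(102); // /img/101-200/102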

Yes, lots of files in one folder can slow down lookups for files in that folder, at least on the major OSs; in theory it doesn't have to be that way.
Other sites use the date too, not just user ids or image ids. It's just another way to do the same thing.

Related

Multiple Small Directories Or One Huge Directory with file naming php mysql

This is a completely theoretical question.
I have a photo storage site on which photos are uploaded by users registered on the website.
The Question
Which of the approaches is faster?
Which is better for the long term, when I need to use a lot of computers and hard disks?
Is there any other approach that's even better?
I have thought of two approaches for accomplishing this.
The number of files uploaded to my server is expected to be huge, ~>100 million.
Approach 1
These two /pictures/hd/ & /pictures/low/ directories will contain all the files uploaded by the user.
$newfilename = $user_id.time().$filename; //$filename = actual filename of uploaded file
$src = '/pictures/hd/'.$newfilename; //for hd pics
Inserting that into MySQL with:
INSERT INTO pics (`user_id`, `src`) VALUES ('$user_id', '$newfilename')
Approach 2
These two /pictures/hd/ & /pictures/low/ directories will contain sub-directories, one per user, holding the files that user uploads.
This is going to create lots of subdirectories, named with the user_id of the user who is uploading files to the server.
if (!is_dir('/pictures/hd/'.$user_id.'/')) {
    mkdir('/pictures/hd/'.$user_id.'/', 0755, true); // create the per-user folder on first upload
}
$newfilename = $user_id.'/'.$user_id.time().$filename; //$filename = actual filename of uploaded file
$src = '/pictures/hd/'.$newfilename; //for hd pics
Inserting that into MySQL with:
INSERT INTO pics (`user_id`, `src`) VALUES ('$user_id', '$newfilename')
Retrieval
When retrieving an image, I can use the src column of my pics table to get the filename, then serve the HD file using '/pictures/hd/'.$src_of_picstable and the low-quality file using '/pictures/low/'.$src_of_picstable.
The right way to answer the question is to test it.
Which is faster will depend on the number of files and the underlying filesystem; ext3/4 will quite happily cope with very large numbers of files in a single directory (dentries are managed in an HTree index). Some filesystems just use simple lists. Others have different ways of optimizing file access.
Your first problem of scaling will be how to manage the file set across multiple disks. Just extending a single filesystem across lots of disks is a bad idea. If you have lots of directories, then you can have lots of mount points. But this doesn't work all that well when you get to terabytes of data.
However, the fact that the content is indexed independently of the file storage means that it doesn't matter what you choose now for your file storage, because you can easily change the mapping of files to locations later without having to move your existing dataset around.
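On that note, the interpolated INSERT in the question is open to SQL injection; a hedged sketch of the same insert using PDO prepared statements (assuming an existing $pdo connection):
$stmt = $pdo->prepare('INSERT INTO pics (user_id, src) VALUES (?, ?)');
$stmt->execute(array($user_id, $newfilename)); // values are bound, never interpolated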
I wouldn't suggest the single-directory approach, for two reasons. First, if you're planning to have a lot of images, the directory will get really big, and searching for a single image manually will take a lot longer. You will need this when you debug something or test new features.
The second reason for multiple directories is that you can make smaller backups of parts of your gallery. And if you have a really big gallery (let's say several terabytes), a single hard drive might not be enough to contain it all. With multiple directories you can mount each directory on a separate hard drive and this way handle an almost infinitely sized gallery.
My favorite approach is a YYYY/MM/type-of-image directory structure. This way you can spot when you introduced some bug by looking month by month. You can also make monthly backups without duplicating redundant files, and take quarterly snapshots of the whole gallery just in case.
About type-of-image: there are several types of images that I might need, such as the original image, a small thumbnail, a thumbnail, a normal image, etc. This way I can just swap the type of image and get a different image size.
In your case I would suggest a YYYY/MM/type-of-image/user_id approach, where you can easily find all of a user's uploaded files in one place.
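A minimal sketch of that layout (the helper name and /gallery base path are assumptions):
function upload_dir($userId, $type, $time = null) {
    $time = $time ?: time();
    // e.g. upload_dir(42, 'thumbnail') => /gallery/2012/12/thumbnail/42
    return sprintf('/gallery/%s/%s/%s', date('Y/m', $time), $type, $userId);
}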

Scan directory tree efficiently by date

What's the most efficient way to grab a list of new files after a given date in php, or perhaps using a system call?
I have full control over how the files are stored as I receive them, so I thought maybe storing them in a folder structure like year/month/day/filename would be best, then all I have to do is scan for the directories greater than or equal to the date I want to retrieve using scandir and casting the directory name to int values. But I am not sure if I'm missing something that would make this easier/faster. I'm interested in the most efficient way of doing this as there will be a lot of files building up over time and I don't want to have to rescan old directories. Basically the directory structure should lend itself well to efficient manual filtering but I wanted to check to see if I'm missing something.
Simple example usage:
'2012/12/1' => test1.txt, test2.txt
'2012/12/2' => test3.txt, test4.txt
'2011/11/1' => test5.txt
'2011/11/2' => test6.txt
If I search for files on or after 2011/11/2, then I want everything except test5.txt to be returned.
Thanks in advance for any insight!
Edit: the storing and actual processing of the files are two separate processes, so I can't just process them as they come in, which would obviously be the best solution.
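Here is a sketch of the scheme I have in mind (hypothetical helper, assuming the YYYY/MM/DD layout): skip any directory whose numeric name falls before the cutoff date.
function files_on_or_after($root, $y, $m, $d) {
    $out = array();
    foreach (scandir($root) as $year) {
        if (!ctype_digit($year) || (int)$year < $y) continue;
        foreach (scandir("$root/$year") as $month) {
            if (!ctype_digit($month)) continue;
            if ((int)$year == $y && (int)$month < $m) continue;
            foreach (scandir("$root/$year/$month") as $day) {
                if (!ctype_digit($day)) continue;
                if ((int)$year == $y && (int)$month == $m && (int)$day < $d) continue;
                foreach (scandir("$root/$year/$month/$day") as $f) {
                    if ($f[0] !== '.') $out[] = "$root/$year/$month/$day/$f";
                }
            }
        }
    }
    return $out;
}
// e.g. files_on_or_after('.', 2011, 11, 2) returns everything except test5.txt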
Generally speaking I create directories like YYYY/MM/DD to store my files, often with another level for different sources. Sometimes I'll use YYYY-MM/DD or something similar. Note that there are only 3652 days in a decade, so you could even have a single level like YYYY-MM-DD and not get directories that are so large that they're hard to work with. If you have a filesystem that indexes directories, you can easily have 10s of thousands of files in a directory, otherwise one thousand should probably be your upper limit.
To process the files, I don't bother doing any actual searching of directory names. Since I know what date I'm interested in, I can simply generate the paths and scan only the directories containing files in the proper date range.
For example, let's say I want to process all files for the past week:
for ($i = 7; $i >= 0; $i--) {
    $path = date('Y/m/d', strtotime("-$i days")); // e.g. 2012/12/01
    foreach (glob("$path/*") as $filename) {      // files stored for that day
        processFile($filename);                   // your own handler
    }
}
It looks like you are on either linux or mac based on how you wrote your path.
The find command can return a list of files modified (or accessed) within a certain time window.
// find files that were modified less than 30 minutes ago
// (shell_exec() captures the full output; system() would return only the last line)
$filelist = shell_exec("find /path/to/files -type f -mmin -30");
I think system calls should be used sparingly since they reduce portability.
Storing in directories as you mentioned makes sense as it will reduce the search space.

Folder Structure for storing millions of images?

I am building a site that could easily see millions of photos being uploaded (with 3 thumbnails for each image uploaded), and I need to find the best method for storing all these images.
I've searched and found examples of images stored as hashes... for example:
If I upload coolparty.jpg, my script would convert it to an MD5 hash, resulting in
dcehwd8y4fcf42wduasdha.jpg
and that's stored in /dc/eh/wd/dcehwd8y4fcf42wduasdha.jpg
but I don't know how to store the 3 thumbnails.
QUESTIONS..
Is this the correct way to store these images?
How would I store thumbnails?
In PHP what is example code for storing these images using the method above?
Here is how I use the folder structure. I upload the photo and move it like you said:
$image = md5_file($_FILES['image']['tmp_name']);
// add a random number to the file name just to make sure your images will be "unique"
$image = md5(mt_rand().$image);
// the first three characters of the hash become the directory levels
$folder = $image[0]."/".$image[1]."/".$image[2]."/";
// IMAGES_PATH is a constant stored in my global config
define('IMAGES_PATH', '/path/to/my/images/');
// make sure the folders exist before moving anything into them
if (!is_dir(IMAGES_PATH.$folder)) {
    mkdir(IMAGES_PATH.$folder, 0755, true);
}
// coolparty = f3d40fc20a86e4bf8ab717a6166a02d4
$path = IMAGES_PATH.$folder.$image.'.jpg';
// thumbnail: I just prepend t_ to the image name
$thumbPath = IMAGES_PATH.$folder.'t_'.$image.'.jpg';
// move the original, then generate and save the thumbnail to $thumbPath after processing
move_uploaded_file($_FILES['image']['tmp_name'], $path);
I do believe this is the basic way. Of course you can change the folder structure to a deeper one, like you said, with 2 characters per level, if you will have millions of images.
The reason you would use a method like that is simply to reduce the total number of files per directory (inodes).
Using the method you have described (3 levels deep) you are very unlikely to reach even hundreds of images per directory, since you will have almost 17 million possible directories: 16**6 = 16,777,216 (three levels of two hex characters each).
As far as your questions.
Yeah, that is a fine way to store them.
The way I would do it is:
/aa/bb/cc/aabbccdddddddddddddd_thumb.jpg
/aa/bb/cc/aabbccdddddddddddddd_large.jpg
/aa/bb/cc/aabbccdddddddddddddd_full.jpg
or similar
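A hedged sketch of that naming scheme (the helper name and base path are assumptions):
function variant_paths($hash, $base = '/images') {
    // the first three 2-character pairs of the hash become the directory levels
    $dir = $base . '/' . substr($hash, 0, 2) . '/' . substr($hash, 2, 2) . '/' . substr($hash, 4, 2);
    $paths = array();
    foreach (array('thumb', 'large', 'full') as $size) {
        $paths[$size] = $dir . '/' . $hash . '_' . $size . '.jpg';
    }
    return $paths; // e.g. ['thumb' => '/images/aa/bb/cc/aabbcc..._thumb.jpg', ...]
}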
There are plenty of examples on the net as far as how to actually store images. Do you have a more specific question?
If you're talking millions of photos, I would suggest you farm these off to a third party such as Amazon Web Services, more specifically Amazon S3. There is no limit on the number of files and, assuming you don't need to actually list the files, there is no need to separate them into directories at all (and if you do need to list, you can use different delimiters and prefixes - http://docs.amazonwebservices.com/AmazonS3/latest/dev/ListingKeysHierarchy.html). And your hosting/retrieval costs will probably be lower than doing it yourself - and the files get backed up.
To answer more specifically: yes, split by subdirectories; with your structure, you can drop the first 6 characters of the filename, since you already have them in the directory name.
And for thumbs, as suggested by aquinas, just append _thumb1 etc. to the filename, or store them in separate folders themselves.
1) That's something only you can answer. Generally, I prefer to store the images in the database so you can have ONE consistent backup, but YMMV.
2) How? How about /dc/eh/wd/dcehwd8y4fcf42wduasdha_thumb1.jpg, /dc/eh/wd/dcehwd8y4fcf42wduasdha_thumb2.jpg and /dc/eh/wd/dcehwd8y4fcf42wduasdha_thumb3.jpg
3) ??? Are you asking how to write a file to the file system or...?
For millions of images, yes, it is correct that using the database will slow down the process.
The best option is either to use the server file system to store images and use .htaccess to add security,
or to use web services; many providers offer an images API for uploading and displaying.
You can go with that option too. For example, Amazon.

Fast access to files

I'm currently building an application that will generate a large number of images (a few tens of thousands, possibly more, but not in the near future at least). I want to be able to determine whether a file exists and also send it to clients over HTTP (I'm using Apache as my web server).
What is the best way to do this? I thought about splitting the images across a few folders to reduce the number of files in each directory. For example, let's say each file name begins with a lowercase letter of the alphabet; then I create 26 directories, and when I want to look up a file I prepend the directory name. For example, a file called "funnyimage2.jpg" would be saved inside a directory called "f". I can add layers to that structure if required.
To be honest, I'm not even sure that saving all the files in one directory isn't just as good, so if you could add an explanation as to why your solution is better, it would be very helpful.
P.S.
My application is written in PHP and I intend to use file_exists to check whether a file exists.
Do it with a hash, such as md5 or sha1, and then use 2 characters for each segment of the path. If you go 4 levels deep you'll always be fine:
f4/a7/b4/66/funnyimage.jpg
Oh, and the reason it's slow to dump everything into one directory is that most filesystems don't store filenames in a B-tree or similar structure, so they often have to scan the entire directory to find a file.
The reason a hash is great is that it has really good distribution. 26 directories may not cut it, especially if lots of images have a filename like "image0001.jpg".
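A minimal sketch of that layout (the helper name is assumed; it takes the first 8 hex characters of a deterministic sha1 so the path can be re-derived from the name):
function hash_path($filename) {
    $hash  = sha1($filename);                    // deterministic: same name, same path
    $parts = str_split(substr($hash, 0, 8), 2);  // e.g. array('f4', 'a7', 'b4', '66')
    return implode('/', $parts) . '/' . $filename;
}
echo hash_path('funnyimage.jpg'); // e.g. f4/a7/b4/66/funnyimage.jpg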
Since ext3 aims to be backwards compatible with the earlier ext2, many of the on-disk structures are similar to those of ext2. Consequently, ext3 lacks recent features, such as extents, dynamic allocation of inodes, and block suballocation.[15] A directory can have at most 31998 subdirectories, because an inode can have at most 32000 links.[16]
A directory on a unix file system is just a file that lists filenames and what inode contains the actual file data. As such, scanning a directory for a particular filename boils down to the equivalent operation of opening a text file and scanning for a line with a particular piece of text.
At some point, the overhead of opening that directory "file" and scanning for your filename will outweigh the overhead of using multiple sub-directories. Generally, this won't happen until there are many thousands of files. You should benchmark your system/server to find where the crossover point is.
After that, it's a simple matter of deciding how to split your filenames into subdirectories. If you're allowing only alpha-numeric characters, then maybe a split based on the first 2 characters (1,296 possible subdirs) might make more sense than a single dir with 10,000 files.
Of course, for every additional level of splitting you add, you're forcing the system to open yet another directory "file" and scan for your filename, so don't go too deep on the splits.
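A rough micro-benchmark sketch for finding that crossover point (paths and counts are placeholders):
// time N file_exists() lookups in a flat layout vs a 2-level split layout
$n = 10000;
$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    file_exists("/images/flat/file$i.jpg");
}
printf("flat:  %.3fs\n", microtime(true) - $start);
$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    $h = md5("file$i.jpg"); // same split rule used when storing
    file_exists("/images/split/" . substr($h, 0, 2) . "/file$i.jpg");
}
printf("split: %.3fs\n", microtime(true) - $start);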
Your setup is okay. Keep going this way.
It seems that you are on the right path. Another post at ServerFault seems to confirm that you are doing the right thing.
I think Linux has a limit on the number of files a directory can contain; it might be best to split them up.
With your method, you can have the exact same image under many different file names. Also, you'll have more images that start with "t" than with "q", so some directories would still get large. You might want to store them as MD5-HASH.jpg instead. This will eliminate duplicates and give a more even distribution over the 36 directories.
Edit: as Evert mentions, you can use a multi-level directory structure to keep the directory sizes even smaller.

Saving torrents on a torrent index site

I'm building a torrent site where users can upload torrents.
What would be a good way to save the .torrent files?
I can think of several options:
Saving the torrent file itself in a folder on the server (not the best option, since OSs have limitations on saving lots of files in one folder)
Saving the torrent file itself in different folders per month
Saving the contents of the torrent file in the database (any limitations / performance issues / any other caveats?)
Any other options?
If you're concerned about having too many files within a directory, you need to distribute the files across multiple directories. Storing them by month, day or week is one way to do so. It depends a bit on how many files you really have, I would say.
You can also distribute the files more or less equally across subdirectories by hashing the filename and using the whole or part of the hash to generate one or more subdirectory names:
$hash = md5($fileName);
$storePath = sprintf('%s/%s', substr($hash, 0, 2), $fileName); // e.g. a3/example.torrent
This picks the first two characters of the md5 hash (00-ff, 256 subdirectories) to generate the subdirectory.
The benefit compared with a date is that you can always find out which directory a file is stored in when you have its name.
That also means that you cannot have duplicate files with the same name (which might have worked with the date-based subfolders).
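A hypothetical usage sketch on upload (the /torrents base directory and form field name are assumptions):
$fileName = basename($_FILES['torrent']['name']);
$hash = md5($fileName);
$dir = '/torrents/' . substr($hash, 0, 2);
if (!is_dir($dir)) {
    mkdir($dir, 0755, true); // create the 00-ff bucket on first use
}
move_uploaded_file($_FILES['torrent']['tmp_name'], $dir . '/' . $fileName);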
I use this one:
Saving the torrent file itself in different folders per month
Using the database is not good at all; just save them as static files, and maybe even gzip them.
Just make sure to rename them uniquely with some kind of hashing.
If you don't have any problem with using an external provider, you can use TorCache.
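A hedged sketch of the hash-rename-and-gzip idea above (paths and field name are assumptions):
$tmp  = $_FILES['torrent']['tmp_name'];
$hash = sha1_file($tmp);                     // unique, content-derived name
$dest = '/torrents/' . $hash . '.torrent.gz';
file_put_contents($dest, gzencode(file_get_contents($tmp)));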
I would say saving the .torrent file in a weekly/monthly folder is the best option.
That way you can use the OS's filesystem cache, even if you store the .torrents outside the document root to limit user access (in the end you will have to open() the file anyway).
Leaving torrents in the database would eventually lead to slow performance as the DB increases in size.
Maybe try Amazon S3? It's cheap, easy and fast.
A plain file upload already saves the .torrent files for you; http://www.tizag.com/phpT/fileupload.php has a good example. Give it a try.
