I have a foreach loop that is processing image uploads. When I add multiple images they are given a filename that consists of a unique id and a time stamp using $filebase = uniqid() . '_' . time();
When the images are processed they all end up with the same file name, and I don't understand why the foreach loop isn't giving unique file names.
I have the $temp variable assigned to the index of the foreach loop with the following line: $temp = $_FILES['standard-upload-files']['tmp_name'][$index]; so I don't understand why it doesn't produce a new value on each iteration.
The images are being moved correctly to the destination folder (albeit with the same name) so I know it isn't an issue with my HTML form, or the initial foreach loop.
if(isset($_POST['submit-images'])) {
    foreach($_FILES['standard-upload-files']['name'] as $index => $filename) {
        if($_FILES['standard-upload-files']['error'][$index] === 0) {
            $temp = $_FILES['standard-upload-files']['tmp_name'][$index];
            $filebase = uniqid() . '_' . time();
            move_uploaded_file($temp, "images-download/{$filebase}");
        }
    } // main foreach
} // isset($_POST)
My first guess is your use of uniqid(). Its value is generated from the current time in microseconds, so two calls in quick succession can return the same value. As the documentation mentions, this function does not guarantee the uniqueness of the return value:
Since most systems adjust the system clock by NTP or like, system time is changed constantly. Therefore, it is possible that this function does not return a unique ID for the process/thread. Use more_entropy to increase the likelihood of uniqueness.
A [not so reliable] mitigation would be setting more_entropy to true:
uniqid('', true)
but again, remember this approach is not reliable.
So use another method for generating your unique IDs (a UUID, for example, as explained in the comments here).
As the title says, I'm trying to get the next and previous file from the same directory. So I did something like this - is there any better way of doing it? (This is from next auto index's file.php code about related files, which I have changed for my needs.)
db screenshot if you want to look - ibb.co/wzkDxd3
$title = $file->name; // get the current file name
$in_dir = $file->indir; // current dir id
$r_file = $db->select("SELECT * FROM `". MAI_PREFIX ."files` WHERE `indir`='$in_dir'"); // all of the files from the current dir
$rcount = count($r_file);
$related = '';
if($rcount > 2){
    $i = 0; // temp variable
    foreach($r_file as $key => $r){ // loop over the array to get the key
        if($r->name == $title){ // trying to find the current file's key number
            $next = $key + 1; // next and prev file key numbers
            $prv = $key - 1;
            foreach($r_file as $keyy => $e){ // loop over the file list again to get the prev file
                if($prv == $keyy){
                    $related .= $e->name;
                }
            }
            foreach($r_file as $keyy => $e){ // same for the next file
                if($next == $keyy){
                    $related .= $e->name;
                }
            }
        }
    }
}
Without knowing your DB background and use case, it should still be possible to use something like $r_file[$key], $r_file[$next] and $r_file[$prv] to access the specific elements directly. That way at least two of your foreach loops could be avoided.
Please note that nested loops are extremely inefficient: if your $r_file contains 100 elements, your original code performs 10,000 iterations (100 times 100)!
Also, you should leave a loop as soon as possible once its task is done. You can use break to do this.
Example, based on the relevant part of your code and how I understand it is supposed to work:
foreach($r_file as $key => $r){ // loop over the array to get the key
    if($r->name == $title) { // found the current file's key number
        $next = $key + 1; // next and prev file key numbers
        $prv = $key - 1;
        if(isset($r_file[$prv])){ // guard against the first element, where $prv would be -1
            $related .= $r_file[$prv]->name; // directly access the previous file
        }
        if(isset($r_file[$next])){ // guard against the last element
            $related .= $r_file[$next]->name; // directly access the next file
        }
        break; // don't go on with the rest of the elements once we're done
    }
}
Possibly, looping through all the elements to compare $r->name == $title could also be avoided by using some numbering mechanisms, but without knowing your system better, I can't tell anything more about that.
I have seen several websites where, if you upload an image and an identical image already exists on their servers, they will reject the submission. Using PNGs, is there an easy way to check one image against a massive folder of images?
http://www.imagemagick.org/discourse-server/viewtopic.php?t=12618
I did find this with ImageMagick, but I am looking for one-vs-many, not a million one-to-one comparisons.
You can hash the file content with sha1. That will give you a way to identify two strictly identical pictures.
see http://php.net/manual/fr/function.sha1-file.php
Then save the file (to NFS, for example) and use some kind of database to test whether the hash already exists.
Details of the images are probably maintained in a database; while the images are stored in the filesystem. And that database probably has a hash column which is used to store an md5 hash of the image file itself, calculated when the image is first uploaded. When a new image is uploaded, it calculates the hash for that image, and then checks to see if any other image detail in the database has a matching hash. If not, it stores the newly uploaded image with that hash; otherwise it can respond with details of the previous upload. If the hash column is indexed in the table, then this check is pretty quick.
If I understood your question correctly, you want to find out if a specific image exists in a directory with many images, right? If so, take a look at this solution:
<?php
// CREATE A FUNCTION WHICH RETURNS AN ARRAY OF ALL IMAGES IN A SPECIFIC FOLDER
function getAllImagesInFolder($dir_full_path){
    $returnable = array();
    $files_in_dir = scandir($dir_full_path);
    $reg_fx = '#\.(png|jpe?g|bmp|gif)$#i'; // anchored at the end of the name, case-insensitive
    foreach($files_in_dir as $key => $val){
        $temp_file_or_dir = $dir_full_path . DIRECTORY_SEPARATOR . $val;
        if(is_file($temp_file_or_dir) && preg_match($reg_fx, $val)){
            // BUILD THE ARRAY KEY FROM THE FILE NAME: STRIP THE EXTENSION,
            // THEN REPLACE ANY REMAINING DOTS WITH UNDERSCORES
            $regx_extension = '/\.[a-z]{2,4}$/i';
            $regx_dot = '/\./';
            $regx_array = array($regx_extension, $regx_dot);
            $replace_array = array("", "_");
            $return_val = preg_replace($regx_array, $replace_array, $val);
            $returnable[$return_val] = $temp_file_or_dir;
        }else if(is_dir($temp_file_or_dir) && !preg_match('/^\./', $val)){
            // RECURSE INTO SUBDIRECTORIES AND MERGE THEIR RESULTS
            $returnable = array_merge($returnable, getAllImagesInFolder($temp_file_or_dir));
        }
    }
    return $returnable;
}
// CREATE ANOTHER FUNCTION TO CHECK IF THE SPECIFIED IMAGE EXISTS IN THE GIVEN DIRECTORY.
// THE FIRST PARAMETER SHOULD BE THE RESULT OF CALLING THE PREVIOUS FUNCTION: getAllImagesInFolder(...)
// THE SECOND PARAMETER IS THE IMAGE YOU WANT TO SEARCH WHETHER IT EXISTS IN THE SAID FOLDER OR NOT
function imageExistsInFolder($arrImagesInFolder, $searchedImage){
    if(!is_array($arrImagesInFolder) || count($arrImagesInFolder) < 1){
        return false;
    }
    foreach($arrImagesInFolder as $strKey => $imgPath){
        if(stristr($imgPath, $searchedImage)){
            return true;
        }
    }
    return false;
}
// NOW GET ALL THE IMAGES IN A SPECIFIED FOLDER AND ASSIGN THE RESULTING ARRAY TO A VARIABLE: $imgFiles
$imgFolder = "/path/to/directory/where/there/are/images";
$arrImgFiles = getAllImagesInFolder($imgFolder);
$searchedImage = "sandwich.jpg"; //<== OR EVEN WITHOUT THE EXTENSION, JUST "sandwich"
// ASSUMING THE SPECIFIC IMAGE YOU WANT TO MATCH IS CALLED sandwich.jpg
// YOU CAN USE THE imageExistsInFolder(...) FUNCTION TO RETURN A BOOLEAN FLAG OF true OR false
// DEPENDING ON IF IT DOES OR NOT.
var_dump($arrImgFiles);
var_dump( imageExistsInFolder($arrImgFiles, $searchedImage) );
I am successfully able to get random images from my 'uploads' directory with my code, but the issue is that images repeat: I will reload the page and the same image will show 2-15 times without changing. I thought about setting a cookie for the previous image, but working out how to do this is frying my brain. I'll post what I have here; any help would be great.
$files = glob($dir . '/*.*');
$file = array_rand($files);
$filename = $files[$file];
$search = array_search($_COOKIE['prev'], $files);
if ($_COOKIE['prev'] == $filename) {
    unset($files[$search]);
    $filename = $files[$file];
    setcookie('prev', $filename);
}
Similar to slick's answer, but a little simpler on the session front:
Instead of using array_rand to randomise the array, you can use a custom process that reorders it using plain rand() calls:
$files = array_values(glob($dir . '/*.*'));
$randomFiles = array();
while(count($files) > 0) {
    $randomIndex = rand(0, count($files) - 1);
    $randomFiles[] = $files[$randomIndex];
    unset($files[$randomIndex]);
    $files = array_values($files);
}
This is useful because you can seed the rand function, meaning it will always generate the same sequence of random numbers for a given seed. Just add (before you randomise the array):
if (isset($_COOKIE['key'])) {
    $microtime = $_COOKIE['key'];
} else {
    $microtime = (int) (microtime(true) * 1000000); // srand() needs an integer seed
    setcookie('key', $microtime);
}
srand((int) $microtime); // cookie values arrive as strings, so cast back to int
This does mean that someone can manipulate the order of the images by manipulating the cookie, but if you're okay with that then this should work.
So you want no repeats per request? Use the session. The best way to avoid repetitions is to have two arrays (buckets). The first will contain all the available elements to pick from; the second starts out empty.
Then start picking items from the first array and move them from the first array to the second (remove, then array_push to the second). Do this in a loop. On the next iteration the first array won't contain the element you already picked, so you will avoid duplicates.
In general: move items from one bucket to the other and you're done. Additionally, you can store your results in the session instead of cookies; server-side storage is better for that kind of thing.
I have a project that needs to create files using fwrite in PHP. I want to make it generic: each file should be unique and not overwrite the others.
I am creating a project that will record the text from a PHP form and save it as HTML, so I want the output to be generated-file1.html, generated-file2.html, etc. Thank you.
This will give you a count of the number of HTML files in a given directory:
$filecount = count(glob("/Path/to/your/files/*.html"));
and then your new filename will be something like:
$generated_file_name = "generated-file".($filecount+1).".html";
and then fwrite using $generated_file_name
Although I've had to do a similar thing recently and used uniqid() instead, like this:
$generated_file_name = md5(uniqid(mt_rand(), true)).".html";
I would suggest using the time as the first part of the filename (as that should then result in files being listed in chronological/alphabetical order), and then borrowing from @TomcatExodus to improve the chances of the filename being unique (in case two submissions are simultaneous).
<?php
$data = $_POST;
$md5 = md5( serialize( $data ) ); // md5() needs a string, so serialize the form array first
$time = time();
$filename_prefix = 'generated_file';
$filename_extn = 'htm';
$filename = $filename_prefix.'-'.$time.'-'.$md5.'.'.$filename_extn;
if( file_exists( $filename ) ){
    # EXTREMELY UNLIKELY, unless two forms with the same content are submitted at the same time
    $filename = $filename_prefix.'-'.$time.'-'.$md5.'-'.uniqid().'.'.$filename_extn;
    # IMPROBABLE that this will clash now...
}
if( file_exists( $filename ) ){
    # Handle the Error Condition
}else{
    file_put_contents( $filename , 'Whatever the File Content Should Be...' );
}
This would produce filenames like:
generated_file-1300080525-46ea0d5b246d2841744c26f72a86fc29.htm
generated_file-1300092315-5d350416626ab6bd2868aa84fe10f70c.htm
generated_file-1300109456-77eae508ae79df1ba5e2b2ada645e2ee.htm
If you want to make absolutely sure that you will not overwrite an existing file, you could append a uniqid() to the filename. If you want it to be sequential, you'll have to read the existing files from your filesystem and calculate the next increment, which can result in I/O overhead.
I'd go with the uniqid() method :)
If your implementation should produce unique form results every time (and therefore unique files), you could hash the form data into a filename, giving you unique paths as well as the opportunity to quickly sort out duplicates:
// capture all posted form data into an array
// validate and sanitize as necessary
$data = $_POST;
// hash data for filename
$fname = md5(serialize($data));
$fpath = 'path/to/dir/' . $fname . '.html';
if(!file_exists($fpath)){
    // write data to $fpath
}
Do something like this:
$i = 0;
while (file_exists("file-".$i.".html")) {
    $i++;
}
$file = fopen("file-".$i.".html", "w"); // fopen() requires a mode; "w" opens for writing
When a user uploads an image to my site, the image goes through this process:
user uploads pic
store pic metadata in db, giving the image a unique id
async image processing (thumbnail creation, cropping, etc)
all images are stored in the same uploads folder
So far the site is pretty small, and there are only ~200,000 images in the uploads directory. I realise I'm nowhere near the physical limit of files within a directory, but this approach clearly won't scale, so I was wondering if anyone had any advice on upload / storage strategies for handling large volumes of image uploads.
EDIT:
Creating username (or more specifically, userid) subfolders would seem to be a good solution. With a bit more digging, I've found some great info right here: How to store images in your filesystem
However, would this userid dir approach scale well if a CDN is bought into the equation?
I've answered a similar question before but I can't find it, maybe the OP deleted his question...
Anyway, Adam's solution seems to be the best so far, yet it isn't bulletproof, since images/c/cf/ (or any other dir/subdir pair) could still contain up to 16^30 unique hashes, and at least 3 times more files if we count image extensions - a lot more than any regular file system can handle.
AFAIK, SourceForge.net also uses this system for project repositories, for instance the "fatfree" project would be placed at projects/f/fa/fatfree/, however I believe they limit project names to 8 chars.
I would store the image hash in the database along with a DATE / DATETIME / TIMESTAMP field indicating when the image was uploaded / processed and then place the image in a structure like this:
images/
    2010/ - Year
        04/ - Month
            19/ - Day
                231c2ee287d639adda1cdb44c189ae93.png - Image Hash
Or:
images/
    2010/ - Year
        0419/ - Month & Day (12 * 31 = 372)
            231c2ee287d639adda1cdb44c189ae93.png - Image Hash
Besides being more descriptive, this structure is enough to host hundreds of thousands of images per day (depending on your file system limits) for several thousand years. This is the way WordPress and others do it, and I think they got it right on this one.
Duplicated images could be easily queried on the database and you'd just have to create symlinks.
Of course, if this is not enough for you, you can always add more subdirs (hours, minutes, ...).
Personally I wouldn't use user IDs unless you don't have that info available in your database, because:
Disclosure of usernames in the URL
Usernames are volatile (you may be able to rename folders, but still...)
A user can hypothetically upload a large number of images
Serves no purpose (?)
Regarding the CDN I don't see any reason this scheme (or any other) wouldn't work...
MediaWiki generates the MD5 sum of the name of the uploaded file, and uses the first two letters of the MD5 (say, "c" and "f" of the sum "cf1e66b77918167a6b6b972c12b1c00d") to create this directory structure:
images/c/cf/Whatever_filename.png
You could also use the image ID for a predictable upper limit on the number of files per directory. Maybe take floor(image unique ID / 1000) to determine the parent directory, for 1000 images per directory.
Yes, yes, I know this is an ancient topic, but the problem of storing large numbers of images, and how the underlying folder structure should be organised, remains. So I present my way of handling it, in the hope this might help some people.
Using an md5 hash is the best way to handle massive image storage. Keeping in mind that different values might produce the same hash, I strongly suggest also adding the user id or nickname to the path to make it unique. Yep, that's all that's needed. If someone has different users with the same database id - well, there is something wrong ;) So root_path/md5_hash/user_id is everything you need to do it properly.
Using DATE / DATETIME / TIMESTAMP is not the optimal solution, by the way, IMO. You end up with big clusters of image folders on a busy day and nearly empty ones on less frequented days. I'm not sure this leads to performance problems, but there is something like data aesthetics, and a consistent data distribution is always superior.
So I clearly go for the hash solution.
I wrote the following function to make it easy to generate such hash based storage paths. Feel free to use it if you like it.
/**
 * Generates a directory path using the $user_id md5 hash for massive image storage.
 * @author Hexodus
 * @param string $user_id numeric user id
 * @param string $user_root_raw root directory string
 * @return null|string
 */
function getUserImagePath($user_id = null, $user_root_raw = "images/users", $padding_length = 16,
    $split_length = 3, $hash_length = 12, $hide_leftover = true)
{
    // our db user_id should be numeric
    if (!is_numeric($user_id))
        return null;

    // clean leading and trailing slashes
    $user_root = trim($user_root_raw, '/\\');

    $user_id_padded = str_pad($user_id, $padding_length, "0", STR_PAD_LEFT); // pad it with zeros
    $user_hash = md5($user_id); // build md5 hash
    $user_hash_partial = $hash_length >= 1 && $hash_length < 32
        ? substr($user_hash, 0, $hash_length) : $user_hash;
    $user_hash_leftover = $hash_length < 32 ? substr($user_hash, $hash_length) : null;
    $user_hash_splitted = str_split($user_hash_partial, $split_length); // split into chunks
    $user_hash_imploded = implode("/", $user_hash_splitted); // glue array chunks with slashes

    if ($hide_leftover || !$user_hash_leftover)
        $user_image_path = "{$user_root}/{$user_hash_imploded}/{$user_id_padded}"; // build final path
    else
        $user_image_path = "{$user_root}/{$user_hash_imploded}/{$user_hash_leftover}/{$user_id_padded}"; // build final path plus leftover

    return $user_image_path;
}
Function test calls:
$user_id = "1394";
$user_root = "images/users";
$user_hash = md5($user_id);
$path_sample_basic = getUserImagePath($user_id);
$path_sample_advanced = getUserImagePath($user_id, "images/users", 8, 4, 12, false);
echo "<pre>hash: {$user_hash}</pre>";
echo "<pre>basic:<br>{$path_sample_basic}</pre>";
echo "<pre>customized:<br>{$path_sample_advanced}</pre>";
echo "<br><br>";
The resulting output (shown colorized in the original post) is the hash followed by the basic and customized paths.
Have you thought about using something like Amazon S3 to store the files? I run a photo hosting company, and after quickly reaching limits on our own server we switched over to Amazon S3. The beauty of S3 is that there are no limits like inodes and whatnot; you just keep throwing files at it.
Also: If you don't like S3, you can always try and break it down into subfolders as much as you can:
/userid/year/month/day/photoid.jpg
You can convert a username to md5 and use the first 2-3 letters of the md5-converted username as the folder for avatars. For images you can do the same, playing with time, random strings, ids and names:
8648b8f3ce06a7cc57cf6fb931c91c55 - devcline
You can also use the first letter of the username or id for the next folder, or the inverse.
It will look like
Structure:
stream/img/86/8b8f3ce06a7cc57cf6fb931c91c55.png //simplest
stream/img/d/2/0bbb630d63262dd66d2fdde8661a410075.png //first letter and id folders
stream/img/864/d/8b8f3ce06a7cc57cf6fb931c91c55.png // with first letter of the nick
stream/img/864/2/8b8f3ce06a7cc57cf6fb931c91c55.png //with unique id
stream/img/2864/8b8f3ce06a7cc57cf6fb931c91c55.png //with unique id in 3 letters
stream/img/864/2_8b8f3ce06a7cc57cf6fb931c91c55.png //with unique id in picture name
Code
$username_md5 = md5($username); // md5 for the username (must be computed first)
$username_rest = substr($username_md5, 1); // the md5 hash with its first letter cut off
$username_first = $username_md5[0]; // the first letter
$randomname = uniqid($userid) . md5(time()); // generate a random name based on the ID
You can also try base64:
$image_encode = strtr(base64_encode($imagename), '+/=', '-_,');
$image_decode = base64_decode(strtr($imagename, '-_,', '+/='));
Steam and DokuWiki use this structure.
You might consider the open source http://danga.com/mogilefs/ as it is perfect for what you're doing. It'll take you from thinking about folders to namespaces (which could be users) and will store your images for you. The best part is that you don't have to care how the data is stored. It makes the data completely redundant, and you can even set controls around how redundant thumbnails are as well.
Here is a solution I have been using for a long time. It's quite old code and could be further optimised, but it still serves well as it is.
It's a deterministic function that creates a directory structure based on:
The number that identifies the image (FILE ID):
it's recommended that this number is unique within the base directory, like a primary key in a database table, but it's not required.
The base directory.
The maximum desired number of files and first-level subdirectories. This promise can be kept only if every FILE ID is unique.
Example of usage:
Using an explicit FILE ID:
$fileName = 'my_image_05464hdfgf.jpg';
$fileId = 65347;
$baseDir = '/home/my_site/www/images/';
$baseURL = 'http://my_site.com/images/';
$clusteredDir = \DirCluster::getClusterDir( $fileId );
$targetDir = $baseDir . $clusteredDir;
$targetPath = $targetDir . $fileName;
$targetURL = $baseURL . $clusteredDir . $fileName;
Using the file name, with number = crc32( filename ):
$fileName = 'my_image_05464hdfgf.jpg';
$baseDir = '/home/my_site/www/images/';
$baseURL = 'http://my_site.com/images/';
$clusteredDir = \DirCluster::getClusterDir( $fileName );
$targetDir = $baseDir . $clusteredDir;
$targetURL = $baseURL . $clusteredDir . $fileName;
Code:
class DirCluster {

    /**
     * @param mixed $fileId - numeric FILE ID or file name
     * @param int $maxFiles - max files in one dir
     * @param int $maxDirs - max 1st lvl subdirs in one dir
     * @param boolean $createDirs - create dirs?
     * @param string $path - base path used when creating dirs
     * @return boolean|string
     */
    public static function getClusterDir($fileId, $maxFiles = 100, $maxDirs = 10,
        $createDirs = false, $path = "") {

        // Value to return
        $rt = '';

        // If $fileId is not numeric - let's create a crc32
        if (!is_numeric($fileId)) {
            $fileId = crc32($fileId);
        }
        if ($fileId < 0) {
            $fileId = abs($fileId);
        }

        if ($createDirs) {
            if (!file_exists($path)) {
                // Check out the rights - 0775 may not be the best for you
                if (!mkdir($path, 0775)) {
                    return false;
                }
                #chmod($path, 0775);
            }
        }

        // Files with IDs up to $maxFiles stay in the base directory
        if ($fileId <= 0 || $fileId <= $maxFiles) {
            return $rt;
        }

        // Remainder from dividing
        $restId = $fileId % $maxFiles;
        $formattedFileId = $fileId - $restId;

        // How many directories are needed to place the file
        $howMuchDirs = $formattedFileId / $maxFiles;

        while ($howMuchDirs > $maxDirs) {
            $r = $howMuchDirs % $maxDirs;
            $howMuchDirs -= $r;
            $howMuchDirs = $howMuchDirs / $maxDirs;
            $rt .= $r . '/'; // DIRECTORY_SEPARATOR = /
            if ($createDirs) {
                $prt = $path . $rt;
                if (!file_exists($prt)) {
                    mkdir($prt);
                    #chmod($prt, 0775);
                }
            }
        }

        $rt .= $howMuchDirs - 1;
        if ($createDirs) {
            $prt = $path . $rt;
            if (!file_exists($prt)) {
                mkdir($prt);
                #chmod($prt, 0775);
            }
        }

        $rt .= '/'; // DIRECTORY_SEPARATOR

        return $rt;
    }
}