PO file translations: going backwards, from msgstr to msgid? - php

I need to start from the msgstr and retrieve the msgid. The reason is that I've got several translations, all EN to SOME OTHER LANG, but in the current installation I need to go back from SOME OTHER LANG to EN. Note that I'm working in WordPress, though that may not matter. There are a couple of similar questions here, but none are exactly what I need.
So is there a way to accomplish this?

WordPress ships with a PO reader and writer as part of the pomo package. Below is a pretty simple script that swaps the msgid and msgstr fields around and writes out a new file.
As already pointed out in the comments, there are several things that make this potentially problematic:
Your target strings must all be unique (and not empty)
If you have message context, this will stay in the original language.
Your original language must have only two plural forms.
Onward -
<?php
require_once 'path/to/wp-includes/pomo/po.php';
$source_file = 'path/to/languages/old-file.po';
$target_file = 'path/to/languages/new-file.po';
// parse original message catalogue from source file
$source = new PO;
$source->import_from_file($source_file);
// prep target messages with a different language
$target = new PO;
$target->set_headers( $source->headers );
$target->set_header('Language', 'en_US');
$target->set_header('Language-Team', 'English from SOME OTHER LANG');
$target->set_header('Plural-Forms', 'nplurals=2; plural=n!=1;');
/* @var Translation_Entry $entry */
foreach( $source->entries as $entry ){
$reversed = clone $entry;
// swap msgid and msgstr (singular)
$reversed->singular = $entry->translations[0];
$reversed->translations[0] = $entry->singular;
// swap msgid_plural and msgstr[1] (plural)
if( $entry->is_plural ){
$reversed->plural = $entry->translations[1];
$reversed->translations[1] = $entry->plural;
}
// append target file with modified entry
$target->add_entry( $reversed );
}
// write final file back to disk
file_put_contents( $target_file, $target->export() );
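As noted above, the swap only works cleanly when every translated string is unique and non-empty. Purely as a sketch (reusing the $source object from the script above; the checks and messages are my own assumptions about what you'd want to report), you could scan for problem entries before running the swap:
$seen = array();
foreach ( $source->entries as $entry ) {
    $translated = isset( $entry->translations[0] ) ? trim( $entry->translations[0] ) : '';
    if ( '' === $translated ) {
        // would become an empty msgid in the reversed file
        echo "Untranslated entry, will be skipped: {$entry->singular}\n";
        continue;
    }
    if ( isset( $seen[ $translated ] ) ) {
        // two different msgids share the same msgstr
        echo "Duplicate target string: {$translated}\n";
    }
    $seen[ $translated ] = true;
}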

Related

Read and replace contents in .docx (Word) file

I need to replace content in some word documents based on User input. I am trying to read a template file (e.g. "template.docx"), and replace First name {fname}, Address {address} etc.
template.docx:
To,
The Office,
{officeaddress}
Sub: Authorization Letter
Sir / Madam,
I/We hereby authorize to {Ename} whose signature is attested here below, to submit application and collect Residential permit for {name}
Kindly allow him to support our International assignee
{name} {Ename}
Is there a way to do the same in Laravel 5.3?
I am trying to do it with PHPWord, but I can only see code to write new Word files, not to read and replace existing ones. Also, when I simply read and write, the formatting is messed up.
Code:
$file = public_path('template.docx');
$phpWord = \PhpOffice\PhpWord\IOFactory::load($file);
$phpWord->save('b.docx');
b.docx
To,
The Office,
{officeaddress}
Sub:
Authorization Letter
Sir / Madam,
I/We hereby authorize
to
{Ename}
whose signature is attested here below, to submit a
pplication and collect Residential permit
for
{name}
Kindly allow him to support our International assignee
{name}
{
E
name}
This is the working version of @addweb-solution-pvt-ltd's answer.
//This is the main document in Template.docx file.
$file = public_path('template.docx');
$phpword = new \PhpOffice\PhpWord\TemplateProcessor($file);
$phpword->setValue('{name}','Santosh');
$phpword->setValue('{lastname}','Achari');
$phpword->setValue('{officeAddress}','Yahoo');
$phpword->saveAs('edited.docx');
However, not all of the {name} fields are changing. Not sure why.
Alternatively:
// Creating the new document...
$zip = new \PhpOffice\PhpWord\Shared\ZipArchive();
//This is the main document in a .docx file.
$fileToModify = 'word/document.xml';
$file = public_path('template.docx');
$temp_file = storage_path('/app/'.date('Ymdhis').'.docx');
copy($file, $temp_file);
if ($zip->open($temp_file) === TRUE) {
//Read contents into memory
$oldContents = $zip->getFromName($fileToModify);
echo $oldContents;
//Modify contents:
$newContents = str_replace('{officeaddress}', 'Yahoo \n World', $oldContents);
$newContents = str_replace('{name}', 'Santosh Achari', $newContents);
//Delete the old...
$zip->deleteName($fileToModify);
//Write the new...
$zip->addFromString($fileToModify, $newContents);
//And write back to the filesystem.
$return = $zip->close();
if ($return == TRUE) {
echo "Success!";
}
} else {
echo 'failed';
}
Works well. Still trying to figure out how to save it as a new file and force a download.
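For the download part, a minimal plain-PHP sketch (the filename is just a placeholder) is to send the usual attachment headers and stream the temp file; since this is Laravel, returning response()->download($temp_file, 'edited.docx') from a controller would also work:
// Sketch: force the browser to download the modified copy.
header('Content-Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document');
header('Content-Disposition: attachment; filename="edited.docx"');
header('Content-Length: ' . filesize($temp_file));
readfile($temp_file);
exit;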
I had the same task to edit a .doc or .docx file in PHP, and I used this code for it.
Reference: http://www.onlinecode.org/update-docx-file-using-php/
$template_file_name = 'path/to/original-template.docx'; // source template (placeholder path)
$full_path = 'template.docx'; // working copy that will be modified
//Copy the Template file to the Result Directory
copy($template_file_name, $full_path);
// add calss Zip Archive
$zip_val = new ZipArchive;
//Docx file is nothing but a zip file. Open this Zip File
if($zip_val->open($full_path) === true)
{
// In the Open XML Wordprocessing format, the content is stored
// in the document.xml file located in the word directory.
$key_file_name = 'word/document.xml';
$message = $zip_val->getFromName($key_file_name);
$timestamp = date('d-M-Y H:i:s');
// Replace the placeholders with actual values
$message = str_replace("{officeaddress}", "onlinecode org", $message);
$message = str_replace("{Ename}", "ingo@onlinecode.org", $message);
$message = str_replace("{name}", "www.onlinecode.org", $message);
//Replace the content with the new content created above.
$zip_val->addFromString($key_file_name, $message);
$zip_val->close();
}
To read and replace content in a Doc file, you can use the PHPWord package, which you can download using this composer command:
composer require phpoffice/phpword
As of version v0.12.1, you need to require Autoloader.php from the src/PhpWord folder and register it:
require_once 'src/PhpWord/Autoloader.php';
\PhpOffice\PhpWord\Autoloader::register();
1) Open document
$template = new \PhpOffice\PhpWord\TemplateProcessor('YOURDOCPATH');
2) Replace a string variable (single occurrence)
$template->setValue('variableName', 'MyVariableValue');
3) Replace string variables with multiple occurrences
- Clone your array placeholder to the count of your array
$template->cloneRow('arrayName', count($array));
- Replace variable value
for($number = 0; $number < count($array); $number++) {
$template->setValue('arrayName#'.($number+1), htmlspecialchars($array[$number], ENT_COMPAT, 'UTF-8'));
}
4) Save the changed document
$template->saveAs('PATHTOUPDATED.docx');
UPDATE
You can pass a limit as the third parameter to $template->setValue($search, $replace, $limit) to specify how many replacements should take place.
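For example, to replace only the first occurrence of a placeholder (the values are just illustrative):
$template->setValue('name', 'Santosh', 1); // only the first ${name} is replaced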
If you want a simple solution, you can use this library.
Example:
This code will replace $search with $replace in the $pathToDocx file.
$docx = new IRebega\DocxReplacer($pathToDocx);
$docx->replaceText($search, $replace);
The phpoffice/phpword library works fine.
For it to work correctly you must use the right placeholder syntax in your Word document, like this:
${name}
${lastname}
${officeAddress}
and for the setValue method you need to use only the names, like:
'name'
'lastname'
'officeAddress'
It works very well within Laravel, Lumen, and other frameworks.
Example:
//This is the main document in Template.docx file.
$file = public_path('template.docx');
$phpword = new \PhpOffice\PhpWord\TemplateProcessor($file);
$phpword->setValue('name','Santosh');
$phpword->setValue('lastname','Achari');
$phpword->setValue('officeAddress','Yahoo');
$phpword->saveAs('edited.docx');

Writing to a file adds weird content at the end of the line

I am working on a program that parses text files uploaded by a user and then saves the parsed XML file on the server. However, when I write the XML file I get the text &#13; at the end of each line. This text is not in my original text file. I didn't even notice it until I opened the new XML file to verify that it was writing all of the content. Has anyone run into this before, and if so, can you tell me if it's due to the way I'm creating and writing my file?
fileUpload.php - these lines run when the user uploads the file.
$fileName = basename($_FILES['fileaddress']['name']);
$fileContents = file_get_contents($_FILES['fileaddress']['tmp_name']);
$xml = $parser->parseUnformattedText($fileContents);
$parsedFileName = pathinfo($fileName, PATHINFO_FILENAME) . ".xml";
file_put_contents($parsedFileName, $xml);
parser.php
function parseUnformattedText($inputText, $bookName = "")
{
//create book, clause, text nodes
$book = new SimpleXmlElement("<book></book>");
$book->addAttribute("bookName", $bookName);
$conj = $book->addChild("conj", "X");
$clause = $book->addChild("clause");
$trimmedText = $this->trimNewLines($inputText);
$trimmedText = $this->trimSpaces($inputText);
$text = $clause->addChild("text", $trimmedText);
$this->addChapterVerse($text, "", "");
//make list of pconj's for beginning of file
$pconjs = $this->getPconjList();
//convert the xml to string
$xml = $book->asXml();
//combine the list of pconj's and xml string
$xml = "$pconjs\n$xml";
return $xml;
}
Input text file
1:1 X
it seemed good to me also,
X
having had perfect understanding of all things from the very first
to write you an orderly account, [most] excellent Theophilius
and
1:4
that
you may know the certainty of those things in which you were instructed
1:5 X
There was in the days of Herod, the king of Judea and a certain priest named Zacharias
X
his wife[was] of the daughters of Aaron
and
her name [was] Elizabeth.
1:8 So
it was,
that
while he was serving as priest 1:9 before God in the order of his division,
1:10 and
the whole multitude of the people was praying outside at the hour of incense
but
therefore
it was done.
Going off of Seroczynski's answer, I was able to create a function that removed any carriage returns from the text. The XML output looked fine after that. Here's the function I used to fix the issue:
function trimCarriageReturns($text)
{
$textOut = str_replace("\r", "\n", $text);
$textOut = str_replace("\n\n", "\n", $textOut);
return $textOut;
}
&#13; is the character reference for the carriage return (\r), which doesn't seem to come out correctly from parseUnformattedText().
Try $xml = nl2br($parser->parseUnformattedText($fileContents));

How to merge docx documents in PHP?

Does anyone know how to merge (concatenate) docx documents with PHP (or Python if it's not possible in PHP)?
To clarify, my server is Linux based. I have 2 existing docx documents; I need to combine them into a new docx document using PHP or possibly Python.
Merging two different Docx files may be very complicated, because headers, styles, charts, comments, user modification traces and other special contents are saved in separate inner XML sub-files in each Docx. Thus, two Docx files may have different objects sharing the same ids. So it would be a huge job to list all possible objects in the two documents, give them new inner ids, and re-assign them in a single one. Probably only MS Office can do this currently.
Nevertheless, if you know that your two documents to be merged have the same styles, and if you know you have no charts, headers and other special objects, then the merging becomes something quite easy to perform.
In this case, you only have to use a Zip reader, such as TbsZip, to open the first Docx file (which is technically a zip archive containing XML sub-files); then read the sub-file "word/document.xml" and extract the part between the tags <w:body> and </w:body>.
In the second Docx file, open "word/document.xml" and insert the previous content just before the tag </w:body>. Save the result in a new Docx file.
This can be done using TbsZip, like this:
<?php
include_once('tbszip.php');
$zip = new clsTbsZip();
// Open the first document
$zip->Open('doc1.docx');
$content1 = $zip->FileRead('word/document.xml');
$zip->Close();
// Extract the content of the first document
$p = strpos($content1, '<w:body');
if ($p===false) exit("Tag <w:body> not found in document 1.");
$p = strpos($content1, '>', $p);
$content1 = substr($content1, $p+1);
$p = strpos($content1, '</w:body>');
if ($p===false) exit("Tag </w:body> not found in document 1.");
$content1 = substr($content1, 0, $p);
// Insert into the second document
$zip->Open('doc2.docx');
$content2 = $zip->FileRead('word/document.xml');
$p = strpos($content2, '</w:body>');
if ($p===false) exit("Tag </w:body> not found in document 2.");
$content2 = substr_replace($content2, $content1, $p, 0);
$zip->FileReplace('word/document.xml', $content2, TBSZIP_STRING);
// Save the merge into a third file
$zip->Flush(TBSZIP_FILE, 'merge.docx');
You may merge two Word documents with PHPDocX with a single line of code: (Source: Merging Word documents with PHPDocX)
require_once 'path/classes/DocxUtilities.inc';
$newDocx = new DocxUtilities();
$myOptions = array('mergeType' => 0);
$newDocx->mergeDocx('firstWordDoc.docx', 'secondWordDoc.docx', 'mergedWord.docx',
$myOptions);
This merging lets you preserve the whole section structure (paper size, margins, associated footers and headers, ...), includes all the required styles, manages all lists (this may seem trivial but it is not so in the OOXML standard), and preserves images and charts as well as footnotes, endnotes and comments.
Moreover there is an option to preserve the original numberings (by default the page numbering continues).
One may also, via the mergeType option, discard the section structure of the merged document and add its content at the end of the first document as part of its last section. In this case, of course, the headers and footers are not imported, but all other elements are still preserved.
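For instance, assuming mergeType => 1 is the value that selects this behaviour (check the PHPDocX documentation for your version; this is an assumption, not confirmed here), the call looks the same apart from the option:
// Assumption: mergeType => 1 discards the merged document's section structure
// and appends its content to the last section of the first document.
$newDocx = new DocxUtilities();
$newDocx->mergeDocx('firstWordDoc.docx', 'secondWordDoc.docx', 'mergedWord.docx', array('mergeType' => 1));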
The Aspose.Words Cloud SDK for PHP can merge/join several Word documents into one Word document while keeping the formatting of the appended or destination document, depending on the ImportFormatMode parameter value. It is a commercial API, but the free pricing plan allows 150 free monthly API calls.
<?php
require_once('D:\xampp\htdocs\aspose-words-cloud-php-master\vendor\autoload.php');
//TODO: Get your ClientId and ClientSecret at https://dashboard.aspose.cloud (free registration is required).
$ClientSecret="xxxxxxxxxxxxxxxxxxxxxxxxxxxx";
$ClientId="xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx";
$wordsApi = new Aspose\Words\WordsApi($ClientId,$ClientSecret);
try {
$remoteDataFolder = "Temp";
$localFile = "C:/Temp/02_pages_adobe.docx";
$remoteFileName = "02_pages_adobe.docx";
$localFile1 = "C:/Temp/Sections.docx";
$remoteFileName1 = "Sections.docx";
$outputFileName = "TestAppendDocument.docx";
$uploadRequest = new Aspose\Words\Model\Requests\UploadFileRequest($localFile,$remoteDataFolder."/".$remoteFileName,null);
$wordsApi->uploadFile($uploadRequest);
$uploadRequest1 = new Aspose\Words\Model\Requests\UploadFileRequest($localFile1,$remoteDataFolder."/".$remoteFileName1,null);
$wordsApi->uploadFile($uploadRequest1);
$requestDocumentListDocumentEntries0 = new Aspose\Words\Model\DocumentEntry(array(
"href" => $remoteDataFolder . "/" . $remoteFileName1,
"import_format_mode" => "KeepSourceFormatting",
));
$requestDocumentListDocumentEntries = [
$requestDocumentListDocumentEntries0,
];
$requestDocumentList = new Aspose\Words\Model\DocumentEntryList(array(
"document_entries" => $requestDocumentListDocumentEntries,
));
$request = new Aspose\Words\Model\Requests\AppendDocumentRequest(
$remoteFileName,
$requestDocumentList,
$remoteDataFolder,
NULL,
NULL,
NULL,
$remoteDataFolder . "/" . $outputFileName,
NULL,
NULL
);
$result = $wordsApi->appendDocument($request);
// Download the file
$request = new Aspose\Words\Model\Requests\DownloadFileRequest($remoteDataFolder."/".$outputFileName,NULL,NULL);
$result = $wordsApi->downloadFile($request);
copy($result->getPathName(),"AppendOutput.docx");
} catch (Exception $e) {
echo "Something went wrong: ", $e->getMessage(), "\n";
PHP_EOL;
}
?>
P.S.: I'm a developer evangelist at Aspose.

Create Files Automatically using PHP script

I have a project that needs to create files using fwrite in PHP. I want to make it generic, so that each file is unique and doesn't overwrite the others.
I am creating a project that will record the text from a PHP form and save it as HTML, so I want the output to be generated-file1.html, generated-file2.html, etc. Thank you.
This will give you a count of the number of html files in a given directory
$filecount = count(glob("/Path/to/your/files/*.html"));
and then your new filename will be something like:
$generated_file_name = "generated-file".($filecount+1).".html";
and then fwrite using $generated_file_name
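Putting that together, a minimal sketch (the directory is a placeholder and $html_content stands in for whatever your form produced):
$dir = "/Path/to/your/files/";
$filecount = count(glob($dir . "*.html"));
$generated_file_name = $dir . "generated-file" . ($filecount + 1) . ".html";
// write the submitted HTML into the new file
$handle = fopen($generated_file_name, "w");
fwrite($handle, $html_content);
fclose($handle);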
Although I've had to do a similar thing recently and used uniqid instead, like this:
$generated_file_name = md5(uniqid(mt_rand(), true)).".html";
I would suggest using the time as the first part of the filename (as that should result in files being listed in chronological/alphabetical order), and then borrowing from @TomcatExodus to improve the chances of the filename being unique (in case two submissions are simultaneous).
<?php
$data = $_POST;
$md5 = md5( serialize( $data ) ); // hash the submitted data (md5() needs a string, not an array)
$time = time();
$filename_prefix = 'generated_file';
$filename_extn = 'htm';
$filename = $filename_prefix.'-'.$time.'-'.$md5.'.'.$filename_extn;
if( file_exists( $filename ) ){
# EXTREMELY UNLIKELY, unless two forms with the same content and at the same time are submitted
$filename = $filename_prefix.'-'.$time.'-'.$md5.'-'.uniqid().'.'.$filename_extn;
# IMPROBABLE that this will clash now...
}
if( file_exists( $filename ) ){
# Handle the Error Condition
}else{
file_put_contents( $filename , 'Whatever the File Content Should Be...' );
}
This would produce filenames like:
generated_file-1300080525-46ea0d5b246d2841744c26f72a86fc29.htm
generated_file-1300092315-5d350416626ab6bd2868aa84fe10f70c.htm
generated_file-1300109456-77eae508ae79df1ba5e2b2ada645e2ee.htm
If you want to make absolutely sure that you will not overwrite an existing file you could append a uniqid() to the filename. If you want it to be sequential you'll have to read existing files from your filesystem and calculate the next increment which can result in an IO overhead.
I'd go with the uniqid() method :)
If your implementation should result in unique form results every time (therefore unique files) you could hash form data into a filename, giving you unique paths, as well as the opportunity to quickly sort out duplicates;
// capture all posted form data into an array
// validate and sanitize as necessary
$data = $_POST;
// hash data for filename
$fname = md5(serialize($data));
$fpath = 'path/to/dir/' . $fname . '.html';
if(!file_exists($fpath)){
//write data to $fpath
}
Do something like this:
$i = 0;
while (file_exists("file-".$i.".html")) {
$i++;
}
$file = fopen("file-".$i.".html", "w");

Image upload storage strategies

When a user uploads an image to my site, the image goes through this process;
user uploads pic
store pic metadata in db, giving the image a unique id
async image processing (thumbnail creation, cropping, etc)
all images are stored in the same uploads folder
So far the site is pretty small, and there are only ~200,000 images in the uploads directory. I realise I'm nowhere near the physical limit of files within a directory, but this approach clearly won't scale, so I was wondering if anyone had any advice on upload / storage strategies for handling large volumes of image uploads.
EDIT:
Creating username (or more specifically, userid) subfolders would seem to be a good solution. With a bit more digging, I've found some great info right here: How to store images in your filesystem
However, would this userid dir approach scale well if a CDN is brought into the equation?
I've answered a similar question before but I can't find it, maybe the OP deleted his question...
Anyway, Adam's solution seems to be the best so far, yet it isn't bulletproof since images/c/cf/ (or any other dir/subdir pair) could still contain up to 16^30 unique hashes and at least 3 times more files if we count image extensions, a lot more than any regular file system can handle.
AFAIK, SourceForge.net also uses this system for project repositories, for instance the "fatfree" project would be placed at projects/f/fa/fatfree/, however I believe they limit project names to 8 chars.
I would store the image hash in the database along with a DATE / DATETIME / TIMESTAMP field indicating when the image was uploaded / processed and then place the image in a structure like this:
images/
2010/ - Year
04/ - Month
19/ - Day
231c2ee287d639adda1cdb44c189ae93.png - Image Hash
Or:
images/
2010/ - Year
0419/ - Month & Day (12 * 31 = 372)
231c2ee287d639adda1cdb44c189ae93.png - Image Hash
Besides being more descriptive, this structure is enough to host hundreds of thousands (depending on your file system limits) of images per day for several thousand years; this is the way WordPress and others do it, and I think they got it right on this one.
Duplicated images could be easily queried on the database and you'd just have to create symlinks.
Of course, if this is not enough for you, you can always add more subdirs (hours, minutes, ...).
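A minimal sketch of building such a path on upload (the content hash and the .png extension are just illustrative; use whatever identifier you store in the database):
// Derive the storage path from today's date and a hash of the uploaded file.
$hash = md5_file($_FILES['image']['tmp_name']);
$dir = sprintf('images/%s/%s/%s', date('Y'), date('m'), date('d'));
if (!is_dir($dir)) {
    mkdir($dir, 0775, true); // create the year/month/day levels as needed
}
move_uploaded_file($_FILES['image']['tmp_name'], $dir . '/' . $hash . '.png');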
Personally I wouldn't use user IDs unless you don't have that info available in your database, because:
Disclosure of usernames in the URL
Usernames are volatile (you may be able to rename folders, but still...)
A user can hypothetically upload a large number of images
Serves no purpose (?)
Regarding the CDN I don't see any reason this scheme (or any other) wouldn't work...
MediaWiki generates the MD5 sum of the name of the uploaded file, and uses the first two letters of the MD5 (say, "c" and "f" of the sum "cf1e66b77918167a6b6b972c12b1c00d") to create this directory structure:
images/c/cf/Whatever_filename.png
You could also use the image ID for a predictable upper limit on the number of files per directory. Maybe take floor(image unique ID / 1000) to determine the parent directory, for 1000 images per directory.
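A short sketch of both variants ($originalName and $imageId are illustrative inputs, not values from the question):
// MediaWiki-style: first one and first two hex chars of the md5 of the filename.
$md5 = md5($originalName);
$mediaWikiPath = sprintf('images/%s/%s/%s', $md5[0], substr($md5, 0, 2), $originalName);

// ID-bucket style: at most 1000 images per directory, keyed off the numeric image id.
$bucket = floor($imageId / 1000);
$idPath = sprintf('images/%d/%d.png', $bucket, $imageId);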
Yes, yes, I know this is an ancient topic. But the problem of storing a large number of images and organizing the underlying folder structure remains, so I present the way I handle it in the hope it might help some people.
The idea of using an md5 hash is the best way to handle massive image storage. Keeping in mind that different values might produce the same hash, I strongly suggest also adding the user id or nickname to the path to make it unique. Yep, that's all that's needed. If someone has different users with the same database id, well, something is wrong ;) So root_path/md5_hash/user_id is everything you need to do it properly.
Using DATE / DATETIME / TIMESTAMP is not the optimal solution, by the way, IMO. You end up with big clusters of image folders on a busy day and nearly empty ones on less frequented days. Not sure this leads to performance problems, but there is such a thing as data aesthetics, and a consistent data distribution is always superior.
So I clearly go for the hash solution.
I wrote the following function to make it easy to generate such hash based storage paths. Feel free to use it if you like it.
/**
* Generates directory path using $user_id md5 hash for massive image storing
* @author Hexodus
* @param string $user_id numeric user id
* @param string $user_root_raw root directory string
* @return null|string
*/
function getUserImagePath($user_id = null, $user_root_raw = "images/users", $padding_length = 16,
$split_length = 3, $hash_length = 12, $hide_leftover = true)
{
// our db user_id should be nummeric
if (!is_numeric($user_id))
return null;
// clean trailing slashes
$user_root_rtrim = rtrim( $user_root_raw, '/\\' );
$user_root_ltrim = ltrim( $user_root_rtrim, '/\\' );
$user_root = $user_root_ltrim;
$user_id_padded = str_pad($user_id, $padding_length, "0", STR_PAD_LEFT); //pad it with zeros
$user_hash = md5($user_id); // build md5 hash
$user_hash_partial = $hash_length >=1 && $hash_length < 32
? substr($user_hash, 0, $hash_length) : $user_hash;
$user_hash_leftover = $hash_length < 32 ? substr($user_hash, $hash_length) : null;
$user_hash_splitted = str_split($user_hash_partial, $split_length); //split in chunks
$user_hash_imploded = implode("/", $user_hash_splitted); //glue array chunks with slashes
if ($hide_leftover || !$user_hash_leftover)
$user_image_path = "{$user_root}/{$user_hash_imploded}/{$user_id_padded}"; //build final path
else
$user_image_path = "{$user_root}/{$user_hash_imploded}/{$user_hash_leftover}/{$user_id_padded}"; //build final path plus leftover
return $user_image_path;
}
Function test calls:
$user_id = "1394";
$user_root = "images/users";
$user_hash = md5($user_id);
$path_sample_basic = getUserImagePath($user_id);
$path_sample_advanced = getUserImagePath($user_id, "images/users", 8, 4, 12, false);
echo "<pre>hash: {$user_hash}</pre>";
echo "<pre>basic:<br>{$path_sample_basic}</pre>";
echo "<pre>customized:<br>{$path_sample_advanced}</pre>";
echo "<br><br>";
The resulting output - colorized for your convenience ;):
Have you thought about using something like Amazon S3 to store the files? I run a photo hosting company and after quickly reaching limits on our own server, we switched over to AmazonS3. The beauty of S3 is that there are no limits like inodes and what not, you just keep throwing files at it.
Also: If you don't like S3, you can always try and break it down into subfolders as much as you can:
/userid/year/month/day/photoid.jpg
You can convert a username to MD5 and use the first 2-3 letters of the MD5-converted username as a folder for the avatars; for images you can combine time, random strings, ids and names.
8648b8f3ce06a7cc57cf6fb931c91c55 - devcline
You can also use the first letter of the username or id for the next folder, or the inverse.
It will look like
Structure:
stream/img/86/8b8f3ce06a7cc57cf6fb931c91c55.png //simplest
stream/img/d/2/0bbb630d63262dd66d2fdde8661a410075.png //first letter and id folders
stream/img/864/d/8b8f3ce06a7cc57cf6fb931c91c55.png // with first letter of the nick
stream/img/864/2/8b8f3ce06a7cc57cf6fb931c91c55.png //with unique id
stream/img/2864/8b8f3ce06a7cc57cf6fb931c91c55.png //with unique id in 3 letters
stream/img/864/2_8b8f3ce06a7cc57cf6fb931c91c55.png //with unique id in picture name
Code
$username_md5 = md5($username); // md5 hash of the username
$username_first = $username[0]; // the first letter of the username
$username_md5_rest = substr($username_md5, 1); // the md5 hash with its first letter cut off
$randomname = uniqid($userid).md5(time()); // generate a random name based on the id
You can also try it with base64:
$image_encode = strtr(base64_encode($imagename), '+/=', '-_,');
$image_decode = base64_decode(strtr($imagename, '-_,', '+/='));
Steam and DokuWiki use this structure.
You might consider the open source http://danga.com/mogilefs/ as it is perfect for what you're doing. It'll take you from thinking about folders to namespaces (which could be users) and let it store your images for you. The best part is you don't have to care how the data is stored. It makes it completely redundant, and you can even set controls around how redundant thumbnails are as well.
I've got a solution I've been using for a long time. It's quite old code and could be further optimised, but it still serves well as it is.
It's an immutable function creating a directory structure based on:
A number that identifies the image (FILE ID):
it's recommended that this number is unique for the base directory, like a primary key in a database table, but it's not required.
The base directory
The maximum desired number of files and first-level subdirectories. This promise can be kept only if every FILE ID is unique.
Example of usage:
Using an explicit FILE ID:
$fileName = 'my_image_05464hdfgf.jpg';
$fileId = 65347;
$baseDir = '/home/my_site/www/images/';
$baseURL = 'http://my_site.com/images/';
$clusteredDir = \DirCluster::getClusterDir( $fileId );
$targetDir = $baseDir . $clusteredDir;
$targetPath = $targetDir . $fileName;
$targetURL = $baseURL . $clusteredDir . $fileName;
Using file name, number = crc32( filename )
$fileName = 'my_image_05464hdfgf.jpg';
$baseDir = '/home/my_site/www/images/';
$baseURL = 'http://my_site.com/images/';
$clusteredDir = \DirCluster::getClusterDir( $fileName );
$targetDir = $baseDir . $clusteredDir;
$targetURL = $baseURL . $clusteredDir . $fileName;
Code:
class DirCluster {
/**
* @param mixed $fileId - numeric FILE ID or file name
* @param int $maxFiles - max files in one dir
* @param int $maxDirs - max 1st lvl subdirs in one dir
* @param boolean $createDirs - create dirs?
* @param string $path - base path used when creating dirs
* @return boolean|string
*/
public static function getClusterDir($fileId, $maxFiles = 100, $maxDirs = 10,
$createDirs = false, $path = "") {
// Value for return
$rt = '';
// If $fileId is not numeric - let's create a crc32
if (!is_numeric($fileId)) {
$fileId = crc32($fileId);
}
if ($fileId < 0) {
$fileId = abs($fileId);
}
if ($createDirs) {
if (!file_exists($path))
{
// Check out the rights - 0775 may be not the best for you
if (!mkdir($path, 0775)) {
return false;
}
#chmod($path, 0775);
}
}
if ( $fileId <= 0 || $fileId <= $maxFiles ) {
return $rt;
}
// Rest from dividing
$restId = $fileId%$maxFiles;
$formattedFileId = $fileId - $restId;
// How many directories is needed to place file
$howMuchDirs = $formattedFileId / $maxFiles;
while ($howMuchDirs > $maxDirs)
{
$r = $howMuchDirs%$maxDirs;
$howMuchDirs -= $r;
$howMuchDirs = $howMuchDirs/$maxDirs;
$rt .= $r . '/'; // DIRECTORY_SEPARATOR = /
if ($createDirs)
{
$prt = $path.$rt;
if (!file_exists($prt))
{
mkdir($prt);
#chmod($prt, 0775);
}
}
}
$rt .= $howMuchDirs-1;
if ($createDirs)
{
$prt = $path.$rt;
if (!file_exists($prt))
{
mkdir($prt);
#chmod($prt, 0775);
}
}
$rt .= '/'; // DIRECTORY_SEPARATOR
return $rt;
}
}
