Is it possible to combine ImageNow with JavaScript and PHP?

I currently have an internal site for my company where our customer support users upload files from our clients. Originally, I planned on using a protected, shared network folder and a MySQL table to hold each file's name and path. However, we also use ImageNow for other processes. Does anyone know if ImageNow works with JavaScript and PHP outside of the software itself? I'm new to ImageNow, so any advice is appreciated.

var logObArray = getDocLogobArray(workingDoc);
for (var i = 0; i < logObArray.length; i++)
{
    var docObj = logObArray[i];
    var filePath = docObj.filePath;
    var fileType = docObj.fileType;
    var ftToCheck = fileType.toUpperCase();
    var phsobID = docObj.phsobId;
    // Write the OSM path to the file. You'll have to add the surrounding code
    // (e.g. opening outCsvFP with Clib.fopen), but the premise is correct and tested.
    var outRow = filePath + '\n';
    if (Clib.fputs(outRow, outCsvFP) >= 0)
    {
        debug.log('DEBUG', 'Wrote OSM Path [%s] to file successfully.\n', filePath);
        stats.inc('Wrote OSM Path to file');
    }
}

Unfortunately, ImageNow doesn't let you get at the information it stores outside of the tools Perceptive Software provides. Even if you dig directly into the SQL database and look at the filesystem where it stores the files, you can't get the information out. ImageNow stores the files unencrypted on the filesystem, so that part is fine, and it stores the metadata for those images in easy-to-search database tables. However, it encrypts the path linking the metadata to the filesystem before storing it in the database. So if you are trying to go from the metadata to the images, the farthest you can get is the encrypted path; without the decryption key, you can't get to the images.
However, there is a way to write code that uses ImageNow data: the Message Agent add-on, which you need to purchase from Perceptive. It opens up web service and SOAP interfaces for getting at the ImageNow data; a sketch of the client side follows.
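For illustration, the PHP side could then use the built-in SoapClient. This is only a hypothetical sketch: the WSDL URL and the getDocument call are placeholders, not the real Message Agent API, which you would take from its documentation. Only PHP's SoapClient itself is standard.
<?php
// Hypothetical sketch: the WSDL URL and method name are assumptions,
// not the real Message Agent API.
$client = new SoapClient('http://imagenow-server/messageagent?wsdl');
try {
    // Assumed method and parameters, for illustration only.
    $result = $client->__soapCall('getDocument', array(array('documentId' => 12345)));
    var_dump($result);
} catch (SoapFault $e) {
    echo 'SOAP error: ' . $e->getMessage();
}
?>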

This is the complete solution. It gets the root file and the subsequent pages; every other solution I've found only retrieves the first page of the scanned document. Change the drawer name ('AR' in the query below) to your own drawer name, by the way. I hope this helps someone. Companies that lock down people's content really make me mad. Just use the intool.exe utility, located in the /bin folder of your installation. The call is: intool --cmd run-iscript --file yourfile.js
var curDocId = 0;
var more = true;
// printf("curDocId : %s\n", curDocId);
while (more) {
    var rulestext = "[drawer] = 'AR' AND [docID] > '" + curDocId + "'";
    var items = INDocManager.getDocumentsByVslQuery(rulestext, 1000, more, "DOCUMENT_ID");
    var start = items[0];
    var dataDesc = new Array();
    var headerDelim = "\03";
    var dataDelim = "\02";
    // Build a lookup of column name -> index/type from the header rows
    for (var line = 1; line <= start; line++) {
        var temp = items[line].split(headerDelim);
        dataDesc[temp[1].toUpperCase()] = new Object();
        dataDesc[temp[1].toUpperCase()].idx = line - 1;
        dataDesc[temp[1].toUpperCase()].name = temp[1];
        dataDesc[temp[1].toUpperCase()].datatype = temp[2];
    }
    for ( ; line < items.length; line++) {
        var doc = new INDocument(items[line].split(dataDelim)[dataDesc["DOCUMENT ID"].idx]);
        doc.id = items[line].split(dataDelim)[dataDesc["DOCUMENT ID"].idx];
        doc.getInfo();
        var masterDocId = doc.id;
        var itCounter = 150;
        // Walk the pages of the document until retrieveObject comes back empty
        for (var i = 1; i <= itCounter; i++) {
            doc.getInfo();
            var logob = INLogicalObject(doc.id, -1, i);
            logob.retrieveObject();
            if (logob && logob.logobCount > 0) {
                var fp = Clib.fopen("c:\\inowoutput.txt", "a");
                // NOTE: named outRow rather than "line" so the outer loop counter isn't clobbered
                var outRow = masterDocId + ',' + logob.id + ',' + logob.workingName + ',' + logob.filePath + '\n';
                Clib.fputs(outRow, fp);
                Clib.fclose(fp);
            } else {
                break;
            }
        }
        curDocId = doc.id;
    }
    // printf("curDocId : %s\n", curDocId);
}
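To tie this back to the original question: since intool is a command-line utility, a PHP script could launch the iScript above and then read back the CSV it produces. A minimal sketch; the intool and script paths are assumptions for a typical install, so adjust for yours:
<?php
// Sketch only: paths are assumptions based on a typical ImageNow install.
$intool = 'C:\\inserver6\\bin\\intool.exe';
$script = 'C:\\scripts\\export_paths.js'; // the iScript above, saved to disk
shell_exec("\"$intool\" --cmd run-iscript --file \"$script\"");
// The iScript appends rows to c:\inowoutput.txt; read them back in PHP.
$rows = file('c:\\inowoutput.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
foreach ($rows as $row) {
    list($docId, $logobId, $name, $path) = explode(',', $row, 4);
    // ... store in MySQL, etc.
}
?>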

ImageNow has a scripting language that lets you get past the encrypted file path in the database. The file path is available in an undocumented member of INLogicalObject. The details below are taken from the blog article "Accessing the encrypted file path in ImageNow".
A search of the ImageNow 6.x object documentation shows that INLogicalObject provides information about the actual files stored in the file system, but it does not document any file path information. A closer look under the hood reveals that the object does have a file path member, and its value is not encrypted. The following very simple example finds a single document and prints its file type and unencrypted file path to the console.
// get a single document
var more;
var results = INDocManager.getDocumentsBySqlQuery("", 1, more);
if (results)
{
    var doc = results[0];
    doc.getInfo();
    // get a single page for the document
    var logob = INLogicalObject(doc.id, -1, 1);
    logob.retrieveObject();
    printf("file type: %s\n", logob.filetype);             // this member is in the documentation
    printf("unencrypted file path: %s\n", logob.filepath); // this member is not in the documentation
}

Check out the External Messaging Agent (EMA) functionality in ImageNow. It's a free module that is available in every installation.
EMA allows you to receive data from outside the ImageNow system (e.g. from a PHP web form).
To use EMA, you would merely need to have the PHP script insert into the IN_EXTERN_MSG and IN_EXTERN_MSG_PROP tables; one of the properties could be the location of the file that was uploaded via PHP. A sketch of that insert is below.
You'd then need an iScript to parse the data from the EMA tables and create a document in ImageNow.
I've built a solution like this before, and it works pretty well.
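A minimal sketch of the PHP side of that insert; the table names come from the answer above, but every column name here is a hypothetical stand-in, so check the actual schema in your installation:
<?php
// Sketch only: every column name is a hypothetical stand-in --
// check the real IN_EXTERN_MSG / IN_EXTERN_MSG_PROP schemas in your install.
$db = new PDO('odbc:imagenow', 'db_user', 'db_pass');
$msgId = uniqid('msg');
$db->beginTransaction();
$stmt = $db->prepare('INSERT INTO IN_EXTERN_MSG (msg_id, msg_type, msg_status) VALUES (?, ?, ?)');
$stmt->execute(array($msgId, 'UPLOAD', 'NEW'));
$stmt = $db->prepare('INSERT INTO IN_EXTERN_MSG_PROP (msg_id, prop_name, prop_value) VALUES (?, ?, ?)');
$stmt->execute(array($msgId, 'FILE_PATH', '/uploads/' . basename($uploadedFile)));
$db->commit();
?>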

Related

php scraper scripts need to be changed

This script harvests links from a seed URL and only prints them to the command shell (or browser) rather than saving them elsewhere. I want the script to store the output in a .txt file within the folder where the script resides. I need suggestions on what would be an efficient way to do that. Please give me hints.
<?php
# Initialization
include("LIB_http.php");              // http library
include("LIB_parse.php");             // parse library
include("LIB_resolve_addresses.php"); // address resolution library
include("LIB_exclusion_list.php");    // list of excluded keywords
include("LIB_simple_spider.php");     // spider routines used by this app.
set_time_limit(3600);                 // Don't let PHP timeout
$SEED_URL = "http://www.schrenk.com"; // First URL spider downloads
$MAX_PENETRATION = 1;                 // Set spider penetration depth
$FETCH_DELAY = 1;                     // Wait one second between page fetches
$ALLOW_OFFSITE = false;               // Don't allow spider to roam from the SEED_URL's domain
$spider_array = array();

# Get links from $SEED_URL
echo "Harvesting Seed URL \n";
$temp_link_array = harvest_links($SEED_URL);
$spider_array = archive_links($spider_array, 0, $temp_link_array);

# Spider links in remaining penetration levels
for ($penetration_level = 1; $penetration_level <= $MAX_PENETRATION; $penetration_level++)
{
    $previous_level = $penetration_level - 1;
    for ($xx = 0; $xx < count($spider_array[$previous_level]); $xx++)
    {
        unset($temp_link_array);
        $temp_link_array = harvest_links($spider_array[$previous_level][$xx]);
        echo "Level=$penetration_level, xx=$xx of " . count($spider_array[$previous_level]) . " <br>\n";
        $spider_array = archive_links($spider_array, $penetration_level, $temp_link_array);
    }
}
?>
Use PHP's file_put_contents function with the FILE_APPEND flag enabled:
$file = 'file_name.txt';
file_put_contents($file, $text_to_write_to_file, FILE_APPEND);
Ref: http://www.php.net/manual/en/function.file-put-contents.php
I would recommend first creating a variable to store the output in the script. So at the top (under $spider_array = array()) add:
$output = "";
Then change all the lines that use echo to $output .= instead, as in the example below.
This will store all the content that was being sent to the screen or browser in the $output variable.
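For example, the progress line inside the spider loop above would change from echo to appending:
// Before: prints to the shell/browser
// echo "Level=$penetration_level, xx=$xx of " . count($spider_array[$previous_level]) . " <br>\n";

// After: accumulates into $output for writing to a file later
$output .= "Level=$penetration_level, xx=$xx of " . count($spider_array[$previous_level]) . "\n";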
Now at the bottom of the script, after everything has been scraped and the spider is finished, save the output to a file:
$filename = date('Y_m_d_H_i_s') . '.txt';
$filepath = dirname(__FILE__);
file_put_contents($filepath . '/' . $filename, $output);
This should save the output in a file within the same folder as the script, with a date/time file name. (This code was written using examples from php.net; the exact implementation may need a bit of debugging, but it should get you close enough.)

FLASH AS2 and PHP variables

My Flash movie reads and sends data to a PHP file on a free server. Flash seems fine reading variable values from a text file (which is managed by a PHP file) as long as they are written in this form: &variable=value&. I have no problem with that.
But my PHP file is supposed to pre-process the data sent by Flash (with some mathematical functions) and then update the values in the text file. That is my intention, but I can't accomplish it.
Suppose I want to update a counter (it counts how many times the data were updated):
in the text file I have &counter=0& (the initial value), and in the PHP file I put:
<?php
$fp = fopen("jose_stats.txt", "r"); // I guess with it, I've read all the variables and values;
                                    // one of them is the variable &counter.
fclose($fp);
$toSave = "&counter=&counter+1&\n";
$fp = fopen("jose_stats.txt", "w");
if (fwrite($fp, "$toSave")) {
    echo "&verify=success&"; // prints to screen &verify=success, which Flash will read
                             // and store as myVars.verify
} else {                     // simple if statement
    echo "&verify=fail&";    // prints to screen &verify=fail, which Flash will read
                             // and store as myVars.verify
}
fclose($fp);
?>
But then I check my text file and it contains the literal line &counter=&counter+1& :( and not the expected &counter=1&.
Please give me some advice. Thank you.
Why not use JSON?
Just store the data in JSON format:
$count = 1;
$toWrite = array('count' => $count); // put other data into this array if you want
// encode it
$toWrite = json_encode($toWrite);
// and now write the data
file_put_contents('jose_stats.txt', $toWrite);
To decode it in flash, import the JSON class:
An example of JSON in as2 using the JSON.as class:
try {
    var o:Object = JSON.parse(jsonStr);
    var s:String = JSON.stringify(obj);
} catch(ex) {
    trace(ex.name + ":" + ex.message + ":" + ex.at + ":" + ex.text);
}
So just import the class and run JSON.parse(yourPhpResponse);.
Also, the reason you're seeing &counter=&counter+1& in the text file is that you're storing exactly that literal string: $toSave = "&counter=&counter+1&\n";. PHP never parses the old value out of the file or evaluates the +1. A sketch of a fix that keeps your &variable=value& format follows.
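If you want to keep the &variable=value& format instead of switching to JSON, the counter has to be read back, parsed, incremented, and rewritten. A minimal sketch reusing jose_stats.txt from the question:
<?php
// Read the current contents, e.g. "&counter=0&"
$raw = file_get_contents("jose_stats.txt");
// Strip the surrounding '&' delimiters and parse into an array
parse_str(trim($raw, "&\n"), $vars);
$counter = isset($vars['counter']) ? (int)$vars['counter'] : 0;
// Increment and write the new value back in the same format
$counter++;
if (file_put_contents("jose_stats.txt", "&counter={$counter}&\n") !== false) {
    echo "&verify=success&";
} else {
    echo "&verify=fail&";
}
?>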

Read remote mp3 file information using php

I am working on a site which will fetch mp3 details from a remote URL. I need to write a cron job that gets all the song information, such as the file name, path, artist, genre, bitrate, playing time, etc., and puts it in a database table.
I tried the getID3 package, but it is very slow when fetching more than one URL at a time, and I get a maximum-execution-time error.
Example:
require_once('getid/getid3/getid3.php');
$urls = array('http://stackoverflow.com/test1.mp3', 'http://stackoverflow.com/test2.mp3', 'http://stackoverflow.com/test3.mp3');
foreach ($urls as $ur) {
    $mp3_det = getMp3Info($ur);
    print_r($mp3_det);
}
function getMp3Info($url) {
    if ($url) {
        /**********/
        // Download only the first ~35 KB; the ID3v2 tag lives at the start of the file
        $filename = tempnam('/tmp', 'getid3');
        if (file_put_contents($filename, file_get_contents($url, false, null, 0, 35000))) {
            $getID3 = new getID3; // getid3.php was already required above
            $ThisFileInfo = $getID3->analyze($filename);
            unlink($filename);
            $bitratez = isset($ThisFileInfo['audio']['bitrate']) ? $ThisFileInfo['audio']['bitrate'] : '';
            $headers = get_headers($url, 1);
            if (!array_key_exists("Content-Length", $headers)) { return false; }
            // print $headers["Content-Length"];
            $filesize = round($headers["Content-Length"] / 1000);
            $contentLengthKBITS = $filesize * 8;
            if ($bitratez) {
                $bitrate = round($bitratez / 1000);
                $seconds = $contentLengthKBITS / $bitrate;
                $playtime_mins = floor($seconds / 60);
                // zero-pad single-digit seconds
                $playtime_secs = str_pad($seconds % 60, 2, '0', STR_PAD_LEFT);
                $playtime_string = $playtime_mins . ':' . $playtime_secs;
            } else {
                $bitrate = 0;
                $playtime_string = '0:00';
            }
            // echo '<pre>'.print_r($ThisFileInfo, true).'</pre>';
            $ret = array();
            $ret['playtime'] = $playtime_string;
            $ret['filesize'] = $filesize;
            $ret['bitrate']  = $bitrate;
            return $ret;
        }
    }
    return false;
}
You may be able to improve the execution time by opening a socket connection, reading in chunks of the file at a time, and repeatedly trying to analyze what you have so far.
Since ID3v2 data is stored at the beginning of the mp3, there is no point in downloading the entire thing. The biggest problem I see right now is that the analyze function only takes a filename, not binary data (which is what you would have). So you would have to either update that code or write a similar analyze function that works with your binary data; see the sketch below.
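A middle ground that keeps getID3's file-based analyze(): stream only the first chunk of the remote file to a temporary file yourself, then analyze that. A minimal sketch, assuming allow_url_fopen is enabled; the 64 KB limit is an arbitrary assumption, and a large embedded album art could exceed it:
<?php
require_once('getid/getid3/getid3.php');

// Stream only the first $maxBytes of a remote file into a temp file,
// then let getID3 analyze the partial download.
function analyzeRemoteHead($url, $maxBytes = 65536) {
    $in = fopen($url, 'rb'); // requires allow_url_fopen
    if (!$in) { return false; }
    $tmp = tempnam(sys_get_temp_dir(), 'id3');
    $out = fopen($tmp, 'wb');
    $read = 0;
    while (!feof($in) && $read < $maxBytes) {
        $chunk = fread($in, 8192); // pull down 8 KB at a time
        fwrite($out, $chunk);
        $read += strlen($chunk);
    }
    fclose($in);
    fclose($out);
    $getID3 = new getID3;
    $info = $getID3->analyze($tmp); // analyze() needs a file path, hence the temp file
    unlink($tmp);
    return $info;
}
?>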
MP3 files carry metadata in much the same way as some other binary file formats: in ID tags. There are several versions of these tags, such as ID3v1 and ID3v2. There is an easy way to extract the ID3 tag information supplied with an MP3 file through PHP.
You need to download a PHP library such as getID3 from SourceForge. This way you can extract the artist name, genre, duration, length, size, and other information from an mp3 file; ID3v2 can also contain additional information such as album art. See the sketch below.
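For completeness, a minimal sketch of pulling tag fields out of getID3's result for a local file. Which keys exist depends on the tags actually present in the file, so the id3v2 lookups below are guarded:
<?php
require_once('getid/getid3/getid3.php');

$getID3 = new getID3;
$info   = $getID3->analyze('song.mp3');

// Tag values are arrays of strings, keyed by tag type (id3v2, id3v1, ...)
$artist = isset($info['tags']['id3v2']['artist'][0]) ? $info['tags']['id3v2']['artist'][0] : 'unknown';
$genre  = isset($info['tags']['id3v2']['genre'][0])  ? $info['tags']['id3v2']['genre'][0]  : 'unknown';
echo "artist: $artist, genre: $genre, playtime: {$info['playtime_string']}\n";
?>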

Using FPDI and FPDF to generate slightly different pdf files

I first import a PDF using FPDI to make an FPDF object, and then perform several changes on that PDF. I clone it to make a custom PDF, just adding some text. Then I output the two files to disk, but only one is created, and I get a fatal error for the second output:
Fatal error: Call to undefined method stdClass::closeFile() in C:\Program Files\EasyPHP 3.0\www\oursin\oursin\public\scripts\FPDI\fpdi.php on line 534
pieces of my code:
$pdf = new FPDI('L','mm',array(291.6,456));
$fichier=$repertoireGrilles.'GR_IFR.pdf';
$pdf->setSourceFile($fichier);
// add a page
$tplIdx = $pdf->importPage(1);
$pdf->AddPage();
$pdf->useTemplate($tplIdx,0,0,0);
// ...
// methods on $pdf
// ...
$pdfCopie=clone $pdf;
// methods on $pdfCopie
$pdfCopie-> Output($repertoireGrilles.'grillesQuotidiennes/'.$date.'/Grille_'.$date.'_'.$ou.'_copie.pdf','F');
$pdf-> Output($repertoireGrilles.'grillesQuotidiennes/'.$date.'/Grille_'.$date.'_'.$ou.'.pdf','F');
Can anybody help me tackle this issue that has kept my brain under high pressure for hours (days)? :)
Cloning, forking, copying, any of that is really dirty. You will have a very hard time with outputs if you take that route. Instead, consider this approach:
Make multiple AJAX calls to a single PHP file, passing a pid value to differentiate between them.
Go through the exact same document setup for FPDI. This is far more consistent than cloning, forking, or copying.
Check pid and do different things to the different documents after all the setup is done.
Output the documents.
Here is my jQuery:
$(document).ready(function(){
    var i;
    for( i = 0; i <= 1; i++ )
    {
        $.ajax({
            url: 'pdfpid.php',
            data: {
                pid: i,
                pdf: 'document.pdf'
            },
            type: 'post'
        });
    }
});
As you can see, it's pretty simple. pdfpid.php is the name of the file that will generate and process the documents. In this case, I want the document with a pid of 0 to be my "original" and the one with a pid of 1 to be the "cloned" document.
// Ensure that POST came in correctly
if( !array_key_exists('pid', $_POST) || !array_key_exists('pdf', $_POST) )
    exit();

// Populate necessary variables from $_POST
$pid = intval($_POST['pid']);
$src = $_POST['pdf'];

// Setup the PDF document
$pdf = new FPDI();
$pdf->setSourceFile($src);
$templateID = $pdf->importPage(1);
$pdf->addPage();
$pdf->useTemplate($templateID);
$pdf->SetFont('Arial', 'B', 24);

switch( $pid )
{
    case 0:
        // "Parent" document
        $pdf->Text(10, 10, "ORIGINAL");
        $filename = "original.pdf";
        break;
    case 1:
        // "Child" document
        $pdf->Text(10, 10, "CLONED");
        $filename = "cloned.pdf";
        break;
    default:
        break;
}

$pdf->Output($filename, 'F');
I got both documents as an output, with the unique modifications between the "parent" and the "child" all in place.

Image upload storage strategies

When a user uploads an image to my site, the image goes through this process;
user uploads pic
store pic metadata in db, giving the image a unique id
async image processing (thumbnail creation, cropping, etc)
all images are stored in the same uploads folder
So far the site is pretty small, and there are only ~200,000 images in the uploads directory. I realise I'm nowhere near the physical limit of files within a directory, but this approach clearly won't scale, so I was wondering if anyone had any advice on upload / storage strategies for handling large volumes of image uploads.
EDIT:
Creating username (or more specifically, userid) subfolders would seem to be a good solution. With a bit more digging, I've found some great info right here: How to store images in your filesystem
However, would this userid dir approach scale well if a CDN is brought into the equation?
I've answered a similar question before but I can't find it; maybe the OP deleted his question...
Anyway, Adam's solution seems to be the best so far, yet it isn't bulletproof, since images/c/cf/ (or any other dir/subdir pair) could still contain up to 16^30 unique hashes, and at least 3 times more files if we count image extensions: a lot more than any regular file system can handle.
AFAIK, SourceForge.net also uses this system for project repositories; for instance, the "fatfree" project would be placed at projects/f/fa/fatfree/. However, I believe they limit project names to 8 chars.
I would store the image hash in the database along with a DATE / DATETIME / TIMESTAMP field indicating when the image was uploaded / processed and then place the image in a structure like this:
images/
    2010/                                     - Year
        04/                                   - Month
            19/                               - Day
                231c2ee287d639adda1cdb44c189ae93.png - Image Hash
Or:
images/
    2010/                                     - Year
        0419/                                 - Month & Day (12 * 31 = 372)
            231c2ee287d639adda1cdb44c189ae93.png - Image Hash
Besides being more descriptive, this structure is enough to host hundreds of thousands (depending on your file system limits) of images per day for several thousand years, this is the way Wordpress and others do it, and I think they got it right on this one.
Duplicated images could be easily queried on the database and you'd just have to create symlinks.
Of course, if this is not enough for you, you can always add more subdirs (hours, minutes, ...).
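For what it's worth, a minimal PHP sketch of filing an upload under such a date-based path (the .png extension is hard-coded here just to mirror the example above; real code would derive it from the upload):
<?php
// Hash the file contents and file it under year/month/day.
$hash = md5_file($_FILES['image']['tmp_name']);
$dir  = 'images/' . date('Y/m/d'); // e.g. images/2010/04/19
if (!is_dir($dir)) {
    mkdir($dir, 0755, true); // create the whole path recursively
}
move_uploaded_file($_FILES['image']['tmp_name'], $dir . '/' . $hash . '.png');
?>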
Personally, I wouldn't use user IDs in the path unless you don't have that info available in your database, because:
Disclosure of usernames in the URL
Usernames are volatile (you may be able to rename folders, but still...)
A user can hypothetically upload a large number of images
Serves no purpose (?)
Regarding the CDN I don't see any reason this scheme (or any other) wouldn't work...
MediaWiki generates the MD5 sum of the name of the uploaded file, and uses the first two letters of the MD5 (say, "c" and "f" of the sum "cf1e66b77918167a6b6b972c12b1c00d") to create this directory structure:
images/c/cf/Whatever_filename.png
You could also use the image ID for a predictable upper limit on the number of files per directory. Maybe take floor(image unique ID / 1000) to determine the parent directory, for 1000 images per directory.
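Both schemes are straightforward to express; a minimal PHP sketch of the MediaWiki-style hash path and of the id-bucket variant:
<?php
// MediaWiki-style: first one and first two hex chars of the name's MD5
$name = 'Whatever_filename.png';
$md5  = md5($name);
$hashPath = 'images/' . $md5[0] . '/' . substr($md5, 0, 2) . '/' . $name;
// e.g. images/c/cf/Whatever_filename.png

// Id-bucket variant: 1000 images per directory
$imageId    = 123456;
$bucketPath = 'images/' . floor($imageId / 1000) . '/' . $imageId . '.png';
// e.g. images/123/123456.png
?>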
Yes, yes, I know this is an ancient topic, but the problem of storing a large number of images, and of how the underlying folder structure should be organized, remains. So I present my way of handling it, in the hope this might help some people.
The idea of using an md5 hash is the best way to handle massive image storage. Keeping in mind that different values might have the same hash, I strongly suggest also adding the user id or nickname to the path to make it unique. Yep, that's all that's needed. If someone has different users with the same database id, well, there is something wrong ;) So root_path/md5_hash/user_id is everything you need to do it properly.
Using DATE / DATETIME / TIMESTAMP is not the optimal solution, by the way, IMO. You end up with big clusters of image folders on a busy day and nearly empty ones on less frequented days. I'm not sure this leads to performance problems, but there is something like data aesthetics, and a consistent data distribution is always superior.
So I clearly go for the hash solution.
I wrote the following function to make it easy to generate such hash-based storage paths. Feel free to use it if you like it.
/**
 * Generates a directory path using the md5 hash of $user_id for massive image storing
 * @author Hexodus
 * @param string $user_id numeric user id
 * @param string $user_root_raw root directory string
 * @return null|string
 */
function getUserImagePath($user_id = null, $user_root_raw = "images/users", $padding_length = 16,
    $split_length = 3, $hash_length = 12, $hide_leftover = true)
{
    // our db user_id should be numeric
    if (!is_numeric($user_id))
        return null;

    // clean leading and trailing slashes
    $user_root = trim($user_root_raw, '/\\');

    $user_id_padded = str_pad($user_id, $padding_length, "0", STR_PAD_LEFT); // pad it with zeros
    $user_hash = md5($user_id); // build md5 hash
    $user_hash_partial = $hash_length >= 1 && $hash_length < 32
        ? substr($user_hash, 0, $hash_length) : $user_hash;
    $user_hash_leftover = $hash_length < 32 ? substr($user_hash, $hash_length) : null;
    $user_hash_splitted = str_split($user_hash_partial, $split_length); // split into chunks
    $user_hash_imploded = implode("/", $user_hash_splitted); // glue array chunks with slashes

    if ($hide_leftover || !$user_hash_leftover)
        $user_image_path = "{$user_root}/{$user_hash_imploded}/{$user_id_padded}"; // build final path
    else
        $user_image_path = "{$user_root}/{$user_hash_imploded}/{$user_hash_leftover}/{$user_id_padded}"; // final path plus leftover

    return $user_image_path;
}
Function test calls:
$user_id = "1394";
$user_root = "images/users";
$user_hash = md5($user_id);
$path_sample_basic = getUserImagePath($user_id);
$path_sample_advanced = getUserImagePath($user_id, "images/users", 8, 4, 12, false);
echo "<pre>hash: {$user_hash}</pre>";
echo "<pre>basic:<br>{$path_sample_basic}</pre>";
echo "<pre>customized:<br>{$path_sample_advanced}</pre>";
echo "<br><br>";
The resulting output shows the hash followed by the basic and customized paths.
Have you thought about using something like Amazon S3 to store the files? I run a photo hosting company, and after quickly reaching limits on our own server, we switched over to Amazon S3. The beauty of S3 is that there are no limits like inodes and whatnot; you just keep throwing files at it.
Also: if you don't like S3, you can always try breaking it down into subfolders as much as you can:
/userid/year/month/day/photoid.jpg
You can convert a username to md5 and make a folder from the first 2-3 letters of the md5-converted username for the avatars; for images, you can do the same while also playing with time, random strings, ids, and names.
8648b8f3ce06a7cc57cf6fb931c91c55 - devcline
You can also use the first letter of the username or the id for the next folder, or the inverse.
It will look like:
Structure:
stream/img/86/8b8f3ce06a7cc57cf6fb931c91c55.png //simplest
stream/img/d/2/0bbb630d63262dd66d2fdde8661a410075.png //first letter and id folders
stream/img/864/d/8b8f3ce06a7cc57cf6fb931c91c55.png // with first letter of the nick
stream/img/864/2/8b8f3ce06a7cc57cf6fb931c91c55.png //with unique id
stream/img/2864/8b8f3ce06a7cc57cf6fb931c91c55.png //with unique id in 3 letters
stream/img/864/2_8b8f3ce06a7cc57cf6fb931c91c55.png //with unique id in picture name
Code
$username_md5 = md5($username);              // md5 of the username
$username_rest = substr($username_md5, 2);   // the md5 with its first two letters cut off, used as the file name
$username_first = $username[0];              // the first letter of the username
$randomname = uniqid($userid).md5(time());   // generate a random name based on the ID
You can also try it with base64:
$image_encode = strtr(base64_encode($imagename), '+/=', '-_,');
$image_decode = base64_decode(strtr($imagename, '-_,', '+/='));
Steam and DokuWiki use this structure.
You might consider the open source MogileFS (http://danga.com/mogilefs/), as it is perfect for what you're doing. It'll take you from thinking about folders to thinking about namespaces (which could be users) and let it store your images for you. The best part is you don't have to care how the data is stored; it makes storage completely redundant, and you can even set controls around how redundant thumbnails are as well.
Here's a solution I've been using for a long time. It's quite old code and can be further optimised, but it still serves well as it is.
It's an immutable function creating a directory structure based on:
A number that identifies the image (FILE ID):
it's recommended that this number be unique to the base directory, like a primary key in a database table, but it's not required.
The base directory.
The maximum desired number of files and first-level subdirectories. This promise can be kept only if every FILE ID is unique.
Example of usage:
Using an explicit FILE ID:
$fileName = 'my_image_05464hdfgf.jpg';
$fileId = 65347;
$baseDir = '/home/my_site/www/images/';
$baseURL = 'http://my_site.com/images/';
$clusteredDir = \DirCluster::getClusterDir( $fileId );
$targetDir = $baseDir . $clusteredDir;
$targetPath = $targetDir . $fileName;
$targetURL = $baseURL . $clusteredDir . $fileName;
Using file name, number = crc32( filename )
$fileName = 'my_image_05464hdfgf.jpg';
$baseDir = '/home/my_site/www/images/';
$baseURL = 'http://my_site.com/images/';
$clusteredDir = \DirCluster::getClusterDir( $fileName );
$targetDir = $baseDir . $clusteredDir;
$targetURL = $baseURL . $clusteredDir . $fileName;
Code:
class DirCluster {
/**
* #param mixed $fileId - numeric FILE ID or file name
* #param int $maxFiles - max files in one dir
* #param int $maxDirs - max 1st lvl subdirs in one dir
* #param boolean $createDirs - create dirs?
* #param string $path - base path used when creatign dirs
* #return boolean|string
*/
public static function getClusterDir($fileId, $maxFiles = 100, $maxDirs = 10,
$createDirs = false, $path = "") {
// Value for return
$rt = '';
// If $fileId is not numerci - lets create crc32
if (!is_numeric($fileId)) {
$fileId = crc32($fileId);
}
if ($fileId < 0) {
$fileId = abs($fileId);
}
if ($createDirs) {
if (!file_exists($path))
{
// Check out the rights - 0775 may be not the best for you
if (!mkdir($path, 0775)) {
return false;
}
#chmod($path, 0775);
}
}
if ( $fileId <= 0 || $fileId <= $maxFiles ) {
return $rt;
}
// Rest from dividing
$restId = $fileId%$maxFiles;
$formattedFileId = $fileId - $restId;
// How many directories is needed to place file
$howMuchDirs = $formattedFileId / $maxFiles;
while ($howMuchDirs > $maxDirs)
{
$r = $howMuchDirs%$maxDirs;
$howMuchDirs -= $r;
$howMuchDirs = $howMuchDirs/$maxDirs;
$rt .= $r . '/'; // DIRECTORY_SEPARATOR = /
if ($createDirs)
{
$prt = $path.$rt;
if (!file_exists($prt))
{
mkdir($prt);
#chmod($prt, 0775);
}
}
}
$rt .= $howMuchDirs-1;
if ($createDirs)
{
$prt = $path.$rt;
if (!file_exists($prt))
{
mkdir($prt);
#chmod($prt, 0775);
}
}
$rt .= '/'; // DIRECTORY_SEPARATOR
return $rt;
}
}
