I have a table with a column named recording_size. In this column I store the size in bytes, but on the web I display the size in KB.
public static function getKbFromBytes($string){
    if($string){
        return ($string/1024);
    }
}
Now I have a filtering feature on the web side. Since I display the size in KB, I take the search input from the user in KB rather than bytes, even though the DB stores bytes. So I take the input in KB and convert it back to bytes like this:
public static function getBytesFromKb($string){
    if($string){
        return ($string * 1024);
    }
}
Example:
size in bytes = 127232
When I apply my function it becomes 124.25 KB.
When the user types exactly 124.25, the search works, but I don't want to require the exact value. The user might type 124 instead of 124.25, and when the user types 124 my search does not work, meaning it shows no records.
I have also tried adding and subtracting 50 from the converted bytes, but it does not work:
$sql = $sql->where('r.recording_size BETWEEN "'.(Engine::getBytesFromKb($opt['sn']) - 50) .'" AND "'.(Engine::getBytesFromKb($opt['sn']) + 50) .'"');
How can I achieve this?
Searching for size should probably be done as a range instead of an equality anyway. At least the default should be a range, unless your app's primary focus is the exact size. For the SQL:
$kb = round($opt['sn']);
$sql = $sql->where('r.recording_size BETWEEN '.($kb * 1024).' AND '.(($kb + 1) * 1024));
By the way, you should omit the double quotation marks ("). In standard SQL they delimit identifiers, not string literals (strings use single quotes), and here you are comparing integers anyway.
Another thing to watch for is KiB (1024 bytes) vs kB (1000 bytes).
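To make that concrete with the numbers from the question (my own arithmetic, not part of the original answer): an input of 124 or 124.25 ends up covering the stored value of 127232 bytes.
$kb = round(124.25);      // the user may type 124 or 124.25; both round to 124
$low  = $kb * 1024;       // 126976
$high = ($kb + 1) * 1024; // 128000
// 127232 lies between 126976 and 128000, so the row is matched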
I have a file with 3,200,000 lines of csv data (with 450 columns). Total file size is 6 GB.
I read the file like this:
$data = file('csv.out');
Without fail, it only reads 897,000 lines. I confirmed this with print_r and echo sizeof($data). I increased my "memory_limit" to a ridiculous value like 80 GB, but it didn't make a difference.
Now, it DID read in my other large file, with the same number of lines (3,200,000) but only a few columns, so the total file size is 1.1 GB. So it appears to be a total file size issue. FYI, 897,000 lines in the $data array is around 1.68 GB.
Update: I increased the second (longer) file to 2.1 GB (over 5 million lines) and it reads it in fine, yet truncates the other file at 1.68 GB. So does not appear to be a size issue. If I continue to increase the size of the second file to 2.2 GB, instead of truncating it and continuing the program (like it does for the first file), it dies and core dumps.
Update: I verified my system is 64-bit by printing integer and float numbers:
<?php
$large_number = 2147483647;
var_dump($large_number); // int(2147483647)
$large_number = 2147483648;
var_dump($large_number); // float(2147483648)
$million = 1000000;
$large_number = 50000 * $million;
var_dump($large_number); // float(50000000000)
$large_number = 9223372036854775807;
var_dump($large_number); // int(9223372036854775807)
$large_number = 9223372036854775808;
var_dump($large_number); // float(9.2233720368548E+18)
$million = 1000000;
$large_number = 50000000000000 * $million;
var_dump($large_number); // float(5.0E+19)
print "PHP_INT_MAX: " . PHP_INT_MAX . "\n";
print "PHP_INT_SIZE: " . PHP_INT_SIZE . " bytes (" . (PHP_INT_SIZE * 8) . " bits)\n";
?>
The output from this script is:
int(2147483647)
int(2147483648)
int(50000000000)
int(9223372036854775807)
float(9.2233720368548E+18)
float(5.0E+19)
PHP_INT_MAX: 9223372036854775807
PHP_INT_SIZE: 8 bytes (64 bits)
So since it's 64-bit and the memory limit is set really high, why is PHP not reading files > 2.15 GB?
Some things that come to mind:
If you're using a 32-bit PHP build, you cannot read files that are larger than 2 GB.
If reading the file takes too long, there could be time-outs.
If the file is really huge, then reading it all into memory is going to be problematic. It's usually better to read blocks of data and process them, unless you need random access to all parts of the file (see the sketch after this list).
Another approach (I've used it in the past) is to chop the large file into smaller, more manageable ones. That should work if it's a straightforward log file, for example.
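As an illustration of the read-and-process approach (a sketch of my own, assuming the same csv.out file; it is not the asker's code):
$handle = fopen('csv.out', 'r');
if ($handle === false) {
    die('Cannot open csv.out');
}
while (($row = fgetcsv($handle)) !== false) {
    // handle one row of up to 450 columns here;
    // the whole 6 GB file is never held in memory at once
}
fclose($handle);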
I fixed it. All I had to do was change the way I read the files. Why...I do not know.
Old code that only reads 2.15 GB out of 6.0 GB:
$data = file('csv.out');
New code that reads the full 6.0 GB:
$data = array();
$i = 1;
$handle = fopen('csv.out', 'r');
if ($handle) {
    while (($data[$i] = fgets($handle)) !== false) {
        // process the line read
        $i++;
    }
    fclose($handle);
}
Feel free to shed some light on why. There must be some limitation when using
$var=file();
Interestingly, 2.15 GB is close to the 32-bit limit I read about.
I have this code for checking the maximum image size allowed. The one below is for 4 MB:
elseif (round($_FILES['image_upload_file']["size"] / 1024) > 4096) {
    $output['error'] = "You can upload file size up to 4 MB";
}
I don't understand this calculation, and the approaches I found on the internet are making it more confusing. I want the equivalent check for 8 MB.
PHP $_FILES["image_upload_file"]["size"] variable return the value of file size in BYTES. So, for check the file size you have two option,
Convert your checking limit into BYTES, and check with the $_FILES["image_upload_file"]["size"] value. As, 5MB= 5000000KB, 6MB= 6000000KB, 8MB= 8000000KB and so on. (Values are simplified)
Convert the $_FILES["image_upload_file"]["size"] value in to MB and check.
For me check the value in BYTES. It is more easier and you no need to calculate any thing.
In your example, the values are calculate into KB and then checking. As, $_FILES['image_upload_file']["size"] / 1024 return value in KB and 4MB= 4096 KB. So, your internet code also right.
If you want to use your internet code for 8MB then change the 4096 to 8192. It will work same.
Hope, now you understand the code.
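As a minimal sketch of the byte-based check recommended above (my own code, reusing the variable names from the question), the 8 MB case would look like this:
elseif ($_FILES['image_upload_file']['size'] > 8 * 1024 * 1024) {
    // 8 * 1024 * 1024 = 8388608 bytes = 8 MB
    $output['error'] = "You can upload file size up to 8 MB";
}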
I have an encrypted image and before saving it I would like to know how much space it takes up. I can get the number of characters via strlen($img) or via mb_strlen($img) but I would like to get a number like 16KiB (or KB).
I then save the string into a MySQL database in blob format, where I can see the size of it using PhpMyAdmin.
EDIT
If I use strlen to get the byte size of the string (which is what I want), I get a different value from the byte size displayed in my MySQL database (where the string is saved not as a char but as a blob, i.e. binary). How can this be? And how can I find out how large the binary data will be when I save the string in the database?
I save the string simply with the MySQL command
INSERT INTO table (content, bla) VALUES ($string, bla);
(not fully correct, just for example purposes; the real query works)
Now when I look inside my database it shows a size of e.g. 315 KB, but when I run strlen on $string it returns something like 240000, which does not match even after converting between bytes and KB.
I will investigate myself...
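One quick way to check (my own suggestion, not from the question or the answer below): MySQL can report the stored byte length of the blob column directly, and for a blob it should match what strlen() reports in PHP.
SELECT OCTET_LENGTH(content) FROM table;
-- OCTET_LENGTH returns the length of the stored value in bytes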
This does essentially the same thing as Dany's answer, but a little more compact.
function human_filesize($bytes, $decimals = 2) {
    $size = array('B','kB','MB','GB','TB','PB','EB','ZB','YB');
    $factor = floor((strlen($bytes) - 1) / 3);
    return sprintf("%.{$decimals}f", $bytes / pow(1024, $factor)) . @$size[$factor];
}
echo human_filesize(filesize($filename));
Source: http://jeffreysambells.com/2012/10/25/human-readable-filesize-php
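Applied to the 240000-byte figure from the question above (my own check, using the function as given):
echo human_filesize(240000); // prints 234.38kB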
This is going to be a nice little brainbender I think. It is a real life problem, and I am stuck trying to figure out how to implement it. I don't expect it to be a problem for years, and at that point it will be one of those "nice problems to have".
So, I have documents in my search engine index. The documents can have a number of fields; however, each field is limited to 100 kB.
I would like to store the IDs of the particular sites which have access to a document. The site ID count is low, so it is never going to get into extremely high numbers.
So example, this document here can be accessed by sites which have an ID of 7 and 10.
Document: {
    docId: "1239",
    text: "Some Cool Document",
    access: "7 10"
}
Now, because the "access" field is limited to 100kb, that means that if you were to take consecutive IDs, only 18917 unique IDs could be stored.
Reference:
http://codepad.viper-7.com/Qn4N0K
<?php
$ids = range(1,18917);
$ids = implode(" ", $ids);
echo mb_strlen($ids, '8bit') / 1024 . "kb";
?>
// Output
99.9951171875kb
In my application, a particular site, say with site ID 7, runs a search and will have access to that "Some Cool Document".
So now my question is: is there any way I could somehow fit more IDs into that field?
I've thought about proper encoding, and about applying something like a Huffman tree, but seeing as each document has different IDs, it would be impossible to apply a single encoding table to every document.
Perhaps I could use something like tokenized Roman numerals?
Anyway, I'm open to ideas.
I should add that I want to keep all IDs in the same field for as long as possible. Searching over a second field would take a considerable performance hit, so I will only switch to a second access2 field once I have milked the access field for as long as possible.
Edit:
Convert to Hex
<?php
function hexify(&$item){
$item = dechex($item);
}
$ids = range(1,21353);
array_walk( $ids, "hexify");
$ids = implode(" ", $ids);
echo mb_strlen($ids, '8bit') / 1024 . "kb";
?>
This raises the limit to 21353 consecutive IDs.
So that is up by about 12.8%.
Important Caveat
I think the fact that my fields can only store UTF-encoded characters makes it next to impossible to get anything more out of it.
Where did 18917 come from? 100 kB is a lot of space.
You have roughly 100,000 bytes. Each byte can hold a value from 0 to 255 if you store raw binary.
If you encode as hex, you get 16^100,000 possible values, which is a very large number, and that is just hex encoding.
What about base64? You stuff 3 bytes into 4 output characters (a little loss), but each character can take one of 64 values, so 64^100,000 possibilities. That's a big number.
You won't have any problems. Just do a simple hex encoding.
EDIT:
TL;DR
Let's say you use base64: each character carries 6 bits of data instead of the roughly 3.3 bits a decimal digit carries, so you can fit noticeably more IDs in the same space. No compression needed.
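To make the base64 idea concrete, here is a minimal sketch of my own (not from the answer), assuming site IDs stay below 2^24 so each one fits in 3 bytes:
$ids = range(1, 18917);
$binary = '';
foreach ($ids as $id) {
    // pack('N') gives 4 big-endian bytes; drop the high byte to keep 3 per ID
    $binary .= substr(pack('N', $id), 1);
}
$encoded = base64_encode($binary);   // safe to store as text
echo strlen($encoded) / 1024 . "kb"; // about 73.9kb for 18917 IDs

// decoding: split into 3-byte chunks and restore the integers
$decoded = array();
foreach (str_split(base64_decode($encoded), 3) as $chunk) {
    $part = unpack('N', "\x00" . $chunk);
    $decoded[] = $part[1];
}
At 4 characters per ID, the 100 kB field would hold 25,600 IDs instead of 18,917 consecutive ones.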
How about using data compression?
$ids = range(1,18917);
$ids = implode(" ", $ids);
$ids = gzencode($ids);
echo mb_strlen($ids, '8bit') / 1024 . "kb"; // 41.435546875kb
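One caveat of my own: gzencode() returns binary data, and the asker's caveat above says the field can only store UTF-encoded characters, so the compressed blob would probably need a base64_encode() wrapper before indexing (with the matching decode when reading), which gives back part of the gain.
$stored   = base64_encode($ids);                            // field-safe form of the gzipped list
$restored = explode(" ", gzdecode(base64_decode($stored))); // back to the original IDs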
What is the best way to calculate the length of an FLV file using PHP, without external dependencies like FFmpeg? The client site runs on shared hosting.
I tried http://code.google.com/p/flv4php/, but it only extracts metadata, and not all videos contain metadata.
There's a not too complicated way to do that.
FLV files have a specific data structure which allow them to be parsed in reverse order, assuming the file is well-formed.
Just fopen the file and seek 4 bytes before the end of the file.
You will get a big-endian 32-bit value that represents the size of the tag just before these bytes (FLV files are made of tags). You can use the unpack function with the 'N' format specification.
Then, you can seek back to the number of bytes that you just found, leading you to the start of the last tag in the file.
The tag contains the following fields:
one byte signaling the type of the tag
a big-endian 24-bit integer representing the body length for this tag (it should be the value you found before, minus 11; if not, then something is wrong)
a big-endian 24-bit integer representing the tag's timestamp in the file, in milliseconds, plus an 8-bit integer extending the timestamp to 32 bits
So all you have to do is then skip the first 32 bits, and unpack('N', ...) the timestamp value you read.
As FLV tag duration is usually very short, it should give a quite accurate duration for the file.
Here is some sample code:
$flv = fopen("flvfile.flv", "rb");
fseek($flv, -4, SEEK_END);
$arr = unpack('N', fread($flv, 4));
$last_tag_offset = $arr[1];
fseek($flv, -($last_tag_offset + 4), SEEK_END);
fseek($flv, 4, SEEK_CUR);
$t0 = fread($flv, 3);
$t1 = fread($flv, 1);
$arr = unpack('N', $t1 . $t0);
$milliseconds_duration = $arr[1];
The last two fseek calls could be combined into one, but I left them both for clarity.
Edit: Fixed the code after some testing
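A small follow-up of my own to the sample above: release the file handle and convert the timestamp to seconds.
fclose($flv);
$duration_seconds = $milliseconds_duration / 1000;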
The calculation to get the duration of a movie is roughly this:
duration in seconds ≈ (size of file in bytes * 8) / (bitrate in bits per second)
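As a rough illustration with assumed numbers (not taken from the answer): a 10 MB file with an average bitrate of 800 kbit/s.
$bytes   = 10 * 1000 * 1000;        // 10 MB file
$bitrate = 800 * 1000;              // 800 kbit/s in bits per second
$seconds = ($bytes * 8) / $bitrate; // 100 seconds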