I need to store:
array(1,2,3,4,5);
into a MySQL BLOB.
How can I convert this array into binary data?
It depends mostly on how you are using this information. IDs are usually used to identify a resource, and thus must be unique, not null, and indexable.
By those standards, do not use a BLOB: searching by content is slower than searching a native column type, and SQL databases index and order table contents to make queries faster.
If what you need is just to store the data and identify it by a separate ID, do not use a BLOB either. Text typically uses one byte per character, so a number serialized as text wastes space compared to a native integer. For example, 1902334123 (a random keyboard smash) takes 10 bytes as a 10-character string, while a 32-bit signed integer could hold the same value in 4 bytes.
Finally, if what you need is just to store several data units, a sequential VARCHAR read back as a string may solve your problem just as well.
You can convert the array to JSON and store that in the database:
json_encode($array);
and when you read it back from the database:
json_decode($json, true); // pass true to get an array instead of an object
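A minimal round-trip sketch, assuming a PDO connection and a hypothetical items table with id and data columns:
$array = array(1, 2, 3, 4, 5);

// Store: encode the array as a JSON string.
$stmt = $pdo->prepare('INSERT INTO items (data) VALUES (?)');
$stmt->execute(array(json_encode($array)));

// Retrieve: decode the JSON string back into a PHP array.
$json = $pdo->query('SELECT data FROM items ORDER BY id DESC LIMIT 1')->fetchColumn();
$array = json_decode($json, true); // array(1, 2, 3, 4, 5)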
I have some text data I would like to store in a mysql database. I currently have the data stored in a variable as a string.
I'm concerned that the table will become quite large due to the amount of text data I have for each row.
Therefore, what is the easiest way (preferably with PHP's built-in functions) to compact this string data into a format suitable for storage and retrieval?
You could gzip the string with gzencode().
The output is standard gzip, so it can be decompressed from other languages if you want to.
If the column is a text type rather than a binary one, I would advise storing a Base64 version of the result.
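A minimal sketch of that round trip (gzencode(), gzdecode(), and the Base64 functions are PHP built-ins):
$text = str_repeat('some long text data ', 1000);

// Compress, then Base64-encode so the result is safe in a text column.
$stored = base64_encode(gzencode($text, 9)); // 9 = maximum compression

// Reverse both steps to recover the original string.
$original = gzdecode(base64_decode($stored));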
If you're using InnoDB, you can enable compression on entire tables, which doesn't impact your code at all.
ALTER TABLE database.tableName ENGINE='InnoDB' ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
You can lower the KEY_BLOCK_SIZE to get more compression (depending on the data), but this adds more CPU overhead.
After testing a range of tables, I found a KEY_BLOCK_SIZE of 8 to be a good balance of compression vs performance.
I am storing serialized data in a mysql and am unsure which field type to choose?
One example of the serialized data output is below,
string(393) "a:3:{s:4:"name";s:22:"PACMAN-Appstap.net.rar";s:8:"trackers";a:6:{i:0;s:30:"http://tracker.ccc.de/announce";i:1;s:42:"http://tracker.openbittorrent.com/announce";i:2;s:36:"http://tracker.publicbt.com/announce";i:3;s:23:"udp://tracker.ccc.se:80";i:4;s:35:"udp://tracker.openbittorrent.com:80";i:5;s:29:"udp://tracker.publicbt.com:80";}s:5:"files";a:1:{s:22:"PACMAN-Appstap.net.rar";i:4147632;}}"
The string lengths of the data can vary greatly, up to around 20,000 characters.
I understand that I do not want to use a TEXT data type, as character-set conversion could corrupt the serialized data.
I am stuck when it comes to choosing between VARBINARY, BLOB, MEDIUMBLOB, etc.
Let us say I use VARBINARY(20000): does this mean that I can safely insert a string up to 20,000 bytes in length, and that anything longer will cause the insert to be rejected?
I agree with PLB that you should use BLOB. The length attribute of VARBINARY specifies how many bytes can be saved in the column, and in strict SQL mode an insert longer than that is rejected (otherwise the value is truncated with a warning). Note that it is the fixed-length BINARY type, not VARBINARY, that pads unused space; with both VARBINARY and BLOB only the actual length of the data is stored for a field. The practical difference is that a VARBINARY column needs a declared maximum and counts toward the 65,535-byte row size limit, whereas a BLOB does not.
But as PLB said, only use this if you absolutely must, because it slows down the whole database in most cases. A better solution would be to store the files in your server's filesystem and save each file's path in the DB.
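A minimal sketch of the filesystem approach, assuming a PDO connection and a hypothetical torrents table with a file_path column:
// Write the serialized payload to disk and store only its path in the DB.
$payload = serialize($torrentData); // $torrentData is the array from the question
$path = '/var/data/torrents/' . sha1($payload) . '.dat';
file_put_contents($path, $payload);

$stmt = $pdo->prepare('INSERT INTO torrents (file_path) VALUES (?)');
$stmt->execute(array($path));

// Read it back later:
$torrentData = unserialize(file_get_contents($path));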
As commonly discussed (for example here: Storing 0.00001 in MySQL), the DECIMAL data type should be used for fields where precision/correctness is required, such as an account balance.
I was wondering, however, how PHP handles these values and, if they are internally handled as floats, whether there is still a problem when reading these values from the database, doing some calculations, and writing them back again. If so, how can we force PHP to keep the precision intact?
The variable is probably a string initially in PHP (when read from the MySQL result object). In general, PHP's floating-point data type cannot be relied upon to keep the precise decimal value required. You should use an arbitrary-precision mathematics library; note that GMP handles integers only, so for decimal values BC Math is the usual choice. (When you fetch a row of the result object, keep the DECIMAL column value as a string and operate on it using the functions provided by the library you are using.)
To go more into depth: suppose you have an amount stored in the database in a DECIMAL(6,4) column, and you want to fetch it into PHP to do something with it. You issue a query to fetch that column and read the first row into an associative array. Suppose the value in that row is 2.5674. Your array is now something like array('MyDecimal' => '2.5674') (the number arrives as a string). You can pass that string directly to the BC Math functions (bcadd(), bcmul(), and so on); they take numeric strings and return numeric strings, so the value never passes through a float. Because the result is already a string, you can write it straight back to the DECIMAL column. (If you prefer GMP, scale the value to an integer first, e.g. work in ten-thousandths so 2.5674 becomes 25674.)
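A minimal sketch of that flow, assuming a PDO connection and a hypothetical accounts table with a DECIMAL(6,4) balance column:
// Fetch the DECIMAL value as a string; never cast it to float.
$row = $pdo->query('SELECT balance FROM accounts WHERE id = 1')->fetch(PDO::FETCH_ASSOC);
$balance = $row['balance']; // e.g. "2.5674"

// Exact arithmetic with BC Math; the third argument is the scale (digits after the point).
$newBalance = bcadd($balance, '0.0001', 4); // "2.5675"

// Write the string straight back; MySQL parses it into the DECIMAL column.
$stmt = $pdo->prepare('UPDATE accounts SET balance = ? WHERE id = 1');
$stmt->execute(array($newBalance));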
You could try using the arbitrary precision features:
http://php.net/manual/en/book.bc.php
Your other option is to store the values as INTs and then convert them only when they need to be displayed (e.g. store cents and divide by one hundred).
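A minimal sketch of that approach, assuming the column holds integer cents:
// Store money as integer cents to sidestep float precision entirely.
$cents = 1999; // e.g. $19.99, as read from an INT column

// Convert only at display time; intdiv() keeps everything in integer math.
printf("$%d.%02d\n", intdiv($cents, 100), $cents % 100); // prints $19.99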
I'm referencing this question: Store GZIP:ed text in mysql?
I want to store serialized sessions in the database (they are actually stored in a memcached pool, but I have this as a failsafe). I am gzipping/uncompressing from PHP.
I want to ask the following:
1) Is this a good move? I am doing this to avoid using MEDIUMTEXT, as the uncompressed data may be bigger than TEXT allows. I think/hope I will have a lot of sessions stored there. Is it worth gzipping in this case? The table is MyISAM.
2) Do I need to set the encoding of the table field to binary? Or only do that if I have a complete gzipped file?
3) Is serializing a bad move? Should I use json_encode instead (because of the smaller size, I guess)?
Thanks,
You should use a MEDIUMBLOB field instead of MEDIUMTEXT. BLOBs have no encoding, as they are raw byte streams.
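For illustration, a minimal sketch of writing a gzipped session into such a column, assuming a PDO connection and a hypothetical sessions table with id and data columns:
// Compress the serialized session; gzcompress()/gzuncompress() are zlib built-ins.
$blob = gzcompress(serialize($_SESSION), 6);

$stmt = $pdo->prepare('REPLACE INTO sessions (id, data) VALUES (?, ?)');
$stmt->bindValue(1, session_id());
$stmt->bindValue(2, $blob, PDO::PARAM_LOB); // bind as a LOB so the bytes pass through untouched
$stmt->execute();

// On the way back out ($storedBlob is the fetched data column):
$session = unserialize(gzuncompress($storedBlob));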
I have an Oracle SQL database with a CLOB column that holds a great deal of data. I'm running into the issue that a regular PHP string variable will not hold all of the data that I have in the CLOB column. It will only read in 4618 bits, but my file is much larger. The CLOB column has a series of IP addresses in it. What I need to do is parse the CLOB column so I can extract those IP addresses; however, the string variable won't hold enough data to even get to the portion of the document where the IP addresses are stored. Any thoughts?
Based on your comments:
If you right click in your browser and say "view source" you'll see that you indeed have the entire file.
Most likely you just need to set the appropriate ContentType when emitting the data back to the browser.
First, try increasing the PHP memory limit to see if that has any effect on the error. If the CLOB being retrieved is truly huge, like 2 GiB, then there is currently no way to process it as a single PHP string, since the whole value would have to be held in memory.
If the column has fixed-length data, it should be fairly easy to compose an appropriate query that extracts a portion of it. Assuming the data is encoded as fixed-length text, something like this will work:
-- assumes each subfield is xxx.xxx.xxx.xxx padded to 19 fixed characters
SELECT substr(TO_CHAR(gnarly_ipaddress_fieldname), 1, 19) AS ipaddr1,
       substr(TO_CHAR(gnarly_ipaddress_fieldname), 20, 19) AS ipaddr2,
       substr(TO_CHAR(gnarly_ipaddress_fieldname), 39, 19) AS ipaddr3
FROM table
WHERE (whatever);
If the subfields are variable length, then grabbing something like 16K chunks and parsing them in PHP may be the way to go.
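If you go that route, here is a minimal sketch using the oci8 extension's LOB descriptor; the connection details, table, and column names are assumptions:
$conn = oci_connect('user', 'pass', 'localhost/XE');
$stid = oci_parse($conn, 'SELECT clob_col FROM ip_log WHERE id = :id');
$id = 1;
oci_bind_by_name($stid, ':id', $id);
oci_execute($stid);

// Without OCI_RETURN_LOBS the column arrives as an OCI-Lob descriptor.
$row = oci_fetch_array($stid, OCI_ASSOC);
$lob = $row['CLOB_COL'];

$ips = array();
while (!$lob->eof()) {
    $chunk = $lob->read(16384); // 16K at a time instead of the whole CLOB
    preg_match_all('/\b(?:\d{1,3}\.){3}\d{1,3}\b/', $chunk, $m);
    $ips = array_merge($ips, $m[0]);
    // Note: an address split across a chunk boundary would be missed;
    // real code should carry a small overlap between reads.
}
$lob->free();
oci_free_statement($stid);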