MySQL / phpMyAdmin: a few questions

I am quite new to the MySQL / phpMyAdmin environment, and I would like some clarification in a few areas:
1. I need a field of text that should be up to around 500 characters.
Does that have to be a "TEXT" field? Does the application have to be responsible for enforcing the length?
2. Indexes. I understand that when I mark a field as "indexed", that field gets a pointer table, and for any command with a WHERE clause that includes it, the search is optimized through that field (log n complexity). But what happens if I mark a field as indexed after the fact, say after the table already has some rows in it? Can I issue a command like "walk through all of that table and index that field"?
3. When I mark fields as indexed, I sometimes see them in phpMyAdmin as having a keyname. When I access the table by the indexed field from PHP, does it take extra effort on my side to use the keyname shown in the "structure" view in order to use the table as indexed, or is that keyname used behind the scenes, so that I should not care about it whatsoever?
4. I sometimes get keynames referencing two or more fields together; the fields show one on top of the other. I don't know how it happened, but I need them to index only one field each. What is going on?
5. I use UTF-8 values in my DB. When I created it, I think I marked it as utf8_unicode_ci, and some fields are marked as utf8_general_ci. Does it matter? Can I go back and change the whole DB definition to utf8_general_ci?
I think that was quite a bit,
I thank you in advance!
Ted

First, be aware that this is not per se about phpMyAdmin, but more about MySQL / databases in general.
1)
An index means that you make a list (most of the time a tree) of the values that are present, so you can easily find the rows with a given value. This tree can be made just as easily after you insert values as before. Mind you, building it after the fact means all the "add to index" work is done in one go, so it is not something you want to do on a "live" table with loads of entries. But you can add an index whenever you want: just add it and the index will be built, whether the table is empty or already in use.
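For example, either of these statements (with invented table and column names, purely for illustration) builds the index over whatever rows the table already contains:

-- "users" and "email" are hypothetical names
ALTER TABLE users ADD INDEX idx_email (email);
-- equivalent form:
CREATE INDEX idx_email ON users (email);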
2)
I don't know what you mean by this. Indexes have a name; it doesn't really matter what it is. A (primary) key is an index, but not all indexes are keys.
3)
You don't need to 'force' MySQL to use a key; the optimizer knows best how and when to use keys. If your keys are correct they are used; if they are not correct they can't be used, so you can't force it. In other words: don't think about it :)
4)
phpMyAdmin makes a composite key if you mark two fields as a key at the same time. This is annoying and can be wrong. If you search for both things at once you can use the composite key, but if you search for only one of them you can't (unless it happens to be the first field in the key). Just mark the fields as keys one at a time, or use the correct SQL command manually.
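To illustrate the difference (table and column names invented): the first statement creates one composite key over both fields, the second creates two independent single-field keys:

-- one composite index; only usable when the search includes customer_id
ALTER TABLE orders ADD INDEX idx_customer_status (customer_id, status);
-- two separate indexes; each usable on its own
ALTER TABLE orders ADD INDEX idx_customer (customer_id);
ALTER TABLE orders ADD INDEX idx_status (status);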
5)
You can change whatever you like, but I don't know what will happen to your values. Better check manually :)
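A sketch of what the change could look like, assuming a database named mydb and a table named mytable (both hypothetical names); try it on a copy first:

-- changes the default for new tables only
ALTER DATABASE mydb DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
-- converts an existing table, including its columns
ALTER TABLE mytable CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;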

If you need a field to contain up to 500 characters, you can do that with VARCHAR. Just set its length to 500.
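For instance (table and column names invented):

CREATE TABLE articles (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    summary VARCHAR(500) NOT NULL  -- longer values are truncated or rejected, depending on the SQL mode
);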
You don't index field by field; you index a whole column. So it doesn't matter whether the table already has data in it: all the rows will be indexed.
Not a question
The indexes will be used whenever they can be. You only need to worry about using the same columns that you have indexed in the WHERE section of your query. Read about it here
You can add as many columns as you wish to an index. For example, if you add the columns "foo", "bar" and "ming" to an index, your database will be speed-optimized for searches that use those columns in the WHERE clause, starting from the left of the index (foo; foo and bar; or all three). Again, the link above explains it all.
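A small illustration (the table name t is invented; foo, bar and ming are from the example above). With the composite index below, a WHERE on foo, or on foo and bar, can use the index, but a WHERE on ming alone cannot:

ALTER TABLE t ADD INDEX idx_fbm (foo, bar, ming);
-- can use idx_fbm (leftmost prefix):
SELECT * FROM t WHERE foo = 1 AND bar = 2;
-- cannot use idx_fbm (ming is not a leftmost prefix):
SELECT * FROM t WHERE ming = 3;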
I don't know for certain, but if you use only UTF-8 values in the database, it shouldn't matter much: those collations differ in how values sort and compare, not in what can be stored. You can change this later though, as explained in this Stack Overflow question: How to convert an entire MySQL database characterset and collation to UTF-8?
I would recommend you scrap phpMyAdmin for HeidiSQL, though. HeidiSQL is a Windows client that manages all your MySQL servers. It has lots of cool functions, like copying a table or database directly from one MySQL server to another. Try it out (it's free).

Related

Column varchar and issue about index or fulltext?

Well, I have a varchar column for passwords on my table, and in some scripts I make queries like:
length(column_varchar) < 10
My question is: if I put an index on this column, will it help? Or should I use FULLTEXT in this case? Or do I not need an index?
Another question: do I need an index on all columns that will be used in a WHERE clause?
Thanks in advance.
Indexes are used to index content (the field value), not the length of the field, therefore no index can help in the above query. (N.B. you could have a separate field that holds the content length and index that separate field.) Also, passwords should be stored in a hashed format, so all password lengths should be the same, or at least should not be a criterion for selection.
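A sketch of that separate-length-field idea, with an invented table name; on MySQL 5.7+ a stored generated column can keep the length up to date automatically:

-- assumes MySQL 5.7+ and a hypothetical table named "accounts"
ALTER TABLE accounts
    ADD COLUMN pwd_len INT AS (CHAR_LENGTH(column_varchar)) STORED,
    ADD INDEX idx_pwd_len (pwd_len);
-- this query can now use idx_pwd_len:
SELECT * FROM accounts WHERE pwd_len < 10;

On older versions, a plain INT column maintained by the application (or a trigger) achieves the same effect.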
No, you should not index all columns that will be used in WHERE criteria. Selecting the optimal index structure is a complicated and very broad topic. Always consider the following points when trying to determine which fields (or combinations of fields) to index:
Indexes speed up selects, but slow down data modification, since you have to update the index as well, not just the column's value.
MySQL generally uses only one index per table in a query.
MySQL uses the selectivity of the indexes to determine which one to use. A field that can have 2 values only (yes / no, true / false) is not selective enough, so do not trouble yourself with indexing it.
Always use the explain command to check which indexes your queries use.
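For example (invented table name; column_varchar is from the question), EXPLAIN shows which index, if any, the optimizer picked:

EXPLAIN SELECT * FROM accounts WHERE column_varchar = 'somehash';
-- the "key" column of the output names the index actually used,
-- and "rows" estimates how many rows MySQL expects to examine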
You've got two questions here, in general you should split questions up.
Anyway, the first: "Will indexing a column help where you are doing a test for its length?"
No, it won't. The only way you could improve the performance here would be to have an additional column that holds the length of the value in column_varchar and index that.
You wrote in the comments that you are holding hashes, so the lengths will all be the same, so I have to guess that some passwords are null and so you don't hash them, or that you are migrating from unhashed to hashed.
The second question: should you index all fields in a WHERE clause? This is not an automatic yes, which is why there are books written about query optimisation.
It depends on how much benefit you will get from the index, and that depends on the nature of the data.
The main trade off is between insert speed and query speed. Indexes slow inserts and speed up queries.
The next thing to consider is selectivity. If the column you are indexing has only three potential values, for example, the index will rarely narrow the search down enough to be of real value.
In this specific case you have evenly distributed data (because it is hashed), you have great selectivity (MD5 has few collisions), and you are expecting to query on a single exact term most often, so you should definitely be indexing this column.

Is there a way to compress a MySQL column where values repeat very often?

I have an InnoDB table with a VARCHAR column that has tens of thousands of instances of the same text in it. Is there a way to compact it on the fly in order to save space? Is some kind of INDEX enough?
Can't InnoDB see that the values are the same, and use less space by internally assigning them some ID or whatever?
If the task is as simple as it seems, then what you are looking for is normalisation.
In simple terms, what you have to do is make this column contain foreign keys into another table, which holds the distinct values. Store new values in the other table, and when a value already exists you do not need to make another entry for it. Form this relation between the tables, and a huge amount of space will be saved in your original table.
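A minimal sketch of that layout, with invented names; the repeated text moves into a lookup table, and the original table keeps only a small integer foreign key:

CREATE TABLE texts (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    value VARCHAR(255) NOT NULL,
    UNIQUE KEY uq_value (value)  -- each distinct text is stored once
);
-- the original table now references the lookup table
ALTER TABLE main_table
    ADD COLUMN text_id INT NOT NULL,
    ADD CONSTRAINT fk_text FOREIGN KEY (text_id) REFERENCES texts (id);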
I suggest you read up on redundancy and normalisation.
Hope it solves your problem.
You can use the MySQL ENUM data type. It stores the values as indexes, but upon SELECT you see the text value.
Here is the documentation:
http://dev.mysql.com/doc/refman/5.7/en/enum.html
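For example (values invented; each row stores a small internal number, not the full string):

CREATE TABLE shirts (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    size ENUM('small', 'medium', 'large') NOT NULL
);
SELECT size FROM shirts;  -- returns the text values, e.g. 'medium'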
Cons are that not all databases support the ENUM type, so you may find that to be a problem if some day you decide to switch databases.
There are also some other limitations pointed out here:
http://dev.mysql.com/doc/refman/5.7/en/enum.html#enum-limits

Combine Multiple Rows in MySQL into JSON or Serialize

I currently have a database structure for dynamic forms as such:
grants_app_id   user_id   field_name   field_value
5               42434     full_name    John Doe
5               42434     title        Programmer
5               42434     email        example@example.com
I found this to be very difficult to manage, and it filled up the number of rows in the database very quickly. I have varying field_names that can amount to up to 78 rows per form, so it proved to be very costly when making updates to the field_values or simply searching them. I would like to combine the rows and use either JSON or PHP serialize to greatly reduce the impact on the database. Does anyone have any advice on how I should approach this? Thank you!
This would be the expected output:
grants_app_id   user_id   data
5               42434     {"full_name":"John Doe", "title":"Programmer", "email":"example@example.com"}
It seems you don't have a simple primary key in those rows.
Speeding up the current solution:
create an index for (grants_app_id, user_id)
add an auto-incrementing primary key
switch from field_name to field_id
The index will make retrieving full forms a lot more fun (while taking a bit of extra time on insert).
The primary key allows you to update a row by specifying a single value backed by a unique index, which should generally be really fast.
You probably already have some definition of the fields somewhere. Add integer IDs for them and use those to speed up the process, as less data is stored, compared, indexed, ...
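Put together, the first two suggestions could look like this (the table name grants_app and the index name are invented):

ALTER TABLE grants_app
    ADD COLUMN id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- auto-incrementing primary key
    ADD INDEX idx_form (grants_app_id, user_id);            -- fast retrieval of one full form
-- replacing field_name with a small integer field_id would then be a further step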
Switching to a JSON-Encoded variant
Converting arrays to JSON and back can be done by using json_encode and json_decode since PHP 5.2.
How can you switch to JSON?
Possibly the best way would be to use a PHP script (or similar) to retrieve all data from the old table, group it correctly, and insert it into a fresh table. Afterwards you may switch the table names, ... This is an offline approach.
An alternative would be to add a new column and indicate by field_name=NULL that the new column contains the data. Afterwards you are free to convert data at any time or store only new data as JSON.
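If the stored values are plain enough (no quotes or backslashes inside them, which you must verify first), even the grouping for the offline approach can be sketched directly in SQL instead of PHP; grants_app and grants_new are invented names:

-- builds one JSON-like string per form; does NOT escape quotes inside values
-- note: group_concat_max_len may need raising for long forms
INSERT INTO grants_new (grants_app_id, user_id, data)
SELECT grants_app_id, user_id,
       CONCAT('{',
              GROUP_CONCAT(CONCAT('"', field_name, '":"', field_value, '"')),
              '}')
FROM grants_app
GROUP BY grants_app_id, user_id;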
Use JSON?
While it is certainly tempting to have all the data in one row, there are some things to remember:
with all fields preserved in a single text field, searching for a value inside one field may become a two-phase approach, as a % inside any LIKE can bleed into other fields' values. Also, LIKE '%field:value%' is not easily optimized by indexing the column.
changing a single field means updating all stored fields. As long as you are sure only one process changes the data at any given time this is OK; otherwise there tend to be more problems.
the JSON column needs to be big enough to hold field names + values + separators, which can be a lot. Also, if you miscalculate and a long value gets truncated, you risk losing all information on the fields after the long value.
So in your case, even with 78 different fields, it may still be better to have a row per form, user and field. (It may even turn out that JSON is more practicable for forms with few fields.)
As explained in this question, you have to remember that to MySQL, JSON is just more text.

save array in mysql field and search in that field

I have a mysql table looking like this:
id
some_field1
some_field2
variable_fields
datetime
...
Now I want to store more than one value in variable_fields, like this:
user_id:5;message_id:10
The reason why I do not create a separate field for every value I want to store is that these values differ throughout the project. So I am storing different values over the course of the project.
At some time variable_fields contains this value:
user_id:5;message_id:10
And at some other time it contains this value:
car_id:56;payment_id:45
This wouldn't be a big problem, but I want to be able to search in this field, with something like: variable_fields LIKE '%payment_id:45%'.
This obviously takes time for MySQL. Is there another way of handling this instead of creating a field for every value? Some kind of dynamic field in MySQL?
I'm happy for every kind of help. Thank you in advance!
Best regards,
Freddy
If you add a MyISAM full-text index or employ any other full-text tool on that column (e.g. Sphinx, Lucene), the searches you described will work much better; however, that isn't advisable.
I would suggest either dividing the dynamic meta data into different tables per case and keeping a type_id in the main table, or keeping columns for all options, set to NULL by default. It really depends on whether there is a simple division or whether this is truly dynamic and changing over time. If you divide the data into several tables, a JOIN according to type_id will give you the ability to query by those specific field values. Be sure to create an index on the mutual id in both tables.
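A rough sketch of the per-case variant, with invented names; each type gets its own meta table, joined via the main table's id and filtered by type_id:

-- the main table knows which kind of meta data a row has
CREATE TABLE events (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    type_id INT NOT NULL,      -- e.g. 1 = message event, 2 = payment event
    datetime DATETIME NOT NULL
);
CREATE TABLE event_payment (
    event_id INT NOT NULL,     -- the mutual id; indexed in both tables
    car_id INT NOT NULL,
    payment_id INT NOT NULL,
    INDEX idx_event (event_id),
    INDEX idx_payment (payment_id)
);
-- the search becomes an indexable equality instead of LIKE '%...%'
SELECT e.*
FROM events e
JOIN event_payment p ON p.event_id = e.id
WHERE e.type_id = 2 AND p.payment_id = 45;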

How can I search all of the databases on my mysql server for a single string of information

I have around 150 different databases, with dozens of tables each, on one of my servers. I am looking to see which database contains a specific person's name. Right now I'm using phpMyAdmin to search each database individually, but I would really like to be able to search all databases and all tables at once. Is this possible? How would I go about doing this?
A solution would be to use the information_schema database to list all databases, all tables and all fields, and loop over all of that...
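For example, this lists every text-like column on the server, which a script could then loop over:

SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE data_type IN ('char', 'varchar', 'text')
ORDER BY table_schema, table_name;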
There is this script that could help with at least some part of the work: anywhereindb (quoting):
This code searches all the tables and all the rows and columns in a MySQL database. The code is written in PHP. For faster results, we are only searching in the varchar fields.
But, as Harmen noted, this only works with one database, which means you'd have to wrap something around it to loop over each database on your server.
For more information about that, take a look at Chapter 19, INFORMATION_SCHEMA Tables; especially the SCHEMATA table, which contains the names of all databases on the server.
Here's another solution, based on a stored procedure, which means fewer client/server calls and might be faster: http://kedar.nitty-witty.com/miscpages/mysql-search-through-all-database-tables-columns-stored-procedure.php
The right way to go about it would be to NORMALIZE your data in the first place!!!
You say "name", but most people have at least two names (a surname and a forename). Are these split up or in the same field? If they are in the same field, what order do they appear in? How are they capitalized?
The most efficient way to try to identify where the data might be would be to write a program in C which sifts through the raw data files (while the DBMS is shut down) looking for the data, but that will only tell you which table they appear in.
Failing that, you need to write some PHP which iterates through each database ('SHOW DATABASES' works much like a SELECT statement), then iterates through each table in the database, then generates a SELECT statement filtering on each CHAR or VARCHAR column large enough to hold the name you are looking for (try running 'DESC $table').
Good luck.
C.
The best answer probably depends on how often you want to do this. If it is ad-hoc, once-a-week type stuff, then the above answers are good.
If you want to do this kind of search once a second, maybe create a "data warehouse" database that contains just the tables and columns you want to search (heavily indexed, with a reference back to the source database if needed), populated by a cron job or by stored procedures driven by changes in the 150 databases...
