MySQL ENUM vs INT - PHP

I have a few tables with columns that could be either ENUM type or INT type. I tend to always use the integer type, assuming that searching on it will be faster.
For example, one of my tables has a column, StatusType, which can have only 4 possible values: Completed, In Progress, Failed, Todo.
Instead of storing the above as ENUM strings, I store them as 1, 2, 3, 4 respectively, and then in my PHP code I have constants that define these values:
define('COMPLETED', 1);
define('IN_PROGRESS', 2);
define('FAILED', 3);
define('TODO', 4);
Now my question is: am I doing it the right way, or should I just change the columns to ENUM type and compare strings in queries? I have many other columns that can only take a set of at most 4-5 possible values.

Enum values look really cool in MySQL, yet I am not a fan of them. They are limited to 65,535 distinct values, so if you keep adding values you could eventually hit that limit. Also, as you describe, you need to synchronize the values in your application code with the values in the database -- something that seems potentially dangerous.
In addition, they make certain future changes more difficult. For instance, other databases do not support enums. And, if you want to add multi-lingual support, having codes embedded in data type definitions in the database is a bit hard to deal with.
The more standard method is one or more reference tables, where you join to get the display values. You can use a hybrid approach: keep a reference table in the database, then load it into the application to get the mapping from numbers to strings, so you can avoid the joins in most of your code.
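A minimal sketch of that hybrid approach (table and column names here are illustrative, not from the question):

```sql
-- Reference table holding the canonical status values
CREATE TABLE status_types (
    id   TINYINT UNSIGNED NOT NULL PRIMARY KEY,
    name VARCHAR(32) NOT NULL UNIQUE
);

INSERT INTO status_types (id, name) VALUES
    (1, 'Completed'), (2, 'In Progress'), (3, 'Failed'), (4, 'Todo');

-- The data table stores only the small integer key
CREATE TABLE tasks (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    status_id TINYINT UNSIGNED NOT NULL,
    FOREIGN KEY (status_id) REFERENCES status_types (id)
);
```

At application startup you can run `SELECT id, name FROM status_types` once and cache the mapping, so everyday queries can filter on status_id without a join.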

You are half-correct. Enum is very bad from a performance perspective: MySQL Enum performance advantage?
That said, binding the definitions of the INTs to your code is also not a great thing. Ideally, if you follow standard data normalization patterns, you would define the meanings of the INTs in the database as well, in another table, and use the ID of the definition row as the stored value.
See: http://en.wikipedia.org/wiki/Database_normalization#Normal_forms
The reason for this is so the data is portable, and useful without requiring the Codebase to read it (you can easily dump a CSV for Excel by executing a join).
Godspeed.
Example SQL:
SELECT *, states.name AS state FROM students
JOIN states ON students.state_id = states.id
Just to get state names.
Or to filter:
SELECT * FROM students
JOIN states ON students.state_id = states.id
WHERE states.name = 'Maine' OR states.code = 'ME'
Yeah, strange example, but the idea is that INTs are tiny and VARCHARs are... variable. Storing 'Maine' as opposed to 16 adds up over millions of rows. Further, indexing on an INT is much faster than on a VARCHAR, so your look-ups are going to be much faster, particularly if you inherently know the number ahead of time and build your query without the JOIN. This is not advisable as a common practice, but could be done if you want to make something even faster and can ensure the validity of the assumed value.
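For instance, if you already know that Maine's id is 16 (the value used above), the join can be skipped entirely:

```sql
-- Faster, but hard-codes an assumption about the states table
SELECT * FROM students WHERE state_id = 16;
```

Again, only do this if you can guarantee the assumed value stays valid.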

Related

Is there any performance difference between using a text "type" column versus an integer column with in-app constants, in MySQL and PHP?

In my case, I have a table which stores a collection of records with similar information, but each with a type column used in various parts of my application. I know, I know, this is "micro-optimisation", but it is an integral part of my application (it will store many records) and I would like it to be optimised. I am also simply curious: is it faster to use a text type and select it like
SELECT ... WHERE type = 'some_type'
or use a PHP defined constant like
const SOMETYPE = 1;
run_query('SELECT ... WHERE type = '.SOMETYPE);
?
String comparison will always be slower than integer comparison. Typically, the strings are stored in a separate table, perhaps called standard_types or whatever makes sense for the "constants" being stored. That table then has a unique id field that can be referenced by other tables.
This way if you need the strings for reporting, the reporting queries can join to the "types" table for the display strings. Ideally, in my opinion, the id values should reflect a standard numbers that can be expressed as enum values or constants in client code; this minimizes the dependence on the "types" table for non-reporting queries.
Some might argue against keeping the list of standard id values and their meanings coordinated across the database and one or more application codebases; but the alternative is coordinating standard strings across all that (domains that can handle those string values quite differently).
The dominant time spent in any query is in fetching rows to work with. Functions, string vs int, etc, make only minor differences in performance. Focus on what is clean for your code, and what minimizes the number of rows touched.
Once you have done that, minimize the number of round trips to the server. Even so, I have created many web pages that do 20-50 queries (each well optimized); the page performance is adequate.
You could also consider the ENUM data type.
sex ENUM('unk', 'male', 'female') NOT NULL
gives you WHERE sex = 'male', implemented as a 1-byte integer under the covers.

Is using SELECT * or naming all columns better for queries? [duplicate]

I've heard that SELECT * is generally bad practice to use when writing SQL commands because it is more efficient to SELECT columns you specifically need.
If I need to SELECT every column in a table, should I use
SELECT * FROM TABLE
or
SELECT column1, column2, column3, etc. FROM TABLE
Does the efficiency really matter in this case? I'd think SELECT * would be more optimal internally if you really need all of the data, but I'm saying this with no real understanding of databases.
I'm curious to know what the best practice is in this case.
UPDATE: I probably should specify that the only situation where I would really want to do a SELECT * is when I'm selecting data from one table where I know all columns will always need to be retrieved, even when new columns are added.
Given the responses I've seen, however, this still seems like a bad idea, and SELECT * should never be used, for a lot more technical reasons than I ever thought about.
One reason that selecting specific columns is better is that it raises the probability that SQL Server can access the data from indexes rather than querying the table data.
Here's a post I wrote about it: The real reason SELECT queries are bad: index coverage.
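As a sketch of index coverage (table and index names here are hypothetical): when a query touches only indexed columns, the engine can answer it from the index alone.

```sql
-- Composite index on the two columns the query needs
CREATE INDEX ix_orders_customer ON orders (customer_id, order_date);

-- Can be satisfied entirely from ix_orders_customer (a "covering" index):
SELECT customer_id, order_date FROM orders WHERE customer_id = 42;

-- Forces a lookup back into the table data for the remaining columns:
SELECT * FROM orders WHERE customer_id = 42;
```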
It's also less fragile to change, since any code that consumes the data will be getting the same data structure regardless of changes you make to the table schema in the future.
Given your specification that you are selecting all columns, there is little difference at this time. Realize, however, that database schemas do change. If you use SELECT * you are going to get any new columns added to the table, even though in all likelihood, your code is not prepared to use or present that new data. This means that you are exposing your system to unexpected performance and functionality changes.
You may be willing to dismiss this as a minor cost, but realize that columns that you don't need still must be:
Read from database
Sent across the network
Marshalled into your process
(for ADO-type technologies) Saved in a data-table in-memory
Ignored and discarded / garbage-collected
Item #1 has many hidden costs including eliminating some potential covering index, causing data-page loads (and server cache thrashing), incurring row / page / table locks that might be otherwise avoided.
Balance this against the potential savings of specifying the columns versus an * and the only potential savings are:
Programmer doesn't need to revisit the SQL to add columns
The network-transport of the SQL is smaller / faster
SQL Server query parse / validation time
SQL Server query plan cache
For item 1, the reality is that you're going to add / change code to use any new column you might add anyway, so it is a wash.
For item 2, the difference is rarely enough to push you into a different packet-size or number of network packets. If you get to the point where SQL statement transmission time is the predominant issue, you probably need to reduce the rate of statements first.
For item 3, there are NO savings, as the expansion of the * has to happen anyway, which means consulting the table schema anyway. Realistically, listing the columns incurs the same cost, because they have to be validated against the schema. In other words, this is a complete wash.
For item 4, when you specify specific columns, your query plan cache could get larger but only if you are dealing with different sets of columns (which is not what you've specified). In this case, you do want different cache entries because you want different plans as needed.
So, because of the way you specified the question, this all comes down to the issue of resiliency in the face of eventual schema modifications. If you're burning this schema into ROM (it happens), then an * is perfectly acceptable.
However, my general guideline is that you should only select the columns you need, which means that sometimes it will look like you are asking for all of them, but DBAs and schema evolution mean that some new columns might appear that could greatly affect the query.
My advice is that you should ALWAYS SELECT specific columns. Remember that you get good at what you do over and over, so just get in the habit of doing it right.
If you are wondering why a schema might change without the code changing, think in terms of audit logging, effective/expiration dates, and other similar things that get added by DBAs systematically for compliance reasons. Another source of underhanded changes is denormalization for performance elsewhere in the system, or user-defined fields.
You should only select the columns that you need. Even if you need all columns, it's still better to list the column names so that the SQL server does not have to query the system tables for the column list.
Also, your application might break if someone adds columns to the table. Your program would get columns it didn't expect and might not know how to process them.
Apart from this, if the table has a binary column, the query will be much slower and use more network resources.
There are four big reasons that select * is a bad thing:
The most significant practical reason is that it forces the user to magically know the order in which columns will be returned. It's better to be explicit, which also protects you against the table changing, which segues nicely into...
If a column name you're using changes, it's better to catch it early (at the point of the SQL call) rather than when you're trying to use the column that no longer exists (or has had its name changed, etc.)
Listing the column names makes your code far more self-documented, and so probably more readable.
If you're transferring over a network (or even if you aren't), columns you don't need are just waste.
Specifying the column list is usually the best option because your application won't be affected if someone adds/inserts a column to the table.
Specifying column names is definitely faster - for the server. But if
performance is not a big issue (for example, this is a website content database with hundreds, maybe thousands - but not millions - of rows in each table); AND
your job is to create many small, similar applications (e.g. public-facing content-managed websites) using a common framework, rather than creating a complex one-off application; AND
flexibility is important (lots of customization of the db schema for each site);
then you're better off sticking with SELECT *. In our framework, heavy use of SELECT * allows us to introduce a new website managed content field to a table, giving it all of the benefits of the CMS (versioning, workflow/approvals, etc.), while only touching the code at a couple of points, instead of a couple dozen points.
I know the DB gurus are going to hate me for this - go ahead, vote me down - but in my world, developer time is scarce and CPU cycles are abundant, so I adjust accordingly what I conserve and what I waste.
SELECT * is a bad practice even if the query is not sent over a network.
Selecting more data than you need makes the query less efficient - the server has to read and transfer extra data, so it takes time and creates unnecessary load on the system (not only the network, as others mentioned, but also disk, CPU etc.). Additionally, the server is unable to optimize the query as well as it might (for example, use covering index for the query).
After some time your table structure might change, so SELECT * will return a different set of columns. So, your application might get a dataset of unexpected structure and break somewhere downstream. Explicitly stating the columns guarantees that you either get a dataset of known structure, or get a clear error on the database level (like 'column not found').
Of course, all this doesn't matter much for a small and simple system.
Lots of good reasons answered here so far, here's another one that hasn't been mentioned.
Explicitly naming the columns will help you with maintenance down the road. At some point you're going to be making changes or troubleshooting, and find yourself asking "where the heck is that column used".
If you've got the names listed explicitly, then finding every reference to that column -- through all your stored procedures, views, etc -- is simple. Just dump a CREATE script for your DB schema, and text search through it.
Performance wise, SELECT with specific columns can be faster (no need to read in all the data). If your query really does use ALL the columns, SELECT with explicit parameters is still preferred. Any speed difference will be basically unnoticeable and near constant-time. One day your schema will change, and this is good insurance to prevent problems due to this.
Definitely define the columns, because SQL Server will not have to do a lookup on the columns to pull them. If you define the columns, SQL can skip that step.
It's always better to specify the columns you need, if you think about it one time, SQL doesn't have to think "wtf is *" every time you query. On top of that, someone later may add columns to the table that you actually do not need in your query and you'll be better off in that case by specifying all of your columns.
The problem with "select *" is the possibility of bringing back data you don't really need. During the actual database query, the selected columns don't add much to the computation. What's really "heavy" is the data transport back to your client, and any column that you don't really need is just wasting network bandwidth and adding to the time you're waiting for your query to return.
Even if you do use all the columns brought back from a "select *...", that's just for now. If in the future you change the table/view layout and add more columns, you'll start bringing those back in your selects even if you don't need them.
Another point in which a "select *" statement is bad is on view creation. If you create a view using "select *" and later add columns to your table, the view definition and the data returned won't match, and you'll need to recompile your views in order for them to work again.
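A minimal illustration of that view problem (table and view names are made up for the example); MySQL expands the * when the view is created, so the column list is frozen at that point:

```sql
-- The view's column list is fixed at creation time
CREATE VIEW v_products AS SELECT * FROM products;

-- Later schema change: the view does NOT pick up the new column
ALTER TABLE products ADD COLUMN discontinued TINYINT(1) NOT NULL DEFAULT 0;

-- The view must be redefined ("recompiled") to see the new column:
ALTER VIEW v_products AS SELECT * FROM products;
```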
I know that writing a "select *" is tempting, 'cause I really don't like to manually specify all the fields in my queries, but when your system starts to evolve, you'll see that it's worth spending the extra time/effort specifying the fields rather than spending much more time and effort removing bugs from your views or optimizing your app.
While explicitly listing columns is good for performance, don't get crazy.
So if you use all the data, try SELECT * for simplicity (imagine having many columns and doing a JOIN... query may get awful). Then - measure. Compare with query with column names listed explicitly.
Don't speculate about performance, measure it!
Explicit listing helps most when you have some column containing big data (like body of a post or article), and don't need it in given query. Then by not returning it in your answer DB server can save time, bandwidth, and disk throughput. Your query result will also be smaller, which is good for any query cache.
You should really be selecting only the fields you need, and only the required number, i.e.
SELECT Field1, Field2 FROM SomeTable WHERE --(constraints)
Outside of the database, dynamic queries run the risk of injection attacks and malformed data. Typically you get round this using stored procedures or parameterised queries. Also (although not really that much of a problem) the server has to generate an execution plan each time a dynamic query is executed.
Using explicit field names is NOT faster than *, provided you actually need the data from all fields.
Your client software shouldn't depend on the order of the fields returned, so that argument is moot too.
And it's possible (though unlikely) that you need to get all fields using * because you don't yet know what fields exist (think very dynamic database structure).
Another disadvantage of using explicit field names is that if there are many of them and they're long then it makes reading the code and/or the query log more difficult.
So the rule should be: if you need all the fields, use *, if you need only a subset, name them explicitly.
The result set may be huge. It is slow to generate and send the result from the SQL engine to the client.
The client side, being a generic programming environment, is not and should not be designed to filter and process the results (that is the job of the WHERE and ORDER BY clauses), as the number of rows can be huge (e.g. tens of millions of rows).
Naming each column you expect to get in your application also ensures your application won't break if someone alters the table, as long as your columns are still present (in any order).
Performance-wise, I have seen comments that both are equal, but from a usability aspect there are some pluses and minuses.
When you use a (select *) in a query and someone alters the table and adds new fields which are not needed by the previous query, that is unnecessary overhead. And what if the newly added field is a blob or an image field? Your query response time is going to be really slow then.
On the other hand, if you use a (select col1, col2, ..) and the table gets altered with new fields that are needed in the result set, you always need to edit your select query after the table alteration.
But I suggest always using select col1, col2, ... in your queries, and altering the query if the table gets altered later.
This is an old post, but still valid. For reference, I have a very complicated query consisting of:
12 tables
6 Left joins
9 inner joins
108 total columns on all 12 tables
I only need 54 columns
A 4 column Order By clause
When I execute the query using SELECT *, it takes an average of 2869 ms.
When I execute the query naming the 54 columns explicitly, it takes an average of 1513 ms.
Total rows returned: 13,949.
There is no doubt selecting column names means faster performance over Select *
SELECT is equally efficient (in terms of speed) whether you use * or name the columns.
The difference is about memory, not speed. When you select several columns, SQL Server must allocate memory space to serve you the query, including all the data for all the columns that you've requested, even if you're only using one of them.
What does matter in terms of performance is the execution plan, which in turn depends heavily on your WHERE clause and the number of JOINs, OUTER JOINs, etc.
For your question just use SELECT *. If you need all the columns there's no performance difference.
It depends on the version of your DB server, but modern versions of SQL can cache the plan either way. I'd say go with whatever is most maintainable with your data access code.
One reason it's better practice to spell out exactly which columns you want is because of possible future changes in the table structure.
If you are manually reading in data using an index-based approach to populate a data structure with the results of your query, then in the future, when you add or remove a column, you will have headaches trying to figure out what went wrong.
As to what is faster, I'll defer to others for their expertise.
As with most problems, it depends on what you want to achieve. If you want to create a db grid that will allow all columns in any table, then "Select *" is the answer. However, if you will only need certain columns and adding or deleting columns from the query is done infrequently, then specify them individually.
It also depends on the amount of data you want to transfer from the server. If one of the columns is a defined as memo, graphic, blob, etc. and you don't need that column, you'd better not use "Select *" or you'll get a whole bunch of data you don't want and your performance could suffer.
To add on to what everyone else has said, if all of your columns that you are selecting are included in an index, your result set will be pulled from the index instead of looking up additional data from SQL.
SELECT * is necessary if one wants to obtain metadata such as the number of columns.
Gonna get slammed for this, but I do a select * because almost all my data is retrieved from SQL Server views that precombine needed values from multiple tables into a single easy-to-access view.
I do then want all the columns from the view which won't change when new fields are added to underlying tables. This has the added benefit of allowing me to change where data comes from. FieldA in the View may at one time be calculated and then I may change it to be static. Either way the View supplies FieldA to me.
The beauty of this is that it allows my data layer to get datasets. It then passes them to my BL which can then create objects from them. My main app only knows and interacts with the objects. I even allow my objects to self-create when passed a datarow.
Of course, I'm the only developer, so that helps too :)
What everyone above said, plus:
If you're striving for readable maintainable code, doing something like:
SELECT foo, bar FROM widgets;
is instantly readable and shows intent. If you make that call you know what you're getting back. If widgets only has foo and bar columns, then selecting * means you still have to think about what you're getting back, confirm the order is mapped correctly, etc. However, if widgets has more columns but you're only interested in foo and bar, then your code gets messy when you query for a wildcard and then only use some of what's returned.
And remember if you have an inner join by definition you do not need all the columns as the data in the join columns is repeated.
It's not like listing columns in SQL Server is hard or even time-consuming. You just drag them over from the object browser (you can get all of them in one go by dragging from the word "columns"). Putting a permanent performance hit on your system (because this can reduce the use of indexes, and because sending unneeded data over the network is costly), and making it more likely that you will have unexpected problems as the database changes (sometimes columns get added that you do not want the user to see, for instance), just to save less than a minute of development time is short-sighted and unprofessional.
Absolutely define the columns you want to SELECT every time. There is no reason not to and the performance improvement is well worth it.
They should never have given the option to "SELECT *"
If you need every column then just use SELECT * but remember that the order could potentially change so when you are consuming the results access them by name and not by index.
I would ignore comments about how * needs to go get the list - chances are parsing and validating named columns is equal to the processing time if not more. Don't prematurely optimize ;-)

MySQL Fields alternative to 'is_active', 'is_banned' flags?

I've encountered a table that has ~20 'is_X' fields (is_active, is_banned, is_allowed_to_view_something, and so on), and it seems just plain wrong.
I am familiar with the bitwise method by storing an INT in one field and then breaking it to bits and using it as flags but is there any other way to store a lot of information (most of it is yes/no) in a MySQL table without cluttering the table with tons of fields?
It is good practice to use a single statuses column instead of a separate column for every status. It can be done by serializing an object with the user's statuses, or just simply saving JSON.
Benefits:
easy to manage
your table gets smaller
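A sketch of the JSON variant (assuming MySQL 5.7+ for the native JSON type; table and flag names are illustrative, with 1/0 used for true/false to keep comparisons simple):

```sql
-- One JSON column instead of twenty is_X flag columns
CREATE TABLE users (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    flags JSON NOT NULL
);

INSERT INTO users (flags) VALUES ('{"is_active": 1, "is_banned": 0}');

-- Filtering is still possible, though it cannot use a plain index:
SELECT * FROM users WHERE JSON_EXTRACT(flags, '$.is_active') = 1;
```

The trade-off is that per-flag filtering is harder to index than individual columns.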
Bit columns are useful because they are easy to use.
Another way could be a rights table (I assume you are working on a user table):
user <=> user_has_right <=> right
The right table is supposed to store one row per right: active, banned, etc.
Basically, user_has_right has 2 foreign keys : fk_id_user and fk_id_right.
If an entry exists, then the user has that right.
Example :
Suppose you want all active users.
Suppose id_right for active users is 1.
SELECT * FROM user u
INNER JOIN user_has_right uhr ON uhr.fk_id_user = u.id_user
INNER JOIN rights r ON r.id_right = uhr.fk_id_right
WHERE r.id_right = 1;
MySQL does have a BIT type, but all the bit functions and operators currently require BIGINT (64-bit) arguments and returns. If you can live with the cast to/from BIGINT, you can use any sufficiently large (for your application) integer type and use bitmasks with bitwise ors and ands to set and clear (respectively) individual bits.
Clearly, the intended semantic of each bit-position is not explicit, and so less clear than individual is_A, is_B, ... columns, but you might be able to ameliorate that somewhat with a table of set/clear bitmasks.
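A sketch of the bitmask approach (table name and bit assignments are an application-level convention, not anything MySQL enforces):

```sql
-- bit 0 = active, bit 1 = banned, bit 2 = allowed_to_view_something
CREATE TABLE users (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    flags INT UNSIGNED NOT NULL DEFAULT 0
);

-- Set the "banned" bit:
UPDATE users SET flags = flags | (1 << 1) WHERE id = 7;

-- Clear the "banned" bit:
UPDATE users SET flags = flags & ~(1 << 1) WHERE id = 7;

-- Find all active users (note: this predicate cannot use an index on flags):
SELECT * FROM users WHERE flags & (1 << 0);
```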
It depends on what you're doing, how you use the fields, etc. and (of course) on your own particular preferences, biases, etc. on where the balance lies and whether one is better than the other for a particular situation.
Also, if you Google around, you'll find that there were some issues and bugs with BIT in older versions (around 5.0.x) of MySQL.

Speed of SELECT Distinct vs array unique

I am using WordPress with some custom post types (just to give a description of my DB structure - its WP's).
Each post has custom meta, which is stored in a separate table (postmeta table). In my case, I am storing city and state.
I've added some actions to WP's save_post/trash_post hooks so that the city and state are also stored in a separate table (cities) like so:
ID postID city state
auto int varchar varchar
I did this because I assumed that this table would be faster than querying the rather large postmeta table for a list of available cities and states.
My logic also forced me to add/update cities and states for every post, even though this causes duplicates (in the city/state fields). This must be so because I must keep track of which states/cities actually have a post associated with them. When a post is added or deleted, its record is added to or removed from the cities table along with it.
This brings me to my question(s).
Does this logic make sense or do I suck at DB design?
If it does make sense, my real question is this: would it be faster to use MySQL's SELECT DISTINCT, or just SELECT * and then use PHP's array_unique on the results?
Edits for comments/answers thus far:
The structure of the table is exactly how I typed it out above. There is an index on ID, but the point of this table isn't to retrieve an indexed list, but to retrieve ALL results (that are unique) for a list of ALL available city/state combos.
I think I may go with (I don't know why I didn't think of this before) just adding a serialized list of city/state combos in ONE record in the wp_options table. Then I can just get that record, and filter out the unique records I need.
Can I get some feedback on this? I would imagine that retrieving and filtering a serialized array would be faster than storing the data in a separate table for retrieval.
To answer your question about using SELECT distinct vs. array_unique, I would say that I would almost always prefer to limit the result set in the database assuming of course that you have an appropriate index on the field for which you are trying to get distinct values. This saves you time in transmitting extra data from DB to application and for the application reading that data into memory where you can work with it.
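As a sketch, using the cities table from the question (the index name is made up): with a suitable index, MySQL can often resolve the DISTINCT from the index alone.

```sql
-- Composite index covering the two columns being de-duplicated
CREATE INDEX ix_city_state ON cities (city, state);

-- Returns each unique combo without shipping every row to PHP
SELECT DISTINCT city, state FROM cities;
```

Compare that with pulling every row into PHP and de-duplicating with array_unique, which transfers and processes all the duplicates first.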
As far as your separate table design, it is hard to speculate whether this is a good approach or not, this would largely depend on how you are actually preforming your query (i.e. are you doing two separate queries - one for post info and one for city/state info or querying across a join?).
The is really only one definitive way to determine what is fastest approach. That is to test both ways in your environment.
1) A fully normalized table (where it holds only integer values and the other tables hold only an int + varchar each) has the advantage when you are not doing full table joins often and are doing a lot of searching on the normalized fields. As a downside, it requires large join/sort buffers and results in more complex queries, with much less chance that the query will be auto-optimized by MySQL, so you have to optimize your queries yourself.
2) SELECT DISTINCT will be faster in almost all cases. The only case where it will be slower is if you have a small sort buffer configured in /etc/my.cnf and a much larger memory buffer for PHP.
A DISTINCT select can use indexes, while your code can't.
Also, sending a large amount of data to your app requires a lot of MySQL CPU time and wall-clock time.

Normalized table structure in MySql... Sort of?

I am wondering what thoughts are on the following table structures for MySQL.
I have a relationship between exercises and exercise parameters, where a single exercise can have multiple parameters.
For example, the exercise 'sit-ups' could have the parameters 'sets' & 'reps'.
All exercises start with a default set of parameters. For example: sets, reps, weight, hold & rest.
This list is fully customizable. Users can add parameters, remove parameters, or rename them, for each exercise in the database.
To express this relationship, I have the following one-to-many structure:
TABLE exercises
ID
Name
Table exerciseParameters
ID
exerciseID -> exercises(ID)
Name
What is concerning me is that I am noticing that even though users have the option to rename / customize parameters, a lot of the time they don't. So my exerciseParameters table is filling up with repeated words like "Sets" and "Reps" quite a bit.
Is there a better way something like this should be organized, to avoid so much repetition? (Bearing in mind that the names of the parameters have to be user-customizable. For example "Reps" might get changed to "Hard Reps" by the user.) (Or am I making a big deal out of nothing, and this is ok as is?)
Thanks, in advance, for your help.
Unless you are dealing with millions of rows, I'd leave the structure as it is. It is straightforward and easy to query.
If you are dealing with millions of rows and you have measured the storage impact and deem it unacceptable, then you have couple of options (not necessarily mutually exclusive):
Don't store the defaults
If a parameter is not present in exerciseParameters simply assume it has a default value. The actual defaults can be stored in a separate table or outside the database altogether (depending on your querying needs).
If user changes the default parameter, store it in exerciseParameters.
If user deletes the default parameter, represent it as an exerciseParameters row containing a NULL value.
If user restores the default parameter to its original value, remove it from exerciseParameters.
This exploits the assumption that there will be many more unchanged than either edited or deleted defaults. The cost is in increased complexity (in both modification and querying) and potentially performance.
Reorganize your data model
so that names (and values) are stored only once, making the repetitions cheaper. For example, if ParameterNameID and ParameterValueID are integers referencing lookup tables, each repetition in exerciseParameters is much cheaper (storage-wise) than if they were strings. OTOH, you lose simplicity and potentially pay a price in querying performance (more JOINs needed).
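A sketch of that reorganization (table and column names are illustrative, extending the schema from the question):

```sql
-- Each distinct name is stored exactly once
CREATE TABLE parameterNames (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(64) NOT NULL UNIQUE          -- 'Sets', 'Reps', ...
);

-- Rows reference the name by integer key instead of repeating the string
CREATE TABLE exerciseParameters (
    id              INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    exerciseID      INT UNSIGNED NOT NULL,    -- -> exercises(ID)
    parameterNameID INT UNSIGNED NOT NULL,    -- -> parameterNames(id)
    FOREIGN KEY (parameterNameID) REFERENCES parameterNames (id)
);
```

Each repeated "Sets" then costs one integer per row instead of one string, at the price of an extra JOIN when you need the display name.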
Use a different DBMS
A one that supports clustering and leading-edge index compression (for example, Oracle's ORGANIZATION INDEX COMPRESS table can greatly diminish storage impact of repeated values).
You could add another table, defaultExerciseParams, holding the default parameters and values. Whenever a user decides to override any of those, remove the param from this table and push it into the exerciseParameters table.
