How to sort a table in a phpLiteAdmin database?

I can't use the code below to sort my database permanently:
ALTER TABLE myTable ORDER BY column DESC;
Can anyone help? Thank you in advance!

It sounds like you are trying to create an index-organized table (in the SQL Server world this would be a table clustered on an index, and in MySQL it would be the primary key on an InnoDB table).
SQLite does not support such a feature. You cannot permanently set a logical access order to the table. What you can do is set various secondary indexes which are themselves ordered to provide that sort of ordered access to the data.
However, keep in mind that a logical order index scan over the whole table is usually slower than scanning the whole table and sorting, so it may or may not solve any performance problems.
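As a minimal sketch of that workaround (myTable comes from the question; myColumn and the index name are placeholders standing in for the question's unnamed sort column):
-- A secondary index keeps an ordered copy of the column's values
CREATE INDEX idx_mytable_mycolumn ON myTable(myColumn);
-- The order is then requested per query; SQLite can walk the index backwards for DESC
SELECT * FROM myTable ORDER BY myColumn DESC;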

Related

MySql - how can you create a unique constraint on a combination of two values in two columns

I have a problem creating the index described in the answer to this question: sql unique constraint on a 2 columns combination
I am using MySQL, and I get a syntax error; my version of the query is as follows:
CREATE UNIQUE INDEX ON friends (LEAST(userID, friendID), GREATEST(userID, friendID));
The LEAST and GREATEST functions are available in MySQL, but maybe the syntax should be different?
I tried an ALTER TABLE version, but it did not work either.
In MySQL, you can't use functions as the values for indexes.
The documentation does not explicitly state this, however, it is a basic characteristic of an index to only support "fixed" data:
Indexes are used to find rows with specific column values quickly. Without an index, MySQL must begin with the first row and then read through the entire table to find the relevant rows.
Generally, this "fixed" data is an individual column/field; with string-fields (such as varchar or text) you can have a prefix-index and not the entire column. Check out CREATE INDEX for more info on that.
The unique index that you're trying to create in your example would only ever hold a single entry; that's not really a beneficial index, since it doesn't help when searching the entire table. However, if you index your table on (userID, friendID), a SELECT statement that uses the LEAST() and GREATEST() functions can still be optimized thanks to that index, so it may be what you're after in this case.
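As a hedged sketch of that suggestion (friends, userID and friendID come from the question; the index name and the IDs 10 and 42 are made up for illustration):
-- Plain composite index on the two columns
CREATE INDEX idx_friends_pair ON friends (userID, friendID);
-- Normalize the pair at query time; the composite index can satisfy the lookup
SELECT *
FROM friends
WHERE userID = LEAST(10, 42)
  AND friendID = GREATEST(10, 42);
Note that this does not by itself enforce uniqueness of the unordered pair; the point is only that the lookup stays index-backed.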

mysql alter table order by not working

I have searched the net a lot. What I could understand is that this has been faced by many people before me, and it has also been filed as a MySQL bug. But I couldn't find any solution. The problem is simply that I can't get this command working:
alter table areas order by area_name;
I get this warning:
ORDER BY ignored as there is a user-defined clustered index in the table 'areas'
I just want to sort the table on the basis of 'area_name', that is, the names of areas. Just to add, I am trying to do this in the database of my Laravel app.
If the db engine is InnoDB, then you can't do this.
From the docs:
ORDER BY does not make sense for InnoDB tables because InnoDB always orders table rows according to the clustered index.
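As a sketch of the usual alternative (areas and area_name come from the question; the index name is illustrative):
-- A secondary index on the sort column keeps ordered reads cheap
CREATE INDEX idx_areas_area_name ON areas (area_name);
-- The order is then requested per query; InnoDB keeps the physical order by the clustered index
SELECT * FROM areas ORDER BY area_name;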

Splitting data into two tables

I want to create a table with this info:
ID bigint(20) PK AI
FID bigint(20) unique
points int(10) index
birthday date index
current_city varchar(175) index
current_country varchar(100) index
home_city varchar(175) index
home_country varchar(100) index
Engine = MyISAM
In school I learned: create two extra tables, one with cities and one with countries, and FK to those tables when inserting data. The reason I have doubts is:
This table will have around 10M inserts an hour. I'm afraid that if I insert a row and have to look up the city FK and country FK on every insert, I might lose a lot of speed. And is this worth the gain I get when selecting rows, which only happens with WHERE ID = id? There will be around 25M of those selects an hour.
Premature optimization is the root of all evil. Design cleanly first, and optimize next, when you have actual performance data.
A clean design would be a properly normalized table, i.e. with separate city and country tables.
I'm afraid that if I insert a row and have to look up the city FK and country FK on every insert, I might lose a lot of speed?
Actually, inserting just small IDs instead of raw country/city names in a varchar column may be more efficient:
This will result in fewer disk writes
You have a MyISAM table, so it doesn't have FK support and doesn't do any foreign key lookup/check
Replacing the varchar columns with integers will put the table in fixed-length rows format, which may be faster than the dynamic length format
Benchmark with real data/workload, and see if de-normalizing is really worth it.
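A minimal sketch of that normalized layout, under the assumption that the main table is called users (all table and column names here are illustrative, and it is shown with InnoDB so that the FK constraints are actually enforced):
-- Small lookup tables; they cache well and keep the main table's rows narrow
CREATE TABLE countries (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL UNIQUE
) ENGINE=InnoDB;
CREATE TABLE cities (
  id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name       VARCHAR(175) NOT NULL,
  country_id INT UNSIGNED NOT NULL,
  FOREIGN KEY (country_id) REFERENCES countries(id)
) ENGINE=InnoDB;
-- The main table stores small integer IDs instead of repeated varchar values
CREATE TABLE users (
  id                 BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  fid                BIGINT UNSIGNED NOT NULL UNIQUE,
  points             INT NOT NULL,
  birthday           DATE NOT NULL,
  current_city_id    INT UNSIGNED NOT NULL,
  current_country_id INT UNSIGNED NOT NULL,
  home_city_id       INT UNSIGNED NOT NULL,
  home_country_id    INT UNSIGNED NOT NULL,
  FOREIGN KEY (current_city_id)    REFERENCES cities(id),
  FOREIGN KEY (current_country_id) REFERENCES countries(id),
  FOREIGN KEY (home_city_id)       REFERENCES cities(id),
  FOREIGN KEY (home_country_id)    REFERENCES countries(id)
) ENGINE=InnoDB;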
There's a reason why db normalization exists.
Use a table for cities, one for countries and join them with your master table via FK's.
Also, what country do you know of that has 100 characters in its name?
What city do you know of that has 175 characters in its name?
ID can be a BIGINT, but are you sure you need BIGINT(20)? Wouldn't an INT(11) suffice? Anyway, AUTO_INCREMENT it and don't also mark it UNIQUE; that doesn't make any sense, since a primary key is already unique.
Also, you have indexes on every column, but no composite index. This is wrong for so many reasons. Do not pre-index, but index depending on your queries. Use EXPLAIN to see what needs to be indexed.
Also, don't be afraid to use composite indexes, and avoid creating an index for every column that you have.
Do all the above and you will have fast queries (let's hope, at least).
The City and Country tables will be relatively small and will probably fit nicely in memory, so lookups will be fast.
If that isn't fast enough, try caching the lookups client side (i.e. in your PHP app).
Since your rows will be smaller (int instead of varchar) you can fit more rows on each page making index lookups faster.
Try to do it normalized first, it will probably be fast enough.
And make sure you use InnoDB instead of MyISAM. It has much better locking and your application looks very concurrent.

Purpose of Secondary Key

What is the purpose of a secondary key? Say I have a table that logs all check-ins (similar to Foursquare), with columns id, user_id, location_id, post, time, and there can be millions of rows. Many people have said to use secondary keys to speed things up.
Why does this work? And should both user_id and location_id be secondary keys?
I'm using MySQL, btw...
Edit: There will be a page that lists/calculates all the check-ins for a particular user, and another page that lists all the users who have checked in to a particular location.
MySQL queries
Type 1
SELECT location_id FROM checkin WHERE user_id = 1234
SELECT user_id FROM checkin WHERE location_id = 4321
Type 2
SELECT COUNT(location_id) as num_users FROM checkin
SELECT COUNT(user_id) as num_checkins FROM checkin
A key (also called an index) is for speeding up queries. If you want to see all check-ins for a given user, you need a key on the user_id field. If you want to see all check-ins for a given location, you need an index on the location_id field. You can read more in the MySQL documentation.
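A minimal sketch of those two secondary keys (checkin, user_id and location_id come from the question; the index names are illustrative):
-- One secondary index per lookup column used in the WHERE clauses above
CREATE INDEX idx_checkin_user     ON checkin (user_id);
CREATE INDEX idx_checkin_location ON checkin (location_id);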
I want to comment on your question and your examples.
Let me just strongly suggest that, since you are using MySQL, you make sure your tables use the InnoDB engine type, for many reasons you can research on your own.
One important feature of InnoDB is referential integrity. What does that mean? In your checkin table, you have a foreign key user_id, which is the primary key of the user table. With referential integrity, MySQL will not let you insert a row with a user_id that doesn't exist in the user table. Using MyISAM, you can. That alone should be enough to make you want to use the InnoDB engine.
To your question about keys/indexes: essentially, when a table is defined and a key is declared for a column or some combination of columns, MySQL will create an index.
Indexes are essential for performance as a table grows with the insertion of rows.
All relational databases and document databases depend on an implementation of B-tree indexing. What B-trees are very good at is finding an item (or determining it is absent) using a predictable number of lookups. So when people talk about the performance of a relational database, the essential building block is the use of B-tree indexes, which are created via KEY statements or with ALTER TABLE or CREATE INDEX statements.
To understand why this is, imagine that your user table was simply a text file, with one line per row, perhaps separated by commas. As you add a row, a new line in the text file gets added at the bottom.
Eventually you get to the point that you have 10,000 lines in the file.
Now you want to find out if you entered a line for one particular person with the last name of Smith. How can you find that out?
Without any sorting of the file, or a separate index, you have but one option: start at the first line of the file and scan through every line looking for a match. Even if you found a Smith, that might not be the only 'Smith' in the table, so you have to read the entire file from top to bottom every time you want to do this search.
Obviously as the table grows the performance of searching gets worse and worse.
In relational database parlance, this is known as a "table scan". The database has to start at the first row and scan through reading every row until it gets to the end.
Without indexes, relational databases still work, but they are highly dependent on IO performance.
With a B-tree index, the rows you want are found in the index first. The index entries point directly to the data you want, so the table no longer needs to be scanned; instead, only the individual data pages required are read. This is how a database maintains adequate performance even when there are millions, or tens or hundreds of millions, of rows.
To really start to gain insight into how MySQL works, you need to get familiar with EXPLAIN EXTENDED ... and start looking at the explain plans for your queries. Simple ones like those you've provided will have simple plans that show you how many rows are being examined to get a result and whether or not they are using one or more indexes.
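For example, one of the question's own queries can be inspected like this (EXPLAIN EXTENDED is the older syntax named above; plain EXPLAIN works on newer versions). The key column of the output names the index chosen, and rows estimates how many rows will be examined:
-- Ask MySQL for the query plan rather than the result
EXPLAIN EXTENDED
SELECT location_id FROM checkin WHERE user_id = 1234;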
For your summary queries, indexes are not helpful because you are doing a COUNT(). The table will need to be scanned when you have no other criteria constraining the search.
I did notice what looks like a mistake in your summary queries. Just based on your labels, I would think that these are the right queries to get what you would want given your column alias names.
SELECT COUNT(DISTINCT user_id) as num_users FROM checkin
SELECT COUNT(*) as num_checkins FROM checkin
This is yet another reason to use InnoDB, which, when properly configured, has a data cache (the InnoDB buffer pool) similar to other RDBMSs like Oracle and SQL Server. MyISAM doesn't cache data at all, so if you repeatedly run the same sorts of queries that require a lot of IO, MySQL will have to do all that data-reading work over and over, whereas with InnoDB that data could very well be sitting in cache memory and the result returned without having to go back to storage.
Primary vs Secondary
There really is no such concept internally. A primary key is special because it allows the database to find one single row. Primary keys must be unique, and to reflect that, the associated B-tree index is unique, which simply means that it will not allow two keys with the same data to exist in the index.
Whether or not an index is unique is an excellent tool for maintaining the consistency of your database in many other cases. Let's say you have an 'employee' table with an SS_Number column to store the social security #. It makes sense to have an index on that column if you want the system to support finding an employee by SS number; without an index, you will table-scan. But you also want that index to be unique, so that once an employee with a given SS# is inserted, there is no way the database will let you enter a duplicate employee with the same SS#.
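A hedged sketch of that unique secondary index (employee and SS_Number follow the example above; the index name is illustrative):
-- Enforces one row per SS number and supports fast lookups by it
CREATE UNIQUE INDEX idx_employee_ssn ON employee (SS_Number);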
But to demystify this: when you declare keys as you define your tables, these indexes are simply created for you and used automagically in most cases.
It's when you aren't dealing with keys (primary or foreign), as in the example of usernames, first and last names, SS#s, etc., that you also need to be aware of how to create an index, because you are searching (using WHERE clause criteria) on one or more columns that aren't keys.
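For instance, a lookup on non-key name columns could be backed by a composite index like this (the column and index names are assumptions for illustration, not from the original schema):
-- Supports WHERE last_name = ... as well as WHERE last_name = ... AND first_name = ...
CREATE INDEX idx_employee_name ON employee (last_name, first_name);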

MySQL using the primary index instead of the multiple-column one!

OK, so I have a SQL query here:
SELECT a.id,... FROM article AS a WHERE a.type=1 AND a.id=3765 ORDER BY a.datetime DESC LIMIT 1
I wanted to get an exact article by country and id, and for that I created an index with the two columns type and id. Id is also the primary key.
I used the EXPLAIN keyword to see which index is used, and instead of the multiple-column index it used the primary key index, even though I put the WHERE conditions in exactly the same order as the index was created.
Does MySQL use the primary key index instead of the multiple-column index because the primary one is faster? Or should I force MySQL to use the multiple-column index?
P.S. Just noticed it was stupid to use ORDER BY when there is only one result row. Haha. It increased the search time by 0.0001 seconds. :P
I don't KNOW, but I would THINK that the primary key index would be the fastest available. And if it is, there's not much use in using any other index. You're either going to have an article with an id of 3765 or you're not. Checking that single row to determine whether the type matches is trivial.
If you're only returning one row, there's no point in your ORDER BY clause. And the only point of a.type=1 is to reject an article with the right id if the type is not correct.
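For completeness, MySQL does let you override the optimizer with an index hint, though as noted above it buys nothing for this query (idx_type_id is an assumed name for the two-column index on type, id):
-- Force the composite index instead of the primary key; generally unnecessary here
SELECT a.id
FROM article AS a FORCE INDEX (idx_type_id)
WHERE a.type = 1 AND a.id = 3765;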
MySQL allows for up to 32 indexes for each table, and each index can incorporate up to 16 columns. A multiple-column / composite index is considered a sorted array containing values that are created by concatenating the values of the indexed columns. MySQL uses multiple-column indexes in such a way that queries are fast when you specify a known quantity for the first column of the index in a WHERE clause, even if you do not specify values for the other columns.
If you look very carefully at how MySQL uses indexes, you will find that indexes are used to find rows with specific column values quickly. Without an index, MySQL must begin with the first row and then read through the entire table to find the relevant rows.
In MySQL, a primary key column is automatically indexed for efficiency, and it can use MySQL's built-in AUTO_INCREMENT feature. On the other hand, one should not go overboard with indexing. While it improves the speed of reading from the database, it slows down the process of altering data (because the changes need to be recorded in the index). Indexes are best used on columns:
that are frequently used in the WHERE part of a query
that are frequently used in an ORDER BY part of a query
that have many different values (columns with numerous repeating values ought not to be indexed).
So I try to use the primary key when my queries can make do with it. Only when more indexing is needed to fetch records quickly do I use composite indexes.
Hope it helps.
The primary key is unique, so there's no need for MySQL to check any other index. a.id=3765 guarantees that there will be no more than one row returned. If a.type=1 is false for that row, then nothing will be returned.
