I can't use the code below to sort my database permanently:
ALTER TABLE myTable ORDER BY column DESC;
Can anyone help? Thank you in advance!
It sounds like you are trying to create an index-organized table (in the SQL Server world this would be a table clustered on an index, and in MySQL it would be the primary key on an InnoDB table).
SQLite does not support such a feature. You cannot permanently set a logical access order to the table. What you can do is set various secondary indexes which are themselves ordered to provide that sort of ordered access to the data.
However, keep in mind that an index-ordered scan over the whole table is usually slower than scanning the whole table and sorting the result, so it may or may not solve any performance problem you have.
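To make this concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are hypothetical. It shows that a secondary index gives you ordered access at query time, even though the base table itself is never "permanently sorted".

```python
import sqlite3

# Hypothetical table; SQLite stores rows in rowid order, not in "sorted" order.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INTEGER PRIMARY KEY, col INTEGER)")
conn.executemany("INSERT INTO myTable (col) VALUES (?)", [(3,), (1,), (2,)])

# A secondary index keeps its keys in sorted order, which is what provides
# the ordered access the answer above describes.
conn.execute("CREATE INDEX idx_col_desc ON myTable (col DESC)")

# Rows come back in the desired order at query time via ORDER BY.
rows = [r[0] for r in conn.execute("SELECT col FROM myTable ORDER BY col DESC")]
print(rows)  # [3, 2, 1]
```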
We have a database full of MyISAM tables. The thing is that we need transactions on just one table, called payments. My question is: will that work? I mean, by changing the engine of that table to InnoDB and using transactions in PHP, will that do the job? Or do I have to do more than just that? Will it affect my DB in any way? The table is isolated; it doesn't have foreign keys, since MyISAM doesn't support them.
Thank you in advance.
As documented under Storage Engines:
It is important to remember that you are not restricted to using the same storage engine for an entire server or schema: you can use a different storage engine for each table in your schema.
So yes, your proposal will work (provided that you only wish to attain ACID compliance on the payments table, of course).
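To illustrate what you gain on that one table, here is the transactional pattern sketched with Python's built-in sqlite3 (the same begin/commit/rollback logic applies to an InnoDB payments table driven from PHP; the schema here is made up for the example):

```python
import sqlite3

# Hypothetical payments table; sqlite3 stands in for an InnoDB table here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL)")

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO payments (amount) VALUES (10.0)")
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass

# The rollback undid the insert, so no partial payment was recorded.
count = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
print(count)  # 0
```

The point is that only the transactional engine's tables participate in rollback; statements against MyISAM tables inside the same transaction would not be undone.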
I am simulating several instruction queues using a mysql table. There is a 'mode' column which is the name for each queue and once items are taken from the queue they are deleted right afterwards. Typical queries look like
SELECT * FROM queue_table WHERE mode='queue1' LIMIT 50.
I am currently using a MyISAM table for this, but there is a lot of overhead from all the deleting, and optimization takes a long time. I was just wondering whether there is a more efficient way to do this, and whether the table should perhaps be InnoDB.
InnoDB is useful if you are applying foreign key constraints.
As for optimizing your query: create an index on the mode column, and don't use * (your table may have a lot of columns); name only the columns you actually need to retrieve.
I cannot say I completely understand your problem, but I am sure that if you make your fields fixed-size (i.e., no TEXT fields, CHAR instead of VARCHAR), the table's record slots become extremely reusable and require no optimization at all.
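The two suggestions above (index the filter column, select only needed columns) can be sketched with Python's sqlite3; the queue schema is hypothetical:

```python
import sqlite3

# Hypothetical queue table matching the question's description.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE queue_table (id INTEGER PRIMARY KEY, mode TEXT, payload TEXT)"
)
conn.executemany(
    "INSERT INTO queue_table (mode, payload) VALUES (?, ?)",
    [("queue1", "a"), ("queue2", "b"), ("queue1", "c")],
)

# Index the column used in the WHERE clause so lookups avoid a full scan.
conn.execute("CREATE INDEX idx_mode ON queue_table (mode)")

# Name only the columns you need instead of SELECT *.
items = conn.execute(
    "SELECT id, payload FROM queue_table WHERE mode = ? LIMIT 50", ("queue1",)
).fetchall()
print(len(items))  # 2
```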
I have a table for posts with ID, Title, Content, etc. I just added a column as a counter. It is a simple counter of visits: on every visit it is updated as $i+1. With this method, I update the table on every visit, just after reading the row (in a single MySQL session).
Would it be a good idea to move the counter into a separate table connected to the post table with ID as a foreign key? With that method, updating a lighter table is faster, but every time I would need to read two tables (showing the post and its statistics).
So, my answer is: it depends on what you are trying to do. I'm going to give you some behind-the-scenes info on MySQL so you can decide when it makes sense to create a counter table and when it doesn't.
MySQL has different table engines. When you create a new table, you specify which engine to use; the more common ones are MyISAM and InnoDB. MyISAM is really good and fast at doing selects, and InnoDB is really fast at doing inserts. There are other differences between them, but it's good to understand that the engine you select matters.
Based on the above, if you have a table that is usually read and has a ton of rows, it might make sense to keep that table as MyISAM and create a separate counter table using InnoDB that keeps getting updated. If you are implementing caching, this is another reason why this model works better: you won't clear your cached table data every time you update the counter, since the counter lives in a different table.
Now, some may argue that you should be using InnoDB everywhere because it has many more benefits, but there are replication strategies that let you get the best of both worlds.
I hope this gives you a general understanding so you can then dig deeper and find your answer. More info at: http://www.mysqlperformanceblog.com/2007/07/01/implementing-efficient-counters-with-mysql/
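A minimal sketch of the separate-counter-table idea, using Python's sqlite3 with hypothetical table names: the wide posts row is never written on a visit, only the small counter row.

```python
import sqlite3

# Hypothetical posts table plus a narrow counter table keyed by post id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute(
    "CREATE TABLE post_counters ("
    "post_id INTEGER PRIMARY KEY, visits INTEGER NOT NULL DEFAULT 0)"
)
conn.execute("INSERT INTO posts (id, title) VALUES (1, 'hello')")
conn.execute("INSERT INTO post_counters (post_id, visits) VALUES (1, 0)")

def record_visit(post_id):
    # Only the small counter row is updated; the wide posts row is untouched.
    conn.execute(
        "UPDATE post_counters SET visits = visits + 1 WHERE post_id = ?",
        (post_id,),
    )

for _ in range(3):
    record_visit(1)

visits = conn.execute(
    "SELECT visits FROM post_counters WHERE post_id = 1"
).fetchone()[0]
print(visits)  # 3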
What is the purpose of a secondary key? Say I have a table that logs all check-ins (similar to Foursquare), with columns id, user_id, location_id, post, time, and there can be millions of rows. Many people have said to use secondary keys to speed up the process.
Why does this work? And should both user_id and location_id be secondary keys?
I'm using MySQL, btw...
Edit: there will be a page that lists/calculates all the check-ins for a particular user, and another page that lists all the users who have checked in to a particular location.
MySQL queries
Type 1
SELECT location_id FROM checkin WHERE user_id = 1234
SELECT user_id FROM checkin WHERE location_id = 4321
Type 2
SELECT COUNT(location_id) as num_users FROM checkin
SELECT COUNT(user_id) as num_checkins FROM checkin
The key (also called an index) is for speeding up queries. If you want to see all check-ins for a given user, you need a key on the user_id field. If you want to see all check-ins for a given location, you need an index on the location_id field. You can read more in the MySQL documentation.
I want to comment on your question and your examples.
Let me just strongly suggest that, since you are using MySQL, you make sure your tables use the InnoDB engine, for many reasons you can research on your own.
One important feature of InnoDB is referential integrity. What does that mean? In your checkin table, you have a foreign key, user_id, which is the primary key of the user table. With referential integrity, MySQL will not let you insert a row with a user_id that doesn't exist in the user table. With MyISAM, you can. That alone should be enough to make you want to use the InnoDB engine.
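Here is a small demonstration of that enforcement, sketched with Python's sqlite3 (SQLite needs a pragma to turn foreign keys on, whereas InnoDB enforces declared foreign keys by default; the schema is hypothetical):

```python
import sqlite3

# SQLite requires this pragma; InnoDB enforces declared FKs automatically.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE checkin ("
    "id INTEGER PRIMARY KEY, "
    "user_id INTEGER NOT NULL REFERENCES user(id), "
    "location_id INTEGER)"
)
conn.execute("INSERT INTO user (id, name) VALUES (1, 'alice')")

# A check-in for an existing user is accepted.
conn.execute("INSERT INTO checkin (user_id, location_id) VALUES (1, 99)")

rejected = False
try:
    # user 42 does not exist, so the engine refuses the row.
    conn.execute("INSERT INTO checkin (user_id, location_id) VALUES (42, 99)")
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```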
To your question about keys/indexes: essentially, when a table is defined and a key is declared for a column or some combination of columns, MySQL will create an index.
Indexes are essential for performance as a table grows with the insert of rows.
All relational databases, and document databases too, depend on an implementation of B-tree indexing. What B-trees are very good at is finding an item (or determining it is absent) in a predictable number of lookups. So when people talk about the performance of a relational database, the essential building block is the use of B-tree indexes, which are created via KEY clauses or with ALTER TABLE or CREATE INDEX statements.
To understand why this is, imagine that your user table was simply a text file, with one line per row, perhaps separated by commas. As you add a row, a new line in the text file gets added at the bottom.
Eventually you get to the point that you have 10,000 lines in the file.
Now you want to find out if you entered a line for one particular person with the last name of Smith. How can you find that out?
Without any sorting of the file, or a separate index, you have but one option: start at the first line in the file and scan through every line looking for a match. Even if you find a Smith, that might not be the only 'Smith' in the table, so you have to read the entire file from top to bottom every time you want to do this search.
Obviously as the table grows the performance of searching gets worse and worse.
In relational database parlance, this is known as a "table scan". The database has to start at the first row and scan through reading every row until it gets to the end.
Without indexes, relational databases still work, but they are highly dependent on IO performance.
With a Btree index, the rows you want to find are found in the index first. The indexes have a pointer directly to the data you want, so the table no longer needs to be scanned, but instead the individual data pages required are read. This is how a database can maintain adequate performance even when there are millions or 10's or 100's of millions of rows.
To really start to gain insight into how mysql works, you need to get familiar with EXPLAIN EXTENDED ... and start looking at the explain plans for queries. Simple ones like those you've provided will have simple plans that show you how many rows are being examined to get a result and whether or not they are using one or more indexes.
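As an analogous illustration (using SQLite's EXPLAIN QUERY PLAN via Python's sqlite3, which serves the same purpose as MySQL's EXPLAIN; the table and index names are hypothetical), you can see the plan report that the index is used rather than a full scan:

```python
import sqlite3

# Hypothetical checkin table with a secondary index on user_id.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE checkin ("
    "id INTEGER PRIMARY KEY, user_id INTEGER, location_id INTEGER)"
)
conn.execute("CREATE INDEX idx_user ON checkin (user_id)")

# Ask the engine how it would execute the query, without running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT location_id FROM checkin WHERE user_id = ?",
    (1234,),
).fetchall()
plan_text = " ".join(str(row[-1]) for row in plan)
print(plan_text)  # e.g. "SEARCH checkin USING INDEX idx_user (user_id=?)"
```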
For your summary queries, indexes are not helpful because you are doing a COUNT(). The table will need to be scanned when you have no other criteria constraining the search.
I did notice what looks like a mistake in your summary queries. Just based on your labels, I would think that these are the right queries to get what you would want given your column alias names.
SELECT COUNT(DISTINCT user_id) as num_users FROM checkin
SELECT COUNT(*) as num_checkins FROM checkin
This is yet another reason to use InnoDB, which when properly configured has a data cache (the InnoDB buffer pool) similar to other RDBMSs like Oracle and SQL Server. MyISAM doesn't cache data at all, so if you repeatedly run the same sorts of queries that require a lot of IO, MySQL will have to do all that data-reading work over and over, whereas with InnoDB that data could very well be sitting in cache memory, and the result can be returned without going back to storage.
Primary vs Secondary
There really is no such distinction internally. A primary key is special because it allows the database to find one single row. Primary keys must be unique, and to reflect that, the associated B-tree index is unique, which simply means it will not allow two entries with the same key data to exist in the index.
Whether or not an index is unique makes it an excellent tool for maintaining the consistency of your database in many other cases. Say you have an employee table with an SS_Number column storing the social security number. It makes sense to have an index on that column if you want the system to support finding an employee by SS number; without one, you will table-scan. But you also want that index to be unique, so that once an employee with a given SS# is inserted, the database will not let you enter a duplicate employee with the same SS#.
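That SS# scenario can be sketched with sqlite3 (the employee schema is hypothetical); the unique index turns the second insert into an error instead of a silent duplicate:

```python
import sqlite3

# Hypothetical employee table; the unique index guards the SS# column.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, ss_number TEXT)"
)
conn.execute("CREATE UNIQUE INDEX idx_ssn ON employee (ss_number)")
conn.execute(
    "INSERT INTO employee (name, ss_number) VALUES ('alice', '123-45-6789')"
)

duplicate_rejected = False
try:
    # Same SS# again: the unique index makes the database refuse it.
    conn.execute(
        "INSERT INTO employee (name, ss_number) VALUES ('bob', '123-45-6789')"
    )
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True
```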
But to demystify this for you: when you declare keys, these indexes are simply created for you when you define the tables, and used automagically in most cases.
It's when you aren't dealing with keys (primary or foreign), as in the examples of usernames, first and last names, SS#'s, etc., that you also need to know how to create an index yourself, because you are searching (using WHERE-clause criteria) on one or more columns that aren't keys.