I am developing a social chat application. My app has about 5,000 users. I want to fetch the usernames of users who have been online within the last hour.
I have two tables, users and messages. My database is quite heavy: the users table has 4,983 records and the messages table has approximately 15 million records. I want to show 20 users who have sent a message within the last hour.
My Query -
SELECT a.username,a.id FROM users a JOIN messages b
WHERE a.id != ".$getUser['id']." AND
a.is_active=1 AND
a.is_online=1 AND
a.id=b.user_id AND
b.created > DATE_SUB(NOW(), INTERVAL 1 HOUR)
GROUP BY b.user_id
ORDER BY b.id DESC LIMIT 20
Users Table -
Messages Table -
The query above works, but it has become very slow, and sometimes the page hangs. I want to fetch the records faster.
Note - $getUser['id'] is the logged-in user's id.
Any ideas?
You can use indexes
A database index is a data structure that improves the speed of
operations in a table. Indexes can be created using one or more
columns, providing the basis for both rapid random lookups and
efficient ordering of access to records.
When creating an index, consider which columns will be used in your SQL queries, and create one or
more indexes on those columns.
Practically, indexes are also a type of table, which keeps the primary key
or index field and a pointer to each record in the actual table.
Users cannot see the indexes; they are just used to speed up
queries and are used by the database engine to locate records
very quickly.
INSERT and UPDATE statements take more time on tables that have indexes,
whereas SELECT statements become faster on those tables. The reason is
that while doing an insert or update, the database needs to insert or update
the index values as well.
Simple and Unique Index:
You can create a unique index on a table. A unique index means that two rows cannot have the same index value. Here is the syntax to create an index on a table:
CREATE UNIQUE INDEX index_name
ON table_name ( column1, column2,...);
You can use one or more columns to create an index. For example, we can create an index on tutorials_tbl using tutorial_author.
CREATE UNIQUE INDEX AUTHOR_INDEX
ON tutorials_tbl (tutorial_author)
You can create a simple index on a table. Just omit the UNIQUE keyword from the query to create a simple index. A simple index allows duplicate values in a table.
If you want to index the values in a column in descending order, you can add the reserved word DESC after the column name.
mysql> CREATE UNIQUE INDEX AUTHOR_INDEX
ON tutorials_tbl (tutorial_author DESC)
ALTER command to add and drop INDEX:
There are four types of statements for adding indexes to a table:
ALTER TABLE tbl_name ADD PRIMARY KEY (column_list):
This statement adds a PRIMARY KEY, which means that indexed values must be unique and cannot be NULL.
ALTER TABLE tbl_name ADD UNIQUE index_name (column_list):
This statement creates an index for which values must be unique (with the exception of NULL values, which may appear multiple times).
ALTER TABLE tbl_name ADD INDEX index_name (column_list):
This adds an ordinary index in which any value may appear more than once.
ALTER TABLE tbl_name ADD FULLTEXT index_name (column_list):
This creates a special FULLTEXT index that is used for text-searching purposes.
Here is the example to add index in an existing table.
mysql> ALTER TABLE testalter_tbl ADD INDEX (c);
You can drop any INDEX by using DROP clause along with ALTER command. Try out the following example to drop above-created index.
mysql> ALTER TABLE testalter_tbl DROP INDEX c;
ALTER Command to add and drop PRIMARY KEY:
You can add a primary key in the same way. But make sure the primary key is on columns that are NOT NULL.
Here is an example of adding a primary key to an existing table. It makes the column NOT NULL first and then adds it as a primary key.
mysql> ALTER TABLE testalter_tbl MODIFY i INT NOT NULL;
mysql> ALTER TABLE testalter_tbl ADD PRIMARY KEY (i);
You can use ALTER command to drop a primary key as follows:
mysql> ALTER TABLE testalter_tbl DROP PRIMARY KEY;
To drop an index that is not a PRIMARY KEY, you must specify the index name.
Displaying INDEX Information:
You can use the SHOW INDEX command to list all the indexes associated with a table. Vertical-format output (specified by \G) is often useful with this statement, to avoid long line wraparound:
Try out the following example:
mysql> SHOW INDEX FROM table_name\G
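For the query in the question specifically, a composite index on the messages side is the usual first step, since that table holds the ~15 million rows; the column names below are taken from the query above, so adjust them if your actual schema differs:
ALTER TABLE messages ADD INDEX idx_messages_user_created (user_id, created);
With (user_id, created) indexed, MySQL can find each user's recent messages without scanning the whole messages table; run EXPLAIN on the query to confirm the index is actually used.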
Optimize your query by removing MySQL functions like DATE_SUB: do the same calculation in PHP instead and pass the result in.
DATE_SUB, PHP version:
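A sketch of the idea applied to the query from the question; $oneHourAgo is an assumed PHP variable holding the cutoff, e.g. $oneHourAgo = date('Y-m-d H:i:s', time() - 3600):
SELECT a.username,a.id FROM users a JOIN messages b
WHERE a.id != ".$getUser['id']." AND
a.is_active=1 AND
a.is_online=1 AND
a.id=b.user_id AND
b.created > '".$oneHourAgo."'
GROUP BY b.user_id
ORDER BY b.id DESC LIMIT 20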
Related
How would I reset the primary key counter on a sql table and update each row with a new primary key?
I would add another column to the table first, populate that with the new PK.
Then I'd use update statements to update the new fk fields in all related tables.
Then you can drop the old PK and old fk fields.
EDIT: Yes, as Ian says you will have to drop and then recreate all foreign key constraints.
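A minimal sketch of that sequence in MySQL, assuming a parent table orders (key order_id, not auto-increment) and one child table order_items referencing it; the names and the +10000 renumbering are invented for illustration:
ALTER TABLE orders ADD COLUMN new_order_id INT;
UPDATE orders SET new_order_id = order_id + 10000;    -- populate the new PK values

-- as noted in the edit, drop any foreign key constraints first and recreate them afterwards
UPDATE order_items oi
JOIN orders o ON oi.order_id = o.order_id
SET oi.order_id = o.new_order_id;                     -- repoint the fk fields in each related table

ALTER TABLE orders DROP PRIMARY KEY;
ALTER TABLE orders DROP COLUMN order_id;              -- drop the old PK column
ALTER TABLE orders CHANGE new_order_id order_id INT NOT NULL;
ALTER TABLE orders ADD PRIMARY KEY (order_id);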
Not sure which DBMS you're using but if it happens to be SQL Server:
SET IDENTITY_INSERT [MyTable] ON
allows you to update/insert the primary key column. Then, when you are done updating the keys (you could use a CURSOR for this if the logic is complicated), turn it back off:
SET IDENTITY_INSERT [MyTable] OFF
Hope that helps!
This may or may not be MS SQL specific, but TRUNCATE TABLE resets the identity counter, so one way to do this quick and dirty would be to:
1) Do a backup.
2) Copy the table contents to a temp table:
SELECT Field1, Field2 INTO #MyTable FROM MyTable
3) Truncate the original table and copy the temp table contents back into it (the table with the identity column):
TRUNCATE TABLE MyTable
INSERT INTO MyTable
(Field1, Field2)
SELECT Field1, Field2 FROM #MyTable
The rows then pick up new identity values starting from 1:
SELECT * FROM MyTable
-----------------------------------
ID Field1 Field2
1 Value1 Value2
Why would you even bother? The whole point of counter-based "identity" primary keys is that the numbers are arbitrary and meaningless.
You could do it in the following steps:
create copy of yourTable with extra column new_key
populate copyOfYourTable with the affected rows from yourTable along with desired values of new_key
temporarily disable constraints
update all related tables to point to the value of new_key instead of the old_key
delete affected rows from yourTable
SET IDENTITY_INSERT [yourTable] ON
insert affected rows again with the new proper value of the key (from copy table)
SET IDENTITY_INSERT [yourTable] OFF
reseed identity
re-enable constraints
delete the copyOfYourTable
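A compressed T-SQL sketch of those steps; the orders/order_items tables, the customer column, and the cutoff of 500 are all invented for illustration:
SELECT order_id, customer, order_id + 10000 AS new_key
INTO orders_copy
FROM orders
WHERE order_id >= 500;                                   -- copy of the affected rows with the desired new key

ALTER TABLE order_items NOCHECK CONSTRAINT ALL;          -- temporarily disable constraints

UPDATE oi
SET oi.order_id = c.new_key                              -- point related tables at the new key
FROM order_items oi
JOIN orders_copy c ON oi.order_id = c.order_id;

DELETE FROM orders WHERE order_id >= 500;                -- delete the affected rows

SET IDENTITY_INSERT orders ON;
INSERT INTO orders (order_id, customer)                  -- re-insert with the proper new key values
SELECT new_key, customer FROM orders_copy;
SET IDENTITY_INSERT orders OFF;

DBCC CHECKIDENT('orders', RESEED);                       -- reseed the identity

ALTER TABLE order_items WITH CHECK CHECK CONSTRAINT ALL; -- re-enable constraints
DROP TABLE orders_copy;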
But as others said all that work is not needed.
I tend to look at identity-type primary keys as if they were the equivalent of pointers in C: I use them to reference other objects but never modify or access them explicitly.
If this is Microsoft's SQL Server, one thing you could do is use the [dbcc checkident](http://msdn.microsoft.com/en-us/library/ms176057(SQL.90).aspx)
Assume you have a single table that you want to move around data within along with renumbering the primary keys. For the example, the name of the table is ErrorCode. It has two fields, ErrorCodeID (which is the primary key) and a Description.
Example Code Using dbcc checkident
-- Reset the primary key counter
dbcc checkident(ErrorCode, reseed, 7000)
-- Move all rows with IDs of 8000 or greater into the 7000 range
insert into ErrorCode (Description)
select Description from ErrorCode where ErrorCodeID >= 8000
-- Delete the old rows
delete ErrorCode where ErrorCodeID >= 8000
-- Reset the primary key counter
dbcc checkident(ErrorCode, reseed, 8000)
With this example, you'll effectively be moving all those rows to a different primary key and then resetting the counter so the next insert picks up in the 8000 range.
Hope this helps a bit!
I want to add a composite unique key to an existing table. The key consists of 4 fields (user_id, game_id, date, time).
But the table has non-unique rows.
I understand that I can remove all the duplicate rows and then add the composite key.
Maybe there is another solution that avoids searching for all the duplicate data (like some kind of ADD UNIQUE IGNORE, etc.).
UPD
I searched for how to remove duplicate MySQL rows - I think that's a good solution:
Remove duplicates using only a MySQL query?
You can do as yAnTar advised
ALTER TABLE TABLE_NAME ADD Id INT AUTO_INCREMENT PRIMARY KEY
OR
You can add a constraint
ALTER TABLE TABLE_NAME ADD CONSTRAINT constr_ID UNIQUE (user_id, game_id, date, time)
But I think that, to avoid losing your existing data, you can add an identity column and then make a composite key.
The proper syntax would be - ALTER TABLE Table_Name ADD UNIQUE (column_name)
Example
ALTER TABLE 0_value_addition_setup ADD UNIQUE (`value_code`)
I had to solve a similar problem. I inherited a large source table from MS Access with nearly 15000 records that did not have a primary key, which I had to normalize and make CakePHP compatible. One convention of CakePHP is that every table has a primary key, that it is the first column, and that it is called 'id'. The following simple statement did the trick for me under MySQL 5.5:
ALTER TABLE `database_name`.`table_name`
ADD COLUMN `id` INT NOT NULL AUTO_INCREMENT FIRST,
ADD PRIMARY KEY (`id`);
This added a new column 'id' of type integer in front of the existing data ("FIRST" keyword). The AUTO_INCREMENT keyword increments the ids starting with 1. Now every dataset has a unique numerical id. (Without the AUTO_INCREMENT statement all rows are populated with id = 0).
Set a multiple-column unique key on a table:
ALTER TABLE table_name
ADD CONSTRAINT UC_table_name UNIQUE (field1,field2);
I am providing my solution based on an assumption about your business logic. Basically, in my design I will allow the table to store only one record per user-game combination. So I will add a composite key to the table.
PRIMARY KEY (`user_id`,`game_id`)
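Applied to an existing table, that would be something like the following (the table name user_games is assumed; it will be rejected until any duplicate user/game rows have been removed):
ALTER TABLE user_games ADD PRIMARY KEY (user_id, game_id);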
Either create an auto-increment id or a UNIQUE id and add it to the natural key you are talking about with the 4 fields. This will make every row in the table unique...
For MySQL:
ALTER TABLE MyTable ADD MyId INT AUTO_INCREMENT PRIMARY KEY;
If yourColumnName has some values that aren't unique and you now want to add a unique index on it, try this:
CREATE UNIQUE INDEX [IDX_Name] ON yourTableName (yourColumnName) WHERE [id]>1963 --1963 is max(id)-1
Now try to insert some values that already exist, to test it.
I have two identical tables (same columns and primary key) in two different databases. I want to add the second table's data to the first table, but only the rows that do not already exist in the first table (according to the primary key).
What is the best method to do that?
I can export the second table's data as a CSV, a PHP array, or an SQL file.
Thanks
There are lots of ways to do this.
The simplest is probably this one:
INSERT IGNORE
INTO table_1
SELECT *
FROM table_2
;
which allows those rows in table_1 to supersede those in table_2 that
have a matching primary key, while still inserting rows with new
primary keys.
Alternatively, you can use a subquery to find out the rows that are not shared by both tables and insert them. If you've got a lot of records, you may want to consider using a temporary table to speed up the process.
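A sketch of that subquery variant, assuming the shared primary key column is called id:
INSERT INTO table_1
SELECT *
FROM table_2 t2
WHERE NOT EXISTS
      (SELECT 1 FROM table_1 t1 WHERE t1.id = t2.id);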
I have 3 tables, images, icons, and banners, each with a unique primary key that is also auto_incremented named image_id, icon_id, and banner_id, respectively.
I'm looping through the above tables and I'm wondering if there's a way I can select the id column without specifying its specific name.
Something like
SELECT PRIMARY_KEY
FROM {$table}
where I don't have to change my table structure or use *, as there would be a lot of data to return and it would slow down my application.
Just name the id columns id in each table. Reserve the whatever_id naming for foreign keys.
I'm not a LAMP guy, but it looks to me like you want the INFORMATION_SCHEMA tables.
A query something like :
SELECT pk.table_name, column_name as 'primary_key'
FROM information_schema.table_constraints pk
INNER JOIN information_schema.key_column_usage C
on c.table_name = pk.table_name and
c.constraint_name = pk.constraint_name
where constraint_type = 'primary key'
-- and pk.table_name LIKE '%whatever%'
The above query (filtered to whatever relevant set of tables you need) will give you a list of table names and their associated primary keys. With that information on hand, you could query something like:
SELECT {$PK_ColumnName}
FROM {$table}
Note, you might need more complicated syntax and a string builder if you have composite primary keys (i.e. more than one column per key). Also, the information schema can be relatively expensive to query, so you'll either want to cache the result set or query it infrequently.
The PRIMARY key is different from the column that has the primary key on it. The primary key is both an index and a constraint that is placed on one or more columns, not a column itself. Your pseudocode query:
SELECT PRIMARY_KEY
FROM tablename
is equivalent to this:
SELECT keyname
FROM tablename
Which is invalid. What you really need to select is a column, not a key.
Unfortunately, there is no column alias or simple function that you can use to specify the columns that have the primary key constraint. It's most likely not available because the primary key can apply to more than one column.
To see which columns have the PRIMARY key constraint, you could use some reflection by querying the schema tables, using SHOW COLUMNS, etc.. Simply doing SELECT * FROM tablename LIMIT 1 would get you all the column names in the result, if you wanted to assume the first column had the primary key constraint.
Of course, you could just do SELECT * anyway, when you don't know the column name.
If you don't want to make an extra query to fetch the column name to construct the query, using built-in meta data, or your own, I'd heed Marc B's answer if you can.
Or you can use the MySQL command
show columns from tablename
It will show PRI in the Key column for the primary key column(s).
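For example (output abbreviated; the table and its columns are only illustrative):
mysql> SHOW COLUMNS FROM images;
+----------+--------------+------+-----+---------+----------------+
| Field    | Type         | Null | Key | Default | Extra          |
+----------+--------------+------+-----+---------+----------------+
| image_id | int(11)      | NO   | PRI | NULL    | auto_increment |
| path     | varchar(255) | YES  |     | NULL    |                |
+----------+--------------+------+-----+---------+----------------+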
Check the online documentation for more info
Say I have two tables. One, let's call it typeDB, has rows consisting of an index, a type name, and a list of IDs for elements in the other table which are of this type. To get the rows from the second table - let's call it dataDB - of type 0, I could then basically do (in sloppy pseudocode):
$list = SELECT list FROM typeDB WHERE index=0
And then I could get the rows from dataDB using:
$array = explode($list)
for (every element of list $i)
$results = SELECT * FROM dataDB WHERE index=$array[$i]
So my question is... is this any faster than just having a type field in dataDB, and then doing:
$results = SELECT * FROM dataDB WHERE type=$type
My thought was that because the first method didn't have to go through the entire database, it would be faster. But I don't really know how the database queries work. Which way do you think would be the most efficient? Thanks.
Put an index on the type column and use your second version, it will be much faster.
Also note that I think you are a bit confused about what a database is. A database is a collection of tables (as well as triggers, stored procedures, views, etc.), so naming tables somethingDB is a bit confusing.
When I say index, I'm referring to a database index (nothing to do with what looks like a column you have called index).
To create the column and index you would use something like this (for MySQL):
ALTER TABLE dataDB ADD COLUMN `type` varchar(64)
CREATE INDEX type_index ON dataDB(type)
It is similar for other DBMSs.
As brought up in the comments, you then need to join on the type column.
You can either have a table that holds the types with an auto-increment id and a unique constraint on the type/name field, and then use the auto-increment id as the foreign key; or just make a type table with one column (type) which is the primary key. Either way will work, and both have benefits (I would go with an auto-increment column as I believe it is more flexible to work with in code).
If you did go with an auto-increment column, you'd have this:
CREATE TABLE dataType (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(64) UNIQUE
)
ALTER TABLE dataDB ADD COLUMN `type` INT;
ALTER TABLE dataDB ADD CONSTRAINT fk_type FOREIGN KEY (type) REFERENCES dataType(id);
Then when you go to query dataDB, if you want the type names (as opposed to the integers), you would do a join like this:
SELECT dataDB.list, dataType.name FROM dataDB
INNER JOIN dataType ON dataDB.type=dataType.id
where dataDB.type="$type"
This assumes the types are some kind of name and not integers to begin with, though; if they were integers all along, just make the int value the only column of the dataType table, and thus it would be your primary key.
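If the types really were integers all along, that single-column variant would be roughly this sketch (keeping the same dataDB.type INT column as above):
CREATE TABLE dataType (
    type INT NOT NULL PRIMARY KEY
);
ALTER TABLE dataDB ADD CONSTRAINT fk_type FOREIGN KEY (type) REFERENCES dataType(type);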