I want to create a view of current information as such:
CREATE VIEW activeProducts AS SELECT * FROM products WHERE isActive = 1
The above statement creates a snapshot of the query:
SELECT * FROM products WHERE isActive = 1
So if I change the state of any item in the table 'products' after this query is run, it isn't reflected in the view. I understand that this is the function of CREATE VIEW in MySQL. Is there a switch or command that I should use for viewing or filtering current information?
The thing you're looking for is called a materialized view.
MySQL does not natively support materialized views.
There are a few workarounds, but the simplest and most straightforward (if poorly performing) is using CREATE TABLE ... AS SELECT ... instead of creating a view. This duplicates data, and may be quite slow if there's a lot of it. Further, I don't believe it creates any indexes, so you might need to clean up a bit afterward.
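A minimal sketch using the names from the question (drop and re-create whenever you want to refresh the snapshot):

-- one-off snapshot of the active products
DROP TABLE IF EXISTS activeProducts;
CREATE TABLE activeProducts AS
SELECT * FROM products WHERE isActive = 1;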
What you want is a "materialized view" which is also known as a snapshot.
Charles presented the easiest way to take a snapshot, but as he suggested it won't have indexes. If you want the snapshot to include the same indexes you can use:
create table if not exists snapshot_table like regular_table;
begin;
delete from snapshot_table;
insert into snapshot_table select * from regular_table where isActive = 1;
commit;
That being said, if the table is large, or you want to use joins or aggregation, then updating the view can be very expensive.
If you want to investigate "incrementally refreshable materialized views", which can be refreshed fast, then check out this blog post.
What is the proper way to delete rows from several tables in one query?
The reason I ask is because I am doing this with PHP. If I use multiple queries to delete from each table one at a time, PHP has to make multiple trips to the database. Will there be any effect on performance if I used this method?
I am aware of the ON DELETE CASCADE option, but this does not work on every storage engine. Also, there may be situations where I do not want to remove all of the records from the child tables when I delete the parent record.
DELETE t1, t2
FROM table1 AS t1
INNER JOIN table2 AS t2 ON joinCondition
WHERE whereCondition
As usual with DELETE queries: be very careful
More details here: http://dev.mysql.com/doc/refman/5.5/en/delete.html
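As a concrete usage sketch for the question's parent/child case (the table and column names here are made up):

DELETE p, c
FROM parent AS p
INNER JOIN child AS c ON c.parent_id = p.id
WHERE p.id = 42

This removes the parent row and all of its child rows in one round trip. Note that with an INNER JOIN the parent is only deleted if it has at least one matching child; use a LEFT JOIN if it may have none.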
If you don't know the answer to this question, then you shouldn't be trying to support numerous RDBMSs for your application, to put it bluntly. The CASCADE option is available in virtually every relational database that matters. Also, you should consider looking at how to store hierarchical data in order to delete child records.
For example, if you were trying to delete all "files" in a "folder" when using the Nested Set Model, it would simply be a matter of
DELETE from files where id > :lft and id < :rgt
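(In a more typical nested set layout, where each row stores its own lft and rgt values rather than relying on id order, the same delete would look something like

DELETE FROM files WHERE lft > :lft AND rgt < :rgt

with :lft and :rgt being the boundaries of the folder being emptied.)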
But, in any case, you can still delete from multiple tables by using JOIN deletes. However, this is not supported by a lot of RDBMSs, so if you are worried about using CASCADE, then you are never going to be able to use join deletes across every database, even if you use a DBAL.
The Answer
Use a DBAL, such as Doctrine DBAL (not the ORM), and use cascades where supported (sketched below).
Pick a single database, and develop with what you know on that.
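Where the engine supports it (InnoDB, for example), the cascade is declared on the foreign key itself; a minimal sketch with made-up table names:

CREATE TABLE parent (
    id INT NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE child (
    id INT NOT NULL PRIMARY KEY,
    parent_id INT NOT NULL,
    -- deleting a parent row automatically removes its children
    FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE
) ENGINE=InnoDB;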
I was also once faced with a similar problem. My solution was to write my own small helper functions. Just one call and they do the rest for you. Here is how it goes:
//main function
function factory($db) {
    $tables = array('table1', 'table2', 'table3'); //mine went all the way to table 12
    for ($i = 0; $i < sizeof($tables); $i++) {
        delete($db, $tables[$i]);
    }
}

//delete function... could just as well set it up within the factory function, but kept separate for understanding's sake
function delete($db, $table) {
    $sql = 'DELETE FROM ' . $table;
    if ($db->query($sql)) {
        echo ucfirst($table) . ' delete completed 100%';
    }
}
That's all. To make it work with a non-predefined array, create a variable to hold the array size from your data entry page, and change the DELETE to an 'INSERT INTO table' prepared statement and execute it.
Hope it has helped you in some way
I'm sketching out a database layout for a website that has the potential to become huge, with hundreds of queries a minute.
I was thinking about doing the following:
user table
id
name
(few more fields)
Pages (this one will become the biggest table)
id
title
img
text
restaurant (this will be the column that connects the pages to the user table; I was planning on creating an index on this one to increase speed)
So I'm wondering if creating an index for the 'restaurant' column will increase the speed of my queries, or if there is any other way to speed things up?
Thanks in advance!
If you need to do some query like :
select *
from pages
where restaurant = ...
Or like :
select *
from user
inner join pages on pages.restaurant = user.id
where user.name = '...'
Or any other condition on the restaurant column, then you'll probably want to add an index on that column, to avoid scanning all rows of the pages table.
But note that useful/necessary indexes will almost always depend on the kind of queries you'll be doing.
Which means that it's not quite possible to accurately guess which indexes you'll need -- first, you need to know how you will access your data.
Note: you should read the How MySQL Uses Indexes section of MySQL's manual: it contains stuff that's interesting to know ;-)
As a test, you can always run your query in your preferred tool and add EXPLAIN in front. This will show you what indices are being used and/or which temporary tables had to be created etc.
EXPLAIN select *
from pages
where restaurant = ...
If you're using the InnoDB storage engine, you should not just add an index but make use of a FOREIGN KEY. That way, you will also reduce potential integrity problems.
Suggestion: do not use restaurant as a name. Add some more tables and it will be difficult to keep track what references what. Why not call it user_id? (This is a matter of personal preference, though.)
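A rough sketch of both suggestions together, assuming the column really does hold the user's id and user.id is an INT primary key (names are taken from the question; the rename is optional):

-- rename the column as suggested, then declare the relationship
ALTER TABLE pages CHANGE restaurant user_id INT NOT NULL;
ALTER TABLE pages ADD CONSTRAINT fk_pages_user
    FOREIGN KEY (user_id) REFERENCES user (id);
-- InnoDB will create the needed index on user_id automatically if one doesn't already exist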
I have two tables called clients; they are exactly the same but in two different databases. The master always needs to be updated from the secondary one, and all data should always be the same. The script runs once per day. What would be the best way to accomplish this?
I had the following solution, but I think maybe there's a better way to do this:
$sql = "SELECT * FROM client";
$res = mysql_query($conn,$sql);
while($row = mysql_fetch_object($res)){
$sql = "SELECT count(*) FROM clients WHERE id={$row->id}";
$res1 = mysql_query($connSecond,$sql);
if(mysql_num_rows($res1) > 0){
//Update second table
}else{
//Insert into second table
}
}
And then I need a solution to delete all old data in the second table that's not in the master.
Any advice or help would be appreciated.
This is by no means an answer to your PHP code, but you should take a look at MySQL triggers. You should be able to create triggers (on updates / inserts / deletes) and have a trigger (like a stored procedure) update your table.
Going off the description you give, I would create a trigger that checks for changes to the secondary table, then writes that change to the primary table, and deletes that initial entry (if so required) from the secondary table.
Triggers are run per conditions that you define.
Hopefully this gives you insight into 'another' way of doing this task.
More references on triggers for MySQL:
http://dev.mysql.com/doc/refman/5.0/en/triggers.html
http://www.mysqltutorial.org/create-the-first-trigger-in-mysql.aspx
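A rough sketch of what such a trigger could look like, assuming both databases live on the same MySQL server and the clients tables share an id primary key and a name column (the schema names secondary_db and master_db are made up):

-- copy each new secondary row to the master, updating it if the id already exists
CREATE TRIGGER secondary_db.clients_to_master
AFTER INSERT ON secondary_db.clients
FOR EACH ROW
    INSERT INTO master_db.clients (id, name)
    VALUES (NEW.id, NEW.name)
    ON DUPLICATE KEY UPDATE name = NEW.name;

You would need matching UPDATE and DELETE triggers to keep the two tables fully in sync.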
You can use MySQL's INSERT ... SELECT like this (but first truncate the target table):
TRUNCATE TABLE database2.client;
INSERT INTO database2.client SELECT * FROM database1.client;
It will be way faster than doing it by PHP.
And note: as long as the MySQL user has been granted the right permissions on all databases and tables where data is pulled from or pushed to, this will work. Though the mysql_select_db function selects one database, a statement may reference another if you use a fully qualified name like databasename.tablename.
Not exactly answering your question, but how about just using one table instead of two? You could use a FEDERATED table to access the other (if it's on a different MySQL instance), or reference the table directly (like shamittomar's suggestion).
If both are on the same MySQL instance, you could easily use a view:
CREATE VIEW database2.client AS SELECT * FROM database1.client;
And that's it! No synchronizing, no cron jobs, no voodoo :)
I know I am writing queries wrong, and when we get a lot of traffic our database gets hit HARD and the page slows to a crawl...
I think I need to write queries based on CREATE VIEW for the last 30 days from CURDATE, but I'm not sure where to begin, or whether this will be a MORE efficient query for the database.
Anyway, here is a sample query I have written:
$query_Recordset6 = "SELECT `date`, title, category, url, comments
FROM cute_news
WHERE category LIKE '%45%'
ORDER BY `date` DESC";
Any help or suggestions would be great! I have about 11 queries like this, but I am confident that if I can get help on one of these, I can apply it to the rest!!
Putting a wildcard on the left side of a value comparison:
LIKE '%xyz'
...means that an index cannot be used, even if one exists. You might want to consider using Full Text Search (FTS), which means adding a full text index.
Normalizing the data would be another step to consider - categories should likely be in a separate table.
SELECT `date`, title, category, url, comments
FROM cute_news
WHERE category LIKE '%45%'
ORDER BY `date` DESC
The LIKE '%45%' means a full table scan will need to be performed. Are you perhaps storing a list of categories in the column? If so, creating a new table storing category and news_article_id will allow an index to be used to retrieve the matching records much more efficiently.
OK, time for psychic debugging.
In my mind's eye, I see that query performance would be improved considerably through database normalization, specifically by splitting the multi-valued category column into a separate table that has two columns: the primary key for cute_news and the category ID.
This would also allow you to directly link said table to the categories table without having to parse it first.
Or, as Chris Date said: "Every row-and-column intersection contains exactly one value from the applicable domain (and nothing else)."
Anything with LIKE '%XXX%' is going to be slow. It's a slow operation.
For something like categories, you might want to separate categories out into another table and use a foreign key in the cute_news table. That way you can have category_id, and use that in the query which will be MUCH faster.
Also, I'm not quite sure why you're talking about using CREATE VIEW. Views will not really help you with speed, not unless it's a materialized view, which MySQL doesn't support natively.
If your database is getting hit hard, the solution isn't to make a view (the view is still basically the same amount of work for the database to do), the solution is to cache the results.
This is especially applicable since, from what it sounds like, your data only needs to be refreshed once every 30 days.
I'd guess that your category column is a list of category values like "12,34,45,78" ?
This is not good relational database design. One reason it's not good is as you've discovered: it's incredibly slow to search for a substring that might appear in the middle of that list.
Some people have suggested using fulltext search instead of the LIKE predicate with wildcards, but in this case it's simpler to create another table so you can list one category value per row, with a reference back to your cute_news table:
CREATE TABLE cute_news_category (
news_id INT NOT NULL,
category INT NOT NULL,
PRIMARY KEY (news_id, category),
FOREIGN KEY (news_id) REFERENCES cute_news(news_id)
) ENGINE=InnoDB;
Then you can query and it'll go a lot faster:
SELECT n.`date`, n.title, c.category, n.url, n.comments
FROM cute_news n
JOIN cute_news_category c ON (n.news_id = c.news_id)
WHERE c.category = 45
ORDER BY n.`date` DESC
Any answer is a guess; show:
- the relevant SHOW CREATE TABLE outputs
- the EXPLAIN output from your common queries.
And Bill Karwin's comment certainly applies.
After all this & optimizing, sampling the data into a table with only the last 30 days could still be desired, in which case you're better of running a daily cronjob to do just that.
The situation is as follows: I've got 2 models: 'Action' and 'User'. These models refer to the tables 'actions' and 'users', respectively.
My action table contains a user_id column. At this moment, I need an overview of all actions and the users they are assigned to. When I use $action->fetchAll(), I only have the user ID, so I want to be able to join the data from the user model, preferably without making a call to findDependentRowset().
I thought about creating custom fetchAll(), fetchRow() and find() methods in my model, but this would break default behaviour.
What is the best way to solve this issue? Any help would be greatly appreciated.
I designed and implemented the table-relationships feature in Zend Framework.
My first comment is that you wouldn't use findDependentRowset() anyway -- you'd use findParentRow() if the Action has a foreign key reference to User.
$actionTable = new Action();
$actionRowset = $actionTable->fetchAll();
foreach ($actionRowset as $actionRow) {
$userRow = $actionRow->findParentRow('User');
}
Edit: In the loop, you now have an $actionRow and a $userRow object. You can write changes back to the database through either object by changing object fields and calling save() on the object.
You can also use the Zend_Db_Table_Select class (which was implemented after I left the project) to retrieve a Rowset based on a join between Action and User.
$actionTable = new Action();
$actionQuery = $actionTable->select()
->setIntegrityCheck(false) // allows joins
->from($actionTable)
->join('user', 'user.id = action.user_id');
$joinedRowset = $actionTable->fetchAll($actionQuery);
foreach ($joinedRowset as $joinedRow) {
print_r($joinedRow->toArray());
}
Note that such a Rowset based on a join query is read-only. You cannot set field values in the Row objects and call save() to post changes back to the database.
Edit: There is no way to make an arbitrary joined result set writable. Consider a simple example based on the joined result set above:
action_id  action_type  user_id  user_name
1          Buy          1        Bill
2          Sell         1        Bill
3          Buy          2        Aron
4          Sell         2        Aron
Next for the row with action_id=1, I change one of the fields that came from the User object:
$joinedRow->user_name = 'William';
$joinedRow->save();
Questions: when I view the next row with action_id=2, should I see 'Bill' or 'William'? If 'William', does this mean that saving row 1 has to automatically update 'Bill' to 'William' in all other rows in this result set? Or does it mean that save() automatically re-runs the SQL query to get a refreshed result set from the database? What if the query is time-consuming?
Also consider the object-oriented design. Each Row is a separate object. Is it appropriate that calling save() on one object has the side effect of changing values in a separate object (even if they are part of the same collection of objects)? That seems like a form of Content Coupling to me.
The example above is a relatively simple query, but much more complex queries are also permitted. Zend_Db cannot analyze queries with the intention to tell writable results from read-only results. That's also why MySQL views are not updateable.
You could always make a view in your database that does the join for you.
CREATE OR REPLACE VIEW VwAction AS
SELECT [columns]
FROM action
LEFT JOIN user
ON user.id = action.user_id
Then just use
$vwAction->fetchAll();
Just remember that a view over a join like this is read-only in MySQL (assuming this is MySQL)
Isn't creating an SQL view a good solution for the join?
Afterwards, a simple table class can access it.
I would think it's better if your logic is in SQL than in PHP.