We have a database full of MyISAM tables, but we need transactions on just one table, called payments. My question is: will that work? I mean, by changing the engine of that table to InnoDB and using transactions in PHP, will that do the job, or do I have to do more than just that? Will it affect my DB in any way? The table is isolated; it doesn't have foreign keys, since MyISAM doesn't support them.
Thanks in advance.
As documented under Storage Engines:
It is important to remember that you are not restricted to using the same storage engine for an entire server or schema: you can use a different storage engine for each table in your schema.
So yes, your proposal will work (provided that you only wish to attain ACID compliance on the payments table, of course).
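As a minimal sketch of both steps, assuming the table is literally named payments (the id and status columns here are hypothetical):

ALTER TABLE payments ENGINE=InnoDB;

START TRANSACTION;
UPDATE payments SET status = 'paid' WHERE id = 42; -- hypothetical columns
COMMIT; -- or ROLLBACK; to undo everything since START TRANSACTION

From PHP you would issue the same START TRANSACTION / COMMIT statements (or use your driver's transaction API) around the queries that must succeed or fail together.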
I can't use the code below to sort my database permanently:
ALTER TABLE myTable ORDER BY column DESC;
Can anyone help? Thank you in advance!
It sounds like you are trying to create an index-oriented table (in the SQL Server world this would be a table clustered on an index; in MySQL, it would be the primary key of an InnoDB table).
SQLite does not support such a feature. You cannot permanently set a logical access order to the table. What you can do is set various secondary indexes which are themselves ordered to provide that sort of ordered access to the data.
However, keep in mind that a logical order index scan over the whole table is usually slower than scanning the whole table and sorting, so it may or may not solve any performance problems.
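As a sketch of that secondary-index approach (myTable is taken from the question; myColumn stands in for its column placeholder):

CREATE INDEX idx_mytable_mycolumn ON myTable (myColumn DESC);
SELECT * FROM myTable ORDER BY myColumn DESC; -- can be satisfied by the index instead of a separate sort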
I have searched the net a lot. What I could gather is that many people have faced this before me, and it has even been filed as a MySQL bug, but I couldn't find any solution. The problem is simply that I can't get this command working:
alter table areas order by area_name;
I get this warning:
ORDER BY ignored as there is a user-defined clustered index in the table 'areas'
I just want to sort the table by 'area_name', that is, by the names of the areas. Just to add, I am trying to do this in the database of my Laravel app.
If the db engine is InnoDB, then you can't do this.
From the doc:
ORDER BY does not make sense for InnoDB tables because InnoDB always orders table rows according to the clustered index.
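In other words, the physical order is the primary key's order and cannot be changed after the fact; if you only need sorted results, a secondary index plus an ORDER BY at query time is the usual substitute. A sketch, using the areas table from the question:

CREATE INDEX idx_areas_area_name ON areas (area_name);
SELECT * FROM areas ORDER BY area_name; -- the index can be used to avoid a filesort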
I have a table for posts with ID, Title, Content, etc. I just added a column as a counter. It is a simple counter of visits: every time, it is updated as $i+1. With this method, I update the table on every visit, just after reading the row (in a single MySQL session).
Is it a good idea to separate the counter into another table connected to the posts table with ID as a foreign key? With that method, updating a lighter table is faster, but every time I would need to read two tables (showing the post and its statistics).
So, my answer is: it depends on what you are trying to do. I'm going to give you some behind-the-scenes info on MySQL so you can decide when it does and doesn't make sense to create a counter table.
MySQL has different table engines. When you create a new table, you specify which engine to use; the more common ones are MyISAM and InnoDB. MyISAM is really good and fast at doing selects, and InnoDB is really fast at doing inserts. There are other differences between the two, but it's good to understand that the engine you select matters.
Based on the above, if you have a table that is usually read and has a ton of rows, it might make sense to keep that table as MyISAM and create a separate counter table using InnoDB that keeps getting updated. If you are implementing caching, this is another reason this model would work better, since you won't clear the cache for your table data every time you update the counter; the counter lives in a different table.
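A minimal sketch of that split (all names here are hypothetical):

CREATE TABLE post_counters (
    post_id INT UNSIGNED NOT NULL PRIMARY KEY,
    visits INT UNSIGNED NOT NULL DEFAULT 0
) ENGINE=InnoDB;

-- on every visit, touch only the small InnoDB table:
INSERT INTO post_counters (post_id, visits) VALUES (42, 1)
    ON DUPLICATE KEY UPDATE visits = visits + 1;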
Now, some may argue that you should be using InnoDB everywhere because it has many more benefits, but there are replication strategies that would allow you to make the best of both worlds.
I hope this gives you a general understanding so you can then dig deeper and find your answer. More info at: http://www.mysqlperformanceblog.com/2007/07/01/implementing-efficient-counters-with-mysql/
Does anybody have experience using the partitioning feature in conjunction with the Doctrine2 library?
The first problem is that Doctrine creates foreign keys for association columns; does anybody know how to prevent or disable that?
And the second problem: how do you specify a custom table definition (PARTITION BY ...)?
Thanks in advance!
You're not out of luck!!
First, drop all foreign keys from all the tables D2 is managing.
Copy & execute the result of this query:
SET SESSION group_concat_max_len = 8192; -- increase this if you do not see the full list of your tables
SELECT IFNULL(GROUP_CONCAT('ALTER TABLE ', TABLE_NAME, ' DROP FOREIGN KEY ', CONSTRAINT_NAME, '; ' SEPARATOR ''), '') FROM information_schema.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'FOREIGN KEY';
Then override the supportsForeignKeyConstraints() method in /vendor/doctrine-dbal/lib/Doctrine/DBAL/Platforms/MySqlPlatform.php (or wherever this class is located) to:
public function supportsForeignKeyConstraints()
{
return false;
}
This will stop Doctrine from creating foreign key constraints on your next doctrine:schema:update command. After that you can simply execute an ALTER TABLE ... PARTITION BY ... statement where needed (D2 doesn't support partitioning at the schema level). I recommend you back up & truncate your tables first (dumping with mysqldump --no-create-info) so the structure changes execute as fast as possible, then restore the data.
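For example, a range partitioning by year might look like this (the orders table and created_at column are purely illustrative; note that the partitioning column must be part of every unique key on the table, including the primary key):

ALTER TABLE orders
    PARTITION BY RANGE (YEAR(created_at)) (
        PARTITION p2011 VALUES LESS THAN (2012),
        PARTITION p2012 VALUES LESS THAN (2013),
        PARTITION pmax VALUES LESS THAN MAXVALUE
    );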
As this fellow says here, and based on my personal experience, D2 doesn't care whether you have FKs or not, as long as the proper relation definitions are in place.
P.S.: I'm currently working on extending the annotation syntax to support proper table & column definitions, including ENGINE (this might be useful), PARTITION BY, and the #Column options array (e.g. {"fixed"=true, "unsigned"=true, "default"=0}).
The overall effort amounts to a couple of sleepless nights of reverse-engineering & code patches; hope you do it faster :)
The PARTITION engine in MySQL has major limitations with regard to keys. Please see the latest docs, currently here: http://dev.mysql.com/doc/refman/5.1/en/partitioning-limitations-partitioning-keys-unique-keys.html
If Doctrine requires keys that partitioning does not support, you are out of luck. The partition engine is very limited by design: it's intended for archival storage that is infrequently read. Few MySQL-aware apps will work with partitioned tables unless you make changes.
I would suggest using partitioning as it was intended, for archiving; storing your data in a more mainstream MySQL storage engine would be the answer.
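To illustrate the key limitation: every unique key on a partitioned table must include all columns used in the partitioning expression, so MySQL rejects the first definition below and accepts the second (names are hypothetical):

-- rejected: the primary key does not include the partitioning column
CREATE TABLE t (id INT PRIMARY KEY, created DATE)
    PARTITION BY RANGE (YEAR(created)) (PARTITION p0 VALUES LESS THAN (2012));

-- accepted: the partitioning column is part of the primary key
CREATE TABLE t (id INT, created DATE, PRIMARY KEY (id, created))
    PARTITION BY RANGE (YEAR(created)) (PARTITION p0 VALUES LESS THAN (2012));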
What are your methods of linking data spread over multiple database architectures (think MySQL vs. PostgreSQL, etc.) into a single application?
Would you create giant hashtables/arrays to match content against one another? Are there other, more effective and less memory-consuming options for doing this?
If you were to use data from both a MySQL and a PostgreSQL source, with no way of converting one DB to the other (application constraints, lack of time, lack of knowledge, ...), how would you go about it?
SQL Relay or another SQL proxy.
http://sqlrelay.sourceforge.net/
At least in the case of MySQL, you can use data from multiple databases in a single query anyway, provided the databases are hosted by the same MySQL Server instance. You can distinguish tables from different databases by qualifying the table with a schema name:
CREATE TABLE test.foo (id SERIAL PRIMARY KEY) ENGINE=InnoDB;
CREATE DATABASE test2;
CREATE TABLE test2.bar (foo_id BIGINT UNSIGNED,
    FOREIGN KEY (foo_id) REFERENCES test.foo(id)) ENGINE=InnoDB;
SELECT * FROM test.foo f JOIN test2.bar b ON (f.id = b.foo_id);
In PostgreSQL, you can also qualify table references with a schema name. I'm not sure if you can create foreign key constraints across databases, though.
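For what it's worth, a foreign key across schemas within one PostgreSQL database does work; a quick sketch (names are illustrative):

CREATE TABLE public.foo (id BIGSERIAL PRIMARY KEY);
CREATE SCHEMA test2;
CREATE TABLE test2.bar (foo_id BIGINT REFERENCES public.foo(id));
SELECT * FROM public.foo f JOIN test2.bar b ON (f.id = b.foo_id);

Across two separate PostgreSQL databases, however, foreign key constraints are not possible.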
If you're looking to create constraints across RDBMSes - you can't.
I'm facing the same issue with running part of an application off PostgreSQL for where it will benefit, and the rest of MySQL where it's better.
I'm doing multiple inserts keyed off the same piece of primary information (in my case a generic user ID), so I'm letting the application handle the logic of making sure to ask for the same ID from both DBs.
There's not really a clean way to do this other than abstracting it into a class or utility function, though, as far as I've found.
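A rough sketch of the pattern in SQL (table names are hypothetical; the application generates one ID and issues each statement over its own connection):

-- on the MySQL connection:
INSERT INTO app_users (user_id, name) VALUES (?, ?);

-- on the PostgreSQL connection:
INSERT INTO app_user_prefs (user_id, prefs) VALUES (?, ?);

The application then matches rows from the two sources on user_id in code, since no cross-server join or constraint exists.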