In a project at work I have to improve performance. The app's data is spread over many databases; I was told that is better for organizing the data. Whatever. Is there a performance penalty when I do a SELECT over some tables spread across several databases, compared to a SELECT on those tables in one database?
Depends on whether or not those databases are on the same physical server.
No, there shouldn't be a significant performance penalty from spreading queries across different databases in MySQL, assuming the databases are part of the same MySQL install.
You'll do better to start by reducing the number of queries per page request and zeroing in on individual queries that take a long time to complete.
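For reference, cross-database queries on a single MySQL server only require qualifying table names with the database name. A minimal sketch of such a join, with the database, table, and column names (app_main, app_logs, and so on) invented for illustration:

<?php
// Join across two databases on the same MySQL server: tables are
// addressed as database.table. All names and credentials are placeholders.
$pdo = new PDO('mysql:host=localhost', 'user', 'password');

$sql = 'SELECT u.name, COUNT(v.id) AS visits
        FROM app_main.users AS u
        JOIN app_logs.page_views AS v ON v.user_id = u.id
        GROUP BY u.id, u.name';

foreach ($pdo->query($sql) as $row) {
    echo $row['name'] . ': ' . $row['visits'] . "\n";
}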
I have several huge MySQL tables (10 GB+). Will the performance of PHP queries I am running on one of these tables be influenced by the presence of other huge tables in the same database? I am not running any queries on those other tables; they are neither directly nor indirectly linked to or referenced by my PHP query.
It depends. You have to take into account the limited resources: memory and disk.
Is any other program or process accessing those tables?
Also, you have to consider how your tables are physically stored in the database. Are they in the same filegroup? Another problem is fragmentation: you might think different tables are physically separate, but that depends on how your database grows. You can have scenarios where two different tables that grow together over time end up with their data physically interleaved on disk, and that has an impact on performance. Hope my answer helps you.
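In MySQL you can get a rough view of that from SHOW TABLE STATUS (the Data_free column counts unused space inside the table's data file) and rebuild a fragmented table with OPTIMIZE TABLE. A small sketch, with the table name and credentials made up:

<?php
// Rough fragmentation check followed by a rebuild. OPTIMIZE TABLE can
// lock the table while it runs, so schedule it off-peak.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password');

$status = $pdo->query("SHOW TABLE STATUS LIKE 'page_views'")
              ->fetch(PDO::FETCH_ASSOC);
echo 'Fragmented space: ' . $status['Data_free'] . " bytes\n";

$pdo->query('OPTIMIZE TABLE page_views');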
I am designing a "high" traffic application which relies mainly on PHP and MySQL database queries.
I am designing the database tables so they can hold 100,000 rows; each page load queries the DB for user data.
Can I expect slow performance or database errors when there are, say, 1000 users connected?
I'm asking because I cannot find specifications on the real performance limits of MySQL databases.
Thanks
If the user data remains unchanged between page loads, you could think about storing that information in a session.
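A minimal sketch of that idea, assuming the users table and columns invented here and that $_SESSION['user_id'] was set at login:

<?php
// Cache per-user data in the session so repeat page loads skip the query.
session_start();

if (!isset($_SESSION['user_data'])) {
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password');
    $stmt = $pdo->prepare('SELECT name, email FROM users WHERE id = ?');
    $stmt->execute([$_SESSION['user_id']]);
    $_SESSION['user_data'] = $stmt->fetch(PDO::FETCH_ASSOC);
}

$userData = $_SESSION['user_data']; // no DB hit on subsequent loads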
Also, you should analyze what the read/write ratio in your database, and on specific tables, looks like. MyISAM and InnoDB are very different when it comes to locking. Many connections can slow down your server; try to cache connections.
Take a look at http://php.net/manual/en/pdo.connections.php
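That page covers persistent connections, which are one way to cache them; a minimal example with placeholder credentials:

<?php
// Persistent PDO connection: kept open and reused across requests
// instead of being re-established every time.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password', [
    PDO::ATTR_PERSISTENT => true,
]);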
If designed wrongly, one user might kill your server. You need to run performance tests and find bottlenecks by profiling your code. Use EXPLAIN for your queries...
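EXPLAIN can be run from PHP like any other statement; a small sketch, with the query, table, and credentials as placeholders:

<?php
// Ask MySQL how it would execute a query. type=ALL with key=NULL in the
// output means a full table scan, i.e. a likely missing index.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password');

$plan = $pdo->query("EXPLAIN SELECT * FROM users WHERE email = 'a@b.com'")
            ->fetchAll(PDO::FETCH_ASSOC);
print_r($plan); // check the type, key and rows columns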
Well-designed databases can handle tens of millions of rows, but poorly designed ones can't.
Don't worry about performance; try to design it well.
It's just hard to say whether a design is good or not; you should always do some stress tests before you set up your application or website, to help you see the performance. Tools I often use are mysqlslap (for MySQL only) and Apache's ab command; you can google them for details.
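For example (the concurrency levels, request counts, and URL below are arbitrary placeholders):

mysqlslap --user=root --password --concurrency=50 --iterations=10 --auto-generate-sql
ab -n 1000 -c 50 http://localhost/index.php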
I'm setting up a MySQL database and I'm not sure of the best method to structure it:
I am setting up a system (PHP/MySQL based) where a few hundred people will be executing SELECT/UPDATE/SET/DELETE queries against a database (probably about 50 simultaneously). I imagine there are going to be a few thousand rows if they're all using the same database and table. I could split the data across a number of tables, but then I would have to make sure they're all uniform, and I, as the administrator, will be running some SELECT DISTINCT queries via cron to update an administrative interface.
What's the best way to approach this? Can I have everybody sharing one database? One table? Will there be a problem when there are a few thousand rows? I imagine there is going to be a huge performance issue over time.
Any tips or suggestions are welcome!
MySQL/PHP can easily handle this as long as your server is powerful enough. MySQL loves RAM and will use as much as it can (within the limits you provide).
If you're going to have a lot of concurrent users, then I would suggest using InnoDB tables instead of MyISAM (the default in MySQL versions < 5.5). InnoDB locks individual rows when doing INSERT/UPDATE/DELETE and so on, rather than locking the whole table as MyISAM does.
We use PHP/MySQL and regularly have 1000+ users on our site at the same time (our master DB server does about 4k queries per second).
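Switching an existing table over is a single statement; a sketch, with the table name (posts) and credentials made up (the rebuild can take a while on large tables):

<?php
// Convert a MyISAM table to InnoDB to get row-level locking.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password');
$pdo->exec('ALTER TABLE posts ENGINE=InnoDB');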
I have a report class that saves summary data like page views, banner impressions, and so on. These are mostly INSERT and UPDATE queries. This takes a pounding.
Would it be a good idea to move these tables to a separate database and use a separate connection? I'm wondering if there would be any major advantages in relation to performance, scalability, etc. Thanks
Yes, but only if you are trying to manage load. If there are a lot of inserts and updates going on, that could cause locking issues if your tables are MyISAM. If you are doing replication, then heavy logging can cause slaves to fall behind, because MySQL replication is serial: you can do 10 inserts simultaneously on 10 different tables, but replication runs them one after another. That can cause more important inserts to take longer to replicate.
Having a separate server for logging will help performance and scalability. Keep in mind, though, that you can't join across servers.
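A minimal sketch of the two-connection setup, with hostnames, credentials, and table names all invented:

<?php
// One handle for normal app reads, one for the heavy logging writes on
// a separate server.
$appDb = new PDO('mysql:host=db-main;dbname=app', 'user', 'password');
$logDb = new PDO('mysql:host=db-logs;dbname=reports', 'user', 'password');

// Page data comes from the main server...
$page = $appDb->query('SELECT id, title FROM pages WHERE id = 1')->fetch();

// ...while the INSERT/UPDATE pounding goes to the logging server.
$logDb->prepare('INSERT INTO page_views (page_id, viewed_at) VALUES (?, NOW())')
      ->execute([$page['id']]);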
Let's assume I have the following query:
SELECT address
FROM addresses a, names n
WHERE a.address_id = n.address_id
GROUP BY n.address_id
HAVING COUNT(*) >= 10
If the two tables were large enough (think the whole US population in these two tables), then running EXPLAIN on this SELECT would report Using temporary; Using filesort, which is usually not good.
If we have a DB with many concurrent INSERTs and SELECTs (like this), would delegating the GROUP BY n.address_id HAVING COUNT(*) >= 10 part to PHP be a good plan to minimise DB resources? What would be the most efficient way (in terms of computing power) to code this?
EDIT: It seems the consensus is that offloading to PHP is the wrong move. How, then, could I improve the query (let's assume indexes have been created properly)? More specifically, how do I prevent the DB from creating a temporary table?
So your plan to minimize resources is to suck all the data out of the database and have PHP process it, causing extreme memory usage?
Don't do client-side processing if at all possible - databases are DESIGNED for this sort of heavy work.
Offloading this to PHP is probably the opposite direction you want to go. If you must do this on a single machine, then the database is likely the most efficient place to do it. If you have a bunch of PHP machines and only a single DB server, then offloading might make sense, but more likely you'll just clobber the IO capability of the DB.
You'll probably get a bigger win by setting up a replica and doing your read queries there. Depending on your ratio of SELECT to INSERT queries, you might want to consider keeping a tally table (many more SELECTs than INSERTs); a sketch of that idea follows below.
The more latency you can allow for your results, the more options you have. If you can allow 5 minutes of latency, then you might start considering a distributed batch-processing system like Hadoop rather than a database.
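A hedged sketch of such a tally table, with table and column names modelled on the question's schema but otherwise assumed (address_id is taken as the tally table's primary key, and the tables are assumed to be InnoDB so the transaction holds):

<?php
// Keep a per-address count of names so the report never needs to
// GROUP BY over the big tables.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$addressId = 42;   // example values; normally supplied by the application
$name = 'Alice';

// Bump the tally in the same transaction as the insert into names.
$pdo->beginTransaction();
$pdo->prepare('INSERT INTO names (address_id, name) VALUES (?, ?)')
    ->execute([$addressId, $name]);
$pdo->prepare('INSERT INTO address_tally (address_id, name_count)
               VALUES (?, 1)
               ON DUPLICATE KEY UPDATE name_count = name_count + 1')
    ->execute([$addressId]);
$pdo->commit();

// The report becomes a cheap scan of the small tally table, with no
// temporary table and no filesort:
$busy = $pdo->query('SELECT a.address
                     FROM address_tally t
                     JOIN addresses a ON a.address_id = t.address_id
                     WHERE t.name_count >= 10')->fetchAll();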