MySQL database design - PHP

I'm setting up a MySQL database and I'm not sure of the best method to structure it:
I am setting up a system (PHP/MySQL based) where a few hundred people will be executing SELECT/INSERT/UPDATE/DELETE queries against a database (probably about 50 simultaneously). I imagine there will be a few thousand rows if they are all using the same database and table. I could split the data across a number of tables, but then I would have to make sure they all stay uniform, and I, as the administrator, will be running some SELECT DISTINCT queries via cron to update an administrative interface.
What's the best way to approach this? Can I have everybody sharing one database? One table? Will there be a problem once there are a few thousand rows? I imagine there is going to be a huge performance issue over time.
Any tips or suggestions are welcome!

MySQL/PHP can easily handle this as long as your server is powerful enough. MySQL loves RAM and will use as much of it as it can (within the limits you set).
If you're going to have a lot of concurrent users, then I would suggest using InnoDB tables instead of MyISAM (the default engine in MySQL versions before 5.5). InnoDB locks individual rows when doing INSERT/UPDATE/DELETE etc., rather than locking the whole table like MyISAM does.
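As a minimal sketch (the table name is hypothetical), converting an existing MyISAM table to InnoDB is a single statement:

ALTER TABLE user_actions ENGINE=InnoDB;

You can check which engine a table currently uses with SHOW TABLE STATUS.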
We use PHP/MySQL and regularly have 1000+ users on our site at the same time (our master DB server does about 4k queries per second).

Optimization: large MySQL table, only recent records used

I have an optimization question.
The PHP web application that I recently started working with has several large tables in a MySQL database. The information in these tables needs to be accessible at all times for business purposes, which makes them grow really big over time.
The tables are regularly written to and recent records are frequently selected.
Previous developers came up with a very weird practice for optimizing the system. They created a separate database for storing recent records, in order to keep the main tables compact, and sync the tables once a record grows "old" (more than 24 hours old).
The application uses the current date to pick the right database when performing a SELECT query.
This is a very weird solution in my opinion. We had a big argument over it and I am looking to change it. However, before I do, I decided to ask:
1) Has anyone ever come across anything similar before? I mean, a separate database for recent records.
2) What are the most common practices to optimize databases for this particular case?
Any opinions are welcome, as there are many ways one can go at this point.
Try using an index:
CREATE INDEX
An index improves how the information is accessed and retrieved.
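Since recent records are selected by date, a date-based index is the obvious candidate. A minimal sketch, assuming the table and its timestamp column are called records and created_at (both hypothetical names):

CREATE INDEX idx_records_created_at ON records (created_at);

-- queries like this can then use the index instead of scanning the whole table:
SELECT * FROM records WHERE created_at >= NOW() - INTERVAL 24 HOUR;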
I believe RANGE partitioning could help you.
The solution is to partition the table based on a date range.
By splitting a large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan. Maintenance tasks, such as rebuilding indexes or backing up a table, can run more quickly.
The MySQL documentation can be useful; check this out:
https://dev.mysql.com/doc/refman/5.5/en/partitioning-columns-range.html
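A minimal sketch of such a layout, again assuming a records table with a created_at DATE column (hypothetical names):

ALTER TABLE records
PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p2012 VALUES LESS THAN (TO_DAYS('2013-01-01')),
    PARTITION p2013 VALUES LESS THAN (TO_DAYS('2014-01-01')),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);

Note that MySQL requires the partitioning column to be part of every unique key on the table, including the primary key.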

MySQL PHP application performance with a single database

I am designing an "high" traffic application which realies mainly on PHP and MySQL database queries.
I am designing the database tables so they can hold 100'000 rows, each page loading queries the db for user data.
I can experience slow performances or database errors when there are say 1000 users connected ?
Asking because i cannot find specification on the real performance limits of mysql databases.
Thanks
If the user data remains unchanged while loading another page, you could think about storing that information in a session.
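A minimal sketch of that idea (fetch_user_data() and $userId are hypothetical):

<?php
session_start();

// Query the database only once per session, then reuse the cached copy.
if (!isset($_SESSION['user_data'])) {
    $_SESSION['user_data'] = fetch_user_data($userId);
}
$userData = $_SESSION['user_data'];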
Also, you should analyze what the read/write ratio in your database (or on specific tables) is; MyISAM and InnoDB are very different when it comes to locking. Many connections can slow down your server, so try to cache connections.
Take a look at http://php.net/manual/en/pdo.connections.php
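That page describes persistent connections, which are one way to "cache" connections; a sketch (the DSN and credentials are placeholders):

<?php
// PDO::ATTR_PERSISTENT reuses an existing connection instead of opening a new one per request.
$dbh = new PDO('mysql:host=localhost;dbname=app', $user, $pass, array(
    PDO::ATTR_PERSISTENT => true,
));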
If designed wrongly, one user might kill your server. You need to run performance tests and find bottlenecks by profiling your code. Use EXPLAIN for your queries...
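For example (table and column names are hypothetical):

EXPLAIN SELECT * FROM users WHERE email = 'foo@example.com';

If the type column in the output shows ALL, MySQL is scanning the whole table and you are probably missing an index.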
Well-designed databases can handle tens of millions of rows, but poorly designed ones can't.
Don't worry about performance; try to design it well.
It's hard to say whether a design is good or not; you should always do some stress tests before you set up your application or website, to see how it performs. Tools I often use are mysqlslap (for MySQL only) and Apache's ab command. You can google them for details.
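Typical invocations look something like this (the URL is a placeholder):

# simulate 50 concurrent clients running an auto-generated workload, repeated 10 times
mysqlslap --concurrency=50 --iterations=10 --auto-generate-sql

# send 1000 HTTP requests, 50 at a time, to one page
ab -n 1000 -c 50 http://example.com/page.php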

Any advantages to using separate databases on single server for a reports module?

I have a report class that saves data like summary page views, banner impressions and so on. These are mostly INSERT and UPDATE queries. This takes a pounding.
Would it be a good idea to move the tables to a separate database and use a separate connection? Wondering if there would be any major advantages in relation to; performance, scalability etc. Thanks
Yes, but only if you are trying to manage load. If there are a lot of inserts and updates going on, they could cause locking issues if your tables are MyISAM. If you are doing replication, then heavy logging can cause slaves to fall behind, because MySQL replication is serial: you can do 10 inserts simultaneously on 10 different tables, but replication will run them one after another. That can cause more important inserts to take longer to replicate.
Having a separate server for logging will help performance and scalability. Keep in mind, though, that you can't join across servers.
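A sketch of the separate-connection setup (hosts, database names and credentials are placeholders):

<?php
// Main application connection.
$app = new PDO('mysql:host=db-main;dbname=app', $user, $pass);

// Dedicated connection for the write-heavy reporting tables,
// which can point at a completely different server.
$reports = new PDO('mysql:host=db-reports;dbname=reports', $user, $pass);

$stmt = $reports->prepare('INSERT INTO banner_impressions (banner_id, seen_at) VALUES (?, NOW())');
$stmt->execute(array($bannerId));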

Using PHP to optimize MySQL query

Let's assume I have the following query:
SELECT a.address
FROM addresses a
JOIN names n ON a.address_id = n.address_id
GROUP BY n.address_id
HAVING COUNT(*) >= 10
If the two tables were large enough (think the whole US population in these two tables), then running EXPLAIN on this SELECT would show Using temporary; Using filesort, which is usually not good.
If we have a DB with many concurrent INSERTs and SELECTs (like this one), would delegating the GROUP BY n.address_id HAVING COUNT(*) >= 10 part to PHP be a good plan to minimise DB resources? What would be the most efficient way (in terms of computing power) to code this?
EDIT: It seems the consensus is that offloading to PHP is the wrong move. How, then, could I improve the query (let's assume indexes have been created properly)? More specifically, how do I keep the DB from creating a temporary table?
So your plan to minimize resources is to suck all the data out of the database and have PHP process it, causing extreme memory usage?
Don't do client-side processing if at all possible - databases are DESIGNED for this sort of heavy work.
Offloading this to PHP is probably the opposite of the direction you want to go. If you must do this on a single machine, then the database is likely the most efficient place to do it. If you have a bunch of PHP machines and only a single DB server, then offloading might make sense, but more likely you'll just clobber the IO capability of the DB.
You'll probably get a bigger win by setting up a replica and doing your read queries there. Depending on your ratio of SELECT to INSERT queries, you might want to consider keeping a tally table (useful when there are many more SELECTs than INSERTs).
The more latency you can allow for your results, the more options you have. If you can allow 5 minutes of latency, then you might start considering a distributed batch processing system like Hadoop rather than a database.
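A minimal sketch of the tally-table idea (table and column names are hypothetical):

-- One row per address, kept up to date as names are inserted.
CREATE TABLE address_name_counts (
    address_id INT PRIMARY KEY,
    name_count INT NOT NULL DEFAULT 0
);

-- On every insert into names, bump the counter for that address.
INSERT INTO address_name_counts (address_id, name_count)
VALUES (123, 1)
ON DUPLICATE KEY UPDATE name_count = name_count + 1;

-- The expensive GROUP BY ... HAVING then becomes a cheap indexed read.
SELECT address_id FROM address_name_counts WHERE name_count >= 10;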

Select over multiple databases

In a project at work I have to improve performance. The data of the app is spread over many databases; I was told that is better for organizing the data. Whatever. Is there a performance penalty when I do a SELECT over some tables spread across several databases, instead of a SELECT on those tables in one database?
Depends on whether or not those databases are on the same physical server.
No, there shouldn't be a significant performance difference from spreading queries across different databases in MySQL, assuming the databases are part of the same MySQL install.
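On a single server you can reference tables in other databases directly with the database.table syntax; a sketch with hypothetical names:

SELECT o.id, c.name
FROM shop_db.orders AS o
JOIN crm_db.customers AS c ON c.id = o.customer_id;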
You'll do better to start with reducing the number of queries per page request, and zeroing in on individual queries that are taking a long time to complete.
