MySQL/PHP application performance with a single database - php

I am designing a high-traffic application that relies mainly on PHP and MySQL database queries.
I am designing the database tables to hold about 100,000 rows, and each page load queries the DB for user data.
Could I experience slow performance or database errors when, say, 1000 users are connected?
I'm asking because I cannot find any specification of the real performance limits of MySQL databases.
Thanks

If the user data remains unchanged from one page load to the next, you could think about storing that information in the session.
You should also analyze the read/write ratio in your database and on specific tables: MyISAM and InnoDB are very different when it comes to locking. Many connections can also slow down your server, so try to reuse connections.
Take a look at http://php.net/manual/en/pdo.connections.php
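A minimal sketch of both ideas (session caching plus a persistent PDO connection), assuming a hypothetical users table, DSN and login flow:

```php
<?php
session_start();

// Persistent connection: PDO reuses an already-open connection
// instead of opening a new one on every page load.
// DSN and credentials here are placeholders.
$pdo = new PDO(
    'mysql:host=localhost;dbname=app;charset=utf8',
    'app_user',
    'secret',
    [PDO::ATTR_PERSISTENT => true]
);

// Cache the user's row in the session so later page loads skip the query.
// Assumes $_SESSION['user_id'] was set at login.
if (!isset($_SESSION['user'])) {
    $stmt = $pdo->prepare('SELECT id, name, email FROM users WHERE id = ?');
    $stmt->execute([$_SESSION['user_id']]);
    $_SESSION['user'] = $stmt->fetch(PDO::FETCH_ASSOC);
}

$user = $_SESSION['user'];
```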

If designed wrongly, one user might kill your server. You need to run performance tests and find bottlenecks by profiling your code. Use EXPLAIN on your queries...
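For example, a quick way to check a query plan from PHP (table, column and credentials below are made up):

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret');

// Prefix a suspicious query with EXPLAIN and inspect the plan:
// type=ALL means a full table scan; "key" shows which index is used.
$stmt = $pdo->query("EXPLAIN SELECT * FROM users WHERE email = 'a@b.com'");
print_r($stmt->fetchAll(PDO::FETCH_ASSOC));
```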

Well-designed databases can handle tens of millions of rows, but poorly designed ones can't.
Don't worry about performance yet; concentrate on designing it well.

It's hard to say up front whether a design is good or not; you should always run some stress tests before you put your application or website live, to see how it actually performs. The tools I use most often are mysqlslap (for MySQL only) and Apache's ab command; you can google them for details.

Related

Single DB or multiple DBs (for multiple users in a single application)

I'm new to PHP/MySQL, and I'm coding a simple CMS. In this case I will host multiple companies (each company with its own users), each paying a fee to use the system.
So... my question is about how to organize the database. In terms of security, management and performance, I'd like your opinion on which of these approaches is best:
Host all companies in a single DB, with a company ID to match against the users.
Give each company a separate DB that holds its users (so the company ID is no longer needed).
I would start development following the first approach... But then I thought: if there is a hacker attack / SQL injection, every client would be harmed. With separate DBs, the damage would hit only one client. So maybe the second approach could be better in terms of security, but I can't say the same about management and performance.
So, based on your experience, any help or tip would be great!
Thanks in advance, and sorry about my poor English.
I would go for separate DBs, and not only because of hacking.
Scalability:
Let's say you have a server that handles 10 websites, but one of them is growing fast in requests, content, etc., and your server is having a hard time hosting them all.
With separate DBs it is a piece of cake to spread them over multiple servers. With a single one you would have to upgrade your current DB or cluster it, which is sometimes not possible with your hosting company, or very expensive.
Performance:
If they are all in one DB and the data of multiple users sits in one table, locks might slow down other users.
Large tables mean large indices, large lookups, etc., so splitting into different DBs would actually speed things up.
You have to pay some extra memory and CPU overhead per DB, but that normally doesn't have a big impact.
And yes, managing multiple DBs is more work, but having proper update scripts and keeping a close eye on the DB schema versions will reduce your management concerns a lot.
Update: also see this article: http://msdn.microsoft.com/en-us/library/aa479086.aspx
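For what it's worth, a rough sketch of per-company connection routing in PHP; the company-to-DSN map and credentials below are hypothetical:

```php
<?php
// Map each company to its own database. In practice this map would
// live in a small central config/directory database.
$tenantDsns = [
    'acme'   => 'mysql:host=db1;dbname=cms_acme',
    'globex' => 'mysql:host=db2;dbname=cms_globex',
];

function connectForCompany($company, array $dsns)
{
    if (!isset($dsns[$company])) {
        throw new RuntimeException("Unknown company: $company");
    }
    // Moving a company to another server is just a change to its DSN entry.
    return new PDO($dsns[$company], 'cms_user', 'secret');
}

$pdo = connectForCompany('acme', $tenantDsns);
```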
Separate DBs have many advantages, including performance, security, scalability, mobility, etc. Packing everything into one database is more risk for less reward, especially when you are talking about separate companies' data.
You haven't provided any details, but generally speaking, I would opt for separate databases.
Using an autonomous database for every client allows a finer degree of control: you can manage/back up/drop each one individually without affecting the others. It also requires less grooming, since data is easier to tell apart and one database cannot break the others.
Not to mention it makes the development process easier: with separate databases you don't have to verify the "owner" of the rows in every query (see the sketch below).
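To make that concrete, a sketch with a hypothetical schema; in the shared design, one forgotten WHERE clause exposes another company's rows:

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=cms', 'cms', 'secret'); // placeholder
$companyId = 42;
$slug = 'home';

// Shared database: every single query must be scoped to the company.
$stmt = $pdo->prepare('SELECT * FROM pages WHERE company_id = ? AND slug = ?');
$stmt->execute([$companyId, $slug]);

// One database per company: the connection itself is the scope.
$stmt = $pdo->prepare('SELECT * FROM pages WHERE slug = ?');
$stmt->execute([$slug]);
```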
If you plan to host this database in a cloud environment such as Azure, where resources are (relatively) cheap, your clients all run the same code base, the database schema is the same (obviously), and there is a possibility of sharing some data between the companies, then a multi-tenant database may be the way to go. For anything else, you will probably be creating a lot of extra work by going multi-tenant.
Keep in mind that if you go the separate-databases route, migrating to a multi-tenant cloud solution later on is a HUGE task. I only mention this because all I've been hearing for the past few years around the IT water coolers is "Cloud! Cloud! Cloud!".

Any advantages to using separate databases on a single server for a reports module?

I have a report class that saves data such as summary page views, banner impressions and so on. These are mostly INSERT and UPDATE queries, and the tables take a pounding.
Would it be a good idea to move those tables to a separate database and use a separate connection? I'm wondering whether there would be any major advantages in terms of performance, scalability, etc. Thanks
Yes, but only if you are trying to manage load. If there are a lot of inserts and updates going on, that can cause locking issues if your tables are MyISAM. If you are doing replication, heavy logging can cause slaves to fall behind, because MySQL replication is serial: you can do 10 inserts simultaneously on 10 different tables, but replication runs them one after another. That can make more important inserts take longer to replicate.
Having a separate server for logging will help performance and scalability. Keep in mind, though, that you can't JOIN across servers.
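A small sketch of the split, with made-up host names: the report writes get their own connection so they can later be pointed at a separate server without touching the main code path.

```php
<?php
// Main application database.
$appDb = new PDO('mysql:host=db-main;dbname=app', 'app', 'secret');

// Reporting/logging database on its own connection: moving it to a
// dedicated server later is just a DSN change. Remember you can no
// longer JOIN report tables against application tables.
$statsDb = new PDO('mysql:host=db-stats;dbname=stats', 'stats', 'secret');

$bannerId = 7;
$stmt = $statsDb->prepare(
    'INSERT INTO banner_impressions (banner_id, viewed_at) VALUES (?, NOW())'
);
$stmt->execute([$bannerId]);
```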

Caching for a huge database

I am working on a project which is a kind of social network... studies say we'll have more than 100,000 users within the first couple of months.
The website is built with PHP and MySQL, and I am searching for the fastest caching engine, since we are talking about caching user data after sign-in.
So we are talking about a huge database, huge numbers of records in the same table, huge numbers of users and requests, and a huge cache size.
Please note that as a first step the website will be hosted on a shared server before it is moved to a dedicated server (the client's decision, not ours).
Any tip, hint or suggestion is appreciated.
Thanks
1) Put a lot of thought into a sensible database schema, since changing it later will be painful. MySQL tables are good at fast SELECT operations, which sounds appropriate for your app.
2) Don't optimize your code prematurely, i.e. don't worry about a caching solution yet; instead focus on writing modular code so you can easily add caching at the bottlenecks later.
3) After 1 and 2, think about caching based on what will be retrieved and how often. I've seen applications put user information into the session variable, which reduces database hits. If that's not sufficient, look into Memcached. If you have bigger data, maybe Varnish.
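As an illustration of the Memcached pattern for the sign-in case, a minimal sketch using PHP's Memcached extension (the key scheme and TTL are assumptions, and the extension may not be available on shared hosting):

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app', 'secret'); // placeholder

$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$userId = 123;
$key = 'user:' . $userId;      // hypothetical key scheme
$user = $cache->get($key);

if ($user === false) {         // cache miss: fall back to MySQL
    $stmt = $pdo->prepare('SELECT id, name, email FROM users WHERE id = ?');
    $stmt->execute([$userId]);
    $user = $stmt->fetch(PDO::FETCH_ASSOC);
    $cache->set($key, $user, 300);  // keep it for 5 minutes
}
```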

Accessing MySQL from PHP and another process at the same time

I'm writing a program that runs (24/7) on a Linux server and adds entries to a MySQL database.
The contents of the database are presented on a web interface with PHP and the user should be able to delete entries using the web interface.
Is it possible to access the database from multiple processes at the same time?
Yes, databases are designed for this purpose quite well. You'll want to keep a few things in mind in your designs:
Concurrency and race conditions on database writes.
Performance.
Separate database permissions for separate applications (sketched below).
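On that last point, a sketch of what separate permissions could look like, run once by an administrator (account names and passwords are placeholders):

```php
<?php
// Run once with an administrative account.
$admin = new PDO('mysql:host=localhost', 'root', 'secret');

$admin->exec("CREATE USER 'collector'@'localhost' IDENTIFIED BY 'pw1'");
$admin->exec("CREATE USER 'webui'@'localhost' IDENTIFIED BY 'pw2'");

// The 24/7 logging process can only add rows; the PHP front end
// can also read and delete them, but not alter the schema.
$admin->exec("GRANT INSERT ON app.entries TO 'collector'@'localhost'");
$admin->exec("GRANT SELECT, DELETE ON app.entries TO 'webui'@'localhost'");
```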
Unless you're doing something like accessing the DB through a singleton, the maximum number of simultaneous MySQL connections PHP will use is limited in your php.ini. I believe it defaults to 100.
Yes, multiple users can access the database at the same time.
You should, however, take care that the data stays consistent.
If you create/edit an entry with many small SQL statements and someone uses the web interface in the meantime, this may lead to errors.
If you have a simple DB this should not be a problem; otherwise you should consider using transactions.
http://dev.mysql.com/doc/refman/5.0/en/ansi-diff-transactions.html
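A short sketch of wrapping a multi-statement edit in a PDO transaction, so the web interface never sees a half-written entry (the schema is hypothetical, and the tables must use InnoDB for rollback to work):

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $pdo->beginTransaction();

    // Several small statements become one atomic unit: other
    // connections see either all of them or none of them.
    $pdo->prepare('INSERT INTO entries (title) VALUES (?)')
        ->execute(['new entry']);
    $pdo->prepare('UPDATE counters SET total = total + 1 WHERE name = ?')
        ->execute(['entries']);

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();   // undo everything on failure
    throw $e;
}
```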
Yes, and there will be no problems deleting records while that automated 24/7 program is running, provided you use the InnoDB engine: conflicting transactions are isolated from each other, so the database stays consistent at all times.
This answer, How to implement the ACID model for a database, has many relevant points.
Read about the ACID properties of a database. A MySQL database with the InnoDB engine will take care of all these things for you, and you need not worry about them.

Scaling phpBB?

I'm looking to scale an existing phpBB installation by separating the read queries from the write queries to two separate, replicated MySQL servers. Anyone succeeded in doing this, specifically with phpBB?
The biggest concern I have so far is that it seems like the queries are scattered haphazardly throughout the code. I'd love to hear if anyone else did this, and if so, how it went / what was the process.
You could try MySQL Proxy, which would be an easy way to split the queries without changing the application.
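If you do end up touching the application instead, the usual alternative to a proxy is a split at the connection layer. A deliberately naive sketch, not phpBB's actual API, with made-up host names:

```php
<?php
// Writes go to the master, reads to a replicated slave.
$master = new PDO('mysql:host=db-master;dbname=phpbb', 'forum', 'secret');
$slave  = new PDO('mysql:host=db-slave;dbname=phpbb', 'forum', 'secret');

function pickConnection($sql, PDO $master, PDO $slave)
{
    // Route anything that is not a SELECT to the master. Beware of
    // replication lag: a read issued right after a write may still
    // need to go to the master to see its own data.
    return stripos(ltrim($sql), 'SELECT') === 0 ? $slave : $master;
}

$db = pickConnection('SELECT * FROM phpbb_topics', $master, $slave);
```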
Just add more RAM. Enough RAM to hold the entire database. You'll be surprised how fast your inefficient script will fly. Memory forgives a lot of database scaling mistakes.
I know this was asked a long time ago, but I'd like to share what I experienced, in case it can help anyone.
If your problem is table locks, and knowing that the default storage engine of phpBB at the time was MyISAM, have you looked at moving to the InnoDB storage engine?
Just find out which tables are most frequently locked and convert those to InnoDB. The sessions table is the first candidate here, although you may want to look at other optimizations (such as storing session data only in memcache) if that is your main bottleneck.
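The conversion itself is a single statement per table; for example, assuming the default phpbb_ table prefix:

```php
<?php
$pdo = new PDO('mysql:host=localhost;dbname=phpbb', 'forum', 'secret'); // placeholder

// The table is rebuilt and locked while the conversion runs,
// so do this during a quiet period.
$pdo->exec('ALTER TABLE phpbb_sessions ENGINE=InnoDB');
```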
