How is MySQL's 200-connection limit applied? - PHP

I am building an ecommerce site hosted on GoDaddy shared hosting, and I saw that only 200 database connections are allowed.
I am using the CodeIgniter framework for the site, and I have 2 databases for the project:
Database 1 stores sessions only, accessed by a user with read, write, update and delete privileges.
Database 2 holds the rest of the tables needed for the site, accessed by a read-only user.
Since one website visitor will be connecting to both databases, does this mean I can only have 100 visitors at a time, since each one will be using 2 connections?
Or can someone explain the 200-connection limit, please?

As #Drew said, it depends. Limits exist everywhere (hardware, software, bandwidth, etc.), and GoDaddy has its own limitations (not only on database connections). Optimizing your code helps you get the most out of the web server and database servers.
For example, if your code holds a connection to each database for 1 second per request, 200 connections let you serve 100 visitors per second. If you hold them for only 0.2 seconds, you can serve 500 visitors per second.
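That arithmetic can be sketched as a quick back-of-the-envelope calculation (the function name is illustrative, not part of any framework):

```php
<?php
// Rough throughput estimate: how many visitors per second a connection
// limit supports, given connections per visitor and connection hold time.
function maxVisitorsPerSecond(int $connectionLimit, int $connectionsPerVisitor, float $holdSeconds): float
{
    return $connectionLimit / ($connectionsPerVisitor * $holdSeconds);
}

echo maxVisitorsPerSecond(200, 2, 1.0), "\n"; // 100
echo maxVisitorsPerSecond(200, 2, 0.2), "\n"; // 500
```

The takeaway: the limit caps *simultaneous* open connections, not total visitors, so shortening the time each request holds its connections raises the number of visitors you can serve.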
Optimization is necessary, especially in heavy web applications. Maybe you could organize your app so it does not need to connect to both databases on every request (this would double the available connections per time fraction). Optimizing the SQL queries and minimizing JOINs will help your app too (and will make it run faster).
Finally, you can use caching, so your server does not construct the same content again and again. This is not a full list of all the optimizations you can do, but a starting point for your research and planning. Hope you find it helpful.
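As a sketch of the caching idea (the file location and TTL are arbitrary choices for illustration; CodeIgniter's built-in cache would work just as well):

```php
<?php
// Serve a cached copy of expensive output while it is fresh enough,
// rebuilding (and re-querying the database) only after the TTL expires.
function cachedOutput(string $key, int $ttlSeconds, callable $build): string
{
    $file = sys_get_temp_dir() . '/page_' . md5($key) . '.cache';
    if (is_file($file) && (time() - filemtime($file)) < $ttlSeconds) {
        return file_get_contents($file); // cache hit: no database connection needed
    }
    $content = $build();                 // expensive path: this is where the DB is hit
    file_put_contents($file, $content);
    return $content;
}

echo cachedOutput('homepage', 300, fn () => "rendered homepage\n");
```

Every cache hit is a request that never opens a database connection at all, which directly stretches the 200-connection budget.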

Related

Zend Framework Multiple Database Stop Working with slow query

I'm running a big Zend Framework web application with 5 databases, independent of each other, distributed across 2 database servers running MySQL 5.6.36 on CentOS 7, each with 16 GB of RAM and an 8-core processor. However, if one of the 2 database servers stops responding because of slow queries, users on the other server cannot access the web application. The only way to bring the application back is to restart MySQL on that server. I have tried different things without success. The strange thing is that if I turn off one of the servers, the system continues to work correctly.
It's hard to offer a meaningful answer, because you've given us no information about your tables or your queries (in fact, you haven't asked a question at all, you've just told us a story! :-).
I will offer a guess that you are using MyISAM for one or more of your tables. This means a query from one client locks the table(s) it queries, and blocks concurrent updates from other clients.
To confirm you have this problem, use SHOW PROCESSLIST on each of your two database servers at the time you experience the contention between web apps. You might see a bunch of queries stuck waiting for a lock (it may appear in the processlist with the state of "Updating").
If so, you might have better luck if you alter your tables' storage engine to InnoDB. See https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html
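The check and the fix can be sketched as follows (`orders` is a placeholder table name; run the ALTER in a maintenance window, since it rebuilds the table):

```sql
-- While the contention is happening, on each database server:
SHOW PROCESSLIST;                            -- look for threads stuck in "Updating"
SHOW TABLE STATUS WHERE Engine = 'MyISAM';   -- list tables still on MyISAM

-- Convert a table to InnoDB (placeholder name):
ALTER TABLE orders ENGINE = InnoDB;
```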

GAE + Cloud SQL - How to understand different Tiers

Here Google offers different tiers of their google-cloud-sql
I don't understand when someone would need to upgrade from the very basic D0 tier.
My questions are:
1) If you are connecting GAE to Cloud SQL, will the concurrent-connections limit cap the scalability of your GAE app at 250 concurrent requests? I mean, will GAE create a new connection to Cloud SQL on every request?
1bis) Can a heavily requested GAE app use only one SQL connection?
2) Could you give some scenarios in which a Dx tier may be recommended?
what I don't understand is when someone will need to upgrade the very basic D0 tier.
When its performance proves insufficient for your workload (number and size of queries) resulting in too-slow responses to user queries (or back-end tasks). https://cloud.google.com/sql/docs/instance-info explains how to view all the info about a given Cloud SQL instance.
1) if you are connecting GAE to cloud-sql, will the sql concurrent connections limit the scalability of your GAE app to 250 concurrent requests? i mean, will GAE create a new connection to cloud-sql on every request?
Actually, your PHP code will do that, e.g. with a call such as
$sql = new mysqli($host, $user, $password, $database);
if and when it needs a Cloud SQL connection to serve a request. I do not believe there is any way to share a single connection among different servers, and multiple concurrent requests are typically served by different servers. If your code is thread-safe, a single server might respond to a few requests concurrently, and you could try to share one connection among threads with locking, but that might hurt latency and would only give you a small amount of connection reuse anyway.
1bis) can a very requested GAE app use only one sql connection?
A "very requested" GAE app will no doubt be using multiple servers at once, and there is no way separate servers can share one MySQL connection.
2) could you give some case-scenarios when Dx may be recommendable?
You'll just want larger instances in proportion to how big/demanding your workload is -- larger databases and indices, big/heavy requests including ones processing or returning lots of data, many concurrent requests, heavy background "data mining" going on at the same time, and so forth.
I would recommend using the calculator at https://cloud.google.com/products/calculator/ -- click on the Cloud SQL icon if that's specifically what you want to explore -- to determine expected monthly costs for an instance.
As for the performance you can expect in return, that is so dependent on your data, indices, workloads, etc., that there is really no shortcut: rather, I recommend building a minimal, meaningful sample of your app's needs and a stress-load test for it, tuning it first on a local MySQL installation, then deploying experimentally to Cloud SQL in different configurations to measure the effects.
Once you've gone to the trouble of building and calibrating such benchmarks, you may of course also want to try out competing providers of "MySQL in the cloud" services, to know exactly what performance you're getting for your money. I'm unfortunately not very knowledgeable about what is available on the market, but my key message is to use your own benchmarks, built to be meaningful for your app, rather than relying on "canned" benchmarks.
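A minimal harness for that kind of latency measurement might look like this (the callable is a stand-in for your real query code; all names here are illustrative):

```php
<?php
// Time a routine over many iterations and report the mean latency per call.
function meanLatencySeconds(callable $routine, int $iterations): float
{
    $start = microtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        $routine();
    }
    return (microtime(true) - $start) / $iterations;
}

// Placeholder workload: sleep 1 ms to stand in for a Cloud SQL round trip.
printf("mean latency: %.4f s\n", meanLatencySeconds(fn () => usleep(1000), 50));
```

Run the same harness against a local MySQL install and against each Cloud SQL tier you are considering; the relative numbers are what matter, not the absolute ones.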

Apache (PHP) & Mysql - Ideal 2 Server Setup

My client currently has only one server with both MySql and Apache running on it, and at busy times of the year they're occasionally seeing Apache fall over as it has so many connections.
They run two applications; their busy public ecommerce PHP based website and their (busy during working hours only) internal order processing type application with 15-20 concurrent users.
I've managed to get them to increase their budget enough to get two servers. I'm considering either:
A) one server running Apache/PHP and the other as a dedicated MySQL server, or
B) one running their public website only, and the other running MySQL and the internal application.
The benefit I see in A) is that MySQL's my.cnf can be tuned to use all of that server's resources, but it has the drawback of only having one Apache instance running.
B) would spread the load on Apache across both servers, but would limit MySQL's resources on that server, even out of working hours when the internal application won't be used.
I just can't decide which way to go with this and would be grateful of any feedback you may have.
Both approaches are wrong.
You have 2 goals here: availability and performance (I'm considering capacity to be an aspect of performance in this context).
To improve availability, you should ensure there is no single point of failure in your architecture. But with the models you propose you're actually creating multiple single points of failure, so both of your 2-server layouts are less available than your single server.
From a performance point of view, you want to spread the workload across the available resources. You can't move CPU and memory between the servers but you can move the traffic.
Hence the optimal solution is to run both applications on both servers. Setting up MySQL clustering is more complex, but the out-of-the-box asynchronous replication will probably be adequate, with the nodes configured as master-master (but writes from the 2 applications targeted sensibly).
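A minimal sketch of the my.cnf settings involved in such a master-master pair (server IDs and offsets are illustrative assumptions; log names, replication users, and the CHANGE MASTER setup are omitted):

```ini
# Server A
[mysqld]
server-id                = 1
log_bin                  = mysql-bin
auto_increment_increment = 2   # both masters skip every other value...
auto_increment_offset    = 1   # ...so auto-increment keys never collide

# Server B
[mysqld]
server-id                = 2
log_bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2
```

With both nodes writable, directing each application's writes at one preferred node keeps conflict handling simple.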
There's probably a lot of scope for increasing the capacity of the system further, but without a lot more detail (more than is appropriate for this forum, and possibly more than your client is comfortable paying for) it is hard to advise.

Maximum databases on mysql server and security

I have 9 databases on a MySQL server right now. I configured them using the command line, adding a user for each and giving that user full permissions only on a specific database. The MySQL server has 512 MB of RAM right now, and I don't know if I should be worried, security-wise, about any problems that might arise with 9 databases on one server. Should I split them across two servers, each with 4 to 5 databases at most? I have 2 other app servers running to handle the load of the websites, but those 2 servers hit the database server for everything. So far, no problems. I have all 3 servers set up with IP restrictions (iptables and another firewall), so hacking from elsewhere isn't possible, only from the apps themselves.
Since I created the users each with a restriction to a specific database, a hacker who compromises one can't get to the rest, I assume?
Thanks!
The mysql server has 512MB ram right now, and i don't know if i should be worried security wise of any problems that might arise with 9 databases on one server.
Is the server CPU running at 100% all the time?
Are there a lot of slow queries?
This could indicate that the server needs more resources.
You can also check InnoDB buffer pool usage. Increasing the buffer pool size is often a good way to relieve some pressure.
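A few quick checks along those lines (the 256M figure below is an illustrative value for a 512 MB box, not a rule):

```sql
SHOW GLOBAL STATUS LIKE 'Threads_running';      -- concurrency right now
SHOW GLOBAL STATUS LIKE 'Slow_queries';         -- cumulative slow-query count
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- current buffer pool size
```

To raise the buffer pool, set e.g. `innodb_buffer_pool_size = 256M` in my.cnf and restart, leaving headroom for the OS and other processes.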
I have 2 other app servers running to handle the load of the websites, but those 2 servers hit the database server for everything. So far, no problems. I have all 3 servers set with IP restrictions (iptables and other firewall), so hacking from elsewhere isn't possible, but only from the apps themselves
This is good. That way no one can access your DB server directly, only through the app servers.
Since I created the users each with a restriction to a specific database, a hacker who hacks one can't get to the rest, i assume?
Correct. A user granted privileges on only one database cannot read or modify the others (barring a server-level vulnerability).
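A sketch of such a per-database user (the name, host pattern and password are placeholders):

```sql
CREATE USER 'app1'@'10.0.0.%' IDENTIFIED BY 'change_me';
GRANT SELECT, INSERT, UPDATE, DELETE ON app1_db.* TO 'app1'@'10.0.0.%';
-- 'app1' has no privileges on the other eight databases, so compromising
-- it does not expose them.
```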

Website slow response (all other users) when large MySQL query running

This may seem like an obvious question, but we have a PHP/MySQL app that runs on a Windows 2008 server. The server hosts about 10 different sites in total. Admin options on the site in question allow an administrator to run reports (through the site) which are huge and can take about 10 minutes in some cases. These reports are huge MySQL queries that display the data on screen. While these reports are running, the entire site goes slow for all users. So my questions are:
Is there a simple way to allocate server resources so if a (website) administrator runs reports, other users can still access the site without performance issues?
Even though running the report kills the website for all users of that site, it doesn't affect other sites on the same server. Why is that?
As mentioned, the report can take about 10 minutes to generate. Is it bad practice to make these kinds of reports available on the website? Would these typically be generated by overnight scheduled tasks?
Many thanks in advance.
The load you're putting on the server most likely has nothing to do with the applications, but with the MySQL tables you are probably slamming. Most people get around this by generating reports during downtime or by using MySQL replication to maintain a second database used purely for reporting.
I recommend setting up some server monitoring to see what is actually going on. I believe New Relic just released Windows versions of its platform, and you can try it out free for 30 days.
There's the LOW_PRIORITY flag, but I'm not sure whether that would have any positive effect, since it's most likely a table/row locking issue that you're experiencing. You can get an idea of what's going on by running the SHOW PROCESSLIST query.
If other websites run fine, it's even more likely that this is due to database locks (causing your web processes to wait for the lock to get released).
Lastly, it's always advisable to run big reporting queries overnight (or when server load is minimal). Having a read replica would also help.
I strongly suggest you set up a replicated MySQL server and run the large administrator queries (SELECT only, naturally) on it, to avoid having your website blocked!
If there are not too many transactions per second, you could even run the replica on a desktop computer remote from your production server, and thus also have an off-site backup of your DB!
Are you 100% sure you have added all the necessary indexes?
You would need an insanely large website to have these kinds of problems unless you are missing indexes.
Make sure you have the right indexing, and avoid joining tables on VARCHAR columns; they are not very fast.
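A quick way to verify, sketched with placeholder table and column names:

```sql
-- Check the execution plan for a typical report query:
EXPLAIN SELECT customer_id, SUM(total)
FROM   orders
WHERE  created_at >= '2013-01-01'
GROUP  BY customer_id;

-- If the plan shows "type: ALL" (a full table scan), add an index:
ALTER TABLE orders ADD INDEX idx_created_at (created_at);
```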
I have a database with quite a few large tables and millions of records that runs 24/7.
It has loads of activity and automated services processing it without issues, thanks to proper indexing.
