Average response time increasing during stress testing - PHP

I am using CodeIgniter for my API implementation. The server resources and technologies used are as follows:
SUMMARY
Framework : CodeIgniter
Database : MySQL (Hosted on RDS) (1 MASTER & 2 SLAVES)
Hosting : AWS t2.micro
Web Server : Nginx
Following is the LOADER.IO report for my test.
My API MIN RESPONSE TIME : 383 MS
NUMBER OF HITS : 10000 / 1 MIN CONCURRENT
As you can see in the report, the AVERAGE RESPONSE is 6436 MS.
I am expecting at least 100000 users / 1 MIN watching an event on my application.
I would appreciate it if anybody could help with some OPTIMIZATION suggestions.
MAJOR THINGS I have done so far
1) SWITCHED TO NGINX FROM APACHE
2) MASTER / SLAVE Configuration (1 MASTER , 2 SLAVE)
3) CHECKED each INDEX in the USER JOURNEY of the application
4) CODE OPTIMIZATION : As you can see, 383 MS is a good response time for an API
5) USED MySQL's EXPLAIN to inspect the execution plans of queries

I would suggest focusing on tuning your MySQL to get faster query execution and thus save time.
To do this, I would suggest the following:
You can set these up in the /etc/my.cnf (Red Hat) or /etc/mysql/my.cnf (Debian) file:
# vi /etc/my.cnf
And then append the following directives:
query_cache_size = 268435456
query_cache_type=1
query_cache_limit=1048576
In the above example, the maximum size of an individual query result that can be cached is set to 1048576 bytes (1 MB) via the query_cache_limit system variable; the value is given in bytes.
These changes make your queries return results faster by caching the results of frequently executed queries; MySQL invalidates a cached result whenever any row in its underlying tables is updated. The MySQL engine handles all of this, and this is how you save time.
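If you enable the cache, it is worth checking afterwards that it is actually being hit rather than pruned. A minimal sketch of such a check from PHP, assuming placeholder PDO credentials (and noting that the query cache only exists through MySQL 5.7; it was removed in 8.0):

<?php
// Report query cache effectiveness. Host, user and password are placeholders.
$pdo = new PDO('mysql:host=127.0.0.1', 'user', 'secret');

$status = [];
foreach ($pdo->query("SHOW GLOBAL STATUS LIKE 'Qcache%'") as $row) {
    $status[$row['Variable_name']] = (int) $row['Value'];
}

$hits    = $status['Qcache_hits'];
$inserts = $status['Qcache_inserts'];

// A low hit ratio or many lowmem prunes means the cache is hurting, not helping.
printf("Hit ratio: %.1f%%\n", 100 * $hits / max(1, $hits + $inserts));
printf("Lowmem prunes: %d\n", $status['Qcache_lowmem_prunes']);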
ONE MORE SUGGESTION:
As you are using a t2.micro, you get 1 GiB of RAM and 1 vCPU. So I would suggest moving to a t2.medium, which gives you 4.0 GiB of RAM and 2 vCPUs.

For 1667 SELECTs per second (that is, 100000 per minute), you may need multiple Slaves. With them, you can scale arbitrarily far; see the read/write-splitting sketch below.
However, it may be that the SELECTs can be made efficient enough to not need the extra Slaves. Let's see the queries. Please include SHOW CREATE TABLE and EXPLAIN SELECT ....
It is possible to run thousands of simple queries per second.
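For the Slaves to help, the PHP side has to route its reads to them. A minimal sketch of read/write splitting with PDO, using placeholder hostnames, credentials, and table/column names (CodeIgniter users would configure multiple database groups instead):

<?php
// Writes go to the master; reads are spread across the slaves.
// Hostnames and credentials are placeholders.
$master = new PDO('mysql:host=master.example.com;dbname=app', 'user', 'secret');
$slaves = [
    new PDO('mysql:host=slave1.example.com;dbname=app', 'user', 'secret'),
    new PDO('mysql:host=slave2.example.com;dbname=app', 'user', 'secret'),
];

// Pick a slave at random for read-only statements.
$read = $slaves[array_rand($slaves)];
$stmt = $read->prepare('SELECT name FROM events WHERE id = ?');
$stmt->execute([42]);

// All writes still go to the master.
$master->prepare('UPDATE events SET views = views + 1 WHERE id = ?')
       ->execute([42]);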
"100000 / 1 MIN" -- Is that 100K connections? Or 100K queries from a smaller number of connections? There is a big difference -- establishing a connection is more costly than performing a simple query. Also, having 100K simultaneous connections is more than I have every heard of. (And I have seen thousands of servers. I have seen 10K connections (high-water-mark) and 3K "Threads_connected" -- both were in deep do-do for various reasons. I have almost never seen more than 200 "Threads_running" -- that is actual queries being performed simultaneously; that is too many for stability.)
Ouch -- With the query_cache_size at 256MB on 1GB of RAM, you don't have room for anything else! That is a tiny server. Even on a larger server do not set that tunable to more than 50M. Otherwise the "pruning" slows things down more than the QC speeds them up!
And, how big are the tables in question?
And, SHOW VARIABLES LIKE '%buffer%';
And, what version are you running? Version 5.7 is rated at about 64 simultaneous queries before the throughput stops improving, and (instead), response time heads for infinity.
To do realistic benchmarking, you need to provide realistic values for:
How often a query is issued. (Benchmark programs tend to throw queries at the server one after another; this is not realistic.)
How long a query takes.
How many connections are involved. (I claim that 100K is not realistic.)
The heavy-hitters deliver millions of web pages per day. The typical page involves: connect, do a few queries, build HTML, disconnect -- all (typically) in less than a second. But only a small fraction of that time is any query actually running. That is, 100 connections may equate to 0-5 queries running at any instant.
Please talk about Queries per second that need to be run. And please limit the number of queries run simultaneously.
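To illustrate the pacing point: a load script that spaces queries out to a target rate, instead of firing them back-to-back, looks roughly like this sketch (the rate, query, and credentials are all placeholders, not values from the post):

<?php
// Issue queries at a fixed target rate rather than back-to-back,
// so the benchmark resembles real traffic. All values are placeholders.
$pdo       = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret');
$targetQps = 50;
$interval  = 1.0 / $targetQps;

for ($i = 0; $i < 1000; $i++) {
    $t0 = microtime(true);
    $pdo->query('SELECT 1')->fetchAll();
    // Sleep off whatever is left of this query's time slot.
    $left = $interval - (microtime(true) - $t0);
    if ($left > 0) {
        usleep((int) ($left * 1e6));
    }
}

A single paced client like this measures response time at a realistic arrival rate; run several copies in parallel to model a realistic connection count.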

How to efficiently handle MySQL connections with PHP

I'm running an app that interacts with a MySQL database using native PDO; I'm not using Laravel or any framework.
Most of the API's logic is to fetch from the DB and/or insert into it.
When running in production I can see high memory usage from the MySQL tasks in top.
I have a couple of questions.
Is such high memory usage normal? And what is the best practice for managing PHP-MySQL connections properly in a multi-threaded, production-level app?
When an intensive query is running (fetching historical data to plot a graph), CPU usage jumps to 100% until the query finishes, then drops back to 2-3%. But during that time the system is completely stalled.
I'm thinking of hardware-based solutions, such as separating the DB server from the app server (currently they both run on the same node) and managing a cluster with read-only nodes.
But I'd like to know if there are other options, and what the most efficient way to handle PHP-MySQL connections is.
You could also check whether the queries are fully optimized, and whether connections are closed when no longer in use (see the connection sketch below).
You can also reduce the load on the MySQL server by shifting some work to PHP.
Also take a look at your query_cache_size, innodb_io_capacity, innodb_buffer_pool_size and max_connections settings in your my.cnf.
Sometimes upgrading PHP and adding some caching on the Apache side can also help reduce RAM usage.
https://phoenixnap.com/kb/improve-mysql-performance-tuning-optimization
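On the connection side, the usual PHP pattern is one PDO handle per request, optionally persistent so that PHP-FPM workers reuse the underlying socket instead of reconnecting. A minimal sketch, with a placeholder DSN and credentials:

<?php
// One connection per request; PDO::ATTR_PERSISTENT lets the worker
// reuse the socket across requests. DSN and credentials are placeholders.
$pdo = new PDO(
    'mysql:host=127.0.0.1;dbname=app;charset=utf8mb4',
    'user',
    'secret',
    [
        PDO::ATTR_PERSISTENT       => true,  // reuse the connection between requests
        PDO::ATTR_ERRMODE          => PDO::ERRMODE_EXCEPTION,
        PDO::ATTR_EMULATE_PREPARES => false, // use real server-side prepares
    ]
);

// ... run queries ...

$pdo = null; // release the handle explicitly in long-running scripts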
Normally, innodb_buffer_pool_size is set to about 70% of available RAM, and MySQL will consume that, plus other space that it needs. Scratch that; you have only 1GB of RAM? The buffer_pool needs to be a smaller percentage. It seems that the current value is pretty good.
Your top output shows only 37.2% of RAM in use by MySQL -- the many threads are sharing the same memory.
Sort by memory (in top, use < or >). Something else is consuming a bunch of RAM.
query_cache_size should be zero.
max_connections should be, say, 10 or 20.
CPU -- You have only one core? So a big query hogging the CPU is to be expected. It also says that I/O was not an issue. You could show us the query, plus SHOW CREATE TABLE and EXPLAIN SELECT...; perhaps we can suggest how to speed it up.

My server is getting high traffic and reaching its limits; what should my new structure be?

Current config:
16 GB RAM, 4 CPU cores, Apache/2.2 using the prefork module (set at 700 MaxClients, since the avg process size is ~22 MB), with the suexec and suphp mods enabled (PHP 5.5).
The back end of the site uses CakePHP2 and stores data on a MySQL server. The site consists of text and some compressed images on the front end, with data processing on the back end.
Current traffic:
~60000 unique visitors daily; at peak I easily reach 700+ simultaneous connections, which fills MaxClients. When I use apachectl status at those moments, I can see that all the processes are in use.
The CPU is fine, but the RAM gets completely used.
Potential traffic:
The traffic might grow to ~200000 unique visitors daily, or even more. It might also not. But if it happens, I want to be prepared, since I've already reached the limits of the current server with this config.
So I'm thinking about getting a much bigger server, for example with 192 GB RAM and 20 cores.
I could keep exactly the same config (which means I would then be able to handle 10x my current traffic with it).
But I wonder if there is a better config in my case, one that uses fewer resources while being just as efficient (and is proven to be so)?
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section,
thread_cache_size=100 # from 8 to reduce threads_created
innodb_io_capacity=500 # from 200 to allow higher IOPS to your HDD
read_rnd_buffer_size=128K # from 256K to reduce handler_read_rnd_next RPS of 129,942
thread_concurrency=6 # from 10 to expedite query completion with your 4 cores
slow_query_log=ON # from OFF to allow log research and proactive correction
These changes will contribute to less CPU busy time.
Observations:
A) 5.5.54 is past End of Life; newer versions perform better.
B) These suggestions are just the beginning of possible improvements, even with 5.5.54.
C) You should be able to gracefully migrate to innodb_file_per_table once you turn on the option. Your tables are already managed by the InnoDB engine.

MySQL overhead - how to tune the server to speed up bad queries

I have a shop on a PHP platform (badly developed) with a lot of bad queries (long queries without indexes, ORDER BY RAND(), dynamic counting, ...).
I do not have the option of changing the queries for now, but I have to tune the server to keep it alive.
I have tried everything I know. I have handled very big databases before, which I optimized by changing the MySQL engine and tuning it, but this is the first project where everything goes down.
Current situation:
1. Frontend (PHP) - AWS m4.large instance, Amazon Linux (latest), PHP 7.0 with OPcache enabled (Apache module), mod_pagespeed enabled with external Memcached on AWS ElastiCache (t2.micro, 5% average load)
2. SQL - Percona DB with TokuDB engine on AWS c4.xlarge instance (50% average load)
I need to lower the instances, mainly the c4.xlarge, but if I switch down to c4.large, after a few hours there is a big overload.
The database has only 300 MB of data. I tried the query cache, InnoDB, XtraDB, ... with no success, and the traffic is also very low. PHP uses the MySQLi extension.
Do you know of any possible solution for lowering the load without refactoring the queries?
Use of www.mysqlcalculator.com would be a quick way to get a brain check on about a dozen memory consumption factors in less than 2 minutes.
Would love to see your SHOW GLOBAL STATUS and SHOW GLOBAL VARIABLES, if you could get them posted, to allow analysis of activity and configuration. 3 minutes of your GENERAL_LOG during a busy time would be interesting and could identify troublesome/time-consuming SELECTs, etc.
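Capturing those 3 minutes of GENERAL_LOG needs no restart, since both settings are dynamic. A sketch of how it might be scripted (the log file path and credentials are placeholders, and the account needs the SUPER privilege):

<?php
// Switch the general query log on for a ~3 minute window, then off.
// Log file path and credentials are placeholders.
$pdo = new PDO('mysql:host=127.0.0.1', 'root', 'secret');

$pdo->exec("SET GLOBAL general_log_file = '/tmp/general-3min.log'");
$pdo->exec("SET GLOBAL general_log = 'ON'");

sleep(180); // capture roughly 3 minutes of traffic

$pdo->exec("SET GLOBAL general_log = 'OFF'");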

Why doesn't my server attempt to use more resources for a heavy MySQL search?

I have a MySQL table with about 90k rows. I have written a routine that loops through each of these rows and cross-checks the result against another table with about 90k rows. If there is a match, I delete one of the rows. All the columns I'm cross-checking are indexed in MySQL.
When I run this script on a dedicated local server with 2 x quad-core 2.4 GHz Intel Xeons, 24 GB of RAM (with PHP memory_limit set to 12288M), and an SSD, the whole script takes about a minute to complete. I would imagine the server's resources would be maxed out, but actually the CPU is about 93% idle, RAM utilization is about 6%, and looking at reads/writes on the SSD, not much seems to be happening at all.
I mentioned the problem to somebody else, who said the problem is that I'm executing a single-threaded process and wondering why it isn't using all 8 cores. But even so, is checking through a MySQL table 90k times really a big deal? Wouldn't at least one CPU be running at max?
Why doesn't my server attempt to throw more resources at the script when I run it? Or how can I unleash more resources so that my local web app doesn't run like a low-spec VPS?
Depending on the size of the rows, 90K rows isn't a whole lot. Odds are they're all cached in RAM.
As for the CPUs, your process is not quite single-threaded, but it's pretty close. Your process and the DB server are separate processes; the problem, of course, is that your process stops while the DB server handles the request, so whatever core has your process scheduled goes idle just as the one running the DB spools up.
As the commenter mentioned, it's likely you can do this more efficiently by offloading most of the processing to the DB server; see the sketch below. Most of your time is just statement overhead from sending 90K SQL statements to the server.
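For instance, the whole 90K-iteration loop can often be collapsed into one set-based statement that the server runs internally. A sketch with hypothetical table and column names (table_a, table_b, and an indexed column col; credentials are placeholders):

<?php
// One DELETE ... JOIN replaces ~90K individual round trips.
// Table and column names are hypothetical.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'secret');

$deleted = $pdo->exec(
    'DELETE a
       FROM table_a AS a
       JOIN table_b AS b ON b.col = a.col'
);

printf("Deleted %d rows in one statement\n", $deleted);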

User File Uploads From Browser Slow Entire Server (web app)

SEE THE LAST UPDATE AT THE BOTTOM FOR MY FINAL EVAL / SUGGESTIONS
This seems so basic that it should be a common problem... but I've already searched for anything pertaining to this issue with no luck.
-- Scenario --
I have a web application that, as one of its functions, allows users to upload photos to the server. File size limits aren't the issue, but I notice a visible difference in the speed of the server when I'm uploading a file vs. not.
-- Testing --
I uploaded a 3MB file while signed in to another account (on a completely different computer) to test the page load times in Firebug. Caching has been disabled. The results are below:
Baseline page speed (without upload): 0.409 s, 0.449 s, 0.468 s
During 3MB file upload: 1.28 s, 8.58 s -- upload complete -- 0.379 s
This problem obviously compounds if more than one user is uploading a photo at the same time. This seems insane considering all the power I have with the current setup.
-- Setup --
Mediatemple DV Level 3 application server (4 GB RAM, 16 cores)
Mediatemple DV Dev Level 1 database server (running MyISAM tables)
Cloudflare CDN
Custom PHP application
Wordpress sales website (front end, same server - not connected in any way to the web app)
CentOS 6.5
Mysql 5.5
-- So Far --
I had the CloudTech team at MT tune the Apache & Nginx settings for me, since I thought I had screwed this up, but the issue is still showing up
I am considering changing all the DB tables to InnoDB for concurrency, but this is not related to the question at hand
The server load averages do not seem to be significantly affected when uploading my test file
the output of "free -m" is below
                   total   used   free  shared  buffers  cached
Mem:                4096    634   3461       0        0     215
-/+ buffers/cache:           419   3676
Swap:               1536     36   1499
EDIT 1
Is it possible to offload these types of things to an independent server? I realize the PHP used to upload the file would also have to be run from that server, but at least then only the upload / long process server would be affected and not the entire application.
Also, is there any other language or workflow that would work better for this? Perl? Python?
EDIT 2 (2014-08-28)
I forgot to mention two things
1) This issue isn't just with file uploads - it happens whenever a PHP script runs for an extended time. As a test, I ran a 3-minute PHP script on my end and, sure enough, got a phone call from a client during the execution about the "slow" system.
2) I have many concurrent login sessions running. Many of these users are likely on the same script at the same time.
Here is the output from htop. The "php-cgi" processes are the obvious offenders, but I don't know how to see which scripts are causing this load. Again, even if I did find the script, I feel like I should be able to run more than a handful of PHP scripts at a time.
EDIT 3 (2014-08-28)
Here is the htop output at peak hours (right now). What's annoying is that the system is flying at the moment with 2x the traffic and processes.
EDIT 4 / UPDATE (2014-09-30)
After a month of staring at this, I've found some things to be true. I'm hoping some of this will help others get their high-growth applications in check before it turns into an issue of racing traffic with server upgrades (which is what happened here).
The Issue: I was using MyISAM as the exclusive database engine. I had read through hundreds of docs and forum posts regarding whether InnoDB or MyISAM is the better engine to use, with most sources giving vague evaluations or (at best) overly complicated benchmarking with vague settings claiming to increase (or even decrease..?) performance. Forget it all and USE InnoDB FOR ALL APPLICATIONS.
Find a good resource to help you tune your MySQL server settings and run with it (see the links below). Apparently the concurrent traffic on the server was overloading PHP while waiting for the table locks to release in MyISAM. This was causing excessive load on the application server, while the DB server was just hanging out with hardly any CPU or MEM load. Transitioning to InnoDB allows high concurrency at the cost of CPU and memory (both GOOD things; buy a bigger DB server if you have to).
In the end, the load has transferred to the DB server, increasing concurrent-traffic performance. To summarize: DON'T USE MyISAM FOR WEB APPLICATIONS. Period. I'm sure I'll get burned a bit by saying that, but the slight performance hit (yes, MyISAM is a BIT faster at low concurrency, but who cares if your web app crashes) is well worth the increase in concurrency.
IMPORTANT POINTS
1) When you move your database over to InnoDB (ALTER TABLE my_table ENGINE=InnoDB;) you will need to use the info found in the following links to set your InnoDB-specific settings.
2) Be sure to code a loop wrapper in PHP (3 iterations sounds good) for your query calls to account for deadlocks (a situation where queries are competing for the same row(s) and one or both are stalled completely); see the sketch after this list.
3) Write your PHP to look for ERRORS coming out of your new query wrapper.
E.g.: send the query -> error found -> check for deadlock -> if deadlock, retry the query after waiting 0.1 sec -> check again; if an error is found that isn't a deadlock, return the error to the app, else continue until the iteration limit is reached (3 in this example) -> if the iteration limit is hit, return the error to the application.
Do something now that your query has failed.
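A minimal sketch of that wrapper (MySQL reports a deadlock as driver error 1213; the function name and retry count are my own choices, and PDO::ERRMODE_EXCEPTION is assumed):

<?php
// Retry a query up to $maxTries times when it fails with an InnoDB
// deadlock (MySQL error 1213); any other error is rethrown at once.
function queryWithRetry(PDO $pdo, string $sql, array $params = [], int $maxTries = 3): PDOStatement
{
    for ($try = 1; ; $try++) {
        try {
            $stmt = $pdo->prepare($sql);
            $stmt->execute($params);
            return $stmt;
        } catch (PDOException $e) {
            $isDeadlock = ($e->errorInfo[1] ?? 0) === 1213;
            if (!$isDeadlock || $try >= $maxTries) {
                throw $e; // not a deadlock, or out of retries
            }
            usleep(100000); // wait 0.1 sec, then retry
        }
    }
}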
Helpful Links
http://www.percona.com/blog/2013/09/20/innodb-performance-optimization-basics-updated/
Handling innoDB deadlock
https://dba.stackexchange.com/questions/27328/how-large-should-be-mysql-innodb-buffer-pool-size
CAUTION
I crashed the MySQL server with the setting in the following link. You MUST completely stop mysqld (service mysqld stop) before renaming (NOT deleting) the original files (ib_logfile0, ib_logfile1), usually found here on RH/CentOS:
/var/lib/mysql/ib_logfile0
http://www.percona.com/blog/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/
Once they are renamed, start your MySQL daemon: service mysqld start
Hopefully this helps someone,
-B
