Slow Performance on Scaled PHP/MySQL Application on OpenShift - php

I have two scaled PHP installations on OpenShift using Small Gears (one for PHP and one for MySQL per application, so four Small Gears altogether), and both installations are extremely slow, only occasionally bearable. I've seen a lot of questions about this that say to disable functionality (like WordPress plugins, for example), but my application worked SUPER fast on shared hosting with the current feature set, and it can't exist without the features currently in use.
My hope in using OpenShift was to avoid having to do server config and to have an instantly scalable platform. So my question is: what needs to be done to get this config up to speed so I can continue using OpenShift? Or is that not possible without the same amount of time and energy it would take me to just set up my own AWS resources?
Should I be using Medium or Large Gears for my application to have the speed necessary to run a real-world application (even though there's only one user)?
For additional information:
- I am currently the only user (no one else is accessing the site).
- I frequently get HTTP 503 errors, which go away after refreshing a few times.
- Using SSH to look at the gear's resources, I never come close to pegging the CPU; usage always stays within the 20%-50% range.
- Memory usage reports 7416848k used of 7513696k available, with the httpd processes taking about 1% of memory in total.
- I have done zero customization; all I did was create the app with PHP 5.4 and MySQL (scaled) and do a git push of my code.
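As a first diagnostic, something like this over SSH can separate node-level memory pressure from your own processes. This is only a sketch: on shared Small Gears, tools like free report the whole node's memory rather than your gear's quota, so the nearly-full figure above does not necessarily mean your gear itself is starved.

```shell
#!/bin/sh
# Rough memory check from inside the gear over SSH. Note: `free` reports
# the whole node's memory, not this gear's quota, so a nearly-full node
# does not by itself mean your gear is out of RAM.
mem_pct() {
  # Expects `free -k`-style output on stdin; prints node-wide usage.
  awk '/^Mem:/ { printf "%.1f%% of node RAM in use\n", $3 / $2 * 100 }'
}

command -v free >/dev/null && free -k | mem_pct

# This gear's own processes, sorted by resident memory:
ps -o pid,rss,pcpu,comm -u "$(whoami)" --sort=-rss 2>/dev/null | head
```

If the node looks full but your own httpd processes are tiny (as in the numbers above), the slowness is more likely contention on the shared node than anything in your application.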

Related

MySQL overhead - how to tune up server to speed up bad queries

I have a shop on a PHP platform (badly developed) with a lot of bad queries (long queries without indexes, ORDER BY RAND(), dynamic counting, ...).
I do not have the possibility to change the queries for now, but I have to tune the server to stay alive.
I have tried everything I know. I have handled very big databases before, which I optimized by changing the MySQL engine and tuning it, but this is the first project where everything goes down.
Current situation:
1. Frontend (PHP) - AWS m4.large instance, Amazon Linux (latest), PHP 7.0 with opcache enabled (apache module), mod_pagespeed enabled with external Memcached on AWS Elasticache (t2.micro, 5% average load)
2. SQL - Percona DB with TokuDB engine on AWS c4.xlarge instance (50% average load)
I need to lower the instances, mainly the c4.xlarge, but if I switch down to c4.large, after a few hours there is a big overload.
Database has only 300MB of data, I tried query cache, InnoDB, XtraDB, ... with no success, and the traffic is also very low. PHP uses MySQLi extension.
Do you know any possible solution, how to lower the load without refactoring the queries?
Using www.mysqlcalculator.com would be a quick way to sanity-check about a dozen memory-consumption factors in under two minutes.
I would love to see your SHOW GLOBAL STATUS and SHOW GLOBAL VARIABLES, if you could post them, to allow analysis of your activity and configuration. Three minutes of your GENERAL_LOG during a busy time would also be interesting and could identify troublesome/time-consuming SELECTs.
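If it helps, here is a sketch of how to capture those. The file paths, credentials, and the three-minute window are placeholders, not your actual setup; the small pipeline at the end surfaces the most frequent SELECTs from the captured log.

```shell
#!/bin/sh
# Capture server state for analysis (run on the database host).
# Credentials and paths below are assumptions; adjust to your setup.

# mysql -u root -p -e "SHOW GLOBAL STATUS"    > status.txt
# mysql -u root -p -e "SHOW GLOBAL VARIABLES" > variables.txt
# mysql -u root -p -e "SET GLOBAL general_log_file='/tmp/general.log';
#                      SET GLOBAL general_log='ON';"
# sleep 180   # capture ~3 minutes of traffic
# mysql -u root -p -e "SET GLOBAL general_log='OFF';"

# Summarize the captured general log: most frequent SELECTs first.
summarize_log() {
  grep -i 'select' "$1" | sed 's/^[^Q]*Query[[:space:]]*//' \
    | sort | uniq -c | sort -rn | head
}
```

The ORDER BY RAND() queries mentioned above should show up near the top of that summary, which gives you a ranked list of what to cache or work around even without touching the application code.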

Sugar CRM - Performance Issues

I am trying to identify why my Sugar CRM sites are loading so slowly. I am hosting 22 sites on IIS, the PHP version is 5.3.26, and my databases are on a separate SQL Server 2008 machine. The web server is running Windows 2008 and IIS7, with 10GB of memory and an Intel® Xeon® Processor E7-2870.
After profiling one of my databases I have ruled out the issue being data related, as queries were consistently running in less than 1 second.
My hosting provider has confirmed that we have a 100 Mbps dedicated line to our web server. After running a speed test, I get around 70 Mbps down and 40 Mbps up, so I do not think this is a bandwidth issue.
I took a look at an official 'Performance Tweaks' article and made the changes to config_override.php as suggested; however, this did not make a significant difference.
http://support.sugarcrm.com/04_Find_Answers/02KB/02Administration/100Troubleshooting/Common_Performance_Tweaks/
Something I have noticed is that there are an awful lot of PHP-CGI.EXE processes. As I look at the server now, the average CPU consumption for one of these instances is 11%. I am not sure whether this is something to be concerned about. The CPU usage in Windows Task Manager looks very unstable.
To see if it was a general PHP issue, I added a simple PHP script containing echo (5 % 3) . "\n"; and it returned instantly.
To summarise: web pages are taking on average 5 seconds to load, and users report the system as usable but sluggish.
Does anyone have any suggestions of how I might be able to speed up my application?
Thanks
What does the page load time show at the bottom of the page in SugarCRM? If it shows something like 0-2s, but the page in reality takes much longer, then look at adding opcode caching such as APC, or object caching with Memcached.
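For example, on that PHP 5.3/IIS stack an APC opcode cache can be enabled with a php.ini fragment along these lines. The extension filename and sizes here are assumptions; adjust them for your particular build.

```ini
; Illustrative php.ini fragment enabling APC on PHP 5.3 (Windows/IIS).
; Extension filename and cache size are assumptions for your build.
extension=php_apc.dll
apc.enabled=1
apc.shm_size=128M
; apc.stat=1 re-checks files on each request; set to 0 for extra speed
; once deployments are stable (requires a cache clear on deploy).
apc.stat=1
```

With 22 sites sharing one server, an opcode cache avoids recompiling SugarCRM's large PHP codebase on every request, which is often the dominant cost when the database side is already fast.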

Heroku 1x vs 2x web dynos

I've been running some Facebook apps in a Heroku-hosted environment with a fair bit of traffic.
These are simple apps using 2 PHP files (a one-page app plus an AJAX entry point for data). They place no real demands on server memory; most resources go to serving images, and the heaviest thing they do in terms of CPU load is a curl request to a web API or a call to a database to fetch some data.
Due to the traffic and demands on server concurrency (10-20 dynos on average) I've been doing research into how to configure an app for maximum performance, and found that the biggest bottleneck comes from a limitation imposed by Heroku's default boot.sh apache deploy script which sets MaxClients=1 on my application by default.
This has the effect of limiting Apache to 1 thread for handling HTTP requests.
In come the new and improved 2X dynos, which cost twice as much and promise twice the RAM and CPU performance.
Now, I can understand that RAM won't make much of a difference when the main bottleneck is the handling of HTTP requests, but I would assume the new dynos set MaxClients=2 (I haven't been able to check yet), so I'm wondering whether I'd be better off running my app with half as many 2X dynos as the number of 1X dynos I normally use. Anyone know the answer?
Upping to a 2x dyno won't change your MaxClients. You need to change your application configuration to support more than one concurrent request.
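By way of illustration, the change involved is raising Apache's prefork limits in whatever custom httpd configuration your buildpack allows you to supply. The include mechanism varies by buildpack, and the numbers below are assumptions to be sized against your dyno's RAM, not recommended values.

```apache
# Illustrative Apache prefork settings for a custom Heroku httpd config.
# How this file gets included depends on your buildpack (an assumption
# here); the values must be sized to fit within the dyno's memory.
<IfModule prefork.c>
    StartServers      2
    MinSpareServers   2
    MaxSpareServers   4
    MaxClients        8    # concurrent requests per dyno
</IfModule>
```

The practical upshot: with MaxClients stuck at 1, a dyno serializes all requests regardless of its size, so raising concurrency in the application config matters far more than moving from 1X to 2X dynos.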

Performance testing my PHP / MySQL Application for AWS

I am writing a PHP application which uses MySQL in the backend. I am expecting about 800 users a second to be hitting our servers, with requests coming from an iOS app.
The application is spread out over about 8 different PHP scripts which do very simple SELECT queries (occasionally with 1 join) and simple INSERT queries where I'm only inserting one row at a time (with less than 10kb of data per row on average). There's about a 50/50 split between SELECTs and INSERTs.
The plan is to use Amazon Web Services, hosting the application on EC2 instances to spread the CPU load and using RDS (with MySQL) for the database, but I'm aware RDS doesn't scale out, only up. So, before committing to an AWS solution, I need to benchmark my application on our development server (not a million miles off the medium RDS solution spec) to see roughly how many requests a second my application and MySQL can handle, for ballpark figures, before doing an actual benchmark on AWS itself.
I believe I only really need to performance test the queries within the PHP, as EC2 should handle the CPU load, but I do need to see if / how RDS (MySQL) copes under that many users.
Any advice on how to handle this situation would be appreciated.
Thank you in advance!
Have you considered using Apache Benchmark (ab)? It should do the job here. I've also heard good things about siege but haven't tested it yet.
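A minimal sketch of such a run follows; the URL and the request/concurrency numbers are placeholders for your dev server, and the helper just pulls the headline figure out of ab's report.

```shell
#!/bin/sh
# Load-test sketch with ApacheBench (ab) and siege. The endpoint URL
# and the -n/-c numbers are placeholder assumptions, not a recipe.

# ab -n 1000 -c 50 http://dev-server/api/endpoint.php > ab.out
# siege -c 50 -t 60S http://dev-server/api/endpoint.php

# Extract the "Requests per second" figure from a saved ab report:
extract_rps() {
  awk -F': *' '/^Requests per second/ { split($2, a, " "); print a[1] }' "$1"
}
```

Run each endpoint separately at increasing concurrency until requests per second stops climbing; that plateau, divided into the expected 800 req/s, gives a rough count of how many instances the tier would need.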
If you have 800 user hits per second, it could be a good idea to consider sharding from the beginning. Designing and implementing sharding right at the start will allow you to begin with a small number of hosts and then scale out more easily later. If you design for only one server, even if it handles the load for now, you will soon need to scale, and switching to a sharded architecture is much more complex once the application is already in production.

Optimize Server for Traffic Spikes (Drupal)

We have a fairly intensive Drupal installation running on a large EC2 server instance. The system takes registrations for campaigns that we host, and when we send out an invitation email the numbers of responses spike to around 1,000 per minute for the first ten or twenty minutes.
The system is running on a fairly standard LAMP installation, using the latest version of Drupal 7. I guess I have three questions:
1.) Should this amount of load be maxing out this size server? The install has the Organic Groups, Tokens, and Webforms modules running.
2.) Are there MySQL/Apache server tweaks that will minimize the amount of load per connection (shortening KeepAlive time, etc.)?
3.) Are there Drupal tweaks that we should implement to do something similar—maybe consolidating SQL calls?
I know this is a lot. Any ideas, suggestions, or criticisms will help.
Thanks!
For high server load with Drupal, a combination of nginx + PHP-FPM + APC + MySQL + Varnish is the best option.
If you are using multiple systems behind an ELB, you can use Memcached instead of APC. You should also use MySQL replication in your architecture along with the load balancer.
You can tune the configuration of nginx and PHP-FPM separately. If any PHP-FPM request is taking too long, you can terminate it or log it (request_terminate_timeout, request_slowlog_timeout). You can find all the related settings in the PHP-FPM configuration documentation.
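For instance, a PHP-FPM pool fragment using those directives might look like the following. All values here are assumptions to be tuned per server, not recommendations for this workload.

```ini
; Illustrative PHP-FPM pool fragment (e.g. /etc/php-fpm.d/www.conf).
; All values are assumptions; size pm.max_children to fit available RAM.
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
request_terminate_timeout = 30s   ; kill requests running longer than this
request_slowlog_timeout = 5s      ; log a backtrace for requests slower than this
slowlog = /var/log/php-fpm/slow.log
```

During the registration spikes described above, the slowlog is particularly useful: it captures a backtrace of exactly which Drupal code path is stalling under load.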
Also, if you are using AWS, you should try to utilize their other services in your architecture too, such as S3 as a CDN origin, ElastiCache, CloudFront, etc.
You can also use Google's PageSpeed service.
