Azure MySQL Flexible Server Extremely Slow - php

I have a test instance of Prestashop on AWS on an EC2 t3.small (2 vCPUs, 2 GB memory) running Ubuntu with Apache2 + PHP 7.4 + MySQL 5.7.
I have cloned exactly the same setup to Azure App Service with PHP 7.4 (B1, 1 vCPU, 1.74 GB memory) and MySQL Flexible Server 5.7 (Burstable, 1 vCore, 2 GB memory, 640 IOPS). MySQL is accessible via public network. Both the App Service and MySQL are in the same region.
Both setups have the same configuration. The AWS-hosted Prestashop takes on average 2-3 seconds to load any page; the one in Azure takes around 1 minute 30 seconds to load any page.
Metrics show none of the resources (CPU, memory, IOPS) reaching 100% usage.
I upgraded both the Azure App Service to Premium (P1V2) and MySQL to the Business Critical tier (Standard_E2ds, 2 vCores, 16 GiB, 5000 IOPS), and the results are the same.
Prestashop's debug mode shows a huge amount of time spent on queries.
I also connected to both AWS and Azure MySQL directly and executed the same query.
On average the AWS instance is about 3 times faster than the Azure one (100 ms vs 310 ms).
One approach I haven't tried is putting MySQL on a VNet, but would that improve the performance at all?
Maybe there is something I'm missing in the setup, or maybe MySQL performance in Azure is just questionable. I have seen other posts stating that running MySQL in an Azure VM gives better performance than using the managed service, which would be crazy.
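To separate client-side (PHP) overhead from network and server latency, it can help to time bare round trips against both servers. Below is a minimal sketch of such a timing harness (illustrative only): `run_query` is a zero-argument callable you supply, e.g. one that fires `SELECT 1` through your MySQL client of choice.

```python
import statistics
import time

def time_round_trips(run_query, samples=20):
    """Time `run_query` (a zero-argument callable performing one DB
    round trip) `samples` times; return (avg_ms, p95_ms)."""
    durations = []
    for _ in range(samples):
        start = time.perf_counter()
        run_query()
        durations.append((time.perf_counter() - start) * 1000.0)
    durations.sort()
    avg_ms = statistics.mean(durations)
    p95_ms = durations[int(len(durations) * 0.95) - 1]
    return avg_ms, p95_ms
```

If the Azure numbers stay ~3x higher even for a trivial `SELECT 1`, the gap is likely connection/network latency rather than query execution, and moving the server onto a VNet with private access alongside the app may help; if only real queries are slower, server configuration is the more likely culprit.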

Related

Can WAMPServer running on windows 10 handle high traffic?

I am working on an IoT project in which several sensors will send data to my WAMP server over the internet, and I will log that data in my database.
My question is: can Apache + MySQL handle data at this scale?
There are nearly 800 data points coming from sensors over different URLs to my server.
Those data points need to be inserted into different tables of the database.
These 800 data points arrive with a frequency of about 5 seconds, and data will come 24/7, so on average I will need to fire 800-900 queries every 5 seconds.
Would WAMP and SQL be sufficient to handle this density of data? If not, what other server should I use? Or would I need to install a server OS instead of Windows?
My PC specs: Intel Core i7, 16 GB RAM, 2 GB NVIDIA 1080 graphics
NO!
But that is not WAMPServer's fault.
Windows 10 itself can only handle about 20 remote connections, as Windows itself uses some of the maximum 30 that are allowed.
And before you ask: no, this is not configurable, otherwise no one would buy Windows Server.
However, if you want to use WAMPServer for testing (because that's what it and XAMPP et al. were designed for), then you should not have too many problems. It will cope with the volume of traffic, just not hundreds of connections at the same time.
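For perspective on the numbers in the question: 800-900 queries per 5 seconds is roughly 160-180 inserts per second, which MySQL on that hardware can normally absorb, especially if readings are batched into multi-row INSERTs instead of one round trip each. A sketch of the batching idea (illustrative Python with made-up table and column names; in PHP the equivalent would be a mysqli prepared statement built the same way):

```python
def build_batched_insert(table, columns, rows):
    """Build one multi-row INSERT with %s placeholders, so 800 sensor
    readings become a handful of statements instead of 800 round trips."""
    placeholders = "(" + ", ".join(["%s"] * len(columns)) + ")"
    sql = "INSERT INTO {} ({}) VALUES {}".format(
        table, ", ".join(columns), ", ".join([placeholders] * len(rows)))
    params = [value for row in rows for value in row]  # flattened bind values
    return sql, params
```

The connection cap discussed in the answer, not MySQL throughput, is the real bottleneck on desktop Windows.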

AWS RDS goes 100% with number of DB connections

I have deployed a Laravel 4.2 based project on AWS.
I am facing an issue: when there is load on my site, RDS hits 100% CPU utilization.
For example, the last time we had load there were 422 DB connections.
The strange part is that at the same time the Write IOPS and Read IOPS are just normal, and the slow query logs are fine.
I need direction on how to debug this issue. I want to know what is causing the high CPU utilization.
Thanks
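One way to reason about this symptom: if the queries are CPU-bound (which normal IOPS and clean slow logs suggest), then by Little's law the cores you need are roughly throughput times CPU time per query, and once that exceeds the instance's core count, connections pile up waiting on CPU. A rough back-of-the-envelope helper (illustrative only, made-up numbers):

```python
def estimated_cpu_cores(queries_per_sec, cpu_ms_per_query):
    """Little's law applied to CPU: cores needed = throughput x CPU time
    per query. Above the instance's core count, CPU pegs at 100% and
    connections stack up even though IOPS look normal."""
    return queries_per_sec * (cpu_ms_per_query / 1000.0)

# e.g. 200 qps at 20 ms of CPU each already needs ~4 cores
```

So 422 connections may be the effect, not the cause: a few CPU-heavy queries (in-memory scans, sorts, functions) can saturate the cores without ever touching disk or the slow log threshold.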

MySQL overhead - how to tune up server to speed up bad queries

I have a shop on a PHP platform (badly developed) where there are a lot of bad queries (long queries without indexes, ORDER BY RAND(), dynamic counting, ...).
I do not have the possibility to change the queries for now, but I have to tune the server to stay alive.
I have tried everything I know. I have had very big databases, which I optimized by changing the MySQL engine and tuning it, but this is the first project where everything goes down.
Current situation:
1. Frontend (PHP) - AWS m4.large instance, Amazon Linux (latest), PHP 7.0 with opcache enabled (apache module), mod_pagespeed enabled with external Memcached on AWS Elasticache (t2.micro, 5% average load)
2. SQL - Percona DB with TokuDB engine on AWS c4.xlarge instance (50% average load)
I need to lower the instances, mainly the c4.xlarge, but if I switch down to c4.large, after a few hours there is a big overload.
The database has only 300 MB of data; I tried the query cache, InnoDB, XtraDB, ... with no success, and the traffic is also very low. PHP uses the MySQLi extension.
Do you know any possible solution, how to lower the load without refactoring the queries?
Use of www.mysqlcalculator.com would be a quick way to get a sanity check on about a dozen memory-consumption factors in less than 2 minutes.
I would love to see your SHOW GLOBAL STATUS and SHOW GLOBAL VARIABLES, if you could get them posted, to allow analysis of activity and configuration. 3 minutes of your GENERAL_LOG during a busy time would also be interesting and could identify troublesome/time-consuming SELECTs, etc.
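The calculator mentioned above essentially performs this arithmetic: worst-case memory is the global buffers (chiefly innodb_buffer_pool_size) plus max_connections times the sum of the per-thread buffers (sort_buffer_size, join_buffer_size, read_buffer_size, thread_stack, ...), all readable from SHOW GLOBAL VARIABLES. A simplified sketch with made-up numbers:

```python
def mysql_worst_case_memory_mb(buffer_pool_mb, max_connections,
                               per_thread_buffers_mb):
    """Rough worst case: global buffers plus per-connection buffers
    multiplied by max_connections -- the same arithmetic
    mysqlcalculator.com performs. All figures in MB."""
    return buffer_pool_mb + max_connections * per_thread_buffers_mb

# e.g. a 1 GB buffer pool, 300 connections, ~4 MB of per-thread buffers
# each -> about 2224 MB worst case
```

If that worst case exceeds physical RAM, the box can swap under load, which looks exactly like "everything goes down after a few hours" on a smaller instance.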

Process running php.ini Using Up Excessive Memory In CentOS

Bear with me here.
I'm seeing some "php.ini" processes (?) running, or processes touching php.ini, that are using 80% or more of the CPU, and I have no idea what would cause this. All database processing is offloaded to a separate VPS, and the whole service is fronted by a CDN. I've provided a screenshot of "top -cM".
Setup:
MediaTemple DV level 2 application server (the server we are looking at in the images), 8 cores, 2GB RAM
Mediatemple VE level 1 database server
Cloudflare CDN
CentOS 6.5
Nginx
MySQL 5.4, etc.
EDIT
I'm seeing about 120K pageviews a day here, with a substantial number of concurrent connections.
Where do I start looking to find what is causing this?
Thanks in advance
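One note before digging in: "top -c" shows full command lines, so a PHP worker launched with a "-c /path/to/php.ini" argument can look like a "php.ini process" when it is really the PHP binary doing the work. A small sketch for ranking processes by CPU to find the actual hog (the ps flags are the Linux/procps ones available on CentOS; the `ps_output` parameter lets you inject captured output for testing):

```python
import subprocess

def top_cpu_processes(ps_output=None, limit=5):
    """Return the `limit` heaviest processes as (pcpu, pid, command),
    sorted by CPU usage descending."""
    if ps_output is None:
        ps_output = subprocess.check_output(
            ["ps", "axo", "pcpu,pid,comm", "--sort=-pcpu"], text=True)
    rows = []
    for line in ps_output.strip().splitlines()[1:]:  # skip header row
        pcpu, pid, comm = line.split(None, 2)
        rows.append((float(pcpu), int(pid), comm))
    rows.sort(reverse=True)
    return rows[:limit]
```

Once the real process name is known (php-fpm, a cron-spawned CLI script, etc.), tools like strace on the offending PID will show what it is actually doing.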

Slow Performance on Scaled PHP/MySQL Application on Openshift

I have two scaled PHP installations on OpenShift using Small Gears (1 for PHP and 1 for MySQL per application, so 4 small gears altogether), and both installations are extremely slow, only occasionally bearable. I've seen a lot of questions about this that say to disable functionality (like WordPress plugins, for example), but my application worked SUPER fast on shared hosting with the current feature set and can't exist without the features currently in use.
My hope in using OpenShift was to avoid having to do server configuration and to have an instantly scalable platform. So my question is: what needs to be done to get this configuration up to speed so I can continue using OpenShift? Or is it not possible without the same amount of time and energy it would take me to just set up my own AWS resources?
Should I be using Medium or Large Gears for my application to have the speed necessary to run a real-world application (even though there's only one user)?
For additional information:
I am currently the only user (no one else is accessing the site).
I frequently get HTTP 503 errors (after refreshing a few times these go away).
Using SSH and looking at the gear's resources, I never come close to pegging the CPU and am always within the 20%-50% range.
Memory usage reports 7416848k used of 7513696k available, with the httpd processes taking a total of 1% of memory.
I have done zero customization; all I did was create the app with PHP 5.4 and MySQL (scaled) and do a git push of my code.
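The memory figure above deserves a second look: 7416848k used of 7513696k is about 98.7%, which on Linux often just means the page cache is counted as "used", but if it is genuine usage the gear may be swapping, which would explain both the slowness and the intermittent 503s. A quick check of the arithmetic:

```python
def memory_utilization(used_kb, total_kb):
    """Fraction of memory in use; on Linux, subtract buffers/cache
    (shown by `free`) before concluding you are actually out of RAM."""
    return used_kb / total_kb

# the gear reports 7416848k used of 7513696k -> about 98.7% "used"
```

Checking `free` (for the buffers/cache split) and swap activity on the gear would confirm or rule this out.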
