Increase speed on MySQL XAMPP server - PHP

I have a production application that uses barcodes, built with PHP/MySQL on a XAMPP server. When five specific tables grow past 50,000 rows, performance slows on both data input and data fetches. I have already added indexes on the relevant columns. There are more than 50 users, about 15 of whom enter data. The server is installed on VMware with 14 GB RAM and a Xeon processor (screenshots: pc config, xampp config). Now I want to increase innodb_buffer_pool_size. It is currently 2G, but if I raise it to 3G or above, MySQL stops unexpectedly. Can anyone give me a solution for how to increase it?
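A minimal sketch of the resize, assuming MySQL 5.7.5+ or MariaDB 10.2+, where innodb_buffer_pool_size is dynamic. One common cause of a crash at 3G is a 32-bit mysqld build (which some XAMPP bundles ship): a 32-bit process cannot address much beyond 2 GB no matter what is configured.

-- If this reports i686/x86 rather than x86_64, the 2G ceiling is the binary, not the config:
SHOW VARIABLES LIKE 'version_compile_machine';

-- Current size in GB:
SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;

-- Resize online; the value is rounded up to a multiple of
-- innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances:
SET GLOBAL innodb_buffer_pool_size = 3 * 1024 * 1024 * 1024;

-- Monitor progress:
SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';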

Related

Azure MySQL Flexible Server Extremely Slow

I have a test instance of Prestashop in AWS on EC2 t3.small (2 vCPUs, 2 GB memory) running Ubuntu with Apache2 + PHP 7.4 + MySQL 5.7.
I have cloned exactly the same setup to Azure App Service with PHP 7.4 (B1, 1 vCPU, 1.74 GB memory) and MySQL Flexible Server 5.7 (Burstable, 1 vCore, 2 GB memory, 640 IOPS). MySQL is accessible via public network. Both the App Service and MySQL are in the same region.
Both setups have the same configuration. The AWS-hosted Prestashop takes on average 2-3 seconds to load any page; the one in Azure takes around 1 minute 30 seconds.
Metrics show that none of the resources (CPU, memory, IOPS) reaches 100% usage.
I upgraded the Azure App Service to Premium (P1V2) and MySQL to the Business Critical tier (Standard_E2ds, 2 vCores, 16 GiB, 5000 IOPS), and the results are the same.
Prestashop debug mode shows a huge amount of time spent on querying.
I also connected to both the AWS and Azure MySQL instances directly and executed the same query.
On average, AWS is 3 times faster than Azure (100 ms vs 310 ms).
One approach I haven't tried is to put MySQL on a VNet, but would that improve performance at all?
Maybe there is something I'm missing in the setup, or maybe MySQL performance in Azure is just questionable. I have seen other posts stating that running MySQL in an Azure VM gives better performance than the managed service, which would be crazy.
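One way to separate server-side execution time from network latency (a sketch, assuming MySQL 5.7, where query profiling still works, and the default Prestashop ps_ table prefix):

-- Run on both servers from the same client:
SET profiling = 1;
SELECT COUNT(*) FROM ps_product;   -- any representative query
SHOW PROFILES;                     -- server-side duration, excluding the network round-trip
-- If SHOW PROFILES reports similar durations on AWS and Azure but the client-observed
-- wall time differs, the gap is network latency, and a VNet/private endpoint is worth testing.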

Can WAMPServer running on Windows 10 handle high traffic?

I am working on an IoT project in which several sensors will send data to my WAMP server over the internet, and I will log that data in my database.
My question is: can Apache + MySQL handle data of this volume?
There are nearly 800 data points coming from the sensors, over different URLs, to my server.
That data needs to be inserted into different tables of the database.
These 800 data points arrive about every 5 seconds, 24/7, so on average I will need to fire 800-900 queries every 5 seconds.
Would WAMP and MySQL be sufficient to handle this density of data? If not, what other server should I use? Would I need to install a server OS instead of Windows?
My PC specs: Intel Core i7, 16 GB RAM, 2 GB NVIDIA 1080 graphics.
NO!
But that is not WAMPServer's fault.
Windows 10 itself can only handle about 20 remote connections, as Windows itself uses some of the maximum 30 that are allowed.
And before you ask: no, this is not configurable, otherwise no one would buy Windows Server.
However, if you want to use WAMPServer for testing (because that's what it, XAMPP, et al. were designed for), then you should not have too many problems. It will cope with the volume of traffic, just not hundreds of connections at the same time.
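For the volume itself, 800-900 rows per 5 seconds is only about 160-180 inserts/sec, and batching them into multi-row INSERTs reduces the statement count further. A minimal sketch, assuming a hypothetical sensor_log table:

-- Hypothetical table for the sensor readings:
CREATE TABLE IF NOT EXISTS sensor_log (
    id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    sensor_id SMALLINT UNSIGNED NOT NULL,
    reading   DECIMAL(10,3) NOT NULL,
    logged_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    KEY idx_sensor_time (sensor_id, logged_at)
) ENGINE=InnoDB;

-- One statement per 5-second window instead of 800 single-row INSERTs:
INSERT INTO sensor_log (sensor_id, reading) VALUES
    (1, 23.400),
    (2, 19.875),
    (3, 48.020);  -- ...and so on for the rest of the window's readings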

Slow Queries WordPress Site (500k visitors a month and 150k posts)

I'm running a WordPress site with 500k visitors a month, 150k posts, and on average 100 pageviews every second. I am trying to figure out whether the load on the server is normal, or whether there is something I can do to fix the performance issues without upgrading the server setup and increasing the monthly costs.
Here is the server setup I'm running right now:
2 Front-end servers, Nginx: 2 CPU & 4GB RAM
1 DB server, MariaDB: 8 CPU & 16GB RAM
1 Redis server: 2 CPU & 4GB RAM
The WordPress theme is developed from scratch, and I have optimized the queries and minimized the use of plugins (5 plugins in total).
I run Nginx as a reverse proxy cache, caching all pages for 5 minutes to handle traffic peaks (two daily peaks of 3k visitors in 30 minutes when newsletters go out).
The MariaDB and Redis servers are running Debian with out-of-the-box configuration. The only things I've changed are innodb_buffer_pool_size = 11G and max_connections = 300 in MariaDB.
The DB CPU runs at 50% with 100 real-time visitors and at 85-90% with 300-700 real-time visitors.
The problem is that queries take 3-6 seconds to run, even with the CPU at 50% load.
My staging environment runs on exactly the same servers but against a separate copy of the database (same number of posts), and query times are 0.5-1.5 seconds.
So the only difference is that the production database has more concurrent users.
What could make the queries take this long?
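A first step toward answering that is to capture the slow queries themselves; a minimal sketch, assuming the MariaDB slow query log is still at its default (off):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;                    -- seconds; log anything slower
SET GLOBAL log_queries_not_using_indexes = 'ON';
-- Then summarize the log (e.g. with pt-query-digest) to see which queries
-- dominate during the 300-700 visitor peaks.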
Sounds like you need node-balancers or a beefier server.
You have 500,000 visitors with 100 pageviews a second. WordPress is infamously clunky and known for serializing objects in the database (every component that displays is a query). So let's say you have a simple website (15 queries) and 5 plugins (10 queries); multiply those 25 queries by 100 pageviews and you have 2,500 queries a second on 16 GB of RAM. That means each query gets about 0.0064 GB of memory, or 6.4 megabytes.
Unless your database has tables small enough to fit on 4.25 floppy disks, I recommend more power, sir.
Queries using wp_postmeta are slow because of an inefficient schema. You can fix that.
See http://mysql.rjweb.org/doc.php/index_cookbook_mysql#speeding_up_wp_postmeta
This will help more than "throwing hardware at the problem".
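For reference, the schema change that page describes looks roughly like this (a sketch reconstructed from the linked cookbook, assuming the stock wp_postmeta layout; verify against the article before running it):

ALTER TABLE wp_postmeta
    DROP PRIMARY KEY,                              -- meta_id was the primary key
    DROP INDEX post_id,
    ADD PRIMARY KEY (post_id, meta_key, meta_id),  -- clusters each post's meta rows together
    ADD INDEX (meta_id);                           -- keeps AUTO_INCREMENT on meta_id working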

Need help with a rough calculation of concurrent users

Here are the website specs for the configuration below.
Social networking site - 70% dynamic content
Linux CentOS 6.6
Apache web server
PHP
Server specs x 2 (main server and SQL server):
4 x Intel® Xeon® E5-4640 v2 2.20GHz, 20M Cache, 8.0GT/s QPI, 10 Core
48 x 16GB (768 GB) RDIMM, 1600MT/s, Low Volt RAM
4 x 300GB 15K RPM SAS 6Gbps
Other storage: Dell Direct-Attached Storage (DAS)
Network: 10 gigabit/sec
Assume that memcache / load balancer / other extra servers are there and not included in this.
(I just need a rough calculation.)
My questions are:
How many concurrent users (users that will click at the same time) can this platform handle, assuming average user connectivity of 512 kilobit/sec?
Which factors do concurrent users depend on? (RAM > CPU > HDD, is this right?)
I am not an expert; this question is for educational purposes only.
This question is very vague. The load that you can support will depend on the complexity of your PHP code and database design... planning for (and even testing) load is a complicated topic.
You could also configure your hardware in a variety of ways, which will have an impact on performance. Which RAID setup you use will depend on whether your application is read-heavy or write-heavy, as will your database design.
You will also need to consider whether to use virtualisation for backup/redundancy, which adds a layer of performance overhead...
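One bound that can be computed from the stated specs: the 10 gigabit/sec link divided by 512 kilobit/sec per user gives roughly 10,000,000 / 512 ≈ 19,500 simultaneous full-rate connections, so bandwidth is unlikely to be the first limit. With 70% dynamic content, PHP execution time and contention on the SQL server will almost certainly cap concurrency well before the network does.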

Process running php.ini Using Up Excessive Memory In CentOS

Bear with me here.
I'm seeing some php.ini processes (?) running, or processes touching php.ini, that are using 80% or more of the CPU, and I have no idea what would cause this. All database processing is offloaded to a separate VPS, and the whole service is fronted by a CDN. I've provided a screenshot of "top -cM".
Setup:
MediaTemple DV level 2 application server (the server we are looking at in the images), 8 cores, 2GB RAM
MediaTemple VE level 1 database server
Cloudflare CDN
CentOS 6.5
Nginx
MySQL 5.4, etc.
EDIT
I'm seeing about 120K pageviews a day here, with a substantial number of concurrent connections.
Where do I start looking to find what is causing this?
Thanks in advance
