I have an Ubuntu server that I've set up with PHP 7 and nginx. It's running a Laravel application, and this runs on AWS. The MySQL DB is on RDS.
I've provisioned this server using Ansible. It's a pretty straightforward configuration: opcache is enabled and there are plenty of PHP child processes. I also installed mysql-client to interface with RDS.
The issue: prior to provisioning this box, I was using a pre-provisioned box (from Laravel Forge). My new box seems to be MANY times slower than the pre-made box I previously used (5 to 8x slower!).
I've compared the settings between the two boxes and they are more or less the same, so I can't really figure out what would be causing this.
An example: if I benchmark a particular piece of code on one of the more taxing endpoints, the actual SQL queries are fast in both cases, but the new box seems to take an incredible amount of time populating the objects in Laravel, or maybe the application is queued up waiting for the database connection; it's hard to say. The old box takes 50ms and the new one takes 1200ms.
My question is: where should I be looking to find this issue, which seems to lie in the server configuration? From what I can see, nginx and PHP are properly configured. The issue happens when there's zero traffic, so I don't think it's a scaling problem. I'm wondering if there's some mysql-client configuration, or some PHP/MySQL driver configuration, that I could be overlooking. Any suggestions, or tools for debugging something like this, would be appreciated.
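One way to narrow this down is to time the database connection and the query/hydration step separately inside the application. Below is a minimal sketch, assuming a fairly standard Laravel setup; the /debug-timing route and the users table are placeholders, and DB::listen here uses the QueryExecuted-style callback:

    <?php
    // routes/web.php - rough timing probe (placeholder route and table).

    use Illuminate\Support\Facades\DB;
    use Illuminate\Support\Facades\Log;
    use Illuminate\Support\Facades\Route;

    Route::get('/debug-timing', function () {
        $requestStart = microtime(true);

        // Log every query together with the time MySQL spent on it.
        DB::listen(function ($query) {
            Log::info('SQL', ['sql' => $query->sql, 'ms' => $query->time]);
        });

        // Time the connection handshake to RDS on its own.
        $connectStart = microtime(true);
        DB::connection()->getPdo();
        $connectMs = (microtime(true) - $connectStart) * 1000;

        // Time query + result building (swap in an Eloquent model from the
        // slow endpoint to measure actual model hydration).
        $fetchStart = microtime(true);
        $rows = DB::table('users')->limit(100)->get();
        $fetchMs = (microtime(true) - $fetchStart) * 1000;

        return [
            'connect_ms' => round($connectMs, 1),
            'fetch_ms'   => round($fetchMs, 1),
            'total_ms'   => round((microtime(true) - $requestStart) * 1000, 1),
        ];
    });

If connect_ms dominates on the new box, suspect DNS resolution of the RDS hostname or network latency rather than PHP or nginx settings.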
I am using an out-of-the-box Laravel 9 / PHP 8.1 combo, and loading the initial page in 190ms sounds horrible. The last project I built used Laravel 7, and the response times in development, including database queries (complex ones at that), were less than 20ms.
Currently I am just setting it up, so the tests were simply loading the default route or a route returning a string.
I tested several instances:
Custom docker container I've built (amazonlinux, php81, php81-fpm / nginx ) on an old server
Local Windows 10 instance / php81 / artisan serve
Pre-built container by Bitnami
All of them showed responses ranging from 80ms to 2200ms
My custom container, thanks to its optimizations, showed responses of 60-80ms, but on the 3rd or 4th refresh it pulled a 450-900ms response time.
The Windows 10 instance gave stable responses, but horribly slow ones at ~85-130ms.
The pre-built container was also stable at around 190ms.
You can understand the horror, as these are simply empty framework installs.
My test case, where a database in another container was queried to select 1-100 records, didn't change the response times; the database returned in less than 10ms.
For comparison, since I started reviewing the php-fpm configuration (still out of the box in all test cases), I ran some benchmarks to see whether it was the bottleneck, but it performed flawlessly. Ops/sec were in line with typical PHP 8.1 benchmarks.
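For reference, a rough CLI micro-benchmark of the kind mentioned above might look like the sketch below. It deliberately bypasses nginx and php-fpm, so a slow result would point at the PHP runtime or the host rather than the web stack (the iteration count is arbitrary):

    <?php
    // bench.php - run with `php bench.php` inside each container/host.
    $iterations = 1_000_000;
    $start = microtime(true);

    $sum = 0;
    for ($i = 0; $i < $iterations; $i++) {
        $sum += sqrt($i) * 1.0001;
    }

    $elapsed = microtime(true) - $start;
    printf("%d iterations in %.3fs (%.0f ops/sec)\n",
        $iterations, $elapsed, $iterations / $elapsed);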
A strange thing I noticed within Xdebug on my custom container: there were some 2300 more records below this point.
Did you try changing your WSL config or installing another distro like Ubuntu? I had a similar issue on Windows, caused by NTFS. I changed my distro to Ubuntu LTS and it became as fast as a normal website.
So for now I've concluded that the main issue is with Docker and the fact that Laravel was sitting in a folder shared with the host machine.
The responses are delayed because I/O in those shared folders is literally around 60 times slower.
After further testing, this is what it all comes down to. I am upgrading the hardware from a server HDD to an M.2 SSD to partially offset this slowdown.
My tests showed a 1.83 MB/s transfer rate for the shared folder on the server, compared to 110 MB/s on the dedicated server.
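To reproduce that measurement from inside the container, a rough PHP sketch like the one below can be run once against the bind-mounted project folder and once against a container-local path such as /tmp (the path and file size are placeholders):

    <?php
    // io-test.php - crude throughput check: write and re-read ~100 MB.
    $dir   = $argv[1] ?? '/tmp';            // directory to test
    $file  = rtrim($dir, '/') . '/io-test.bin';
    $chunk = str_repeat('x', 1024 * 1024);  // 1 MB chunk
    $total = 100;                           // MB to write

    $start = microtime(true);
    $fh = fopen($file, 'wb');
    for ($i = 0; $i < $total; $i++) {
        fwrite($fh, $chunk);
    }
    fclose($fh);
    $writeSecs = microtime(true) - $start;

    $start = microtime(true);
    $fh = fopen($file, 'rb');
    while (!feof($fh)) {
        fread($fh, 1024 * 1024);
    }
    fclose($fh);
    $readSecs = microtime(true) - $start;
    unlink($file);

    printf("write: %.1f MB/s, read: %.1f MB/s\n",
        $total / $writeSecs, $total / $readSecs);

Bear in mind the read figure can be inflated by the page cache, so the write throughput is usually the more telling number for a shared folder.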
I'm tasked with maintaining several web apps, all of them using the LAMP stack. Some of them run on PHP 5.6, some on PHP 7.0, some use WordPress, some use Symfony... Ideally, I'd like to set up testing/development environments at home that are as identical as possible to the production ones.
I've been investigating Docker (warning: total novice here!) to see if it suits my needs. I'll be working on Windows and Mac, and I'd like to have several LAMP environments on my machine, each with its own version of PHP/MySQL/etc., isolated from each other and all of them running in the same VM (because otherwise I might as well just use what I'm familiar with and set up different VMs). Can Docker do this?
(Sorry if this sounds like a silly question: reading about Docker, my impression was that the container philosophy allowed you precisely to do what I described without wasting resources like with VMs, and yet, I haven't found any guides about running more than one LAMP environment at the same time).
Php Docker Stack
A PHP Docker stack to run PHP apps in production and development, using Docker Compose services to run any PHP version, databases, cache, queues, logs and much more...
From now on there's no need to keep messing around with the operating system to have a full development stack ready to build our awesome PHP apps.
It can be included in each PHP project via Composer:
https://packagist.org/packages/exadra37-docker/php-docker-stack
Or if you prefer you can clone it directly from here.
It comes with default images for each service, but everything is configurable via .env, so we can pass any Docker image we want for any of the supported services in the Docker Compose file.
Php Docker Stack Services:
Http - Nginx, Apache, etc.
Php - Php-Fpm.
Database - Percona, MariaDB, MySQL, etc.
Cache - Redis, Memcached, etc.
Logs - Logstash -> Elasticsearch <- Kibana.
Queue - Beanstalkd, RabbitMQ, ActiveMQ, Apache Kafka, etc.
Cron Jobs - Just to schedule cron jobs.
Dev CLI - Access to the container shell.
Database CLI - Like the awesome mycli prompt for MySQL.
Cache CLI - Like the redis cli.
I was using it daily at my old job for development.
I am the author of it and I have some local enhancements that need to be polished and merged upstream.
Feel free to try it and reach out to me with any questions or feedback.
Okay, after a lot of time, I thought I should share the solution I found and that I'm currently using: devilbox. It's awesome, and once you get your head around it, it's incredibly powerful, flexible and customisable.
Kinda struggling with MySQL again...
Setup: vServer with 4 cores, 1 GB of RAM, Ubuntu 12.04 LTS; ServerPilot installed Apache (behind nginx), PHP and MySQL.
When I run a script (it runs quite long, a few hours), it uses exactly one core: about 70% PHP, the rest MySQL. Not steady of course; sometimes MySQL isn't active at all, etc.
Is there a way to make MySQL run on a different core? Connecting to MySQL via the external IP does not solve it... ;-)
Thanks a lot!
I fear that your problem is that PHP is not a threaded language. There are no good ways to do parallel processing in PHP (pthreads looks promising, but I was not able to build it successfully the last time I tried), which is probably what you would really need to speed up this script.
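That said, for a long-running CLI script one common workaround is to fork several worker processes and let the OS schedule them on different cores. A minimal sketch, assuming the pcntl extension is available and the work can be split into independent chunks (processChunk is a placeholder for your own job):

    <?php
    // fork-workers.php - CLI only; requires the pcntl extension.
    $workers = 4;          // e.g. one worker per core
    $pids = [];

    for ($i = 0; $i < $workers; $i++) {
        $pid = pcntl_fork();
        if ($pid === -1) {
            die("Could not fork worker $i\n");
        }
        if ($pid === 0) {
            // Child: handle one slice of the work, then exit.
            processChunk($i, $workers);
            exit(0);
        }
        $pids[] = $pid;    // Parent keeps track of its children.
    }

    // Parent waits for all workers to finish.
    foreach ($pids as $pid) {
        pcntl_waitpid($pid, $status);
    }

    function processChunk(int $worker, int $totalWorkers): void
    {
        // Placeholder: e.g. process every $totalWorkers-th record,
        // starting at offset $worker.
        echo "Worker $worker done\n";
    }

Note that each child should open its own MySQL connection rather than reusing one created before the fork.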
Check out numactl (http://linux.die.net/man/8/numactl), which you can include in your launcher for mysql. You could also use cset or taskset, depending on what your distribution recommends; that's how you can bind any process to a single core or package.
I'm building a website using the FuelPHP framework, which by default prints the execution time and memory usage at the bottom of the page. Now I just noticed that on my local machine, I have a pretty small memory footprint:
Page rendered in 0.0304s using 0.721mb of memory.
I deployed exactly this site (including a snapshot of the database) to my remote test server and suddenly memory usage increases by a factor of 10:
Page rendered in 0.0963s using 7.099mb of memory.
I cannot explain such a big difference from the details of execution alone, so I think the deviation must be in the environment.
Unfortunately I'm mainly a programmer, not really a server admin, so I don't really know where to start looking. So I'm going to ask a bit of a general question, which will hopefully give me some useful pointers: where should I start looking? The code is exactly the same and as far as I am aware both machines (local laptop and remote server) are fairly standard Apache installations with PHP5. Any answers suggesting specific Apache or PHP settings that might cause this, or specific lines to search for in the logs, are welcome.
I realise this is a pretty general question that might get me some downvotes; constructive criticism is welcomed instead. Basically, I'm at a loss where to even start looking at the moment.
Update: I decided to first exclude the framework as the culprit, so I ran the following one-line script on both machines:
Locally I get a value of about 115, while the remote server reports about 600.
Update 2: I just noticed I'm running PHP 5.5 locally but the server is only at 5.3. Maybe it's some bug that got fixed later; I'll upgrade that first.
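The original one-liner isn't reproduced above, but purely as an illustration, a comparable baseline check could print the interpreter version, extension count and memory use on each machine before any framework code runs:

    <?php
    // baseline.php - hypothetical environment comparison, no framework code.
    echo 'PHP '                . PHP_VERSION . PHP_EOL;
    echo 'Loaded extensions: ' . count(get_loaded_extensions()) . PHP_EOL;
    echo 'Baseline memory: '   . round(memory_get_usage() / 1024) . ' KB' . PHP_EOL;
    echo 'Peak memory: '       . round(memory_get_peak_usage() / 1024) . ' KB' . PHP_EOL;

A large gap in baseline memory or extension count between the two machines would point at the PHP build or its configuration rather than at the framework.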
Thanks for the suggestions guys, I fixed the problem.
As noted in the update to the original post, I realised I was running PHP 5.5 locally but the server was only at 5.3. After some PPA magic with apt and some help from other questions on SO, I managed to install Apache 2.4 with PHP 5.5.x. Now I actually see
Page rendered in 0.0261s using 0.582mb of memory.
locally and remotely
Page rendered in 2.3184s using 1.238mb of memory.
Assuming that a factor of 2 is caused by the server being 64-bit while my development machine is not, I can live with the remaining difference.
Use Xdebug to find out what functions are using all your memory.
Most likely it's a bug or design flaw in the framework.
It can be difficult or even impossible to get Xdebug working depending on your server. But sometimes it's pre-installed and simple.
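As a quick first step, a small script can confirm whether Xdebug is even loaded on the server before you attempt any profiling; a minimal sketch:

    <?php
    // xdebug-check.php
    if (extension_loaded('xdebug')) {
        echo 'Xdebug ' . phpversion('xdebug') . ' is loaded' . PHP_EOL;
    } else {
        echo 'Xdebug is not installed on this machine' . PHP_EOL;
    }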
There is a PHP application right now on a Linux box, running under Apache with MySQL. Since we are a Windows shop, management wants to get rid of the Linux box and move everything over to Windows. Is there a performance difference between the two platforms? Or is there any significant difference at all, in terms of performance or management?
Microsoft had a team help out with optimising PHP for Windows, and that work is part of PHP 5.3. Some figures I've seen place the performance close to PHP + Apache on a Unix system. Before 5.3 (which means currently, since 5.3 isn't out yet), performance is bad on Windows. I think there are some patches and tricks you can pull to improve it, but it's still going to cost you a bit of performance. That may or may not be a problem; people have a tendency to overestimate the importance of performance.
Note that there are other reasons to use Unix than just performance. Code may not be portable, and even though core PHP runs fairly OK, you can easily get into trouble with PHP extensions and third-party libraries. No matter how you look at it, Windows is a second-rate system for running PHP on.
If your application isn't huge or getting hit a couple of thousand times per second, there's no real difference between the two.
LAMP == WAMP for small PHP projects. Just install something like XAMPP if you want your environment to be as close as possible to your existing one, but on Windows.
Good luck with your project!
You should consider the MS Web Platform Installer (download at www.microsoft.com/web), which will install the entire stack for you to run PHP in an IIS7 environment.
The performance is comparable for most apps.
I've just done this for the same reason: management wanted to get rid of the Linux box. I was able to completely move my PHP application and MySQL database. It took longer to configure PHP for IIS than it did to move the existing content over.
I have found, though, that the IIS server is a fair bit slower when it comes to loading pages and images. Where on Linux it appeared instantaneous, on IIS it takes half a second for the page to load and another second for the images.