GAE with Laravel / Lumen extremely slow - php

I have a server hosted on DigitalOcean that responds within a 70-300ms window depending on the request. When the same code is hosted on GAE, whether deployed or on the local dev server, the response time roughly triples, even when I put dd('test') in the index.php file. It seems the GAE PHP interpreter is extremely slow. Has anyone experienced this problem and fixed it?
What I found out is that it takes longer for GAE to boot Laravel, or even Lumen, than it takes my $5 DigitalOcean server to serve the whole response! It's ridiculously slow.
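One way to confirm that framework bootstrap, rather than your own code, is the slow part is to time the two phases separately in the front controller. A minimal sketch, assuming a stock Laravel public/index.php (Lumen's front controller differs slightly); the timing lines are for diagnosis only:

<?php
// public/index.php with timing added for diagnosis only.

$bootStart = microtime(true);

require __DIR__.'/../vendor/autoload.php';
$app = require_once __DIR__.'/../bootstrap/app.php';

$bootEnd = microtime(true);

$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);

$response = $kernel->handle(
    $request = Illuminate\Http\Request::capture()
);

$handleEnd = microtime(true);

// Compare bootstrap time against request-handling time in the error log.
error_log(sprintf(
    'bootstrap: %.1f ms, handle: %.1f ms',
    ($bootEnd - $bootStart) * 1000,
    ($handleEnd - $bootEnd) * 1000
));

$response->send();
$kernel->terminate($request, $response);

If the bootstrap number dominates on GAE but not on DigitalOcean, the difference is most likely in autoloading and opcode caching rather than in your application code.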

Related

Laravel connection stays open too long

I am developing a PHP Laravel application. I noticed that it was too slow, and after careful debugging I realized that the processing itself is not slow; the problem is that the connection takes too long to be terminated, and during that time the server does not accept a new request.
The following figure shows the performance analysis of a request. Regardless of the type of request, it takes 20-30 seconds to close a connection.
The following figure shows the logs of the local run. The first request was accepted, and the second only after the first one was closed, which took more than 20 seconds, even though the response itself comes back in a few milliseconds.
Does anyone know how to fix this?
There is not a lot to go on in your question; however, one thing we can see from the log messages is that you are using the PHP built-in server (or php artisan serve, which uses it). This is well known to be very slow and single-threaded, and it can cause PHP applications to stall:
Should I rely on "php artisan serve" for a locally based project?
PHP built-in web server running very slow
Using the PHP built-in server in production
Max concurrent connections for php's built in development server
Presumably this is your local dev environment - the PHP docs warn not to use the built-in server on the public internet:
Warning
This web server is designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
Using nginx or Apache with PHP on your production server, you should see much better performance. For local development you might be better off setting up Docker, WAMP, XAMPP, or the servers included with your distro if you're on some flavour of *nix.
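If you want to see the single-threaded behaviour for yourself, a tiny sketch like this makes it obvious (the file name and port are arbitrary):

<?php
// slow.php: serve it with the built-in server, e.g.  php -S 127.0.0.1:8000
// Request it twice at the same time (two browser tabs or two parallel curl calls).
// The single-threaded built-in server queues the second request until the first
// one finishes; behind nginx/Apache with PHP-FPM both would finish in ~5 seconds.
sleep(5);
echo 'done at ' . date('H:i:s') . PHP_EOL;

Since PHP 7.4 you can set the PHP_CLI_SERVER_WORKERS environment variable to let the built-in server spawn several workers, but it is still intended for development only.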

How to troubleshoot performance difference between 2 web servers?

I have a production virtual web server that is being migrated to a new virtual web server on the same local network. The problem is that there is a performance problem on the new server.
For example, there is one page that loads in about 1 second on the original server, but takes over 25 seconds to load on the new one. I have already ruled out the database connection as the problem.
Both servers are Ubuntu Apache servers running PHP. There are slight differences between the server versions, which I will list as best I can here.
My main question is: is there a general way to profile the web requests on each server?
Similar to the way I can profile a Python script or function and get a breakdown of which parts of the program take the most time, I would like to profile the web requests on one server compared to the other.
Of course, web requests to a server are fundamentally different from programs run on a local computer, but I need to find where the bottleneck is. Any help is greatly appreciated.
Old Server Config
Ubuntu 14.04 - PHP version 5.5.9
New Server Config
Ubuntu 16.04 - PHP version 5.6.31 (also tested with version 7, same result)
I would suggest logging the PHP script execution time.
If the delay comes from somewhere in the PHP execution, you will notice it easily.
Log once at the start and once at the end. Then you can stress test both servers and compare the execution times.
I seriously doubt the problem comes from PHP, but by doing this you could also see the difference with PHP 7, which should be around 30% faster.
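As a rough sketch of that kind of timing log (the log file path is arbitrary), something like this at the very top of the entry script works unchanged on both servers:

<?php
// Put this at the very top of the entry script (e.g. index.php).
$__requestStart = microtime(true);

register_shutdown_function(function () use ($__requestStart) {
    $elapsedMs = (microtime(true) - $__requestStart) * 1000;
    // One line per request: URI and total PHP execution time.
    error_log(
        sprintf(
            "[timing] %s took %.1f ms\n",
            isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli',
            $elapsedMs
        ),
        3,
        '/tmp/php-timing.log'
    );
});

If both servers report similar PHP times while the page still takes 25 seconds on the new one, the bottleneck is in front of PHP (Apache configuration, DNS lookups, keep-alive settings) rather than in the script itself.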

How to monitor PHP Laravel web application processes?

I would like to know how I could monitor my web application's processes locally. I ask because my production site on my current hosting (Bluehost) shows several processes for my web app and API (around 35-40), which causes the application to load indefinitely.
When I run it on my local machine there's no problem and the application works flawlessly. It is hard to replicate the scenario since it only occurs on my web hosting. I have several websites hosted there, and it seems that only my Laravel application has the issue; my WordPress and CodeIgniter apps work smoothly.
I have already contacted support, and according to them the code is causing the performance issue, but I do not see any mistake in my code since it runs fast locally.
I have already used caching for it, as well as route and database caching, yet the problem still persists.
Try running php artisan optimize on deployment, and ensure that Config::get('app.debug') is not true.
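A quick way to verify the debug setting on the server itself is a throwaway diagnostic route (the route name below is made up for illustration; remove it after checking):

<?php
// In the routes file (e.g. routes/web.php): temporary diagnostic route, delete once verified.
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\Route;

Route::get('/_diagnostics', function () {
    return response()->json([
        'app.env'   => Config::get('app.env'),    // expect "production"
        'app.debug' => Config::get('app.debug'),  // expect false
    ]);
});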

Remote Redis connection slow

I am experimenting with using Redis for a Drupal website, hosted on Ubuntu 14.04.
I have installed the Redis Drupal module and am using the Predis library. I have also installed the redis-server Ubuntu package and left the default configuration.
Configuring the Drupal site to use Redis for its cache backend works fine and the pages are lightning fast.
The problem arrived when I tried to spin up an m3.medium AWS instance and host the Redis server there. The reason behind this is that we want to use one Redis server and connect to it from multiple servers (the live website is hosted on multiple instances behind a load balancer, so each instance should connect to the same Redis server).
I set up the Redis server on the instance, modified the redis.conf file to bind the correct IP address so it can be accessed from the outside, opened up port 6379, then tried connecting to it from my local computer:
redis-cli -h IP
It worked fine, so I decided to flip my local site's configuration to point to the new Redis server.
The moment I did that, the site became painfully slow; at first I thought it might not load at all. After almost a minute it finally loaded the home page. Clicking around the site was almost as slow, though the time dropped to maybe 10-15 seconds. That is still unacceptable and doesn't even compare to the lightning-fast page loads when using the local Redis server.
My question is: is there some specific configuration I need to make the remote connection faster? Is there something preventing it from performing well, some bottleneck somewhere?
Let me know if you want me to add the drupal settings.php configuration, although I am using a pretty standard config.
I ran the same configuration for a PHP application as you are trying, and I had no issues hosting Redis on either a small or a medium instance while handling large amounts of traffic, so there must be a config issue somewhere. Another option to debug it would be to try switching to ElastiCache (AWS's managed Redis offering); it requires that all clients be within the same region, but it could make finding your problem very easy.
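Before changing any Drupal configuration, it is worth measuring the raw network round-trip from the web server to the remote Redis instance, because a cache-heavy page can issue hundreds of GETs per request and every millisecond of latency is multiplied. A minimal Predis sketch (the host and key name are placeholders):

<?php
require __DIR__ . '/vendor/autoload.php';

// Placeholder host: use the instance's private IP if both machines share a network/VPC.
$client = new Predis\Client([
    'scheme' => 'tcp',
    'host'   => '10.0.0.5',
    'port'   => 6379,
]);

// Time 100 sequential round-trips, roughly what a cache-heavy page does.
$start = microtime(true);
for ($i = 0; $i < 100; $i++) {
    $client->get('latency_test_key');
}
$elapsedMs = (microtime(true) - $start) * 1000;

printf("100 GETs: %.1f ms total, %.2f ms per round-trip\n", $elapsedMs, $elapsedMs / 100);

If each round-trip takes tens of milliseconds instead of well under one, the slowdown is network latency (for example connecting over the public IP, or across regions) rather than Redis itself.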

How to debug slow running stock install of Laravel 4 on Cpanel

I've installed an out-of-the-box L4 project on both my local workstation and my production cPanel server. It has no database connectivity and I'm just hitting the stock home page with the Laravel logo. My local install responds in 80ms or less, yet the production server, which is far more powerful, takes between 2.5 and 8 seconds to respond. It's awfully slow.
Debug is false. Every once in a while I'll get a fast response, but I can't make sense of the randomness. The server is a powerhouse with 8 cores and 16 GB of RAM, and it only hosts one other website. I can pull up robots.txt or a phpinfo file instantly, so it's not just a server issue. Here's the staging site that I'm working on: http://staging.dirtondirt.com/
How can I figure out or debug where the slowdown is?
You can use the popular Laravel DebugBar:
https://github.com/barryvdh/laravel-debugbar/tree/1.8
It gives you a 'timeline' of the different aspects of your application, so you can see what is causing the slow load times.
If that fails, another option is Blackfire.io, which is a newer service; I haven't tried it yet, but they support Laravel.
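Once the DebugBar package is installed you can also add your own measurements to its timeline around any suspect block; a small sketch for a Laravel 4 route, assuming the package's default Debugbar facade alias (the measure name is arbitrary):

<?php
// app/routes.php
Route::get('/', function () {
    // Wrap a suspect block so it shows up as its own segment in the DebugBar timeline.
    Debugbar::startMeasure('render-home', 'Rendering the stock home page');

    $html = View::make('hello')->render();   // the stock Laravel 4 welcome view

    Debugbar::stopMeasure('render-home');
    Debugbar::info('Rendered ' . strlen($html) . ' bytes');

    return $html;
});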
