How can I monitor my web application's processes locally? I'm asking because my production site on my current host (Bluehost) shows numerous processes for my web app and API (around 35-40), which causes the application to load indefinitely.
When I run it on my local machine there's no problem and the application works flawlessly. The scenario is hard to replicate since it only occurs on my web hosting. I have several websites hosted there, and only my Laravel application seems to have the issue; my WordPress and CodeIgniter apps run smoothly.
I have already contacted support, and according to them the performance issue is caused by my code, but I don't see any mistake in my code since it runs fast locally.
I have already set up caching for it (routes and database included), yet the problem still persists.
Try running php artisan optimize during deployment and make sure that Config::get('app.debug') is not true.
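If you also want to actually watch the process count on the box (locally or over SSH on the host), something like the rough sketch below can help. This assumes a Unix-like host where the ps command is available; the 'php' filter and the 5-second interval are just examples, not anything specific to Bluehost.
<?php
// count-php-processes.php - rough monitoring sketch (assumes a Unix-like host with ps)
// Run it from the CLI while you hit the application and watch how many PHP
// processes accumulate over time, then compare local vs. the hosting account.
while (true) {
    $count = (int) trim((string) shell_exec("ps -eo comm | grep -c '[p]hp'"));
    printf("[%s] PHP processes: %d\n", date('H:i:s'), $count);
    sleep(5);
}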
I am developing a PHP Laravel application. I noticed that it is very slow, and after careful debugging I realized that the processing itself is not slow; the problem is that the connection takes too long to terminate, and during that time no new request is accepted.
The following figure shows the performance analysis of a request. Regardless of the type of request, it takes 20-30 seconds to close a connection.
The following figure shows the logs of the local run. The first request was accepted, and the second one only after the first had been closed, which took more than 20 seconds, even though the response itself comes back in a few milliseconds.
Does anyone know how to fix this?
There is not a lot to go on in your question; however, one thing we can see from the log messages is that you are using the PHP built-in server (or php artisan serve, which uses it). This server is well known to be very slow and single-threaded, and it can cause PHP applications to stall:
Should I rely on "php artisan serve" for a locally based project?
PHP built-in web server running very slow
Using the PHP built-in server in production
Max concurrent connections for php's built in development server
Presumably this is your local dev environment - the PHP docs warn not to use the built-in server on the public internet:
Warning
This web server is designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
Using nginx or Apache with PHP on your production server, you should see much better performance. For local development you might be better off setting up Docker, WAMP, XAMPP, or the servers included with your distro if you're on some flavour of *nix.
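If you're unsure which server is actually handling requests after switching, one quick (and admittedly crude) check is to log what PHP reports as the server software; the exact string varies by setup, but the built-in server typically identifies itself as a "Development Server":
<?php
// Log which web server handled this request; the exact SERVER_SOFTWARE string
// varies by setup (built-in server, Apache, nginx + PHP-FPM, etc.).
error_log('Handled by: ' . ($_SERVER['SERVER_SOFTWARE'] ?? 'unknown'));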
Does anyone know a solution for deploying a PHP web app behind a firewall on mainly Windows servers? We have 100+ customers who host our web app on premises, and we would like to set up a deployer as part of our Bitbucket pipeline so that our code gets deployed to all installations.
1 customer = 1 installation aka deployment
Today we use a small PHP script, and some version control software, to pull code changes once every day. It runs on both Linux and Windows servers.
Hit me with any solutions :)
You can make use of PHPDeployer.
You can set up SSH access on the servers and then configure the deployment script to target each server's IP.
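As a rough illustration (not a drop-in config), a minimal deploy.php for PHPDeployer might look something like the sketch below; the hosts, repository URL, and deploy path are placeholders, and the exact recipe and option names depend on the Deployer version you install.
<?php
// deploy.php - minimal sketch; hosts, repository and paths are placeholders
namespace Deployer;

require 'recipe/common.php';

set('repository', 'git@bitbucket.org:example/webapp.git');
set('deploy_path', '/var/www/webapp');

// One host entry per customer installation, reachable over SSH
host('203.0.113.10')->set('remote_user', 'deploy');
host('203.0.113.11')->set('remote_user', 'deploy');

// Deploy to every configured host with: vendor/bin/dep deploy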
I have a production virtual web server that is being migrated to a new virtual web server on the same local network. The problem is that there is a performance issue on the new server.
For example, there is one page that loads in about 1 second on the original server, but takes over 25 seconds to load on the new one. I have already ruled out the database connection as the problem.
Both servers are Ubuntu Apache servers running PHP. There are slight differences between the server versions, which I will list as best I can here.
My main question is: is there a general way to profile the web requests on each server?
Similar to the way I can profile a Python script or function and get a breakdown of which parts of the program take the most time, I would like to profile the web requests on one server compared to the other.
Of course, web requests to a server are fundamentally different from programs run on a local computer, but I need to find where the bottleneck is. Any help is greatly appreciated.
Old Server Config
Ubuntu 14.04 - PHP version 5.5.9
New Server Config
Ubuntu 16.04 - PHP version 5.6.31 (also tested with version 7, same result)
I would suggest logging the PHP script execution time.
If the slowdown comes from somewhere in the PHP execution, you will notice it easily.
Write a log entry at the start of the script and one at the end. Then you can stress test both servers and compare the execution times.
I seriously doubt the problem comes from PHP itself, but if you do this you could also measure the difference with PHP 7, which should be roughly 30% faster.
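As a sketch of that kind of timing log (the log file path is just a placeholder), something like this near the top of the entry script records how long each request takes, so you can compare the two servers under the same load:
<?php
// Log total execution time of the request (placeholder log path).
$start = microtime(true);

register_shutdown_function(function () use ($start) {
    $elapsedMs = (microtime(true) - $start) * 1000;
    // Append one line per request: timestamp, URI, elapsed milliseconds
    error_log(sprintf(
        "%s %s %.1f ms\n",
        date('c'),
        $_SERVER['REQUEST_URI'] ?? 'cli',
        $elapsedMs
    ), 3, '/var/log/app-timing.log');
});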
I have a server hosted on DigitalOcean responding within a 70-300 ms window depending on the request. With the same code hosted on GAE, whether hit externally or from a local server, the response time goes up three times, even when I just put dd('test') in the index.php file. It seems the GAE PHP interpreter is extremely slow. Has anyone experienced this problem and fixed it?
What I found out is that it takes longer for GAE to boot Laravel, or even Lumen, than it takes my $5 DigitalOcean server to respond. It's ridiculously slow.
I've been working on a cloud-based (AWS EC2) PHP web application, and I'm struggling with one issue when it comes to working with multiple servers (all under an AWS Elastic Load Balancer). On a single server, when I upload the latest files they're instantly in production across the entire application. But this isn't true with multiple servers: you have to upload files to each of them, every time you commit a change. That could work if you don't update anything very often, or if you only have one or two servers. But what if you update the system multiple times in one week, across ten servers?
What I'm looking for is a way to 'commit' changes from our dev or testing server and have them 'pushed' out to all of our production servers immediately. Ideally the update would be applied to one server at a time (even though it only takes a second or two per server) so the ELB does not send traffic to it while files are changing, to avoid disrupting any production traffic flowing through the ELB.
What is the best way of doing this? One of my thoughts was to use SVN on the dev server, but that doesn't really 'push' to the servers. I'm looking for a process that takes just a few seconds to commit an update and then begins applying it to the servers. Also, for those of you familiar with AWS, what's the best way to keep an AMI up to date so the auto-scaler always launches new instances with the latest version of the software?
There have to be good ways of doing this. I can't really picture sites like Facebook, Google, Apple, Amazon, Twitter, etc. updating hundreds or thousands of servers manually, one by one, whenever they make a change.
Thanks in advance for your help. I'm hoping we can find a solution to this problem; what must be at least 100 Google searches by my business partner and me over the last day have been mostly unsuccessful.
Alex
We use scalr.net to manage our web servers and load balancer instances. It has worked pretty well so far. We have a server farm for each of our environments (two production farms, staging, and sandbox). We have preconfigured roles for the web servers, so it's super easy to launch new instances and scale when needed. Each web server pulls code from GitHub when it boots up.
We haven't completed all the deployment changes we want to do, but basically here's how we deploy new versions into our production environment:
We use Phing to update the source code and run the deployment on each web server. We created a task that executes a git pull and runs the database changes (the dbdeploy Phing task). http://www.phing.info/trac/
We wrote a shell script that executes Phing and added it to Scalr as a script. Scalr has a nice interface for managing scripts.
#!/bin/sh
cd /var/www
phing -f /var/www/build.xml -Denvironment=production deploy
Scalr has an option to execute scripts on all the instances in a specific farm, so for each release we just push to the master branch on GitHub and execute the Scalr script.
We want to create a GitHub hook that deploys automatically when we push to the master branch. Scalr has an API that can execute scripts, so this should be possible.
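As a very rough sketch of what that hook endpoint could look like (the webhook secret and script path are placeholders, and this simply reuses the shell script above instead of going through the Scalr API):
<?php
// webhook.php - hypothetical GitHub push hook (placeholder secret and paths)
$secret  = 'replace-with-webhook-secret';
$payload = file_get_contents('php://input');

// Verify the X-Hub-Signature header GitHub sends with each delivery
$signature = $_SERVER['HTTP_X_HUB_SIGNATURE'] ?? '';
$expected  = 'sha1=' . hash_hmac('sha1', $payload, $secret);
if (!hash_equals($expected, $signature)) {
    http_response_code(403);
    exit('invalid signature');
}

// Only deploy on pushes to the master branch
$event = json_decode($payload, true);
if (($event['ref'] ?? '') === 'refs/heads/master') {
    // Reuse the deployment shell script shown above (placeholder path)
    shell_exec('/var/www/deploy.sh > /dev/null 2>&1 &');
}
http_response_code(200);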
Have a good look at KwateeSDCM. It enables you to deploy files and software to any number of servers and, if needed, to customize server-specific parameters along the way. There's a post about deploying a web application to multiple Tomcat instances, but the tool is language-agnostic and will work just as well for PHP, as long as you have SSH enabled on your AWS servers.