TTFB too high on one computer, not on another - php

I have a hotel web app, written in PHP/MySQL, that monitors guest check-ins.
The check-in interface is composed of AJAX calls that return the data needed to complete a check-in. On the server where the web app is installed, the TTFB is 3.58 seconds.
For comparison, I copied the web files and the whole database,
installed XAMPP on my home computer, and ran the same process there: I got 100 ms.
There aren't many differences between the two setups, except that their server is faster than my home computer; all the files and the database are identical. What could be the culprit?
Here's a screenshot from Chrome's developer tools (home computer).
And here's the same view from their server.
I hope I provided all the information you guys need to troubleshoot the issue.
I appreciate all the help I can get. Thank you.
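A TTFB of 3.5 s almost always means the time is spent server-side (PHP execution, MySQL queries, or name lookups), not in transfer. One way to narrow it down is to split the request into phases with curl; the URL below is only a placeholder for one of the slow AJAX endpoints:

```shell
# Split the request time into phases. A large time_starttransfer with a small
# time_connect points at server-side work (PHP/MySQL); a large time_namelookup
# points at DNS. The URL is a placeholder for one of the slow AJAX calls.
curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" \
  "http://their-server/checkin/load_checkin.php"
```

If the time is in the server-side phase on an otherwise identical setup, the usual suspects are slow name resolution from PHP (for example, connecting to MySQL by hostname instead of 127.0.0.1) or missing MySQL indexes; the slow query log can confirm the latter.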

Related

How to deploy a php web app behind firewall?

Does anyone know a solution for deploying a PHP web app behind a firewall, mainly on Windows servers? We have 100+ customers who host our web app on premises, and we would like to set up a deployer as part of our Bitbucket pipeline so our code gets deployed to all installations.
1 customer = 1 installation aka deployment
Today we use a small PHP script, plus some version-control software, to pull code changes once a day. It runs on both Linux and Windows servers.
Hit me with any solutions :)
You can make use of PHP Deployer.
Set up SSH access on the servers, then configure the deploy script with the IP of each server you want to deploy to.
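A minimal sketch of the workflow, assuming Composer is available and SSH keys are already distributed to the target servers (the stage name is an assumption about your deploy.php):

```shell
# Install Deployer as a dev dependency, then deploy to the hosts
# defined in deploy.php (server IPs and stages are configured there).
composer require --dev deployer/deployer
vendor/bin/dep deploy production
```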

Application Pool hang and keeps loading

I have a problem. We have an in-house server in our office: a powerful machine running Windows Server 2016. We developed our own mobile app, hosted on that server, and we use PHP on IIS for the app's web services.
It used to run smoothly, but now the application pool sometimes hangs: if I access the API URL, it just keeps loading. I have no idea what the problem is. I added a worker process and it worked, so I thought that was the solution, but two days later we hit the same problem again.
Every time it happens we're down for 5 to 10 minutes, because I have to recycle the application pool, or switch the API to another pool so I can stop and restart the one it normally uses.
By the way, the app we developed is for uploading pictures: it sends base64-encoded images in JSON, up to 8 pictures at a time, and we have about 500 users.
Does anyone have an idea how to resolve this problem? I'm struggling right now. Please, I need help!
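Not a fix for the root cause, but as a stopgap the hung pool can be recycled from the command line instead of being swapped by hand; the pool name here is a placeholder:

```shell
rem Recycle the hung IIS application pool (run in an elevated prompt on the server).
rem "MobileAppApi" is a placeholder for your pool's actual name.
%windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"MobileAppApi"
```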

Joomla on Azure Web App - performance issues

I'm trying to get Joomla running on Azure Web Apps, but the performance is terrible: 3-5 second response times no matter what operation I perform.
I tried the Bitnami VM and it runs much, much smoother, so the problem is evidently in the Web App itself. I have tried both ClearDB and the internal MySQL preview database, and both give the same laggy result. It doesn't matter what size machines I run on; they all lag.
The only thing I haven't tried is upgrading the ClearDB database from the free tier to a paid one, simply because my subscription doesn't allow it. Since I see the same performance issue with the internal MySQL database, this shouldn't matter.
My guess is that this is an issue with PHP on IIS. Has anyone had a similar experience, and possibly a solution?
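One thing worth checking before blaming the database: on Azure Web Apps, PHP reads its files from a network share, which can add seconds per request. Enabling the WinCache file and opcode caches sometimes helps noticeably; the fragment below assumes the WinCache extension is loaded on your plan (a guess, not verified):

```ini
; Hypothetical .user.ini fragment: cache file contents and compiled opcodes
; so PHP doesn't hit the network share on every request.
wincache.fcenabled = 1
wincache.ocenabled = 1
```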

Deploy Content to Multiple Servers (EC2)

I've been working on a cloud-based (AWS EC2) PHP web application, and I'm struggling with one issue when it comes to working with multiple servers (all under an AWS Elastic Load Balancer). On one server, when I upload the latest files, they're instantly in production across the entire application. But this isn't true when using multiple servers: you have to upload the files to each of them every time you commit a change. That could work if you don't update anything very often, or if you only have one or two servers. But what if you update the system multiple times in one week, across ten servers?
What I'm looking for is a way to 'commit' changes from our dev or testing server and have them 'pushed' out to all of our production servers immediately. Ideally the update would be applied to only one server at a time (even though it takes just a second or two per server), so the ELB won't send traffic to that server while its files are changing, and no production traffic flowing through the ELB is disrupted.
What is the best way of doing this? One of my thoughts was to use SVN on the dev server, but that doesn't really 'push' to the servers. I'm looking for a process that takes just a few seconds to commit an update and then begins applying it to the servers. Also, for those of you familiar with AWS, what's the best way to update an AMI with the latest code so the auto-scaler always launches new instances with the latest version of the software?
There have to be good ways of doing this; I can't really picture sites like Facebook, Google, Apple, Amazon, Twitter, etc. going through and updating hundreds or thousands of servers manually, one by one, whenever they make a change.
Thanks in advance for your help. I'm hoping we can find a solution to this problem; what must be at least a hundred Google searches by my business partner and me over the last day have mostly proven unsuccessful.
Alex
We use scalr.net to manage our web server and load-balancer instances, and it has worked pretty well so far. We have a server farm for each of our environments (two production farms, staging, sandbox). We have preconfigured roles for the web servers, so it's easy to launch new instances and scale when needed; each web server pulls code from GitHub when it boots.
We haven't completed all the deployment changes we want to do, but basically here's how we deploy new versions into our production environment:
We use Phing to update the source code and run the deployment on each web server. We created a task that executes a git pull and applies database changes (the dbdeploy Phing task). http://www.phing.info/trac/
We wrote a shell script that executes Phing and added it to Scalr as a script; Scalr has a nice interface for managing scripts.
#!/bin/sh
# Run the Phing "deploy" target against the production environment.
cd /var/www
phing -f /var/www/build.xml -Denvironment=production deploy
Scalr has an option to execute scripts on all the instances in a specific farm, so for each release we just push to the master branch on GitHub and execute the Scalr script.
We want to create a GitHub hook that deploys automatically when we push to the master branch; Scalr has an API that can execute scripts, so it's possible.
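The one-server-at-a-time rollout the question asks for can be sketched as a small shell loop on top of any per-server deploy command. Everything here is illustrative: in practice the command would be an ssh invocation such as `ssh "$host" "cd /var/www && git pull"`.

```shell
#!/bin/sh
# Run a deploy command against each server in sequence, so the ELB keeps
# serving traffic from the servers that are not currently being updated.
rolling_deploy() {
  cmd=$1; shift                  # first arg: per-server command; rest: hosts
  for host in "$@"; do
    "$cmd" "$host" || return 1   # stop the rollout if one server fails
  done
}

# Example with a stand-in command that just prints each host name:
rolling_deploy echo web1 web2 web3
```

A real version would also deregister the instance from the ELB before updating it and re-register it afterwards, which is what keeps traffic away while files are changing.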
Have a good look at KwateeSDCM. It enables you to deploy files and software on any number of servers and, if needed, to customize server-specific parameters along the way. There's a post about deploying a web application on multiple Tomcat instances, but the tool is language-agnostic and will work for PHP just as well, as long as you have SSH enabled on your AWS servers.

Automatically Restarting a chat server application on system restart

I developed a chat application with an accompanying chat server. Everything is working fine. The issue is that whenever the chat server goes down (for instance, when the server machine shuts down because of a power failure or some other problem), the chat server has to be restarted manually once the machine comes back on.
I believe it would be more appropriate for the chat server application to restart itself when the computer comes back on, regardless of who is logged in, and indeed before anyone logs in. I have a batch file that launches the chat server. My attempt was to create a Windows service that starts automatically and runs this batch file under a Network Service account on the server. I'm having a hard time with this at the moment, so I'd like to ask whether there are any alternatives to using a Windows service. Suggestions are highly appreciated.
Creating a Windows service would be the better solution, but you can also add your batch file to the Startup folder.
I think you already have the better solution (a Windows service). Adding an email alert, or some other kind of alert, when the server restarts would also be handy.
I would probably just start the server using the Windows Task Scheduler. You can set a task to start on system startup.
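Such a task can also be created from the command line; the task name and path below are placeholders:

```shell
rem Create a task that runs the chat server's batch file at boot, as SYSTEM,
rem before anyone logs in. Task name and path are placeholders.
schtasks /create /tn "ChatServer" /tr "C:\chatserver\start_server.bat" /sc onstart /ru SYSTEM
```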
