I have a leased VPS with 2 GB of memory.
The problem is that I have a few Joomla installations, and the server becomes very slow to respond when more than 30-50 users are connected at the same time.
Do you have any tips, books, tutorials, or suggestions on how to improve response times in this situation?
Please give me only concrete and useful URLs; I would be very grateful.
I have attached part of an htop view of that VPS.
The easiest and cheapest thing you can do is install a bytecode cache, e.g. APC. That way, PHP does not need to parse and compile every file again and again.
If you're on Debian or Ubuntu, this is as easy as installing the php-apc package.
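For example (assuming a distribution-era PHP 5.x where APC is packaged as php-apc):

```sh
# Install the APC opcode cache and restart Apache so it is loaded
sudo apt-get install php-apc
sudo service apache2 restart

# Verify the extension is active
php -r 'var_dump(extension_loaded("apc"));'
```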
I'm going to guess that most of your issues will come from Joomla - I'd start by looking through this list: https://stackoverflow.com/search?q=joomla+performance
Other than that, you might want to investigate a PHP accelerator: http://en.wikipedia.org/wiki/List_of_PHP_accelerators
If you have any custom SQL, you might want to check that your queries are making good use of indexes.
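For instance, MySQL's EXPLAIN shows whether a query hits an index (the table and column here are invented for illustration):

```sql
-- type=ALL in the output means a full table scan;
-- type=ref or range means an index is being used
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- If no suitable index exists, add one on the filtered column
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);
```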
A quick look at your config suggests you're using the Apache prefork MPM - you might want to try the threaded worker MPM instead, though always benchmark each config change you make (Apache comes with a benchmarking tool) to ensure any changes have a positive effect.
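That tool is `ab` (ApacheBench); a quick before/after comparison might look like this (the URL is a placeholder):

```sh
# 1000 requests, 50 concurrent, against the page you are tuning;
# compare "Requests per second" before and after each config change
ab -n 1000 -c 50 http://your-vps.example/index.php
```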
Some other links:
http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/
Though this is for WordPress, the principles should still apply.
http://blog.mydream.com.hk/howto/linux/performance-tuning-on-apache-php-mysql-wordpress
A couple of things to pay close attention to:
You never want your server to run out of memory. Ensure your Apache config limits the number of children to within your available memory.
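A rough prefork sizing sketch (the numbers are illustrative; divide the memory you can spare for Apache by the per-child footprint you observe in htop):

```apache
# Apache 2.2 prefork MPM: e.g. ~1.5 GB for Apache / ~50 MB per child ≈ 30
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           30
    MaxRequestsPerChild 500
</IfModule>
```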
Running SHOW PROCESSLIST on MySQL and looking for long-running queries can highlight some easy wins, as nothing kills performance like a slow SQL query.
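For example (the slow-query-log variables assume MySQL 5.1+):

```sql
-- Long values in the Time column (seconds) point at candidates to optimize
SHOW FULL PROCESSLIST;

-- Also consider logging anything slower than a couple of seconds
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;
```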
There is an application built on Laravel, and it should be able to handle a load of 1000 requests per second.
I have done the following tasks:
1. The Composer autoloader has been dumped/optimized.
2. Query results are cached.
3. All views have been minified.
What else should I consider?
(The app runs in a Docker container.)
How are you measuring whether you reach the target TPS? I would first establish a baseline, in order to know how far off you are, and based on that start looking into which part of your application stack is the bottleneck (this includes the web and database servers and any other services used). Tools available for this include JMeter and Apache Bench.
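For a quick baseline (the URL and test-plan file are placeholders):

```sh
# ApacheBench: 10,000 requests, 100 concurrent
ab -n 10000 -c 100 https://your-app.example/api/endpoint

# Or run a JMeter test plan headlessly and collect the results
jmeter -n -t load-test.jmx -l results.jtl
```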
In order to reach 1000 TPS, you'll need to tweak the web server to allow for this type of load. How to approach this depends on the web server used, so it is difficult to provide specifics.
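As an illustration, if the web server were nginx, the relevant knobs would look something like this (values are placeholders to benchmark, not recommendations):

```nginx
# nginx.conf - concurrency-related settings
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # simultaneous connections per worker
}

http {
    keepalive_timeout 15;       # release idle connections quickly
    gzip on;                    # shrink response bodies
}
```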
With regard to your DB server, there are tools available to analyze it as well, such as pgBadger (Postgres), or log files dedicated to slow queries.
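For Postgres (9.4+), for example, you could enable slow-query logging and then feed the log to pgBadger:

```sql
-- Log every statement slower than 500 ms
ALTER SYSTEM SET log_min_duration_statement = 500;
SELECT pg_reload_conf();
```

Afterwards, something like `pgbadger /var/log/postgresql/postgresql-12-main.log -o report.html` (the log path is an example) produces an HTML report of the slowest queries.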
Ultimately, you would also want to be on one of the latest PHP versions, as there are significant performance gains in every new release. At the time of writing, the latest released PHP version is 7.4.
In my opinion these tweaks would yield a greater performance gain than tweaking the PHP code (assuming there is no misuse of PHP). But this of course depends on the specifics of your application.
Optionally, you should also be able to scale horizontally (as opposed to vertically), increasing the total TPS by the per-server TPS with each application server you add.
Tips to improve Laravel performance (see the commands after this list):
- Config caching
- Route caching
- Remove unused services
- Classmap optimization
- Optimizing the Composer autoload
- Limit the use of plugins
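Several of these map directly to artisan and Composer commands; a minimal sketch:

```sh
# Cache the merged configuration and the compiled route table
php artisan config:cache
php artisan route:cache

# Generate an optimized, classmap-authoritative autoloader
composer dump-autoload --optimize
```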
How can I reduce the wait time? Sometimes it's over 5-6 seconds; please see here: https://tools.pingdom.com/#59ebd2b2c6000000
I am using AppServ v2.7.
Please help. I am already using gzip compression and plugins to improve my page speed, but the page takes longer than 9 s to load.
Thank you.
This might take a combination of activities; here is what I ended up doing for my WordPress blogs (a sample nginx snippet for the last two items follows the list):
- moving from shared hosting to a dedicated server
- switching from Apache to nginx
- using a CDN
- using minifiers
- using an nginx frontend config that connects to Varnish, which connects to an nginx backend config
- switching from PHP 5.6 to PHP 7.2
- fine-tuning the PHP, nginx, and Varnish configs
- enabling gzip
- using browser caching
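For those last two items, a minimal nginx sketch (standard directives, illustrative values):

```nginx
# Compress text-based responses
gzip on;
gzip_types text/css application/javascript application/json;

# Let browsers cache static assets for 30 days
location ~* \.(css|js|png|jpg|jpeg|gif|ico|woff2?)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```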
Can you describe in more detail what sits between your app and the served request? I.e., are you using an app-based page cache like W3 Total Cache?
Are you using a reverse-proxy cache like Fastly or Varnish? Are your assets being served from a CDN like S3? Do you have an opcode cache on the server?
Tending to all of these issues can be time-consuming.
One faster 'trick' can be to move your nameservers to Cloudflare DNS and use their free caching layer to see what kind of easy performance gains you get.
Another good diagnostic method is the combination of Google PageSpeed Insights and the Chrome DevTools Network tab, where you can see the request timing graph and potential bottlenecks.
Maybe there is something obvious, like serving uncompressed 4 MB images.
Note that none of these will necessarily address your root issue. WP is actually not that slow in vanilla form; it slows down as various plugins are added to it.
The final solution would be a deep dive into your code, using call graphs and tools like Webgrind for analysis; WP plugin and theme development is very prone to inefficient DB queries and multiple redundant 'loop' instantiations.
I was wondering if someone could give a high-level answer about how to track down the functions that are causing a slowdown.
We have a site with 6,000 lines of code, and at times there's a significant slowdown.
What would be the best way to track the source of these occasional slowdowns? Should we attach an execution-time tracker to each function, or would you recommend something else?
It's a standard LAMP stack setup with PHP 5.2.9 (no frameworks).
The only way to properly track down why and where a script is slowing down is by using a profiler.
There are a few of these available for PHP. Some require that you install a module on the server, some use a PHP-only library, and others are standalone.
My preferred profiler is Zend Studio, mainly because I use it as my IDE. It has the benefit of being both standalone and usable in conjunction with server-side modules (or the Zend Server package), allowing you to profile both locally and on production systems.
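As another concrete option (not mentioned above, so treat it as a suggestion): Xdebug 2 ships a profiler that writes cachegrind files, which tools like Webgrind or KCachegrind can visualize:

```ini
; php.ini - Xdebug 2.x profiler settings
zend_extension=xdebug.so
xdebug.profiler_enable=1           ; profile every request
xdebug.profiler_output_dir=/tmp    ; cachegrind.out.* files land here
```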
One of the easiest things to look for, however, is SELECT queries inside loops. They are notorious for causing slowdowns, especially once the table being queried holds more than a few hundred records.
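A sketch of the anti-pattern and its fix (the table, column, and variable names are invented for illustration):

```php
<?php
// Anti-pattern: one query per iteration ("N+1 queries")
foreach ($userIds as $id) {
    $stmt = $pdo->prepare('SELECT * FROM users WHERE id = ?');
    $stmt->execute(array($id));
    $users[] = $stmt->fetch();
}

// Better: a single query fetching all rows at once
$in = implode(',', array_fill(0, count($userIds), '?'));
$stmt = $pdo->prepare("SELECT * FROM users WHERE id IN ($in)");
$stmt->execute($userIds);
$users = $stmt->fetchAll();
```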
Another is if you have multiple AJAX calls in rapid succession while using the default PHP session handler (flat files). This can cause loading times to increase significantly, because the file I/O operations lock: only one request that uses the session can be handled at a time, even though AJAX is by its very nature asynchronous.
The best way to combat this is to use or write a custom session handler that stores sessions in a database. Just make sure you don't saturate the DB connection limit.
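A minimal sketch of such a handler using session_set_save_handler(), compatible with PHP 5.2 (the sessions table and mysqli credentials are assumptions):

```php
<?php
// Assumed table: CREATE TABLE sessions (id VARCHAR(128) PRIMARY KEY,
//                                       data TEXT, updated_at INT)
$db = new mysqli('localhost', 'user', 'pass', 'app');

function sess_open($path, $name) { return true; }
function sess_close()            { return true; }

function sess_read($id) {
    global $db;
    $stmt = $db->prepare('SELECT data FROM sessions WHERE id = ?');
    $stmt->bind_param('s', $id);
    $stmt->execute();
    $stmt->bind_result($data);
    return $stmt->fetch() ? $data : '';   // must return a string
}

function sess_write($id, $data) {
    global $db;
    $stmt = $db->prepare(
        'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, ?)');
    $now = time();
    $stmt->bind_param('ssi', $id, $data, $now);
    return $stmt->execute();
}

function sess_destroy($id) {
    global $db;
    $stmt = $db->prepare('DELETE FROM sessions WHERE id = ?');
    $stmt->bind_param('s', $id);
    return $stmt->execute();
}

function sess_gc($maxlifetime) {
    global $db;
    $cutoff = time() - $maxlifetime;
    $stmt = $db->prepare('DELETE FROM sessions WHERE updated_at < ?');
    $stmt->bind_param('i', $cutoff);
    return $stmt->execute();
}

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
session_start();
```

Because each request writes its row with a single REPLACE rather than holding a file lock for its entire duration, concurrent AJAX requests no longer serialize on the session file.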
First and foremost though: Get yourself a proper profiler. ;)
I need to get the status of services across a large number of servers in order to calculate uptime percentages. I may need to use multiple servers to do the checking. Does anyone know of a reliable way to queue them to be checked at a specific time/interval?
I'm writing the application in PHP, but I'm open to using other languages and tools for this. My only requirement is that it must run on Linux.
I've looked into things like Gearman for job queuing, but I haven't found anything that would work well.
In order to get uptime percentages for your services, you can execute commands that check service status and log the results for further analysis/calculation. The following are some ways of doing this (a simple cron-based sketch follows the list):
1. System commands like top, free -m, vmstat, iostat, iotop, sar, netstat, etc. Nothing comes near these Linux utilities when you are analysing/debugging a problem; they give you a clear picture of what is going on inside your server.
2. SeaLion: its agent executes all the commands mentioned in #1, as well as custom commands, and their output can be viewed in a web interface. This tool comes in handy when you are working across hundreds of servers, as installation is simple. And it's free.
3. Nagios: the mother of all monitoring/alerting tools. It is very customizable but difficult for beginners to set up, although there are plenty of Nagios plugins.
4. Munin
5. Server Density: a cloud-based paid service that collects important Linux metrics and gives users the ability to write their own plugins.
6. New Relic: another well-known hosted monitoring service.
7. Zabbix
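As a simple roll-your-own approach (the service name, log path, and one-minute interval are assumptions), a script can record an UP/DOWN sample, and the uptime percentage falls out of the log:

```sh
#!/bin/sh
# /usr/local/bin/check-nginx.sh - append a timestamped UP/DOWN sample
if service nginx status >/dev/null 2>&1; then
    echo "$(date +%s) UP" >> /var/log/nginx-uptime.log
else
    echo "$(date +%s) DOWN" >> /var/log/nginx-uptime.log
fi
```

An /etc/cron.d entry such as `* * * * * root /usr/local/bin/check-nginx.sh` runs it every minute; the uptime percentage is then UP samples over total samples, e.g. `awk '{t++} $2=="UP"{u++} END{printf "%.2f%%\n", 100*u/t}' /var/log/nginx-uptime.log`.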
Okay, so I've been running some pretty large queries on my site, and they have been eating up MySQL resources. My admin asked whether I've tried different PHP accelerators, but I've never installed one before. So I did some research on them, and I'm curious: do I need to make any modifications to my actual PHP code, or do I just install an accelerator and let it take effect? I need ways to optimize my load and reduce the amount of resources being used on the server.
"PHP accelerators" are opcode caches; they save the server from having to re-interpret PHP files on every request. The savings is somewhere in the realm of 1% of CPU load, and it won't help you one bit if your problem is with the database's resource usage.
Most PHP accelerators work by caching the compiled bytecode of PHP scripts to avoid the overhead of parsing and compiling source code on each request (some or all of which may never even be executed). To further improve performance, the cached code is stored in shared memory and directly executed from there, minimizing the amount of slow disk reads and memory copying at runtime.
Source: http://en.wikipedia.org/wiki/PHP_accelerator
Sounds to me like you need to accelerate your SQL queries, not your PHP code.
Here is a list of PHP accelerators that you can evaluate and install:
http://en.wikipedia.org/wiki/List_of_PHP_accelerators
I've used APC, which I believe is one of the most popular PHP accelerators. Besides opcode caching, it provides a user cache (apc_store / apc_fetch) that lets you cache a function's return value for a given set of arguments, so subsequent calls with the same arguments read the cached result instead of recomputing everything.
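A minimal sketch of that memoization pattern (expensive_report() is a hypothetical heavy function):

```php
<?php
// Memoize an expensive computation with APC's user cache
function cached_report($month) {
    $key = 'report_' . $month;
    $report = apc_fetch($key, $success);
    if (!$success) {
        $report = expensive_report($month); // hypothetical
        apc_store($key, $report, 3600);     // cache for one hour
    }
    return $report;
}
```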