[Question closed 28 days ago — needs debugging details]
Good morning,
We have a Debian 11 server on Google Cloud Platform. Only the apache2 service is installed on it, but it hosts about 10 sites. These sites run PHP 7 and were built with Symfony 2.8. The sites' databases are hosted on another server, in another country.
We have noticed that after two days the server shows 27 GB of buffer/cache usage. Can anyone explain why the cache is so full? Could it be the many queries?
Thank you very much.
We have enabled OPcache, and it does evict the oldest entries, but that has not solved the problem.
Don't worry about that.
Linux uses most of the otherwise unused memory for buffers and cache. This is a form of optimization: the kernel keeps only a little memory truly free, because cached pages can be handed back instantly whenever a program needs memory.
Current status:
402 MiB used by programs (the most relevant value!)
3 GiB currently unused
27 GiB occupied by the system for optimization (buffers and cache). If free memory runs low, the system reclaims memory from here.
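A quick way to see this breakdown on the server itself is `free` (column layout assumed from the standard procps output):

```shell
# Show memory in human-readable units. "buff/cache" is reclaimable
# page cache, not a leak; the "available" column estimates how much
# memory new programs can claim without swapping, and is the number
# to watch rather than "free".
free -h
```

If you want to confirm the cache really is reclaimable, `sync; echo 3 | sudo tee /proc/sys/vm/drop_caches` empties it, but there is no reason to do this on a production machine.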
[Question closed 2 years ago — not about a specific programming problem]
There is an application built on Laravel, and it should be ready for a load of 1000 requests per second.
I have done the following:
1. Dumped the Composer autoloader
2. Cached query results
3. Minified all views
What else should I consider? (The app runs in a Docker container.)
How are you measuring whether you reach the target TPS? I would first get a baseline so you know how far off you are, and based on that start looking into which part of your application stack is the bottleneck (this includes the web server, the database server, and any other services used). Tools available for this include JMeter and Apache Bench.
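A minimal baseline run with Apache Bench might look like this (URL and request counts are placeholders to adjust for your setup):

```shell
# 10,000 requests at 100 concurrent connections; check the
# "Requests per second" line and the percentile latency table
# in the output to establish your baseline.
ab -n 10000 -c 100 https://example.com/
```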
In order to reach 1000 TPS you'll need to tune the web server to allow for this type of load. How to approach this depends on the web server used, so it is difficult to give specifics.
For the database server there are benchmarking tools available as well, such as pgBadger (Postgres), or log files dedicated to slow queries.
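For MySQL/MariaDB specifically, the slow query log is the usual starting point; a sketch of the relevant config (thresholds and paths here are illustrative):

```ini
# my.cnf fragment -- log statements slower than 100 ms
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 0.1
```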
Ultimately you would also want to be on one of the latest PHP versions, as there are quite some performance gains in every new release. At the time of writing, the latest released PHP version is 7.4.
In my opinion these tweaks will yield a greater performance gain than tweaking the PHP code (assuming there is no misuse of PHP). But this of course depends on the specifics of your application.
Optionally you should also be able to scale horizontally (as opposed to vertically), increasing total TPS with each additional application server.
Tips to improve Laravel performance:
Config caching
Route caching
Removing unused services
Classmap optimization
Optimizing the Composer autoloader
Limiting use of plugins
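The first few tips map directly onto commands you can run in the app root (a sketch, assuming a Laravel 5+ project):

```shell
php artisan config:cache   # merge all config files into one cached file
php artisan route:cache    # serialize the route table
composer dump-autoload -o  # optimized (classmap) autoloader
```

Remember to re-run `config:cache` and `route:cache` after every deploy, since they freeze the current state.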
[Question closed 5 years ago — needs details or clarity]
We have approximately 1 million WooCommerce products. WP Rocket is already enabled with Cloudflare, plus the Varnish cache, and I use 20 active plugins.
Server: Intel Xeon E5, 6 cores, 24 GB RAM, SSD storage, with cPanel/WHM.
Can you help me optimize my server and WP?
If you haven't already done so, you might want to use a tool like http://www.webpagetest.org to identify any problem areas.
From the waterfall view, if the time to first byte is much more than 500 ms, then you want to focus on your server config.
If the delay is between time to first byte and start of render, then examine the results to figure out why.
You'll also see what improvements, if any, can be gained from compressing images and reducing file sizes.
Concatenating files almost always gets good results, so make sure you are using the WP Rocket option to concatenate CSS and JS files (Static Files → Combine files).
On large CMS sites I've had good results installing Google's PageSpeed module.
There's an overview of the filters available with PageSpeed once it's installed.
One of the things I like about PageSpeed is that you have good control over how you configure it, and since it works at the server level it gives you options that aren't available at the plugin level.
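As a sketch of that server-level configuration, enabling the module and choosing filters in Apache looks like this (the filter names are just examples from mod_pagespeed's standard set):

```apacheconf
# pagespeed.conf fragment
ModPagespeed on
ModPagespeedEnableFilters combine_css,collapse_whitespace
ModPagespeedDisableFilters lazyload_images
```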
Good luck!
[Question closed 8 years ago — not about a specific programming problem]
Please excuse me if I'm being thick here, or for my lack of research on Stack Overflow, but I'm currently under pressure to get a live server stable, so I'm researching various options.
We noticed on our server (Windows 2008/IIS7) that the tmp folder had about a million session files (garbage collection seems not to be working). Could this be causing load-time problems on my PHP sites? For the past 2-3 weeks I have noticed load problems.
My theory is that when PHP creates a session and wants to read it, it has to go through those million files to find my session.
Any thoughts? Happy to be told my theory is totally wrong.
UPDATE SO FAR
Clearing over a million session files seems to have reduced pressure on the HDD, and the PHP sites seem more responsive than before. Waiting for the load to increase to see if these session files were causing the initial issue.
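For reference, clearing stale session files can be scripted. The path below is the common Linux default and is an assumption — on Windows/IIS, check what session.save_path points at in php.ini and adapt accordingly:

```shell
# Delete PHP session files untouched for more than 24 hours
# (1440 minutes, matching the default session.gc_maxlifetime).
find /var/lib/php/sessions -name 'sess_*' -mmin +1440 -delete
```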
Since session access is an I/O operation, it will slow down when PHP reads sessions from the tmp directory. Check your php.ini setting for http://php.net/manual/en/session.configuration.php#ini.session.gc-maxlifetime
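The relevant php.ini knobs are these (values shown are PHP's stock defaults; gc_probability/gc_divisor control how often garbage collection actually runs on a request):

```ini
; php.ini -- session garbage collection
session.gc_maxlifetime = 1440  ; seconds a session may stay untouched
session.gc_probability = 1     ; together: GC runs on ~1 in 100 requests
session.gc_divisor     = 100
```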
I saw a similar issue asked on Server Fault:
https://serverfault.com/questions/373024/php-processes-run-one-at-a-time-always-taking-100-of-one-core
PHP 5.4 provides a SessionHandler class to extend, so that sessions can be stored in a key-value store, a database, or whatever. For older PHP versions there are similar functions (session_set_save_handler).
This should be considered if sessions seem to be a bottleneck.
[Question closed 11 years ago — off-topic]
I have a leased VPS with 2 GB of memory.
The problem is that I have a few Joomla installations, and the server becomes very slow to respond when more than 30-50 users are connected at the same time.
Do you have any tips, books, tutorials, or suggestions for improving response time in this situation? Please give me only very concrete and useful URLs; I would be very grateful.
Attached is part of an htop view of that VPS.
The easiest and cheapest thing you can do is install a bytecode cache, e.g. APC. That way PHP does not need to recompile every file again and again.
If you're on Debian or Ubuntu this is as easy as apt-get install php-apc.
I'm going to guess that most of your issues will come from Joomla - I'd start by looking through this list: https://stackoverflow.com/search?q=joomla+performance
Other than that, you might want to investigate a PHP accelerator: http://en.wikipedia.org/wiki/List_of_PHP_accelerators
If you have any custom SQL, you might want to check that your queries are making good use of indexes.
A quick look at your config suggests you're using Apache prefork - you might want to try the threaded worker MPM, though always benchmark each config change you make (Apache comes with a benchmarking tool, ab) to ensure the change has a positive effect.
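A worker-MPM fragment might look like this (the numbers are illustrative and must be sized to your RAM; on Apache 2.2 the last directive is called MaxClients instead):

```apacheconf
<IfModule mpm_worker_module>
    StartServers         2
    ServerLimit          8
    ThreadsPerChild     25
    MaxRequestWorkers  200   # must not exceed ServerLimit * ThreadsPerChild
</IfModule>
```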
Some other links:
http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/
Though this is for WordPress, the principles should still apply.
http://blog.mydream.com.hk/howto/linux/performance-tuning-on-apache-php-mysql-wordpress
A couple of things to pay close attention to:
You never want your server to run out of memory. Ensure your Apache config limits the number of children to what fits within your available memory.
Running SHOW PROCESSLIST on MySQL and looking for long-running queries can highlight some easy wins, as nothing kills performance like a slow SQL query.
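A non-interactive way to spot those queries (credentials omitted; requires the PROCESS privilege on the MySQL account used):

```shell
# List the ten longest-running non-idle MySQL statements.
mysql -e "SELECT id, time, state, info
          FROM information_schema.processlist
          WHERE command <> 'Sleep'
          ORDER BY time DESC
          LIMIT 10;"
```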
[Question closed 11 years ago — off-topic]
I have a problem: my site starts lagging at random times. The server shows no signs of CPU load or disk swapping during the lag, and I just upgraded the server (Linode 768), yet the site still lags. Does anyone have an idea what the cause could be? I am leaning towards increasing PHP's memory limit, but I don't think that's the cause.
I am using nginx; the access log is about 600 MB - should I disable it?
What else could be the cause? My site is small right now, but it's facing terrible slowdowns.
Edit: I am running on a Linode 768 VPS, no memcached, only MySQL and PHP. By "lagging" I mean the site does not load - a page takes 24-50 seconds.
Site: http://scapehouse.com
Maybe your scripts use some API over HTTP - try adding
nameserver 8.8.8.8
nameserver 8.8.4.4
as the FIRST lines of the /etc/resolv.conf file to reduce the time spent resolving domains. That was the solution for me once (I use Linode too).
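To check whether DNS resolution really is the culprit, you can time a lookup before and after the change (the hostname here is a placeholder for whatever API your scripts call):

```shell
# getent goes through the system resolver, so the timing
# reflects whatever /etc/resolv.conf is configured to use.
time getent hosts api.example.com
```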