I'm working on a WordPress website:
- it's hosted on a VMware Linux virtual server with 2 cores and 4 GB RAM.
- it's the only website on the machine (it's a development server), so there is no traffic from other sites.
- Apache's mod_deflate is enabled for text, HTML, JavaScript, CSS and XML.
- it runs a lot of JavaScript and the total page size is about 1.6 MB.
- average CPU load is very low (0% to 5%).
- the server has 1 GB of RAM free.
- my ISP checked the SAN access statistics and latency is very low (a few ms).
This is a load time test on Pingdom Website Speed Test:
http://tools.pingdom.com/fpt/#!/dMWeVi/http://www.watcheswholesale.eu/
it shows 3.9 seconds of wait time.
Is there a checklist I can work through to understand why the server loses these seconds before sending content to the browser?
Thanks
I profiled a WordPress installation once, being embarrassed by a similarly long loading time.
It turned out that the time could be cut in half with an opcode cache such as APC, and that another half was spent parsing an enormous .po localization file. A quick patch to cache it in a PHP array finally got the loading time under a second (which is still too much, but barely bearable).
Now I am thinking that removing unused languages from that gettext file would also help.
The profiling itself was as silly as adding microtime(true)-based labels all over the code.
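For the record, roughly what that looked like (a sketch; the label names and the shutdown dump are just illustrations, not the actual code):

    <?php
    // Crude profiling: record elapsed time at labelled points and dump the list
    // at the end of the request. Good enough to see where whole seconds go.
    $GLOBALS['profile'] = array();
    $GLOBALS['profile_start'] = microtime(true);

    function profile_mark($label) {
        $GLOBALS['profile'][$label] = microtime(true) - $GLOBALS['profile_start'];
    }

    function profile_dump() {
        // Big jumps between consecutive labels show where the time is spent.
        error_log(print_r($GLOBALS['profile'], true));
    }
    register_shutdown_function('profile_dump');

    // ...then sprinkle calls like these through the suspect code paths:
    profile_mark('plugins loaded');
    profile_mark('gettext .po parsed');
    profile_mark('template rendered');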
Your fundamental problem is that the site is taking too long to generate the page. I'd start by looking at how many DB calls are being made and how long they take; the query logs can help you with this.
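Since the site in question is WordPress, a quick way to get that per-request query log is the built-in SAVEQUERIES switch (a sketch; only expose the output to admins, and remove it again once you're done):

    <?php
    // In wp-config.php: make WordPress record every query, its duration and caller.
    define('SAVEQUERIES', true);

    // Then, e.g. at the bottom of the theme's footer.php, dump the log for admins:
    global $wpdb;
    if (defined('SAVEQUERIES') && SAVEQUERIES && current_user_can('manage_options')) {
        $total = 0;
        foreach ($wpdb->queries as $q) {
            // Each entry is array(SQL, seconds, calling function).
            $total += $q[1];
            printf("<!-- %.4f s | %s -->\n", $q[1], esc_html($q[0]));
        }
        printf("<!-- %d queries, %.4f s spent in MySQL -->\n", count($wpdb->queries), $total);
    }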
You also need to turn on keep-alive so that TCP connections are reused, but that will only make a small difference.
I am trying to identify why my SugarCRM sites are loading so slowly. I am hosting 22 sites on IIS, the PHP version is 5.3.26, and my databases are on a separate SQL Server 2008. The web server runs Windows 2008 with IIS 7, has 10 GB of memory and an Intel® Xeon® E7-2870 processor.
After profiling one of my databases I have ruled out the data layer as the cause, since queries consistently ran in less than 1 second.
My hosting provider has confirmed that we have a 100 Mbit/s dedicated line to our web server. After running a speed test I get around 70 Mbit/s down and 40 Mbit/s up, so I do not think this is a bandwidth issue.
I took a look at the official 'Performance Tweaks' article and made the changes to config_override.php as suggested, however this did not make a significant difference.
http://support.sugarcrm.com/04_Find_Answers/02KB/02Administration/100Troubleshooting/Common_Performance_Tweaks/
Something I have noticed is that there are an awful lot of php-cgi.exe processes. As I look at the server now, the average CPU consumption for one of these instances is 11%. I am not sure if this is something to be concerned about. The CPU usage graph in Windows Task Manager looks very erratic.
To see if it was a general PHP issue, I added a simple PHP script containing echo (5 % 3) . "\n"; which returned instantly.
To summarise: web pages take on average 5 seconds to load, and users report the system as usable but sluggish.
Does anyone have any suggestions of how I might be able to speed up my application?
Thanks
What does the page load time show at the bottom of the page in SugarCRM? If it shows something like 0-2 s but the page in reality takes much longer, then look at adding an opcode cache such as APC, or object caching with Memcache.
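A quick way to confirm whether an opcode cache is actually active on the web server (a sketch; it assumes the APC extension, which fits PHP 5.3):

    <?php
    // Drop this in a throwaway script: is APC loaded, and is it being hit?
    if (!extension_loaded('apc') || !ini_get('apc.enabled')) {
        echo "APC is not loaded/enabled - every request recompiles every PHP file.\n";
    } else {
        $cache = apc_cache_info('', true);   // opcode (system) cache, summary only
        $mem   = apc_sma_info(true);         // shared memory segments, summary only
        printf("Opcode cache hits: %d, misses: %d\n", $cache['num_hits'], $cache['num_misses']);
        printf("APC memory free: %d of %d bytes\n",
               $mem['avail_mem'], $mem['num_seg'] * $mem['seg_size']);
    }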
Let me show what problem I'm dealing with:
website powered by Apache 2.2 + PHP 5.x + MySQL 5.1.x
peak traffic = 2,000 unique visitors/min = 5-8k pageviews/min
normal traffic = 2,000 unique visitors/day
website works well while under normal traffic
website lags while under peak traffic
My server CPU load is quite high under peak traffic (because of the MySQL/PHP processes), so the website lags.
Normal state: the server responds in 0.1-0.4 sec/pageview. The PHP code is optimized to fetch and process all data from the database and output the HTML within this time (call it server response).
Peak traffic state: the server responds in 2-5 sec/pageview, which is a longer response time than I'm happy with. I don't want my visitors to wait that long for a page.
What I'm doing now: my current way of dealing with this is a local cache. I write SQL results to cache files on disk with a lifetime of about 10 minutes, so I don't have to run the same SQL query on every request.
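For reference, a minimal sketch of that kind of disk cache (the helper name, the TTL and the PDO usage are illustrative, not my actual code):

    <?php
    // Serve SELECT results from a disk cache for ~10 minutes instead of hitting
    // MySQL with the same query on every request.
    function cached_query(PDO $db, $sql, $ttl = 600) {
        $file = sys_get_temp_dir() . '/sqlcache_' . md5($sql) . '.ser';

        if (is_file($file) && (time() - filemtime($file)) < $ttl) {
            return unserialize(file_get_contents($file));          // cache hit
        }

        $rows = $db->query($sql)->fetchAll(PDO::FETCH_ASSOC);      // cache miss
        file_put_contents($file, serialize($rows), LOCK_EX);
        return $rows;
    }

    // Usage:
    // $articles = cached_query($pdo, 'SELECT id, title FROM articles ORDER BY id DESC LIMIT 20');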
My website is http://www.lechaton.cz/.
Is there any better way to deal with peak traffic or to optimize CPU utilization?
Thanks all for your time and advice!
I've been working through the whole weekend, testing and comparing methods and solutions.
Nginx solution (replace LAMP with LNMP)
I'm still using Apache 2.2; even with nginx (thanks to Bondye) there is not a big difference.
I tried LNMP on Debian Wheezy, but it was not much different from LAMP.
For static files nginx is indeed faster.
Nginx with PHP-FPM is about twice as fast, which could be a solution in some cases, but it didn't solve my issue completely.
Tune up your MySQL settings and queries
Tune your MySQL server with better caching and buffering, and also check your max_connections and memory usage.
But the most important thing is to optimize your queries for the best performance. Even if your queries perform well with the MySQL query cache, bigger traffic will bring your server down within a few minutes. YOU HAVE TO CACHE your output!
Tune up Apache
Tune your Apache mpm_prefork settings:
Reduce KeepAliveTimeout to at most 5 seconds (default = 30).
Keep MaxClients set to a sensible number (it depends on your RAM and on the related prefork directives such as ServerLimit and MaxSpareServers).
Final conclusion
1) content cache = rule #1
I've found the best solution is to cache as much content as you can. There is no reason for every user to generate the same output when there is the option of serving cached content. It's much faster and it saves your resources.
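As a sketch of what rule #1 means in practice, here is a whole-page cache at the very top of the entry script (the paths, the TTL and the GET-only check are examples, not my production code):

    <?php
    // Full-page cache: serve a stored copy if it is fresh enough, otherwise
    // generate the page as usual and store the HTML on the way out.
    $cacheFile = dirname(__FILE__) . '/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
    $ttl = 600; // 10 minutes

    if ($_SERVER['REQUEST_METHOD'] === 'GET'
            && is_file($cacheFile)
            && (time() - filemtime($cacheFile)) < $ttl) {
        readfile($cacheFile);       // cache hit: no PHP templating, no MySQL at all
        exit;
    }

    ob_start();

    // ... normal page generation (queries, templates, output) runs here ...

    $html = ob_get_contents();      // page is complete at this point
    @file_put_contents($cacheFile, $html, LOCK_EX);
    ob_end_flush();                 // send the freshly generated page to the visitor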
2) nginx for static content
You can use nginx to serve static files and content; even with many concurrent requests the CPU load stays much lower.
With PHP-FPM your code runs about twice as fast (maybe a bit more), but I can't consider it the final solution.
3) test your website with benchmark tools
I've used siege, ApacheBench (ab) and mysqlslap.
These steps helped me reduce the CPU load on my brand new server, speed up my server response and handle the peak traffic during big events.
Hope someone finds it helpful.
My Drupal 6 site has been running smoothly for years but recently has experienced intermittent periods of extreme slowness (10-60 second page loads): several hours of slowness followed by hours of normal (4-6 second) page loads. The pages always load with no errors; they just sometimes take forever.
My setup:
Windows Server 2003
Apache/2.2.15 (Win32) Jrun/4.0
PHP 5
MySql 5.1
Drupal 6
ColdFusion 9
VMware virtual environment
DMZ behind a corporate firewall
Traffic: 1-3 hits/sec peak
Troubleshooting
No applicable errors in the Apache error log
No errors in the Drupal event log
The Drupal devel module shows 242 queries in 366.23 milliseconds, page execution time 2069.62 ms (so it looks like queries and PHP scripts are not the problem)
No unusually high CPU, memory, or disk IO
ColdFusion apps and other static pages outside of Drupal also load slowly
A webpagetest.org test shows a very high time to first byte
The problem seems to be with Apache responding to requests, but previously I've only seen this behavior under 100% cpu load. Judging solely by resource monitoring, it looks as though very little is going on.
Here is the kicker: roughly half of the site's traffic comes from our LAN. If I disable the firewall rule and block access from outside our network, internal (LAN) access (1,000+ devices) is speedy; but as soon as outside access is restored, the site is crippled.
Apache config? Crawlers/bots? Attackers? I'm at the end of my rope; where should I be looking to determine where the problem lies?
------Edit:-----
Attached is a waterfall chart from webpagetest.org showing a 15-second load time. I've seen times as high as several minutes, and again, the server runs fine much of the time. The green areas indicate that the browser has sent a request and is waiting to receive the first byte of data back from the server. This is certainly a back-end delay, but it is puzzling that the CPU is barely used during this slowness.
(Not enough rep to post an image; see https://webmasters.stackexchange.com/questions/54658/apache-very-high-page-load-time)
------Edit------
On the Apache side of things - Is this possibly a ThreadsPerChild issue?
After much research, I may have found the solution. If I'm correct, it was an Apache config problem, specifically the ThreadsPerChild directive. See http://httpd.apache.org/docs/2.2/platform/windows.html
Because Apache for Windows is multithreaded, it does not use a separate process for each request, as Apache can on Unix. Instead there are usually only two Apache processes running: a parent process, and a child which handles the requests. Within the child process each request is handled by a separate thread.

ThreadsPerChild: This directive is new. It tells the server how many threads it should use. This is the maximum number of connections the server can handle at once, so be sure to set this number high enough for your site if you get a lot of hits. The recommended default is ThreadsPerChild 150, but this must be adjusted to reflect the greatest anticipated number of simultaneous connections to accept.
It turns out this directive was not set at all in my config and thus defaulted to 64. I confirmed this by viewing the number of threads for the second httpd.exe process in Task Manager. When the server was handling more than 64 connections, the excess requests simply had to wait for a thread to open up. I added ThreadsPerChild 150 to my httpd.conf.
Additionally, I enabled the Apache status module (http://httpd.apache.org/docs/2.2/mod/mod_status.html), which, among other things, lets you see the total number of active requests on the server at any given moment. Right away I could see spikes of up to 80 active requests. Time will tell, but I'm confident that this will resolve my issue. So far, 30 hours without a hiccup.
Apache is too bulky and clumsy for "1-3 hits/sec avg".
I once had a similar problem with a much lighter (almost static HTML, no DB) site and a similar number of hits per second.
No errors, no high network/CPU/memory/disk load. Apache on Windows XP.
I put nginx in front of Apache for the static files and it started working like a charm.
Caching. The solution is caching.
Drupal (in common with most other large CMS platforms) has a tendency toward this kind of thing due to its nature -- every page is built on the fly, constructed from a whole stack of database tables and code modules. The more you've got in there, the slower it will be, but even fairly simple pages can become horribly slow if your site gets a bit of traffic.
Drupal has a page cache mechanism built in which will cut your load dramatically. As long as your pages are static (i.e. no dynamic content), you can simply switch on caching and watch the performance go right back up.
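In Drupal 6 that switch lives under Administer > Site configuration > Performance; doing the same thing in code is just a couple of variables (a sketch, assuming Drupal 6's variable API and its default setting names):

    <?php
    // Drupal 6: enable the built-in caches programmatically (equivalent to the
    // checkboxes on the Performance settings page). Run from a script that
    // bootstraps Drupal, or via drush.
    variable_set('cache', CACHE_NORMAL);    // anonymous page cache ("Normal" mode)
    variable_set('block_cache', 1);         // cache rendered block output
    variable_set('page_compression', 1);    // gzip the cached pages
    cache_clear_all();                      // flush expired page/block cache entries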
If you have dynamic content, you can still enable caching for the static parts of the page. It is a bit more complex (and beyond the scope of this answer), but it is worth the effort.
If that's still not enough, a server-based caching solution such as Varnish will definitely help.
I am having a bit of an issue with a PHP script. When I open my site (hosted locally) it pauses for 1-2 seconds and then loads the page.
The database I am reading data from is very small and has indexes. The queries are quick.
My PHP code is somewhat optimized, and my databases are indexed.
PHP 5.3.19 is installed on a Windows 2008 R2 server (Intel Xeon E5-2400 0 @ 2.20 GHz, 2 processors, 16 GB of RAM), and MySQL Server is installed on a different server. Both servers are on the same network, so all connections should be internal.
I also use PDO to connect to my databases.
How can I determine what is causing the extra delay?
What things can I check for to expedite the page load?
Thanks
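One way to narrow it down is to time each stage of the request separately; if the pause shows up in the connect step, that points at the link between the two servers (for example name resolution) rather than at PHP or the queries. A sketch with placeholder DSN, credentials and query:

    <?php
    // Time connect, query and the rest of the page separately.
    $t0 = microtime(true);

    $pdo = new PDO('mysql:host=db-server;dbname=mydb', 'user', 'secret',
                   array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
    $t1 = microtime(true);

    $rows = $pdo->query('SELECT * FROM some_table LIMIT 100')->fetchAll(PDO::FETCH_ASSOC);
    $t2 = microtime(true);

    // ... build and output the page here ...
    $t3 = microtime(true);

    error_log(sprintf('connect: %.3fs  query: %.3fs  php/render: %.3fs',
                      $t1 - $t0, $t2 - $t1, $t3 - $t2));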
In my experience, there might be JavaScript or other script files referenced in your code.
If the browser cannot find a script file (because of a wrong path or whatever the reason may be), it keeps searching for it until the request times out, which is around 2 to 5 seconds, depending on the browser's settings.
Similar Question: Here
Website: Cleanfiles PPD Network
Raw Server Link (Skip DNS): http://173.247.246.58/
Waterfall view (Webpage Test):
I recently moved to a new server. All PHP scripts and resources stayed exactly the same. The new server is an Inmotion Elite Dedicated Server.
Average server load:
Server load 1.25 (8 CPUs)
Memory Used 14.14% (1,137,916 of 8,048,804)
Swap Used 0% (0 of 4,095,992)
As a network owner, having a quick and nifty site is a top priority. I can't afford 2-4 seconds of random waiting time for my members as they navigate through the pages. The old server never did this; it loaded fine.
Since the server load appears to be fine and the PHP scripts are the same, I want to assume it is something to do with the Apache settings or something like that. I really cannot tell. I tried running the two scripts listed in the top answer of the question linked above, but both had long wait times...
I talked to the hosting company but they didn't really know what was going on. Any help with this issue or tests that I can do would be greatly appreciated :)
Probably the most effective solution is to use a CDN with native HTML caching capabilities (static and dynamic). TTFB depends on how quickly the origin server can generate the HTML; you can skip that processing time altogether by serving a fresh cached copy from the CDN.
I wrote a post about it recently, which looks into TTFB delay factors and average load time of different resources (based on data gathered across 1B sessions). You may find it useful: http://www.incapsula.com/the-incapsula-blog/item/809-using-cdn-to-improve-seo-and-ttfb