Slow initial website load, faster subsequent loads - php

My site www.slople.com is a mix of self-programmed parts (the whole slope finder, the mobile site, etc.) and WordPress/BuddyPress parts (Slople Unity, the blog).
I notice that the first request to the page, no matter where it goes (start page, a particular slope, the blog), takes longer to load than subsequent page loads.
The site is running on my "own" VPS with CentOS and Plesk 11. I use APC, nginx, Apache, MySQL and PHP 5.1.
I suppose some of you might have an idea where I should look or what could be the cause. I keep thinking it could be my DNS (I run my own DNS service with Plesk).
The machine is powerful enough and never uses more than 1/3 of its RAM. Thanks for any input!

Try the "Page Speed" tab in Firebug.
As far as I can see, your server has no gzip compression for static content. Compression can be switched on in the nginx config.
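For illustration, a minimal nginx sketch (the type list and size threshold are assumptions, tune to taste):

    # nginx.conf (http block): enable gzip for text-based content
    # (text/html is always compressed once gzip is on)
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;
    gzip_min_length 1024;   # skip tiny files where gzip overhead outweighs the gain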
Also, the website has a lot of separate PNG, CSS and JS files. Some CMS templates allow gathering all JS and all CSS into single files. The PNG files can also be combined into a single sprite image.

Related

Server side minify leads to PHP process bottleneck on high traffic site. What are my options?

I am currently tasked with finding a solution for a serious PHP bottleneck which is apparently caused by server-side minification of CSS and JS when our sites are under high load.
Some details and what I have found out so far
I inherited a web application running on WordPress which uses a complex constellation of Doctrine, Memcached and W3 Total Cache for minification and caching. Under heavy load our application begins to slow down rapidly. So far we have narrowed part of the problem down to the server-side minification process. Preliminary analysis has shown that the number of PHP processes stacks up under load and, on reaching the limit of 500 processes, everything starts to slow down. This is also mentioned by the author of the minify library.
Solutions I have evaluated so far
Pre-minification
The most logical solution would be to pre-minify the files before going live. Unfortunately our workflow demands that non-developers be able to edit said files on our production servers (i.e. after the web app has gone live). Therefore I think that pre-processing is out of the question, as it limits the editability of the minified files.
Serving unminified files
75% of our users access our web application with their mobile devices, especially smartphones. Unminified, the JS and CSS amount to 432 KB and are reduced by 60-80% in size when minified. Therefore serving unminified files, while it would solve the performance and editability problems, is out of the question for the sake of mobile users.
I understand that this is as much a technical problem as it is a workflow problem and I guess we are open to working on both as long as we end up with a better overall performance.
My questions
Is there a reasonable compromise which solves the PHP bottleneck problem, allows non-devs to make changes to live CSS/JS, and still serves reasonably sized files to clients?
If there is no such one-size-fits-all solution, what can I do to improve our workflow and/or server-side behaviour?
EDIT: Because there were some questions/comments regarding the server configuration: our servers run Debian and are equipped with 32 GB of RAM and 24-core CPUs.
You can run a CSS/JavaScript build service like Gulp or Grunt via Node.js that minifies all your JS and CSS assets on change.
This service can run in production, but that is not recommended without some architectural setup (having multiple versioned compiled files and auto-checking them via Gulp or another extension).
I emphasize that patching features into production and directly editing it is strongly discouraged, as it can present live issues to your visitors and reduce your credibility.
http://gulpjs.com/
Using Gulp/Grunt would require you to change how you write your CSS/JavaScript files.
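For illustration, a minimal gulpfile sketch using the Gulp 4 API (the asset paths and output names are assumptions):

    // gulpfile.js: combine and minify CSS and JS, then rebuild on change
    const gulp = require('gulp');
    const concat = require('gulp-concat');
    const cleanCss = require('gulp-clean-css');
    const uglify = require('gulp-uglify');

    function css() {
      return gulp.src('assets/css/*.css')   // hypothetical source path
        .pipe(concat('all.min.css'))        // combine into one file
        .pipe(cleanCss())                   // minify
        .pipe(gulp.dest('public/build'));
    }

    function js() {
      return gulp.src('assets/js/*.js')
        .pipe(concat('all.min.js'))
        .pipe(uglify())
        .pipe(gulp.dest('public/build'));
    }

    // re-run on change, so non-devs can keep editing the source files
    function watch() {
      gulp.watch('assets/css/*.css', css);
      gulp.watch('assets/js/*.js', js);
    }

    exports.default = gulp.series(gulp.parallel(css, js), watch);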
I would attack this with two solutions: first, remove the WP-Cron operations that run every time a user hits the application and move them to an actual cron job on the server. Second, use load balancing so that a single server is not taking the whole load. That is your real problem: even if you fix the perceived code issues, you are still faced with the load issue.
I don't believe you need to change your workflow at all or go down the road of major modification to your existing system.
The WP-Cron tasks that run each time a page is loaded cause significant load and slowness. You can shift this work from visiting users triggering the process to your server running it on a schedule. This reduces load, and it is most likely these processes that you believe are slowing down the site.
See this guide:
http://www.inmotionhosting.com/support/website/wordpress/disabling-the-wp-cronphp-in-wordpress
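In outline, the change is two small pieces (a sketch; the domain and schedule are placeholders; DISABLE_WP_CRON is the standard WordPress constant for this):

    <?php
    // wp-config.php: stop WordPress from firing wp-cron.php on page loads
    define('DISABLE_WP_CRON', true);

Then trigger it from the system crontab instead:

    # run WP-Cron every 15 minutes (example.com is a placeholder)
    */15 * * * * wget -q -O - http://example.com/wp-cron.php?doing_wp_cron >/dev/null 2>&1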
Next: load balancing. Having a single server supply all your users when you have a lot of traffic is a terrible idea. You need to split the load across multiple web servers.
I'm not sure where or how you are hosted, but I would move things to AWS and set the WordPress site up for load balancing there.
This will involve:
Load Balancer
EC2 Instances running PHP/Apache
RDS for your database storage for all EC2 instances
S3 storage for the site's media
For user sessions I suggest you just set up stickiness on the load balancer, so users are continually served by the same node they arrived on.
You can get a detailed guide on how to do this here:
http://www.mornin.org/blog/scalable-wordpress-amazon-web-services/
Or at server fault another approach:
https://serverfault.com/questions/571658/load-balancing-wordpress-on-amazon-web-services-managing-changes
The assumption here is that if you have high traffic you are making revenue from it, so any time your service responds slowly it will turn away users or discourage them from returning. Changing the software could help, but you're treating the symptom, not the illness. The illness is that your server comes under heavy load. This isn't uncommon with WordPress and high traffic, so you need to spread the load instead of trying to micro-optimize. The difference is that the optimizations will be small gains, while load balancing and spreading the load actually solves the problem.
Finally, consider using a CDN for serving all of your media. It loads media faster and removes load from your system by reducing the number of requests to the server and its output to the clients. It also loads pages consistently faster for people wherever they are visiting from, by serving media from the nodes closest to them. At AWS this is called CloudFront. WordPress also offers such a service free via Jetpack (I believe), but from my understanding it does not handle all media.
I like the idea of using GulpJS. One thing you might consider is to have a WP-Cron or even just a system cron job that runs every 5 minutes or so and triggers a Gulp task to minify and concatenate your CSS and JS files.
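For example, a crontab sketch (the project path and task name are assumptions):

    # rebuild assets every 5 minutes; crontab commands run through /bin/sh,
    # so the cd-and-run idiom works here
    */5 * * * * cd /var/www/site && ./node_modules/.bin/gulp build >/dev/null 2>&1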
Another option that doesn't require scheduling, but instead watches the file system for changes and then triggers a Gulp build, is incron (inotify cron). Check out the incron man page. Incron is great in that it triggers actions based on file system events such as file changes. You could use this to trigger a Gulp build whenever any CSS file changes on the file system.
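A hypothetical incrontab entry (incron executes the command directly rather than through a shell, so a small wrapper script keeps the entry simple; all paths here are assumptions):

    # incrontab -e: rebuild CSS whenever a file in the watched directory is written
    /var/www/site/assets/css IN_CLOSE_WRITE,IN_MOVED_TO /usr/local/bin/rebuild-css.sh

where rebuild-css.sh is a one-line wrapper:

    #!/bin/sh
    # change into the project and run the (assumed) gulp css task
    cd /var/www/site && ./node_modules/.bin/gulp css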
One caveat is that this is a Linux solution so if you're hosting on Windows you might have to look for something similar.
Edit:
Incron Documentation

Wordpress on nginx + php-fpm partially slow

I'm hosting WordPress on a Debian machine with nginx and php-fpm. Sometimes, especially on the first request, the performance is not amazing: it takes up to 2.7 seconds just to generate the HTML source! But when I refresh the page it sometimes goes down to about 700 ms. To me it seems like an issue with the theme or a plugin, because on the same server I have a second WordPress installation which uses the same server-side configuration but always loads quite fast (~400 ms for HTML generation!).
My suspicion is that the theme or a plugin is doing some slow remote requests; there are widgets included that e.g. load the number of likes from a Facebook page, which slows the generation time down even more. I would like to find a way to debug the causes of this problem. I'm thinking, for example, of a way to catch all remote requests made through functions like file_get_contents, curl and so on.
I could of course disable every single plugin and install another theme to isolate the problem. But as a single plugin can be built on thousands of lines of code, it would cost a lot of time to find the issue. Is there any kind of debugging that can help find the problem more quickly? XDebug seems to offer something like this, but I have never worked with it and currently don't really have time to get familiar with it.
Any external API calls on initial page load will indeed slow rendering time. For social widgets, you can use AJAX after page load, or even better, query those social websites once a day, store the results in a simple DB table (for example wp_social_data), a cache, or a JSON file, or any other solution that works for you, and then render the stored data on page load instead of making an external HTTP(S) call. This solves the external API calls part.
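As a sketch of that store-and-reuse idea using the WordPress transient API (a simpler alternative to a dedicated table; the endpoint and field names are placeholders, not a real Facebook call):

    <?php
    // fetch the remote value at most once a day; serve the cached copy otherwise
    function get_cached_fb_likes() {
        $likes = get_transient('fb_likes');
        if (false === $likes) {
            // cache miss: hit the remote API (placeholder URL)
            $response = wp_remote_get('https://graph.facebook.com/...');
            if (!is_wp_error($response)) {
                $body  = json_decode(wp_remote_retrieve_body($response), true);
                $likes = isset($body['likes']) ? (int) $body['likes'] : 0;
                set_transient('fb_likes', $likes, DAY_IN_SECONDS);
            } else {
                $likes = 0; // remote host down: render a default instead of blocking
            }
        }
        return $likes;
    }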
For nginx and php-fpm in general: compression should be enabled, caching should be in place for static assets or pages, and the other settings should have realistic values, depending on your application. You can find the recommended nginx server block for WordPress here.
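For the static-asset caching part, a sketch for the nginx server block (the extensions and lifetime are assumptions):

    # long client-side cache lifetime for static assets
    location ~* \.(css|js|png|jpg|jpeg|gif|svg|woff2?)$ {
        expires 30d;
        add_header Cache-Control "public";
    }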

Cache (static) content from another website on my own webserver

My website is loading slowly and I ran this test: http://www.webpagetest.org/result/120227_MD_3CQZM/1/performance_optimization/
It indicates that files served from gametracker.com are not being cached.
Apache and Joomla already cache the content that is on my own server.
I'm using a script from gametracker.com to show my TeamSpeak 3 statistics on my website.
However, this script sometimes loads slowly due to issues with gametracker.com's server, which is why I'd like to store a copy of it on my own web server as a cache and refresh it from the gametracker website every 30 minutes.
If the gametracker website is down (which is quite common), it should keep the result of the last successful fetch.
How would I do this with Apache 2.4.1 and possibly PHP?
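In outline, the 30-minute cache could look like this in PHP (a sketch; the paths and remote URL are placeholders, and it assumes allow_url_fopen is enabled):

    <?php
    // serve a locally cached copy of the remote script, refreshed every 30 minutes
    $cacheFile = __DIR__ . '/cache/gametracker.html'; // hypothetical path
    $maxAge    = 30 * 60; // 30 minutes

    $stale = !is_file($cacheFile) || (time() - filemtime($cacheFile)) > $maxAge;

    if ($stale) {
        // @ suppresses warnings: a failed fetch must not break the page
        $fresh = @file_get_contents('http://www.gametracker.com/...'); // placeholder URL
        if ($fresh !== false) {
            file_put_contents($cacheFile, $fresh); // refresh the cache
        }
        // on failure we fall through and keep the last good copy, if any
    }

    if (is_file($cacheFile)) {
        readfile($cacheFile); // serve whatever we have, fresh or last-good
    }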
If it's possible I'd also like to use CSS sprites, because webpagetest.org indicates:
The following images served from gametracker.com should be combined into as few images as possible using CSS sprites.
http://cache.www.gametracker.com/images/components/html0/gt_icon.gif
http://cache.www.gametracker.com/images/components/html0/online.gif
http://cache.www.gametracker.com/images/flags/nl.gif
http://cache.www.gametracker.com/images/game_icons/ts3.png
http://cache.www.gametracker.com/images/server_info/16x16_channel_green.png
http://cache.www.gametracker.com/images/server_info/16x16_player_off.png
http://cache.www.gametracker.com/images/server_info/vs_tree_item.gif
http://cache.www.gametracker.com/images/server_info/vs_tree_last.gif
http://cache.www.gametracker.com/images/server_info/vs_tree_outer.gif
http://www.gametracker.com/images/game_icons/ts3.png
CSS sprites are a technique where several icons and other small images are combined into one image; with background positioning you can then load all of them with a single request.
If the images aren't hosted on your own site it will be very difficult to implement, and to do it well the icons need a strict, consistent layout within the combined image.
Check: http://coding.smashingmagazine.com/2009/04/27/the-mystery-of-css-sprites-techniques-tools-and-tutorials/
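As a tiny illustration of the idea (the class names, sprite file and offsets are made up):

    /* one combined image, shifted into view per icon */
    .icon {
        background: url('sprite.png') no-repeat; /* hypothetical combined image */
        display: inline-block;
        width: 16px;
        height: 16px;
    }
    .icon-online  { background-position: 0 0; }
    .icon-flag-nl { background-position: -16px 0; } /* second 16px tile */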
If you have a VPS / dedicated server, you can use mod_pagespeed: it automatically does several of the things that web site optimizers like.
But don't just take web site optimizers and testing tools like that at face value. They only suggest measures that could help; some are practical, some aren't.
Good luck.

Scaling for TYPO3 site

I'm asked by a customer to deliver a TYPO3 based website with the following parameters:
- small amount of content (about 50 pages)
- very little change frequency
- average availability about 95%/day
- 20% of pages are restricted, only available after login
- No requirements for fancy typo3 extensions or something else (only Typo3 core)
- Medium sized pages
- Only limited digital assets (images etc.) included
I have the requirements to build an infrastructure to serve up to 1000 concurrent users. With the assumption of having an average think time of 30 sec. this would result in 33 Requests per second.
What could such an infrastructure look like?
I know that system scaling is a highly individual task depending on the implementation of the system and needs testing, but I need a first indication where to start (single server, separating components to different servers,...).
Any idea?
The easier solution is EXT:nc_staticfilecache. It saves static pages as HTML, and your web server delivers them automatically through rewrite rules (in the case of Apache, through mod_rewrite). This works very well for static content and should already enable you to serve >100 req/s.
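A sketch of the rewrite idea only (the cache path and cookie name below are assumptions based on typical setups, not the extension's exact shipped rules):

    # Apache .htaccess: serve pre-generated HTML for anonymous GET requests
    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} =GET
    RewriteCond %{HTTP_COOKIE} !fe_typo_user
    RewriteCond %{DOCUMENT_ROOT}/typo3temp/tx_ncstaticfilecache/%{HTTP_HOST}%{REQUEST_URI}/index.html -f
    RewriteRule .* typo3temp/tx_ncstaticfilecache/%{HTTP_HOST}%{REQUEST_URI}/index.html [L]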
The even fancier way is to use Varnish Cache. Varnish is a reverse proxy that holds your web site content in memory and can run on a dedicated host. If you configure it correctly (send correct cache headers!), it serves at line speed (up to millions of requests per second). There is also a TYPO3 extension, moc_varnish, which e.g. purges the Varnish cache when a page is changed in TYPO3. Support for Edge Side Includes also exists, to e.g. retrieve only the user-specific data from TYPO3 and serve the static parts of a page from the Varnish cache (everything except the "Welcome user Foo Bar"... ;)).
As mentioned: Don't forget to configure correct cache headers (Expires etc) for your assets. This already removes some load from your web server.
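On the Apache side, a mod_expires sketch for those headers (the lifetimes are illustrative):

    <IfModule mod_expires.c>
        # attach Expires/Cache-Control headers per content type
        ExpiresActive On
        ExpiresByType image/png "access plus 1 month"
        ExpiresByType text/css "access plus 1 week"
        ExpiresByType application/javascript "access plus 1 week"
    </IfModule>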
It's quite possible; I have already built something like this. You need at least one dedicated server with >= 8 GB of RAM.
If we are speaking about infrastructure, the minimal combination is:
nginx/Varnish for the front end / load balancing
Apache HTTP Server
MySQL, either on a standalone server or clustered
Performance optimization is very important in such cases.
Some links for further reading :
http://techblog.evo.pl/en/how-to-boost-speed-up-your-typo3-website-with-nginx/
http://www.fabrizio-branca.de/nginx-varnish-apache-magento-typo3.html
http://wiki.typo3.org/Performance_tuning
I'd put this on a single dedicated server (or a well-specified VPS), but maybe keep all the static assets on a third-party CDN so you can focus on the dynamic stuff. I don't know TYPO3, but I can't see any reason why you couldn't have your DB on the same server for this level of usage; there are sure to be caching options of various kinds. Or perhaps consider a cloud server, so if you need more oomph you can just add more resources.
Edit: I don't think it is a good idea to build a scalable architecture just yet, e.g. proxy servers and all that stuff. If it is slow and you find you really can't cope with one machine, scale up at that point. I'm of the view that you can make do with a much simpler architecture given your expected traffic.
I would look into a virtual server or a KVM, and a good MySQL and PHP configuration. With a KVM I would tweak Linux and use iptables for traffic shaping. A dedicated root server would be nice, but it's expensive. Then I would think about using an nginx or lighttpd web server with eAccelerator and memcached. If that doesn't help I would try to compile PHP and MySQL with optimization flags, or compile them with the Intel C Compiler; ICC can sometimes optimize C code better than GCC. If the server has plenty of RAM I would use a ramdisk.

Apache is lagging or something else is bad

I have a website. It's my first website with Zend Framework, but I think it's written well. Generation time is about 0.9 s now; I'll get it down to something like 0.2 s, but leave that for now. When you click any link on the website, it takes about 1.5-2 s before the web browser starts loading the page, and then 0.15 s to show it. So if execution time is 0.9 s, where are the other 1.1 s? Ping is about 13 ms. The website address is http://zgarnijlicke.pl
Edit:
Strange. The second domain, http://lottek.eu, is working fine. Look at http://lottek.eu/picostreamer. It isn't lagging like the zgarnijlicke.pl domain.
Edit 2:
There is a problem with Zend Framework. I set up an action without rendering a view (layout disabled too) and it works as fast as the server can manage. I'll open a new question for it.
Here's a webpagetest.org report for your site: http://www.webpagetest.org/result/100721_1P0Y/
If you view the waterfall graph for the first view, you'll see that the browser gets your HTML source at around the 1.2 second mark, but is first able to render your page just after 4 seconds. What happens in between those two points is the downloading of your three JavaScript files and two CSS files. So this is where you want to start. Some suggestions:
Consider using a free CDN for jquery.js instead of serving it from your server, e.g. Google's: http://code.google.com/apis/ajaxlibs/ . This way, users are more likely to already have it cached, Google will serve it from a location geographically closer to the user, and (I think) in compressed format.
For jquery.corner.js and jquery.media.js, consider merging them into one file and serving them compressed (the Apache module mod_deflate makes this very easy to do)
Same for your CSS files - consider merging them into one file and serving them compressed.
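For the compression part, a minimal mod_deflate sketch (goes in the server config or .htaccess; the MIME types are illustrative):

    <IfModule mod_deflate.c>
        # compress text-based responses on the fly
        AddOutputFilterByType DEFLATE text/html text/css application/javascript
    </IfModule>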
Those will give you some quick wins. However there are other things you can improve:
Add width and height attributes to your image tags. Without these, some browsers will halt rendering while they download the images so that they know how much space they'll occupy. None of your image tags have these attributes.
Make sure you're using the right image format for the job. Your banner.png image is over 300k which is far too large. I converted this to a JPEG image (80% quality) and it was 30k.
As for the execution time, 0.9 seconds seems quite high. Are you using APC or similar? Is the page doing any heavy database work?
Try putting some timer code in your PHP that measures how long it takes to generate the page content. This way you can confirm or rule out server problems.
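A minimal sketch of such timer code (where it hooks into your bootstrap is up to you):

    <?php
    // measure server-side generation time for this request
    $start = microtime(true);

    // ... existing page-building / framework dispatch code here ...

    $elapsed = microtime(true) - $start;
    // log rather than echo, so the page output isn't disturbed
    error_log(sprintf('page generated in %.3f s', $elapsed));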
You might also use network tools like ping and traceroute to see if your problem is caused by network latency.
A quick test with wget here gets an overall execution time of 1.5s to transfer one of the pages, with an actual download time of 0.2 seconds, so 1.3s of overhead. The pause occurs before the transfer starts, so that's a server-side problem.
Is that site on a virtual server? It's possible that if the underlying physical server is heavily loaded, your image could be getting swapped out or otherwise CPU-starved, taking ~1 second to become responsive again.
Perhaps it's an internal resource thing: are you connecting to a DB, especially a remote one? Even if some or most of the pages aren't DB-driven, the overhead of connecting to a DB could be causing this slowdown. And then the image gets swapped/delayed again, as there's little further activity to keep it active.
It could even be something as silly as Apache being configured with 'IdentityCheck' on, though unlikely, as this would slow down all requests. I'm not seeing any slowdown on the requests for .css/.js files from your server when viewed from HTTPFox. Interestingly, requesting the .css/.js via wget returns a '500 Internal Server Error'.
I found it. It's a problem with ZF, because when I made a hello.php page containing just

    hello world

without any <?php ?> tags, the script took 0.4 s to complete.
