I'm hosting WordPress on a Debian machine with nginx and PHP-FPM. Sometimes, especially on the first request, the performance is not great: it takes up to 2.7 seconds just to generate the HTML source! When I refresh the page it sometimes drops to about 700 ms. It looks to me like an issue with the theme or a plugin, because I have a second WordPress installation on the same server, using the same server-side configuration, which always loads quite fast (~400 ms for HTML generation).
My suspicion is that the theme or a plugin is making slow remote requests, because there are also widgets that, for example, load the number of likes from a Facebook page, which slows down the generation time even more. I'm looking for a way to debug the cause of this problem, for example by catching all remote requests made through functions like file_get_contents, curl and so on.
I could of course disable every single plugin and install another theme to isolate the problem. But since a single plugin can consist of thousands of lines of code, that would cost a lot of time. Is there any kind of debugging that can help find the problem more quickly? Xdebug seems to offer something like this, but I have never worked with it and currently don't really have time to get familiar with it.
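If you only need a quick pointer, and the plugins use the WordPress HTTP API (raw curl_exec() or file_get_contents() calls will not be caught), a small must-use plugin can log every remote request with its duration; the file path and log target here are just examples:

```php
<?php
/**
 * Sketch of a must-use plugin (e.g. wp-content/mu-plugins/log-http.php) that
 * logs remote requests made through the WordPress HTTP API, with durations.
 * Note: it will NOT see direct curl or file_get_contents() calls.
 */
add_filter( 'pre_http_request', function ( $preempt, $args, $url ) {
    // Remember when the request started, keyed by URL.
    $GLOBALS['http_log_start'][ md5( $url ) ] = microtime( true );
    return $preempt; // don't short-circuit the request
}, 10, 3 );

add_action( 'http_api_debug', function ( $response, $context, $class, $args, $url ) {
    $key      = md5( $url );
    $start    = isset( $GLOBALS['http_log_start'][ $key ] ) ? $GLOBALS['http_log_start'][ $key ] : microtime( true );
    $duration = microtime( true ) - $start;
    error_log( sprintf( '[remote request] %.3fs %s', $duration, $url ) );
}, 10, 5 );
```

With this in place, slow external calls (like the Facebook like-count lookups) show up in the PHP error log together with the time they took.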
Any external API call on the initial page load will indeed slow down rendering time. For social websites, you can fetch the data with Ajax after the page has loaded, or even better, query those social websites once a day, store the results in a simple DB table (for example wp_social_data), a cache, or a JSON file, or any other solution that works for you, and then render the stored data on page load instead of making an external HTTP(S) call. This will solve the external API calls part.
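A minimal sketch of that idea using WordPress transients (the Facebook endpoint and field are placeholders, and the data could just as well live in a custom table like wp_social_data):

```php
<?php
/**
 * Sketch: cache a remote social count for a day instead of fetching it
 * on every page load. The endpoint URL and field name are placeholders.
 */
function get_cached_like_count() {
    $count = get_transient( 'fb_like_count' );

    if ( false === $count ) {
        $response = wp_remote_get( 'https://graph.facebook.com/your-page?fields=fan_count' );

        if ( ! is_wp_error( $response ) ) {
            $body  = json_decode( wp_remote_retrieve_body( $response ), true );
            $count = isset( $body['fan_count'] ) ? (int) $body['fan_count'] : 0;
            set_transient( 'fb_like_count', $count, DAY_IN_SECONDS );
        } else {
            $count = 0; // remote call failed: render something neutral rather than blocking the page
        }
    }

    return $count;
}
```

The widget then calls get_cached_like_count() instead of hitting Facebook directly, so at most one remote request per day is made.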
For nginx and PHP-FPM in general: compression should be enabled, caching for static assets or pages should be in place, and the other settings should have realistic values, depending on your application. You can find the recommended WordPress nginx server block here.
Related
Trying to get the loading time of a WordPress website (with three.js) - https://igotchamedia.com/arvr - down from 6 seconds to under 1.5 s. The "Waiting" and "Receiving" parts of the page load are taking the bulk of the time. Caching plugins did not help.
Any help much appreciated!
Your times are slow for the initial GET request for the root document of the site; that metric is called Time to First Byte (TTFB). You have a redirect from the non-HTTPS site to the HTTPS site, and that is part of the slowness.
You can get rid of the redirects depending on how you implement SSL on your site and in WordPress: either by a redirect in .htaccess (not the best), or simply by making sure your WordPress site and home address settings are https and all URLs in the database are https, so that no redirects are needed.
But overall, slow TTFB times are a server lag issue. If you are on a shared host, TTFB can be slow because of all the other users on the server. Your overall speed - 4 seconds - is not bad for a very image-heavy site with a fairly high number of HTTP requests: https://gtmetrix.com/reports/igotchamedia.com/GLQwMRRs
You can talk to the web host about the TTFB issues, but GoDaddy shared hosting is well known to be slow.
If you want to get under a few seconds, don't depend on a caching plugin to do all the work. 1) Get a better server and use a CDN; 2) lower the weight of the images and get the total weight of the site under 1 MB; 3) use a theme that requires fewer scripts and style sheets, since those drive up the number of HTTP requests; and 4) keep external requests, like third-party fonts, to a minimum.
The vast majority of the time, when a WordPress (or any) website is slow, it is because of too much rich media (video and images) hosted or used on the site.
Try going back in and optimizing your website's media files for the web. Save them at a smaller size, and use sensible file formats. Also, CSS3 techniques are powerful these days, and in many cases you don't even need to call so many image files, e.g. for backgrounds, gradients, shadows, menus, buttons, etc. If you have lots and lots of media hosted on your site, consider using a CDN (content delivery network).
Another good practice: go in and make sure your website was coded properly. Call external .js files in the footer, keep the HTML semantic, etc. Test the WordPress plugins you are using. Make sure your WordPress is up to date and has a correct memory limit set. Check your developer console for any JavaScript errors or conflicts bogging the site down. Check your WordPress database for corruption.
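For instance, the WordPress memory limit can be raised in wp-config.php (256M is only an example value; use what your host actually allows):

```php
<?php
// In wp-config.php, above the "That's all, stop editing!" line.
// 256M is an example; your host must permit at least this much PHP memory.
define( 'WP_MEMORY_LIMIT', '256M' );
```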
Lastly, check your host or server environment. Make sure you're using the correct version of PHP, that you have enough disk space, and that everything is running efficiently.
Also, check out these additional site performance optimization tools.
https://www.pingdom.com/
https://developers.google.com/speed/pagespeed/
Hope this helps, g'luck!
I am currently tasked with finding a solution for a serious PHP bottleneck which is apparently caused by server-side minification of CSS and JS when our sites are under high load.
Some details and what I have found out so far
I inherited a web application running on WordPress which uses a complex constellation of Doctrine, Memcached and W3 Total Cache for minification and caching. Under heavy load our application begins to slow down rapidly. So far we have narrowed part of the problem down to the server-side minification process. Preliminary analysis has shown that the number of PHP processes starts to stack up under load, and when it reaches the process limit of 500 processes, everything starts to slow down. This is also mentioned by the author of the minify library.
Solutions I have evaluated so far
Pre-minification
The most logical solution would be to pre-minify the files before going live. Unfortunately our workflow demands that non-developers be able to edit said files on our production servers (i.e. after the web app has gone live). Therefore I think that pre-processing is out of the question, as it limits the editability of the minified files.
Serving unminified files
75% of our users access our web application from mobile devices, especially smartphones. Unminified, our JS and CSS amount to 432 KB, and minification reduces their size by 60-80%. Therefore serving unminified files, while it would solve the performance and editability problems, is out of the question for the sake of our mobile users.
I understand that this is as much a technical problem as it is a workflow problem and I guess we are open to working on both as long as we end up with a better overall performance.
My questions
1) Is there a reasonable compromise which solves the PHP bottleneck problem, allows non-devs to make changes to live CSS/JS and still serves reasonably sized files to clients?
2) If there is no such one-size-fits-all solution, what can I do to better our workflow and/or server-side behaviour?
EDIT: Because there were some questions / comments regarding the server configuration, our servers run Debian and are equipped with 32GB of RAM and 24 core CPUs.
You can run a CSS/JavaScript build tool like Gulp or Grunt via Node.js that minifies all your JS and CSS assets on change.
This service can run in production, but that is not recommended without some architectural setup (having multiple versioned compiled files and auto-checking them via Gulp or another extension).
I emphasize that patching features into production and directly editing it is strongly discouraged, as it can present live issues to your visitors and reduce your credibility.
http://gulpjs.com/
Using Gulp/Grunt would require you to change how you write your css/javascript files.
I would approach this with two solutions: first, remove the WP-Cron operation that runs every time a user hits the application and move it to an actual cron job on the server. Second, use load balancing so that a single server is not taking the whole load. That is your real problem, and even if you fix your perceived code issues you are still faced with the load issue.
I don't believe you need to change your workflow at all or go down the road of major modification to your existing system.
The WP-Cron tasks that run each time a page is loaded cause significant load and slowness. You can shift this work from visiting users triggering the process to the server running it on a schedule. This reduces load, and it is most likely these processes that you believe are slowing down the site.
See this guide:
http://www.inmotionhosting.com/support/website/wordpress/disabling-the-wp-cronphp-in-wordpress
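A minimal sketch of that change (the 15-minute interval and paths are just examples; pick a schedule your plugins can tolerate):

```php
<?php
// In wp-config.php: stop WordPress from spawning wp-cron on page loads.
define( 'DISABLE_WP_CRON', true );

// Then trigger it from a real cron job instead, e.g. in the server's crontab:
// */15 * * * * php /var/www/example.com/wp-cron.php > /dev/null 2>&1
// or, via HTTP:
// */15 * * * * curl -s https://example.com/wp-cron.php?doing_wp_cron > /dev/null
```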
Next, load balancing. Having a single server supplying all your users when you have a lot of traffic is a terrible idea. You need to split the load across web servers.
I'm not sure where or how you are hosted, but I would move things to AWS and set up the WordPress site for load balancing there: http://www.mornin.org/blog/scalable-wordpress-amazon-web-services/
This will involve:
Load Balancer
EC2 Instances running PHP/Apache
RDS for your database storage for all EC2 instances
S3 storage for the site's media
For user sessions, I suggest you just set up stickiness on the load balancer so users are continually served by the same node they arrived on.
You can get a detailed guide on how to do this here:
http://www.mornin.org/blog/scalable-wordpress-amazon-web-services/
Or another approach described on Server Fault:
https://serverfault.com/questions/571658/load-balancing-wordpress-on-amazon-web-services-managing-changes
The assumption here is that if you have high traffic you are making revenue from it, so any time your service responds slowly it will turn users away or discourage them from returning. Changing the software could help, but you would be treating the symptom, not the illness. The illness is that your server comes under heavy load. This isn't uncommon with WordPress and high traffic, so you need to spread the load instead of trying to micro-optimize. The difference is that the optimizations will be small gains, while load balancing and spreading the load actually solves the problem.
Finally, consider using a CDN for serving all of your media. This loads media faster and removes load from your system by reducing the number of requests to your server and its output to clients. It also loads pages consistently faster for people wherever they are visiting from, by supplying media from the nodes closest to them. At AWS this is called CloudFront. WordPress also offers a similar service for free via Jetpack (I believe), but it does not handle all media as far as I understand.
I like the idea of using GulpJS. One thing you might consider is to have a WP-Cron job, or even just a system cron job, that runs every 5 minutes or so and runs a Gulp task to minify and concatenate your CSS and JS files.
Another option that doesn't require scheduling, but instead watches the file system for changes and then triggers a Gulp build, is incron (inotify cron). Check out the incron man page. Incron is great in that it triggers actions based on file-system events such as file changes. You could use it to trigger a Gulp build whenever any CSS file changes on the file system.
One caveat is that this is a Linux solution so if you're hosting on Windows you might have to look for something similar.
Edit:
Incron Documentation
I have a dynamic PHP (Yii framework based) site. Users have to log in to do anything on the site. I am trying to understand how caching and CDNs work, and I am a bit confused.
Caching (memcache):
My site has a good amount of CSS, JS, and images. I've been given to understand that enabling caching ("memcache"?) will GREATLY speed up my site. But this has me confused: how does caching help? I mean, how can you cache something that comes out of the DB separately for each user? For instance, user 1 logs in and sees his control panel; user 2 logs in and sees her own control panel.
How do I determine what to cache? Plus, how do I enable caching (memcaching)?
CDN:
I have been told to use a content delivery network like CloudFlare. It is supposed to automatically cache my site. So, when user 1 logs in, what will it cache? Will it cache only the homepage CSS, JS, and homepage images, since everything else requires login? What happens when a user logs out? I mean, do "sessions" interfere with the workings of a CDN?
Does serving images via a CDN significantly reduce the load on my server? I don't have much cash for a clustered-server configuration, so I just want my (shared) server to be able to devote all its resources to processing PHP code. How much load can I save by using "caching" (something like memcache) and/or a "CDN" (something like CloudFlare)?
Finally,
What would be the general strategy to implement in this scenario for caching, a CDN, and basic performance optimization? Do I need to make any changes to my PHP code to enable a CDN like CloudFlare and to enable/implement/configure caching? What can I do that would take the least amount of developer/coding time and make my site run much, much faster?
Oh wait, some of my pages, like the "about us" page, are going to be static HTML too. But they won't get as many hits, except maybe the iframe page that will be used for my Facebook Page.
I actually work for CloudFlare & thought I would hop in to address some of the concerns.
"do I need to make any changes to my php-code to enable CDN like
CloudFlare and to enable/implement/configure caching? What can I do
that would take least amount of developer/coding time and will make my
site run much much faster?"
No, nothing like a need to re-write urls, etc. We automatically cache static content by file extension. This does require changing your DNS to point to us, however.
Does serving up images via CDN reduce significant load on my server?
Yes, and it should also help most visitors access the site faster and save you a fair amount on bandwidth.
"Oh wait, some of my pages like "about us" page etc. are going to be
static html too."
CloudFlare doesn't cache HTML by default. You can use Page Rules to set up more advanced caching options for things like static HTML.
Caching helps because instead of hitting the database (and its disk I/O) on every request, the data is kept in memory, i.e. in memcached. This provides a SIGNIFICANT increase in performance.
Memcache is generally used for caching data, e.g. query results.
http://pureform.wordpress.com/2008/05/21/using-memcache-with-mysql-and-php/
There are lots of tutorials.
I have only ever used Amazon S3, which is not quite a CDN. It is more of a storage platform, but it still helps take the load off my own servers when serving media.
I would put all of your static resources on a CDN so your own server does not have to serve them. This would not require any modification to your PHP code. This includes JS and CSS.
For your static pages (your about page) I'd make sure PHP isn't processing them, since there is no reason for it. Your web server should serve them directly.
Caching will require changes to your code. For caching, a normal flow is:
1) user makes a request
2) check if data is in cache
3) if it is not in cache do the DB query and put it in cache
4) if it is in cache retrieve it
5) return data.
You can cache anything that requires disk I/O, and you should see a speed-up.
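A minimal sketch of that flow using the PHP Memcached extension (the server address, key name and query are placeholders):

```php
<?php
// Sketch of the cache-aside flow above, using the Memcached extension.
// Host/port, key name and query are placeholders for illustration.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

function get_user_profile($userId, Memcached $cache, PDO $db) {
    $key  = "user_profile_{$userId}";
    $data = $cache->get($key);                 // 2) check whether the data is cached

    if ($data === false) {                     // 3) cache miss: query the DB and cache the result
        $stmt = $db->prepare('SELECT * FROM users WHERE id = ?');
        $stmt->execute([$userId]);
        $data = $stmt->fetch(PDO::FETCH_ASSOC);
        $cache->set($key, $data, 300);         // keep it for 5 minutes
    }

    return $data;                              // 4/5) cached or fresh copy: return it
}

// Remember to invalidate when the data changes, e.g. after an UPDATE:
// $cache->delete("user_profile_{$userId}");
```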
Memcached works by keeping data (usually the results of database queries, possibly from a remote database server) in memory as key/value pairs. Retrieving data from memory in an already-prepared format is much, much faster than running the query against the database each time. This is typically useful when you have data that can be safely stored for certain periods of time, because it is not subject to constant change.
The way this works: if you want to speed up pages where a user is logged in, you load the user's account information once and cache it. On any subsequent requests for that data, it will load in a fraction of the time it would normally take to fetch it from the database. Obviously you will need to update/re-cache that information if the user changes it while logged in, but you will greatly reduce the time it takes to serve pages if you implement a caching system that minimizes the time spent waiting on the database.
I'm personally not familiar with CloudFlare so I can't offer any advice to that effect, but in terms of implementing caching in your application, you should check out:
http://code.google.com/p/memcached/wiki/NewOverview
And read the rest of the Wiki entries there which cover installation/implementation/etc. That should get you started on the right track.
My website is loading slowly and I ran this test: http://www.webpagetest.org/result/120227_MD_3CQZM/1/performance_optimization/
The test indicates that files served from gametrackers.com are not being cached.
Apache and Joomla already cache the content that is on my own server.
I'm using a script from gametrackers.com to show my TeamSpeak 3 statistics on my website.
However, this script sometimes loads slowly due to issues with the gametrackers.com server, and that's why I'd like to store a copy of it on my own web server as a cache and refresh it from the gametrackers website every 30 minutes.
If the gametrackers website is down (which is quite common), it should keep the last successfully cached copy.
How would I do this with Apache 2.4.1 and possibly PHP?
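A minimal sketch of that kind of cache in PHP (the widget URL and cache path are placeholders), assuming the goal is to refresh at most every 30 minutes and to keep serving the last good copy whenever the remote site is down:

```php
<?php
// Sketch: cache remote gametracker output locally, refresh every 30 minutes,
// and fall back to the last good copy if the remote site is down.
// The URL and cache path are placeholders.
$remoteUrl = 'http://www.gametracker.com/path/to/your/widget';
$cacheFile = __DIR__ . '/cache/gametracker_widget.html';
$maxAge    = 30 * 60; // 30 minutes

$stale = !file_exists($cacheFile) || (time() - filemtime($cacheFile)) > $maxAge;

if ($stale) {
    $context = stream_context_create(array('http' => array('timeout' => 5)));
    $fresh   = @file_get_contents($remoteUrl, false, $context);
    if ($fresh !== false) {
        file_put_contents($cacheFile, $fresh);   // update the cache
    }
    // On failure the old file is kept, so the last good copy keeps being served.
}

if (file_exists($cacheFile)) {
    readfile($cacheFile);
} else {
    echo '<!-- gametracker widget temporarily unavailable -->';
}
```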
If it's possible I'd also like to use CSS sprites, because webpagetest.org indicates:
The following images served from gametracker.com should be combined into as few images as possible using CSS sprites.
http://cache.www.gametracker.com/images/components/html0/gt_icon.gif
http://cache.www.gametracker.com/images/components/html0/online.gif
http://cache.www.gametracker.com/images/flags/nl.gif
http://cache.www.gametracker.com/images/game_icons/ts3.png
http://cache.www.gametracker.com/images/server_info/16x16_channel_green.png
http://cache.www.gametracker.com/images/server_info/16x16_player_off.png
http://cache.www.gametracker.com/images/server_info/vs_tree_item.gif
http://cache.www.gametracker.com/images/server_info/vs_tree_last.gif
http://cache.www.gametracker.com/images/server_info/vs_tree_outer.gif
http://www.gametracker.com/images/game_icons/ts3.png
CSS sprites are a technique where you combine several icons and other small images into a single image and use background positioning, so that many images can be loaded with only one request.
If the images aren't hosted on your own site, it will be very difficult to implement that, and to do it you need consistent patterns.
Check: http://coding.smashingmagazine.com/2009/04/27/the-mystery-of-css-sprites-techniques-tools-and-tutorials/
If you have a VPS or dedicated server you can use mod_pagespeed; it automatically applies several of the optimizations that website-testing tools like to see.
But don't just take website optimizers and testing tools like that at face value.
They just suggest measures that could help; some are practical, some aren't.
Good luck.
I've heard of two caching techniques for PHP code:
When a PHP script generates output, it stores that output in local files. When the script is called again, it checks whether the file with the previous output exists, and if so returns the content of that file. It's mostly done by playing with the output buffer. Something like this is described in this article.
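A minimal sketch of that first technique (the cache path and lifetime are arbitrary examples):

```php
<?php
// Sketch of technique 1: cache a script's generated output in a local file.
// Cache path and lifetime are arbitrary examples.
$cacheFile = '/tmp/cache_' . md5($_SERVER['REQUEST_URI']) . '.html';
$lifetime  = 600; // 10 minutes

// Serve the cached copy if it is still fresh.
if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $lifetime) {
    readfile($cacheFile);
    exit;
}

// Otherwise generate the page while capturing the output buffer.
ob_start();

echo '<html><body>';
echo 'Expensive page generated at ' . date('H:i:s');
echo '</body></html>';

// Write the captured output to the cache file, then send it to the client.
file_put_contents($cacheFile, ob_get_contents());
ob_end_flush();
```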
Using a kind of opcode caching extension, where the compiled PHP bytecode is kept in memory. The most popular of these are APC and eAccelerator.
Now the question is whether it makes any sense to use both techniques, or just one of them. I think the first method is a bit complicated and time-consuming to implement, whereas the second one seems simple: you just need to install the module.
I use PHP 5.3 (PHP-FPM) on Ubuntu/Debian.
BTW, are there any other methods to cache PHP code or output, which I didn't mention here? Are they worth considering?
You should always have an opcode cache like APC. Its purpose is to speed up the parsing and compilation of your code, and it will be bundled into PHP in a future version. For now, it's a simple install on any server and doesn't require you to write or change any code.
However, caching opcodes doesn't do anything to speed up the actual execution of your code. Your bottlenecks are usually time spent talking to databases or reading to/from disk. Caching the output of your program avoids unnecessary resource usage and can speed up responses by orders of magnitude.
You can do output caching many different ways at many different places along your stack. The first place you can do it is in your own code, as you suggested, by buffering output, writing it to a file, and reading from that file on subsequent requests.
That still requires executing your PHP code on each request, though. You can also cache output at the web server level to skip that. Crafting a set of mod_rewrite rules will let Apache serve the static files, when they exist, instead of running the PHP code, but you'll have to regenerate the cached versions manually or with a scheduled task, since your PHP code won't be running on each request to do so.
You can also put a proxy in front of your web server and use that to cache output. Varnish is a popular choice these days and can serve hundreds of times more requests per second with caching than Apache running your PHP script on the same server. The cache is created and configured at the proxy level, so when it expires the request passes through to your script, which runs as it normally would to generate the new version of the page.
In my experience, opcode caches, file caches, etc. are mainly useful for reducing database calls.
They don't speed up your code itself, but they improve page load times by serving your visitors from a cache.
For me, APC is good enough on a VPS or dedicated server when I need to cache widgets and objects to spare my MySQL server.
If I have more than two servers, I like to use Memcached; it is good at using memory for caching. However, it's up to you: not everyone likes memcached, and not everyone likes APC.
For caching whole pages, I run a lot of WordPress sites and have used APC, Memcached and file caching through cache plugins like W3 Total Cache. In my own experience, a file cache is good for caching the whole site, while a memory cache is good for caching objects.
A file cache will increase your CPU usage if your hard drive is slow, and a memory cache is terrible if you don't have enough memory on your VPS.
An SSD gives very good read/write speeds, but memory is always faster; in practice, though, a human can hardly tell the difference. Pick one method based on your project and your server (RAM, HDD), or on whether you are on shared web hosting.
If I am on shared hosting, without root permission and without access to php.ini, I like to use phpFastCache; it is a simple file-cache library with just set, get, stats and delete.
In addition, I like to use .htaccess (or HTML headers) to have browsers cache static files like images, JS and CSS. This speeds up your pages for visitors and saves your server bandwidth.
And if you cache whole pages, being able to use .htaccess to redirect to the static .html cache is a great thing.
In the future, APC or some other opcode cache will be bundled into PHP, but I am sure that caches can't speed up your code itself; they are used to:
Reduce database / query calls.
Improve page-load speed by serving from the cache.
Save your API transactions (like Bing) or cURL requests...
etc...
A lot of the time, when it comes to PHP web applications, the database is the bottleneck. As such, one of the best things you can do is use memcached to cache results in memory. You can also use something like XHProf to profile your code and really dial in on what's taking the most time.
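A minimal sketch of profiling a request with XHProf (this assumes the xhprof PECL extension is installed; the output path and run_my_application() are placeholders):

```php
<?php
// Sketch: profile one request with the XHProf extension (PECL) and dump
// the raw data to a file for later inspection with the XHProf UI.
// The output path and run_my_application() are placeholders.
xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);

run_my_application();   // the code you actually want to profile

$profile = xhprof_disable();
file_put_contents('/tmp/' . uniqid() . '.myapp.xhprof', serialize($profile));
```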
Yes, those are two different caching techniques, and you've understood them correctly.
But beware regarding 1): caching script-generated output to files or proxies may cause problems if the content changes rapidly.
Regarding 2): XCache exists too and is easy to install on Ubuntu.
regards,
/t
I don't know if this will really work for you, but I ran into a performance problem with a PHP script of mine. I have a plain-text file that stores data as a title and a URL, tab-separated, with each record separated by a new line. My script grabs the file at each URL and saves it to its own folder.
Then I have another page that actually displays the local files (in this case, pictures), and I use preg_replace() to change each line's output from the remote URL to a relative one so that it can be served by my own server. My tab-separated file is now over 1 MB and it takes a few SECONDS to do the preg_replace(), so I decided to look into output caching. I couldn't find anything definitive, so I figured I would try my own hand at it, and here's what I came up with:
When I request the page to view things locally, I try to read it from a variable in a global scope. If that variable is empty, it may be that this application hasn't run yet and the global needs to be populated; in that case I read from an output file (a plain HTML file that literally contains everything to output), save its contents to the global variable, and then display the output from the global.
Now, when the script runs to update the tab-separated file, it also updates the output file and the global variable. This way, the slow portion of the script only runs when the data is being updated.
I haven't tried this yet, but in theory it should improve my performance a lot. It does still run the script, but the data would never be out of date and I should get a much better load time.
Hope this helps.