I have a WordPress blog that is having serious performance issues (around 10s to load each page). I installed WP Super Cache to try to solve the problem, but the first time a user visits a page after its cache has expired it again takes 10s to load. Once the page is cached, the site speed is normal.
To work around this I configured preload mode to run every 30 minutes, but something is not working, because once the cache expires the first user still has to wait 10s for each page...
I configured the cache to last 1 hour (1800s) and the preload to run every 30 minutes; this way there should always be a cached version of any page the users are requesting... but no :(
I would REALLY appreciate some help with this, as I don't know what else to do.
Thanks in advance!
Juan
Sometimes plugins can be poorly written and suck resources. Disable every plugin and see if the site runs okay. Then start re-enabling plugins until you find the source of the problem; you should then get rid of the offending plugin and find a replacement.
Install Firebug and use the "Net" tab to see what is taking a long time to load. It can be anything: scripts, external scripts, images from external sites, the DB connection, etc.
Identify the issue and it will be easy for you to solve.
If caching fixes the problem, then your likely culprit is poorly written code (lots of error suppression etc.)
An alternative issue is the server the code is hosted on (not as likely, but a possibility). If the server is having issues, or is running out of memory, it may respond slower in delivering content.
Do what the others say.
Then also consider adding multistage caching at different rates: cache DB results at one rate, cache large page fragments at another, and cache the whole page at yet another. That way no single visitor has to regenerate everything in one shot. In theory.
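To make that concrete, here is a minimal sketch of the layered approach using the Memcache extension. The server address, key names, lifetimes, and the helpers run_expensive_query() and render_sidebar() are placeholders, not part of any particular setup:

    <?php
    // Sketch: different cache layers with different lifetimes.
    $cache = new Memcache();
    $cache->connect('127.0.0.1', 11211);

    // Layer 1: raw DB results, cached for 5 minutes.
    $rows = $cache->get('latest_posts');
    if ($rows === false) {
        $rows = run_expensive_query();                    // placeholder DB call
        $cache->set('latest_posts', $rows, 0, 300);
    }

    // Layer 2: a rendered page fragment, cached for 15 minutes.
    $sidebar = $cache->get('sidebar_html');
    if ($sidebar === false) {
        $sidebar = render_sidebar($rows);                 // placeholder renderer
        $cache->set('sidebar_html', $sidebar, 0, 900);
    }

    // Layer 3: the full page is cached by WP Super Cache at its own rate,
    // so when the page cache expires only the missing layers are rebuilt.

The point is that a page-cache miss no longer regenerates everything from scratch, only whichever inner layers have also expired.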
The behaviour you describe is completely normal.
Cache misses will be slow; this is expected. Set the cache with no expiry if you want it to hit the cache 100% of the time (this is far from recommended).
Use an opcode cache if you can, such as APC.
Similar Question: Here
Website: Cleanfiles PPD Network
Raw Server Link (Skip DNS): http://173.247.246.58/
Waterfall view (WebPageTest): [waterfall screenshot omitted]
I recently moved to a new server. All PHP scripts and resources stayed exactly the same. The new server is an Inmotion Elite Dedicated Server.
Average server load:
Server load 1.25 (8 CPUs)
Memory Used 14.14% (1,137,916 of 8,048,804)
Swap Used 0% (0 of 4,095,992)
As a network owner, having a quick and nifty site is a top priority. I can't afford to have 2-4 seconds of random waiting time for my members when navigating through the site's pages. The old server never did this; it loaded fine.
Since the server load appears to be fine and the PHP scripts are the same, I'm inclined to assume it is something in the Apache configuration or similar, but I really cannot tell. I tried running the two scripts listed in the top answer of the question linked above, but both had long wait times...
I talked to the hosting company but they didn't really know what was going on. Any help with this issue or tests that I can do would be greatly appreciated :)
Probably the most effective solution is to use a CDN with native HTML caching capabilities (static and dynamic). TTFB depends on how quickly the origin server can produce the HTML; you can skip that processing time altogether by serving a fresh cached copy from the CDN.
I wrote a post about it recently, which looks into TTFB delay factors and average load time of different resources (based on data gathered across 1B sessions). You may find it useful: http://www.incapsula.com/the-incapsula-blog/item/809-using-cdn-to-improve-seo-and-ttfb
I'm using memcached with PHP 5.2. Last week we load tested our site and a weird issue happened. I have a particular key which is accessed a number of times (say 10-15) in a request. It always results in a hit under normal site load.
When we increased the load, it suddenly started missing (on an 8-CPU machine, under an average load of around 30). It happens every time the load is increased, stops when the load returns to normal, and it happens only for this key.
Has anyone else experienced this issue before? Is there a work around?
Thanks
memcached works 'kinda' like an LRU list, but then kinda not. Check out "Memcached for dummies": http://work.tinou.com/2011/04/memcached-for-dummies.html
What strikes me as alarming is how many times you access memcached per request... for the same item? You might want to reduce this "chatter" by "request caching" these look-ups.
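As a rough sketch (assuming the Memcache PHP extension), a thin wrapper like the one below keeps repeated lookups of the same key within one request in a plain array, so memcached is only asked once per key per request:

    <?php
    // Sketch: per-request caching in front of memcached.
    class RequestCachedMemcache
    {
        private $mc;
        private $local = array();   // lives only for the current request

        public function __construct(Memcache $mc)
        {
            $this->mc = $mc;
        }

        public function get($key)
        {
            if (array_key_exists($key, $this->local)) {
                return $this->local[$key];   // no extra network round-trip
            }
            $value = $this->mc->get($key);
            $this->local[$key] = $value;
            return $value;
        }
    }

With this in place the 10-15 lookups of that key turn into a single memcached call per request, which also makes the occasional miss much less painful.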
This may seem a silly question, but having pondered it for a few days I've only come up with one answer, so I thought I'd throw it open and see what other people think.
I have a PHP script which runs as a CRON job and checks that our CDN is up and running. If it is, it needs to set a flag to 'true', if not to 'false'. The main PHP web pages then need to check this flag every time a user visits the site, and set the CDN URL variable appropriately.
The question is where to store this flag. We could put it in a database, but then the database would get hit on every page load. Also, as we're using Memcache, there could be a delay before an update is seen (although there are ways around that, obviously).
My next thought was to save it into a simple text file on the server. But a) we have load-balanced servers, and although we could run the same CRON job on each server, that seems inefficient, and b) opening and reading a text file could, I thought, slow the page load down, as it involves disk activity every time - that isn't good when the system is delivering 3 million web pages a week!
The ideal would be a PHP system variable that retains its value between pages - but of course, that doesn't exist! It's not the kind of information that should be stored as a cookie on the user's machine either.
Does any one have any thoughts? All comments welcome!
If you are already using Memcache, then why do you need the CronJob at all?
Put the check for the CDN into your regular application code. Cache the result in Memcache with a lifetime equal to your current cron interval. If the cached value has expired, do the full check against the CDN; otherwise return the cached result.
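A minimal sketch of that idea, assuming the Memcache extension; the ping URL on the CDN, the key name, the timeout, and the 30-minute lifetime are just examples:

    <?php
    // Sketch: check the CDN on demand and cache the verdict in memcache.
    function cdn_is_up(Memcache $mc)
    {
        $status = $mc->get('cdn_status');
        if ($status === false) {                        // missing or expired
            $ch = curl_init('http://cdn.example.com/ping.txt');
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 3);
            curl_exec($ch);
            $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            curl_close($ch);

            $status = ($code == 200) ? 'up' : 'down';
            $mc->set('cdn_status', $status, 0, 1800);   // re-check every 30 min
        }
        return $status === 'up';
    }

    // Usage, assuming $mc is an already-connected Memcache instance:
    $assetBase = cdn_is_up($mc) ? 'http://cdn.example.com' : 'http://www.example.com';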
If you want to keep the CRON job and are concerned about disk I/O, consider using a RAM disk, semaphores, or shared memory to store the results. But it seems kinda pointless when you are already using Memcache.
Do you already use a database connection?
If so, use that; you can fit it into your existing replication mechanism pretty easily, I'm sure.
If not, then whatever you do here is going to add overhead, be it reading a text file on disk or reading from a database where you weren't before.
We are starting to have a big problem with our site. Some of our users are using auto-refreshers and macro programs to take advantage of certain parts of our site, and it's beginning to have a serious effect. Our site lags most of the day because of this, and we need to find out which of our users are performing these tasks so that we can punish them directly. We are using PHP for this project.
I could use any help with this problem. The site lags so badly at times that it's difficult to keep it running.
Parse your web server daemon's access log and calculate the interval between requests for each visitor IP. If they are very regular (i.e. every five seconds +/- 0.25 seconds), flag them.
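A rough offline sketch of that idea is below; the log path, combined-log format, and thresholds are assumptions you would adjust to your own setup:

    <?php
    // Sketch: flag IPs whose request intervals are suspiciously regular.
    $fh = fopen('/var/log/apache2/access.log', 'r');
    $times = array();

    while (($line = fgets($fh)) !== false) {
        // e.g. 1.2.3.4 - - [10/Oct/2012:13:55:36 +0000] "GET / HTTP/1.1" ...
        if (preg_match('/^(\S+) \S+ \S+ \[([^\]]+)\]/', $line, $m)) {
            $dt = DateTime::createFromFormat('d/M/Y:H:i:s O', $m[2]);
            if ($dt) {
                $times[$m[1]][] = $dt->getTimestamp();
            }
        }
    }
    fclose($fh);

    foreach ($times as $ip => $stamps) {
        if (count($stamps) < 20) {
            continue;                               // too few requests to judge
        }
        $intervals = array();
        for ($i = 1; $i < count($stamps); $i++) {
            $intervals[] = $stamps[$i] - $stamps[$i - 1];
        }
        $avg = array_sum($intervals) / count($intervals);
        $variance = 0;
        foreach ($intervals as $d) {
            $variance += ($d - $avg) * ($d - $avg);
        }
        $variance /= count($intervals);
        if ($variance < 1) {                        // near-constant interval
            echo "Suspicious: $ip, roughly every {$avg}s\n";
        }
    }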
It is very difficult to stop this type of behaviour if your users are determined, but you can curb the problem. First, look for automated behaviour in your logs: refreshes, actions, or requests at fixed intervals. These patterns are often very obvious, because a human could not behave in such a manner, whether due to the speed or the length of the activity period.
Use caching or a caching reverse proxy like Squid or Varnish.
Caching the costly parts of page generation will make the site run faster. Do you really need to display real-time information?
Add cache headers (e.g. "Cache-Control: public, max-age=60") and put a caching reverse proxy like Squid or Varnish in front of the site. This will make the site run faster most of the time without you having to add caching to your code or optimize it.
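In PHP that header can be sent at the top of the script, before any output; 60 seconds is just an example lifetime:

    <?php
    // Sketch: let browsers and a proxy such as Varnish or Squid reuse the
    // response for up to 60 seconds.
    header('Cache-Control: public, max-age=60');
    header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 60) . ' GMT');

    // ... generate and output the page as usual ...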
I'd say the real problem is not your users but your code. The two methods above will help you deal with the situation in the short run; for a long-term solution you should refactor and optimize your code. Reloaders are invisible to properly designed sites, and they're rare enough that you can't be dealing with many of them.
Here's a simple way to test whether your site is up to par against "reloaders". Open your site in a browser like Firefox, press and hold F5 for a minute (or less), then release it. If the site shows up immediately, you're fine; if you need to reboot your server to make it responsive again, you're vulnerable to a reload DoS. If you can't handle multiple concurrent requests, your site can be taken down by anyone, not just hardcore users with reload applications.
As all my requests go through an index script, I tried to measure the response time of every request.
It's simply the difference between the start time (start of the script) and the end time (end of the script).
I cache my data in memcached, and users are all served from memcached.
I mostly get a response time of under a second, but at times there are weird spikes of more than a second; the worst case can go up to 200+ seconds.
I was wondering: if mobile users have a slow connection, does that show up in my response time?
I am serving primarily mobile users.
Thanks!
No, it's the runtime of your script. It does not include the latency to the user; that's something the underlying web server worries about. Something in your script just takes very long. I recommend you profile your script to find out what that is; Xdebug is a good way to do so.
If you're measuring in PHP (which it sounds like you are), that's the time it takes for the page to be generated on the server side, not the time it takes to be downloaded.
Drop timers in throughout the page and try to narrow down which section is causing the huge delay of 200+ seconds.
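Something along these lines, where the section names and the logging target are only placeholders:

    <?php
    // Sketch: coarse per-section timers around the main stages of the script.
    $t = array('start' => microtime(true));

    // ... bootstrap / load config ...
    $t['bootstrap'] = microtime(true);

    // ... fetch data from memcached / the database ...
    $t['data'] = microtime(true);

    // ... render and output the page ...
    $t['render'] = microtime(true);

    // Log the durations so the 200+ second outliers show which stage stalls.
    error_log(sprintf('bootstrap=%.3fs data=%.3fs render=%.3fs',
        $t['bootstrap'] - $t['start'],
        $t['data'] - $t['bootstrap'],
        $t['render'] - $t['data']));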
You could even add a small script that will email you details of how long each section took to load if it doesn't happen often enough to see it yourself.
It could be that the script cannot finish because a client downloads the result very, very slowly. If you don't use a front-end server like nginx, the first thing to do is to try one.
Someone already mentioned Xdebug, but normally you would not want to run Xdebug in production. I would suggest using XHProf to profile pages on development, staging, and production. You can turn XHProf on conditionally, which makes it really easy to run in production.
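For example, a minimal sketch of enabling XHProf conditionally; it assumes the xhprof extension is installed, and the sampling rate, GET parameter, and output path are just placeholders:

    <?php
    // Sketch: profile only a small, controllable fraction of production requests.
    $profiling = extension_loaded('xhprof')
        && (isset($_GET['profile']) || mt_rand(1, 1000) === 1);

    if ($profiling) {
        xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);
    }

    // ... run the request as usual ...

    if ($profiling) {
        $data = xhprof_disable();
        // Persist the run somewhere you can inspect later; the path is an example.
        file_put_contents('/tmp/xhprof.' . uniqid() . '.dat', serialize($data));
    }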