Cache miss under heavy load in memcached - php

I'm using memcached with PHP 5.2. Last week we load tested our site, and a weird issue appeared. I have a particular key which is accessed a number of times (say 10-15) per request. Under normal site load it always results in a hit.
When we increased the load, it suddenly started missing (on an 8-CPU machine, under an average load of around 30). This happens every time the load is increased, stops when the load returns to normal, and it happens only for this key.
Has anyone else experienced this issue before? Is there a workaround?
Thanks

memcached works 'kinda' like an LRU list, but then kinda not. Check out memcached for dummies: http://work.tinou.com/2011/04/memcached-for-dummies.html
What strikes me as alarming is how many times you access memcached per request, for the same item. You might want to reduce this "chatter" by "request caching" these look-ups, as in the sketch below.
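A minimal sketch of such a request cache, assuming the classic Memcache extension common on PHP 5.2 setups; the class name, server address, and key are illustrative:

    <?php
    // Per-request cache on top of memcached: each key costs at most one
    // network round trip per request, no matter how often it is read.
    class RequestCache
    {
        private static $local = array();  // lives only for this request
        private static $memcache = null;

        private static function conn()
        {
            if (self::$memcache === null) {
                self::$memcache = new Memcache();
                self::$memcache->connect('127.0.0.1', 11211); // illustrative server
            }
            return self::$memcache;
        }

        public static function get($key)
        {
            if (!array_key_exists($key, self::$local)) {
                self::$local[$key] = self::conn()->get($key);
            }
            return self::$local[$key];
        }
    }

    // 10-15 calls like this now cost a single memcached round trip:
    $value = RequestCache::get('hot_key');

Note that array_key_exists() (rather than isset()) also caches a miss, so a key that memcached returns false for is still only fetched once per request.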

Related

Zurmo reports (Export to CSV) painfully slow

So I have an XAMPP setup with Zurmo 2.6.5 running on it. Everything works like a charm: the speed at which it pulls up contacts, pages through results, etc. is considerably fast. I have 2 GB RAM, and this is the only web app that runs on it, so you could call it dedicated, I guess. The problem arises when I attempt to export a fairly decent amount of data to Excel (CSV is the only option available). For example, I tried exporting 200-odd rows of data, and it timed out due to the max_execution_time parameter. I increased it first from around 300 to 600, and now finally to 1200. The script keeps running as though there were no end to it :-/.
Surprisingly, when I first apply the filter (not many, just one), it takes around 10-15 seconds to display the first 10 records, which indicates the query executes well within time limits. I have memcached installed, as they suggest, to alleviate performance issues.
I checked Zurmo's forums and the net in general, but unfortunately I did not get even a single hit with reference to this issue. Can any fellow Zurmo developer / power user help me get this resolved?
Much appreciated. Thanks.

What can be causing an "exceeded process limit" error?

I launched a website about a week ago, and I sent out an email blast to a mailing list telling everyone the website was live. Right after that, the website went down, and the general error log was flooded with "exceeded process limit" errors. Since then I've tried to clean up a lot of the code and minimize database connections, but I still see that error about once a day in the error log. What could be causing it? I called the web host, and they said it had something to do with my code, but they couldn't point me in any direction as to what was wrong or which page was causing the error. Can anyone give me any more information? For instance, what is a process, and how many processes should I have?
Wow. Big question.
Obviously, you're maxing out your Apache child worker processes. To get a rough idea of how many you can create, use top to find the rough memory footprint of one httpd process. If you are using WordPress or another CMS, it could easily be 50-100 MB each (if you're using the PHP module for Apache). Then, assuming the machine is only used for web serving, take your total memory, subtract a chunk for OS use, and divide the rest by 100 MB (in this example). That's the maximum number of worker processes you can have. Set it in your httpd.conf. Once you do this and restart Apache, monitor top and make sure you don't start swapping memory; if you do, you've set the number of workers too high.
If anything else is running, such as a MySQL server, make space for it before you compute the number of workers you can have. If that number is small, then, to roughly quote a great man, 'you're gonna need a bigger boat.' Just kidding. If you see really high memory usage for an httpd process, say over 100 MB, you can lower the max-requests-per-child setting to shorten the life of each process, which helps clean up bloated workers. A worked example of the arithmetic follows.
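To make that concrete, here is a sketch for Apache's prefork MPM in httpd.conf; every number below is illustrative for a hypothetical 8 GB box, not a recommendation:

    # Illustrative arithmetic: 8 GB RAM, ~1 GB reserved for the OS and MySQL,
    # ~100 MB per Apache child running mod_php:
    #   (8192 MB - 1024 MB) / 100 MB ≈ 70 workers
    <IfModule prefork.c>
        StartServers         10
        MinSpareServers      10
        MaxSpareServers      20
        MaxClients           70    # the computed ceiling
        MaxRequestsPerChild 500    # recycle children before they bloat
    </IfModule>

MaxRequestsPerChild is the "shorten the life of a process" knob mentioned above: a child is killed and replaced after serving that many requests, which caps how bloated any one worker can get.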
Another area to look at is the response time for a request: how long does each request take? For a quick check, use the Firebug plugin for Firefox and look at the 'Net' tab to see how long your initial request takes to respond (not images and such). If requests take more than 1 or 2 seconds to respond, that's a big problem, because you get a sort of logjam. The cause could be PHP code or MySQL queries that take too long to respond. To address this, if you're using WordPress, make sure to use a good caching plugin to lower the stress on MySQL.
Honestly, though, unless you're simply not utilizing memory by having too few workers, optimizing Apache isn't something easily addressed in a short post without details on your server (memory, CPU count, etc.) and your httpd.conf settings.
Note: if you don't have server access, you'll have a hard time figuring out memory usage.
The process limit is typically enforced by shared hosting providers and generally refers to the number of processes executing under your account. This will typically equate to the number of simultaneous connections to your server (assuming one PHP process per connection).
Many factors come into play. You should find out from your hosting provider what that limit is, and then either stay under it or find a new provider that can handle your load.

Selenium: Persistent Browser Session: timeout after some time

In order to iterate quickly while developing, I'm keeping the browser window open on a particular page (which is reached after logging in and doing some other things that take a minute or two). I simply assign the old session ID in the test case setup, and it works fine with persistent settings.
However, there is one small problem. After (I think) around 30 minutes of inactivity, the browser session times out, and running the test case again takes 2 minutes. It's not a really big deal, but it's certainly annoying while developing. Is there a way to increase the browser's default session timeout?
I'm using PHP, but it really doesn't matter; if I can find the solution for one language, it's easy to figure out for the others. A rough sketch of the session-reuse setup is below.
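There is no standard client API for reattaching to a Selenium RC session, so this is only a sketch of the trick described above; $client is your Selenium RC client object, and setSessionId()/getSessionId() are hypothetical stand-ins for whatever your client exposes (or can be patched to expose):

    <?php
    // Hypothetical sketch: reuse a still-open browser session instead of
    // logging in again on every test run.
    define('SESSION_FILE', '/tmp/selenium_session_id');

    function attachOrStart($client)
    {
        $saved = @file_get_contents(SESSION_FILE);
        if ($saved !== false) {
            // Fast path: reattach to the browser that is already logged in.
            $client->setSessionId(trim($saved));  // hypothetical setter
            return;
        }
        // Slow path: fresh browser, then the ~2 minute login/navigation.
        $client->start();
        // ... log in and reach the working page here ...
        file_put_contents(SESSION_FILE, $client->getSessionId());
    }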

PHP execution time: Factors to consider in determining the speed of execution

As all my requests go through an index script, I tried to time the response time of all my requests.
It's simply the difference between the start time (start of the script) and the end time (end of the script), as in the sketch below.
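A minimal sketch of that start/end timing, assuming everything flows through index.php; the one-second threshold and the log format are illustrative:

    <?php
    // Start/end timing through a single index script.
    $start = microtime(true);            // very top of index.php

    // ... route the request, hit memcached, render the response ...

    $elapsed = microtime(true) - $start; // very end of the script
    if ($elapsed > 1.0) {                // only log the weird spikes
        error_log(sprintf('slow request: %.3fs %s',
                          $elapsed, $_SERVER['REQUEST_URI']));
    }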
I cache my data in memcached, and users are all served from it.
I mostly get response times of less than a second, but at times there are weird spikes of more than a second; the worst case can go up to 200+ seconds.
I was wondering: if mobile users have a slow connection, does that reflect in my response time?
I am serving primarily mobile users.
Thanks!
No, it's the runtime of your script. It does not include the latency to the user; that's something the underlying web server worries about. Something in your script just takes very long. I recommend you profile the script to find out what that is. Xdebug is a good way to do so.
If you're measuring in PHP (which it sounds like you are), that's the time it takes for the page to be generated on the server side, not the time it takes to be downloaded.
Drop timers in throughout the page and try to narrow it down to the section that is causing the huge delay of 200+ seconds.
You could even add a small script that emails you details of how long each section took, if the problem doesn't happen often enough for you to catch it yourself; see the sketch below.
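A rough sketch of that checkpoint idea; the section names, the five-second threshold, and the address are illustrative:

    <?php
    // Checkpoint timers: one timestamp per section, mailed out as a
    // breakdown when the total crosses a threshold.
    $marks = array('start' => microtime(true));

    // ... load config ...
    $marks['config'] = microtime(true);
    // ... query memcached / the database ...
    $marks['data'] = microtime(true);
    // ... render the template ...
    $marks['render'] = microtime(true);

    if (end($marks) - $marks['start'] > 5.0) {
        $report = '';
        $prev = $marks['start'];
        foreach ($marks as $name => $t) {
            $report .= sprintf("%s: +%.3fs\n", $name, $t - $prev);
            $prev = $t;
        }
        mail('you@example.com', 'Slow page breakdown', $report);
    }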
It could be that the script cannot finish because a client is downloading the results very, very slowly. If you don't use a front-end server like nginx, the first thing to do is to try one.
Someone already mentioned Xdebug, but normally you would not want to run Xdebug in production. I would suggest using XHProf to profile pages on development/staging/production. You can turn XHProf on conditionally, which makes it really easy to run in production.
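A sketch of that conditional profiling, assuming the xhprof extension and the xhprof_lib utilities that ship with it are installed; the GET-parameter trigger, the include paths, and the run namespace are illustrative:

    <?php
    // Profile only when explicitly asked to, so production stays fast.
    $profiling = extension_loaded('xhprof') && isset($_GET['profile']);

    if ($profiling) {
        xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);
    }

    // ... normal page generation happens here ...

    if ($profiling) {
        $data = xhprof_disable();
        include_once '/path/to/xhprof_lib/utils/xhprof_lib.php';
        include_once '/path/to/xhprof_lib/utils/xhprof_runs.php';
        $runs  = new XHProfRuns_Default();
        $runId = $runs->save_run($data, 'my_app'); // view via xhprof_html
        error_log('xhprof run: ' . $runId);
    }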

Wordpress site extremely slow

I have a WordPress blog that is having serious performance issues (around 10 seconds to load each page). I installed WP Super Cache to try to solve the problem, but the first time a user visits a page after the cache has expired, it again takes 10 seconds to load. Once the page is cached, the site speed is normal.
To try to fix this, I configured the preload mode to run every 30 minutes, but something is not working, because once the cache expires the first user still has to wait 10 seconds for each page...
I configured the cache to last 1 hour (1800s) and the preload to run every 30 minutes; this way there should always be a cached version of any page the users request... but no :(
I would REALLY appreciate help with this, as I don't know what else to do.
Thanks in advance!
Juan
Sometimes plugins are poorly written and suck up resources. Disable every plugin and see if the site runs okay. Then start re-enabling plugins until you find the source of the problem; get rid of the offending plugin and find a replacement.
Install Firebug and use the "Net" tab to see what is taking long to load. It can be anything: scripts, external scripts, images from external sites, the DB connection, etc.
Identify the issue, and it will be easy for you to solve.
If caching fixes the problem, then your likely culprit is poorly written code (lots of error suppression, etc.).
An alternative possibility is the server the code is hosted on (not as likely, but possible). If the server is having issues or is running out of memory, it may be slower to deliver content.
Do what the others say.
Then also consider adding multistage caching at different rates: cache the DB at one rate, cache large page fragments at another, and cache the whole page at yet another. That way no single visitor loads it all in one shot. In theory. A sketch is below.
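A sketch of that layering using WordPress transients (get_transient/set_transient are standard WordPress APIs); the key names, the TTLs, and the build_sidebar() helper are illustrative:

    <?php
    // Layered caching: the query result and the rendered fragment expire
    // at different rates, while WP Super Cache handles the whole page at
    // its own rate, so the layers don't all expire at once.
    function get_expensive_rows() {
        $rows = get_transient('my_expensive_rows');       // DB layer: 5 min
        if ($rows === false) {
            global $wpdb;
            $rows = $wpdb->get_results("SELECT ...");     // the slow query
            set_transient('my_expensive_rows', $rows, 300);
        }
        return $rows;
    }

    function get_sidebar_fragment() {
        $html = get_transient('my_sidebar_html');         // fragment: 15 min
        if ($html === false) {
            $html = build_sidebar(get_expensive_rows());  // hypothetical builder
            set_transient('my_sidebar_html', $html, 900);
        }
        return $html;
    }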
The behaviour described is completely normal.
Cache misses will be slow; this is expected. Set a cache without an expiry if you want it to hit the cache 100% of the time (this is far from recommended).
Use an opcode cache if you can, such as APC.
