I have memcached keys as large as 2 MB, and I can't make them smaller because the data is already minified. They get hot, with many web servers hitting the same keys again and again. Would APC or shared memory be a better solution?
Especially over slow network connections, memcached can be significantly slower than local caches, which avoid the network latency and the protocol overhead.
APC and shared memory often have quite small size limits, so make sure you configure them appropriately.
Another alternative for bigger items is local files. If they are accessed frequently, the operating system will keep them in its page cache and there won't be any disk access.
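A minimal file-based cache along those lines might look like this (a PHP 7+ sketch; the directory layout, hashing, and TTL handling are illustrative choices, not a prescribed API). Hot files are served from the OS page cache, so repeated reads usually never touch the disk:

```php
<?php
// Minimal file cache sketch: the OS page cache keeps hot files in RAM,
// so repeated reads of a popular item usually never hit the disk.
function cache_set(string $dir, string $key, string $data): void
{
    $path = $dir . '/' . md5($key) . '.cache';
    // Write atomically: temp file + rename avoids readers seeing partial data.
    $tmp = $path . '.' . uniqid('', true);
    file_put_contents($tmp, $data);
    rename($tmp, $path);
}

function cache_get(string $dir, string $key, int $ttl): ?string
{
    $path = $dir . '/' . md5($key) . '.cache';
    if (!is_file($path) || filemtime($path) < time() - $ttl) {
        return null; // missing or expired
    }
    return file_get_contents($path);
}
```

For multi-megabyte minified payloads this sidesteps both memcached's item-size limit and the network round trip on every read.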
In a PHP shared-hosting environment, what would be an optimal memory consumption to load a page? My current PHP script consumes 3,183,440 bytes of memory. What should I consider good memory usage to serve, say, 10,000 users in parallel?
Please be detailed, as I am a novice at optimization.
Thanks in advance
3 MB isn't that bad. Keep in mind that parts of PHP are shared, and depending on which server is used (IIS, nginx, Apache, etc.) you can also configure pools and clusters when you have to scale up.
But the old adage "testing is knowledge" applies well here: run load tests against the site with 10 → 100 → 1000 concurrent connections and look at the performance metrics. That will give you more insight into how much memory is required.
For comparison, the site I normally work on averages 300+ concurrent users and its memory usage is just under 600 MB; however, when I run certain processes locally, it will easily use up 16 MB.
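To see where your own page actually sits, PHP can report its memory footprint directly (the 3,183,440-byte figure above presumably came from a call like this); a sketch to drop at the end of a page:

```php
<?php
// Report the script's own memory footprint; call at the end of the page.
$current = memory_get_usage(true);      // bytes currently allocated from the OS
$peak    = memory_get_peak_usage(true); // high-water mark for this request

printf("current: %.2f MB, peak: %.2f MB\n",
       $current / 1048576, $peak / 1048576);
```

The peak figure is the one that matters for capacity planning, since every concurrent request needs its peak available at once.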
I'm using my own PHP script with MySQL, and when there are many users on the site I can see that CPU load is somewhat high while RAM usage is low. For example, CPU usage is about 45% and only 3 GB of 64 GB RAM is used.
How can I make it use more RAM and less CPU? I'm using MyISAM as the MySQL engine and PHP 7.0. I don't need a step-by-step answer, but I would appreciate any directions, because I don't know how to get started.
I have a dedicated server using cPanel, WHM, Apache and I have full control over what is on the server.
One good way to use RAM to relieve CPU load is caching.
That is, if your app needs results that are computationally expensive to produce, pre-compute them and store them in RAM; the next time your app needs those results, they can be fetched from the cache, usually far more cheaply than recomputing them.
Popular tools for this are Memcached and Redis.
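The usual shape of this is the "cache-aside" pattern: check the cache first, and only run the expensive work on a miss. A sketch (a plain array stands in for the cache here so the example is self-contained; in production you would swap in Memcached or Redis get/set calls):

```php
<?php
// Cache-aside sketch: look in the cache first, recompute only on a miss.
// $cache is a plain array for illustration; replace the lookups with
// Memcached/Redis get/set calls in a real deployment.
function cache_aside(array &$cache, string $key, callable $compute)
{
    if (array_key_exists($key, $cache)) {
        return $cache[$key];    // cheap RAM hit, no CPU spent
    }
    $value = $compute();        // the expensive part runs only once
    $cache[$key] = $value;
    return $value;
}
```

The effect is exactly the trade the question asks for: RAM holds the precomputed results, and the CPU-heavy computation runs once per expiry instead of once per request.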
Case
Currently I am developing an application using Laravel 4. I installed a profiler to see the stats about my app. This is the screenshot:
Questions
You can see that it consumes 12.25 MB of memory for each request (a very simple page) in my Vagrant box (Ubuntu 64-bit + Nginx + PHP 5.3.10 + MySQL). Do you think this is too much? It means that with 100 concurrent connections, memory consumption would be about 1 GB. I think this is too much, but what do you think?
It loads 237 files for each request. Do you think this is too much?
When I deploy this app to my server (CentOS 6.4 with Apache + PHP 5.5.3 with Zend OPcache + MySQL), the memory consumption decreases dramatically. This is the screenshot from the server:
What do you think about this difference between my Mac and the server?
No, you don't really need to worry about this.
12MB is not really a large amount for a PHP program. And 100 concurrent connections is a lot.
To put it into context: assume your PHP page takes half a second to run. To sustain a consistent 100 concurrent connections you would need 200 requests per second, i.e. 12,000 page loads per minute. That's a lot more traffic than any of my sites get, I can tell you that.
Of course, if your page takes longer than half a second to load, this number will come down quickly, and your 100 concurrent connections can become a possibility much more easily.
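The arithmetic above is just Little's law (concurrency = arrival rate × response time), which you can sketch with the answer's hypothetical numbers:

```php
<?php
// Little's law: concurrency = arrival_rate * response_time.
// Numbers below are the hypothetical figures from the answer above.
$concurrency  = 100;   // simultaneous connections you want to sustain
$responseTime = 0.5;   // seconds each request takes

$requestsPerSecond = $concurrency / $responseTime;
$requestsPerMinute = $requestsPerSecond * 60;

printf("%.0f req/s, %.0f req/min\n", $requestsPerSecond, $requestsPerMinute);
```

Doubling the response time halves the traffic needed to reach the same concurrency, which is the point the paragraph above makes.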
This is one reason why it's a really good idea to focus on performance‡ -- the quicker your program can finish running, the quicker it can free up its memory for the next visitor. In fact unless you have a really major memory usage problem (which you don't), performance is probably more important in this context than the amount of memory being used.
In any case, if you do have 100 concurrent connections, you're likely to hit limits in your server software before you hit them in PHP. Apache has a default limit on the maximum number of simultaneous connections, and at that level of traffic you can run into it. (You can raise it, of course, but if you really are getting that kind of traffic, you'll likely want more servers anyway.)
As for the 12 MB of memory usage, you're unlikely ever to get much less than that for a PHP program. PHP needs a chunk of memory just to run in the first place, and the framework needs a chunk too, so most of your 12 MB is due to that. This means that although your small program may use 12 MB, it does not follow that a larger program would use twice as much. So you probably don't need to worry too much about it.
If you do have high traffic, and performance issues as a result, there are various ways to mitigate the problem. The main one is caching. PHP 5.5 comes with an OpCache module built in, which caches the compiled bytecode of your scripts so PHP doesn't have to parse and compile all the files on every request. For some systems, this can have a dramatic impact on performance.
There are also other layers of caching you can use, such as a server-level page cache like Varnish, which will cache your static pages so that PHP doesn't even need to be called if the page content hasn't changed.
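Even without Varnish, the whole-page idea can be sketched in plain PHP (paths, TTL, and the render callback are illustrative; this is a toy version of what a page cache does, not a Varnish replacement):

```php
<?php
// Whole-page cache sketch: return the stored HTML if it is still fresh,
// otherwise render the page, store it, and return it.
function cached_page(string $cacheFile, int $ttl, callable $render): string
{
    if (is_file($cacheFile) && filemtime($cacheFile) > time() - $ttl) {
        return file_get_contents($cacheFile); // hit: PHP does almost no work
    }
    $html = $render();                        // miss: do the expensive render
    file_put_contents($cacheFile, $html);     // store for the next visitor
    return $html;
}
```

On a hit, PHP skips the framework bootstrap and database work entirely, which is the same win Varnish gives you one layer earlier.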
(‡ of course there are other reasons for focussing on performance too, like keeping your visitors happy)
I would like to compare the performance of Memcached on a remote server (on the same LAN) with disk caching. Besides Memcached being a scalable cache solution, would there be any performance advantage to using Memcached compared to disk caching?
Regards,
Mugil.
From my personal experience, memcached isn't necessarily faster than a disk cache; I believe this is because of the OS's disk I/O caching. But memcached gives you a "scalable" cache: if you have more than one server accessing the same cache data, it scales (especially since memcached has very low CPU overhead compared to PHP). The only way to let more than one machine access a disk cache at once is a network mount, which is surely going to kill access speed. One other thing you have to worry about with file caching is garbage collection, to prevent the disk from filling up.
As your site(s) scale, you might want to change your mind later, so whatever choice you make, use a cache wrapper, so you can easily change your method. Zend provides a good API.
Reads will be about the same speed (since the OS caches frequently accessed files). The difference is in writes. With memcached, all it needs to do is write to RAM. With file storage it gets a bit trickier: if you have write caching enabled, it will be about as fast, but most servers have it turned off (unless they have a battery-backed cache) for more reliable writes in case of power failure. Without a write cache, the write has to complete on disk (which can take 5+ ms on server-grade drives, possibly more on desktop-grade hardware) before returning. So writing files may be significantly slower than memcached.
But there's another caveat. With files, when you want to write, you need to lock the file yourself. This means that if two processes try to write to the same file, the writes have to complete serially. With memcached, the two writes are pushed into a queue and happen in the order received, but the writing process (PHP) doesn't need to wait for the actual commit...
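The serialized-writes point looks like this in practice: with flock(), a second writer simply blocks until the first one finishes. A sketch using an advisory exclusive lock (function name and error handling are illustrative):

```php
<?php
// File-cache write with an advisory exclusive lock: concurrent writers
// queue up at flock(), which is exactly the serialization described above.
function locked_write(string $path, string $data): bool
{
    $fh = fopen($path, 'c');    // create if missing, don't truncate yet
    if ($fh === false) {
        return false;
    }
    if (!flock($fh, LOCK_EX)) { // blocks while another process holds the lock
        fclose($fh);
        return false;
    }
    ftruncate($fh, 0);          // safe to truncate only once we hold the lock
    fwrite($fh, $data);
    fflush($fh);                // push PHP's buffer to the OS
    flock($fh, LOCK_UN);
    fclose($fh);
    return true;
}
```

Note the 'c' open mode plus ftruncate: truncating before acquiring the lock would let a reader see an empty file mid-write.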
On PHP.net I am looking at the Memcache::set function, and it has an optional flag to use compression...
Use MEMCACHE_COMPRESSED to store the item compressed (uses zlib).

```php
$memcache_obj->set('var_key',
                   'some really big variable',
                   MEMCACHE_COMPRESSED,
                   50);
```
I am curious: what would be the benefit of this, beyond using less space? It seems like it would slow the process down.
Compressing and decompressing on the servers that run PHP can be quick -- depending on your network and the load on your servers, it can be quicker than transferring more (uncompressed) data to and from the memcached servers over the network.
Compressing your data also means using less memory on the memcached servers.
As often, you have to choose between CPU, network, and RAM -- and you have to take into consideration your servers and network, their load, and available resources, to be able to decide.
Which will be the quickest solution in your particular situation? Only you can tell... and the only way to know is probably to test both approaches under real conditions.
Also, memcached entries cannot be bigger than 1 MB by default; compression can help fit entries that are somewhat bigger than 1 MB uncompressed into memcached.
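The space/CPU trade-off is easy to measure with the same zlib functions the flag uses. A sketch on a deliberately repetitive payload (real savings depend entirely on how compressible your data is):

```php
<?php
// Measure roughly what MEMCACHE_COMPRESSED does: zlib-compress the value
// and compare sizes. Highly repetitive data compresses extremely well;
// already-minified or binary data may barely shrink at all.
$value      = str_repeat('some really big variable ', 50000); // ~1.25 MB
$compressed = gzcompress($value);

printf("original: %d bytes, compressed: %d bytes (%.1f%%)\n",
       strlen($value),
       strlen($compressed),
       100 * strlen($compressed) / strlen($value));
```

Here the uncompressed value is over memcached's 1 MB default item limit, while the compressed form fits with room to spare, which is the point made above.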
memcached can be used as a distributed cache shared between machines. In that case, updates to the cache need to be sent over the network or another connection between the machines. Compression reduces the amount of data that needs to be sent, which reduces transfer time and can therefore improve system performance.
Even within one host, compressing is orders of magnitude faster than swapping memory pages to and from disk, so if you are swapping due to memory shortage, compression reduces the amount that needs to be swapped and thereby improves performance. Under the right conditions, compression might shrink the data enough that you don't have to swap at all -- a big performance win.
You're right, it's a trade-off between taking up less space and eating up more CPU cycles.
You have to choose which resource is more precious to you: RAM or CPU.