Put Magento's var directory in RAM

I need to speed up my Magento installation, so I'm planning to put the contents of var/ (or only var/cache and var/session) on a tmpfs.
I'm also buying a reserved instance on Amazon, so I would like to keep a sufficient amount of RAM free. I also want to enable memcached, PHP APC, MySQL caching and HTTP caching.
I'm thinking of a Medium Reserved Instance with the following specs:
3.75 GB memory
2 EC2 Compute Units (1 virtual core with 2 EC2 Compute Units)
410 GB instance storage
32-bit or 64-bit platform
I/O Performance: Moderate
EBS-Optimized Available: No
API name: m1.medium
Will the RAM be enough to apply a good caching system?
Looking at it now (after 3 months), the var directory is 14 GB, but I think cleaning it out every 5-7 days would be good too.
Do you have any suggestions for me?
P.S. the store will contain an average of 100-150 products.
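Since var/ can balloon (14 GB in three months here), the periodic cleanup mentioned above can be a simple cron job. A sketch, where MAGENTO_VAR and the 5-day retention are assumptions to adjust for your install:

```shell
# Sketch: periodic cleanup of Magento's var/ directory (e.g. run weekly from cron).
# MAGENTO_VAR is an assumed path - point it at your actual install.
MAGENTO_VAR="${MAGENTO_VAR:-/var/www/magento/var}"
for d in "$MAGENTO_VAR/cache" "$MAGENTO_VAR/session"; do
  # delete entries untouched for more than 5 days
  [ -d "$d" ] && find "$d" -mindepth 1 -mtime +5 -delete || true
done
```

If var/cache sits on a tmpfs it is emptied on every reboot anyway, so the cron job mainly guards against growth between reboots.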

I think moving var/ to a tmpfs is probably not addressing your biggest bottleneck and would likely be more trouble than it's worth. Make sure Magento caching is enabled and you have APC enabled.
This post covers some general tips on increasing Magento performance:
Why is Magento so slow?

I would suggest looking into setting up a reverse proxy like Varnish.
Getting Varnish To Work on Magento
If you do plan on keeping the cache on a tmpfs in memory, I would suggest looking into Colin Mollenhour's Cm_Cache_Backend_File, an improved replacement for Zend_Cache_Backend_File:
https://github.com/colinmollenhour/Cm_Cache_Backend_File
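If you switch backends, Magento 1 picks the cache backend up from app/etc/local.xml. A minimal sketch, assuming the module's files are already copied into app/code and lib per its README:

```xml
<config>
    <global>
        <cache>
            <backend>Cm_Cache_Backend_File</backend>
        </cache>
    </global>
</config>
```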
Also, I would suggest looking into mytop to keep tabs on whether there are queries you can optimize in the application itself, or settings you can tune in my.cnf, to ease any DB bottlenecks.
http://jeremy.zawodny.com/mysql/mytop/
Session Digital has a good (although somewhat dated) white paper on optimizing Magento Enterprise, and the same techniques can be applied to Community. Out of everything I've tried, Varnish, as mentioned in the white paper, offered the most significant improvement in response time.
http://www.sessiondigital.com/resources/tech/060611/Mag-Perf-WP-final.pdf
Hope this helps!

Firstly, +1 to all of the answers here.
If you're thinking about running var/ out of tmpfs, it's probably because you've heard of the lousy file I/O on AWS or have experienced issues with it yourself. However, the var/ directory is the least of your concerns - Zend's and Magento's autoloaders are more taxing on I/O. To mitigate that, you want to run APC and enable Magento's compilation (assuming you're not using the persistent shopping cart).
As echoed by other commenters, anything that runs from cache or memory circumvents PHP, and thus the need to touch the disk and incur I/O penalties. Varnish is a bit of a brute-force approach and a wonderful tool on massive sites that scale to millions of views; but I believe Varnish's limitations with SSL, and the lack of real documentation and support from our Magento community, make it more of an intellectual choice than a practical alternative.
When running Magento Community I prefer to run Tinybrick's Lightspeed on AWS on a Medium instance - which gives me the most bang-for-buck and is itself a full-page-cache. I get 200+ concurrent pages/second in this setup and I'm not running memcached or using compilation.
http://www.tinybrick.com/improve-magentos-slow-performance.html/
Be careful with running memcached on your AWS instance as well - I find it can be starved by a power-hungry Apache gone wild in the rare case you haven't got a primed cache, which causes Apache MaxClients issues while it waits for a cache response. If you can afford it, I would rather run two micro Apache instances behind a load balancer, with a shared memcached session store - and give some horsepower to the database on a separate box for them to share. But every setup is unique, and your traffic/usage will dictate what you need.
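For the shared memcached session store mentioned above, Magento 1 reads the session handler from app/etc/local.xml. A sketch, where the host and port are placeholder assumptions:

```xml
<config>
    <global>
        <session_save><![CDATA[memcache]]></session_save>
        <!-- host/port are placeholders; point at your shared memcached box -->
        <session_save_path><![CDATA[tcp://10.0.0.5:11211?persistent=0&timeout=10&retry_interval=10]]></session_save_path>
    </global>
</config>
```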
I have run Magento in the AWS cloud for 3 years with great success - and I wish the same to you. Cheers.

Related

How can I prevent memory exhaustion using a CakePHP application?

I am on a VPS server (quad core with 768 MB of RAM; I am running Apache and using APC as my default caching engine). I get around 2000 visitors per day with a maximum of 100 concurrent visitors.
According to CakePHP DebugKit, my most memory intensive pages use around 16MB and the other pages average at around 10MB.
Is this memory usage normal?
Is it normal for my memory to bottleneck at 2000 visitors per day?
Should I consider upgrading my plan to 2GB RAM?
I also noticed that the View rendering is taking up most of the memory, around 70% on most pages.
I am monitoring my resource usage, and when I have around 50 or more concurrent users, I have 0 MB free.
Thank you
You should also check your other processes. From my experience, MySQL takes up more memory than anything else on any stack that I run. You should also implement better page caching so that PHP doesn't need to be touched when it isn't absolutely necessary. But Apache is also a memory hog that needs to be fine-tuned. If you want to stick with Apache, then run Varnish in front of it.
Also, keep APC, but add Memcached as well. It's much faster.
If your site has a spike-load that brings it to zero memory, then, if you can, consider launching extra instances of the server and doing a sort of round-robin (if the VPS is cloud-hosted and this is possible). If the load is constant, then definitely upgrade.
@burzum is completely right, however, that you should just switch to nginx. It's far, far better than Apache at this point. But, just to put you on the right track: quite a few people run nginx as a reverse proxy in front of Apache, and while that does speed up the server, it's entirely unnecessary, because nginx can do pretty much anything you need without Apache. Don't bother putting Varnish in front of nginx either, because nginx can act as its own reverse proxy.
Your best bet is to implement nginx with APCu (upgrade PHP to 5.5 if possible for better performance), use memcached, and enable nginx's native microcaching abilities. If your site is heavy on reads and light on writes, you may find that nearly everything is handled by nginx retrieving a cached copy from memcached. While this helps with memory, it also helps with processing: my servers' CPUs usually sit at 3-5% usage during peaks.
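A sketch of the microcaching idea, assuming PHP-FPM behind nginx; the paths, zone name, and the 1-second validity are assumptions to tune:

```nginx
# Microcache: serve identical requests from a tiny FastCGI cache for 1 second
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m
                   max_size=100m inactive=10s;

server {
    listen 80;
    root /var/www/app/webroot;   # placeholder docroot

    location ~ \.php$ {
        fastcgi_cache       microcache;
        fastcgi_cache_key   $scheme$request_method$host$request_uri;
        fastcgi_cache_valid 200 1s;          # even 1s absorbs traffic spikes
        fastcgi_cache_use_stale updating;    # serve stale while refreshing
        fastcgi_pass        unix:/run/php-fpm.sock;
        include             fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```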
Is this memory usage normal?
Yes, it doesn't seem to be high for a normal CakePHP application.
Is it normal for my memory to bottleneck at 2000 visitors per page?
I guess yes, I'm not a "server guy".
Should I consider upgrading my plan to 2GB RAM?
I would try switching to Nginx first. Apache is a memory-eating monster compared to Nginx; just search for a few benchmarks - a random one I picked up in a quick search is from this page.
Overall, Nginx should provide much better performance. At some stage I would consider upgrading the memory, but try Nginx first.

How to optimize Magento for increased user load

How can I configure Magento so that:
1) it can handle 10000 customers at a time
2) it can tolerate sudden increase in load
I searched Google, but most articles explain how to improve Magento's load time.
Where should I set the cookies and session expiration time?
I want to achieve this by modifying .htaccess, php.ini and the Magento admin panel settings.
Other ways are also welcome.
I have done extensive testing on this and have found that the following make the biggest difference to performance.
(all the following links can be found at http://www.magentocommerce.com/)
Make sure Magento caching is enabled (easy to overlook when you have it turned off while developing)
Use some sort of full page caching such as magento-connect/zoom-full-page-cache-1742.html
Use a CDN such as MaxCDN or AWS CloudFront (using magento-connect/6274.html makes this pretty easy)
The above make the big improvements. If you need more, installing Varnish gives the biggest gain, but it can be a pain to use since it is normally set up to take over port 80. This makes managing and developing your site later a bit of a pain, as you will generally need to disable Varnish, or bypass it, to do any major development work.
Install varnish - magento-connect/pagecache-powered-by-varnish.html
or magento-connect/2984.html
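If you do put Varnish in front, the usual pattern is Varnish on port 80 with Apache moved to another port, and session-dependent traffic passed through. A minimal VCL sketch (Varnish 3-era syntax; the paths are Magento defaults and the backend port is an assumption):

```vcl
backend default {
    .host = "127.0.0.1";
    .port = "8080";     # Apache moved off port 80 (assumed)
}

sub vcl_recv {
    # never cache session-dependent areas (default Magento front names)
    if (req.url ~ "^/(index\.php/)?(admin|checkout|customer|wishlist)") {
        return (pass);
    }
}
```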
Make sure you have APC and Memcached installed
Make sure you have gzip compression turned on.
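Since the question asks about .htaccess specifically, gzip can be switched on there via mod_deflate (assuming the module is loaded on your server):

```apacheconf
# .htaccess sketch: compress text assets with mod_deflate
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/css \
        application/javascript application/json text/xml
</IfModule>
```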
Advanced performance methods (these are helpful if you are using AWS and want multiple servers in different zones):
Install Varnish on its own server and direct the web port to your Magento server. This reduces the number of hits your web server sees.
Install your Magento database on its own server. Magento has funny resource requirements, memory- and CPU-wise; what is good for Magento might not be as good for the web server or the database. Splitting your database off should be fine if you are on the same local net (i.e. the same AWS region). This method allows you to use multiple web servers.
Use AWS with elastic IPs and place web servers in different zones: multiple web servers with a single database server. Use multiple Varnish servers in front of the web servers.
Some additional notes:
APC, Memcached, PHP tuning and using nginx will only provide about a 10% improvement versus just using the Magento cache, full page cache and Varnish. Also, before testing, make sure you benchmark your server setup using a stock Magento install with dummy data. This gives you a baseline, so you can see whether hardware changes are needed, or identify whether a module or other plugin is causing big performance hits. Sites like loadimpact.com can be helpful.
If you have access to php.ini, then I am going to take a wild guess and assume you also have access to your database configuration file (my.cnf).
In case you are using MySQL, adjusting the query_cache_size parameter can have a tremendous positive effect on Magento performance, because Magento constantly polls a large amount of the same data that gets reused. The exact amount of memory you'll need depends entirely on your workload, so you will have to adjust it accordingly.
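A my.cnf sketch of the idea; the sizes below are starting-point assumptions to adjust against your hit rate (check SHOW STATUS LIKE 'Qcache%'):

```ini
# my.cnf fragment - query cache sizing (values are assumptions, not prescriptions)
[mysqld]
query_cache_type  = 1     # cache eligible SELECTs
query_cache_size  = 64M   # total memory reserved for the query cache
query_cache_limit = 2M    # don't cache single results bigger than this
```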
FPC (full page cache) is the most important point to apply; it will reduce your MySQL load considerably.
Solr for search is the second thing that should be done.
Make sure your code meets Magento standards so that it effectively uses the Magento cache and FPC.
The above three will give you most of the optimization.
Look here: Tweaking magento for performance for lots of additional points.
There are many ways to optimize Magento. Some of the configuration can be done from the Magento admin panel itself.
Go to: System->Configuration->CATALOG/Catalog->Frontend
Use Flat Catalog Category: Yes
Use Flat Catalog Product : Yes
Go to: System->Configuration->ADVANCED/Developer: Merge JavaScript and CSS
Install memcached on your server
Some of the Helping links:
http://magento2x.com/speed-up-magento/
http://support.metacdn.com/entries/25027521-Slow-Magento-Speed-up-Magento-with-our-Magento-Optimization-guide
You can also search Google for more good server configuration tips.
One simple way is to install and configure APC:
APC
# in php.ini:
extension=apc.so
[apc]
apc.enabled = 1
apc.cache_by_default = On
apc.shm_segments = 1
apc.shm_size = 128M ; shared memory size; raise this if you also use the user cache heavily
apc.ttl = 60
apc.user_ttl = 7200
apc.gc_ttl = 600
apc.num_files_hint = 0
apc.write_lock = On
;apc.stat = 0 ; uncomment in production to skip file stat checks (faster, but changed files are not reloaded until the cache is cleared)
I recommend reading this white paper: "Optimizing Magento for Peak Performance".
This white paper documents the exceptional performance that can be achieved by properly optimizing and configuring Magento Enterprise Edition. The resulting optimization not only can contribute to higher conversion rates and support for greater numbers of customers and orders per day, it can also lead to improved hardware efficiency and overall cost savings.
https://info.magento.com/Optimizing_Magento_for_Peak_Performance.html

APC User-Cache suitable for high load environments?

We try to deploy APC user-cache in a high load environment as local 2nd-tier cache on each server for our central caching service (redis), for caching database queries with rarely changing results, and configuration. We basically looked at what Facebook did (years ago):
http://www.slideshare.net/guoqing75/4069180-caching-performance-lessons-from-facebook
http://www.slideshare.net/shire/php-tek-2007-apc-facebook
It works pretty well for some time, but after some hours under high load APC runs into problems, and then the whole mod_php does not execute any PHP anymore.
Even a trivial PHP script does not answer anymore, while static resources are still delivered by Apache. It does not really crash; there is no segfault. We tried the latest stable and the latest beta of APC, we tried pthreads, spin locks - every time the same problem. We gave APC far more memory than it can ever consume; 1 minute before a crash we have 2% fragmentation and about 90% of the memory is free. When it "crashes" we find nothing in the error logs, and only restarting Apache helps. Only with spin locks do we get a PHP error, which is:
PHP Fatal error: Unknown: Stuck spinlock (0x7fcbae9fe068) detected in
Unknown on line 0
This seems to be a kind of timeout, which does not occur with pthreads, because those don’t use timeouts.
What's happening is probably something like this:
http://notmysock.org/blog/php/user-cache-timebomb.html
Some numbers: a server has about 400 APC user-cache hits per second and about 30 inserts per second (which is a lot, I think); one request makes about 20-100 user-cache requests. There are about 300,000 variables in the user cache, all with a TTL (we store without a TTL only in our central redis).
Our APC-settings are:
apc.shm_segments=1
apc.shm_size=4096M
apc.num_files_hint=1000
apc.user_entries_hint=500000
apc.max_file_size=2M
apc.stat=0
Currently we are using version 3.1.13-beta compiled with spin locks, together with an old PHP 5.2.6 (it's a legacy app; I've heard this PHP version could be a problem too?), on 64-bit Linux.
It's really hard to debug. We have written monitoring scripts that collect as much data as we can get, every minute, from APC, the system, etc., but we cannot see anything uncommon - even 1 minute before a crash.
I've seen a lot of similar problems here, but so far we couldn't find a solution that solves our problem. And when I read something like this:
http://webadvent.org/2010/share-and-enjoy-by-gopal-vijayaraghavan
I’m not sure if going with APC for a local user-cache is the best idea in high load environments. We already worked with memcached here, but APC is a lot faster. But how to get it stable?
best regards,
Andreas
Lesson 1: https://www.kernel.org/doc/Documentation/spinlocks.txt
The single spin-lock primitives above are by no means the only ones. They
are the most safe ones, and the ones that work under all circumstances,
but partly because they are safe they are also fairly slow. They are slower
than they'd need to be, because they do have to disable interrupts
(which is just a single instruction on a x86, but it's an expensive one -
and on other architectures it can be worse).
That's written by Linus ...
Spin locks are slow; that assertion is not based on some article I read online by Facebook, but upon the actual facts of the matter.
It's also an incidental fact that spinlocks are deployed at levels higher than the kernel because of the very problems you speak of: untraceable deadlocks caused by a bad implementation.
They are used efficiently by the kernel, because that's where they were designed to be used: locking tiny, tiny sections, not sitting around waiting for you to copy your Amazon SOAP responses into APC and back out a billion times a second.
The most suitable kind of locking (for the web, not the kernel) available in APC is definitely rwlocks. You have to enable rwlocks with a configure option in legacy APC; they are the default in APCu.
The best advice that can be given, and I already gave it, is: don't use spinlocks; if mutexes are causing your stack to deadlock, then try rwlocks.
Before I continue: your main problem is that you are using a version of PHP from antiquity, which nobody even remembers how to support. In general you should look to upgrade; I'm aware of the constraints on the OP, but it would be irresponsible to neglect to mention that this is a real problem - you do not want to deploy on unsupported software. Additionally, APC is all but unmaintained; it is destined to die. OPcache (O+) and APCu are its replacements in modern versions of PHP.
Anyway, I digress ...
Synchronization is a headache when you are programming at the level of the kernel, with spinlocks or whatever. When you are several layers removed from the kernel - when you rely on 6 or 7 bits of complicated software underneath you synchronizing properly in order that your code can synchronize properly - synchronization becomes a headache not only for the programmer, but for the executor too; it can easily become the bottleneck of your shiny web application, even if there are no bugs in your implementation.
Happily, this is the year 2013, and Yahoo aren't the only people able to implement user caches in PHP :)
http://pecl.php.net/package/yac
This is an extremely clever, lockless cache for userland PHP. It's marked as experimental, but once you are finished, have a play with it; maybe in another 7 years we won't be thinking about synchronization issues :)
I hope you get to the bottom of it :)
Unless you are on a FreeBSD-derived operating system, it is not a good idea to use spinlocks; they are the worst kind of synchronization on the face of the earth. The only reason you must use them on FreeBSD is that the implementer neglected to include PTHREAD_PROCESS_SHARED support for mutexes and rwlocks, so you have little choice but to use the pgsql-inspired spin lock in that case.

Magento performance question

I have an installation of 1.3.2.4 running two Store Views and 2,734 products. The site sees around 15,000 visits a month.
Apache and MySQL (mostly Apache) hover at around 1.5 GB RAM usage most of the time and peak over 3 GB. My question is: considering the stats, is this normal? Seems like a lot.
If that memory usage is in fact abnormal, would an upgrade to 1.4.1.1 help?
Considering your stores, you are doing just fine. But regarding the traffic you're getting, it seems you need to give Magento some extra help to let it fire up. For this, you can do some of the following:
Install APC (Alternative PHP Cache) or XCache (or any other opcode cache) and configure Magento's back-end to use it. It dramatically increases the speed of Magento.
You can have Magento's cache stored in memory (tmpfs in Linux).
You can also tell Magento to save sessions into Memcache so that your sessions are in memory & distributed.
Check Magento's Index Management section every month or two for indexes that need rebuilding. If you find any, reindex immediately and clear the cache from Cache Management.
Check your database every week or two for overhead in any of its tables. If you find any, "optimize" those tables immediately.
Try reading some of these articles, to know more about these.
Also, upgrading to 1.4.1.1 will help you out in terms of features provided by Magento. But for performance, I think it is best to wait a while longer, until Magento releases version 2, in which some performance issues may be taken care of.
Hope it helps.
1.3.2.4 is a good stable release; upgrading to 1.4.0.1 is very painless and will give you the added benefit of split index management and a much faster administration area (mass attribute update is fixed).
Don't be overly concerned about memory usage. Depending on the number of Apache modules you have loaded, you should expect to see about 30 MB per child. As long as you're not swapping or encroaching on your limits, you shouldn't have any real concerns about how much is being consumed. Disabling unused modules will help cut down memory - but to be honest, not by any noticeable margin.
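A quick way to sanity-check the per-child figure (the process name is an assumption: apache2 on Debian/Ubuntu, httpd on RHEL-style systems):

```shell
# Average resident memory per Apache child, in MB
ps -o rss= -C apache2 2>/dev/null |
  awk '{sum+=$1; n++}
       END {if (n) printf "%d children, avg %.1f MB each\n", n, sum/n/1024;
            else print "no apache2 processes found"}'
```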
You could always put Nginx in front as a reverse proxy to serve static content requests and pipe PHP/dynamic requests back to Apache. That way you keep the modular Apache build with .htaccess support and cut down your memory overheads significantly.
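A sketch of that split, assuming Apache is moved to port 8080 and with a placeholder docroot:

```nginx
server {
    listen 80;
    root /var/www/magento;   # placeholder docroot

    # serve static assets directly from nginx
    location ~* \.(css|js|png|jpe?g|gif|ico)$ {
        expires 30d;
    }

    # everything else goes back to Apache (keeps .htaccess behaviour)
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```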
However, this could do with more information, such as the output from
free -m
To see how some of the memory is being allocated.
I'd probably suggest downloading tuning-primer.sh to run on your MySQL config. It will give a good (entry-level) indication of how efficient your memory allocation is.
Those stats look quite typical for Magento, if you consider that a single hit/page load can use upwards of 64 MB of RAM.
Your Apache settings can also drastically affect the amount of RAM your system uses. Upgrading your Magento installation may give some small performance boost, but don't expect it to do much for memory consumption etc.
If memory consumption is a real issue for you, then you have several possible routes to reduce resource usage, such as:
1. Install Nginx as a reverse caching proxy to Apache (Apache is a hog and is poor at serving static content).
2. Use Nginx + PHP FastCGI and remove Apache.
3. Try using the worker MPM for Apache, or FastCGI.
4. Install a caching proxy such as Varnish/Squid.
5. If you are stuck with Apache, tweak KeepAlive and other settings to reduce memory usage.
6. Tweak MySQL settings, such as query caching, to improve resource usage/performance.
I have found option 1 to work very well in reducing CPU/memory usage, as it allows Nginx to serve static images etc. without requiring Apache to hog RAM trying to serve them.

Why is this Zend framework killing my CPU usage and loading pages so slow

The framework I am using is called SocialEngine.net v4, and it's built entirely on Zend Framework, so it's insanely CPU intensive. SocialEngine is in PHP and uses MySQL.
I need to know what OS, what hardware you suggest (dual xeons, amd, how much ram, etc...) and how to optimize it properly to handle large amounts of traffic.
I only have 11k users right now, and it's running incredibly slow - I'm talking 7-second page load times.
The framework does have memcached and APC options for caching, but even with APC or memcached on, it doesn't make a big enough difference...
I need to know the best way to attack this in terms of optimizing MySQL, InnoDB tweaks, Apache tweaks, any performance tweaks, what type of hardware, and how much RAM.
I have a very big marketing plan in place, and will probably start increasing traffic by 1,000+ signups per day... so traffic will rise very progressively. When I initially marketed, I did 50k uniques in 6 hours, 20k signups, and 500k pageviews... (the server crashed, I lost half my users... and I haven't marketed since, because I've been trying to rebuild.)
You could start with Xdebug to profile your application and find the bottleneck.
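To act on that, Xdebug's profiler can be switched on for a test box; the directive names below are from Xdebug 2.x (they changed in Xdebug 3):

```ini
; php.ini fragment - Xdebug 2.x profiler (do not leave enabled in production)
zend_extension=xdebug.so
xdebug.profiler_enable=1
xdebug.profiler_output_dir=/tmp   ; inspect the cachegrind.out.* files with KCachegrind/Webgrind
```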
Honestly? And this is just my opinion: instead of spending a small fortune on a single server, buy many small servers and load balance them. Mac Minis are wonderful for this and are capable of running their standard OS X, or Linux if you choose. You will get way more performance out of ten small $500 machines than you will out of one $5000 machine.
You don't provide us with any information about your set up.
How many servers do you have? What services are they running?
When you say APC and Memcached are on, have you actually configured them so they are really being used?
How many connections does your Apache allow for?
What is your MySQL configuration like? Are the memory settings optimised? Most importantly, are all your tables indexed properly? Have you checked your slow query log? Did you run EXPLAIN on your queries?
ZF wise, are you caching your table metadata? Are you caching tables that don't change so that you save network traffic? Have you checked the official ZF optimisation guide?
Also... Why did you assume that ZF is killing your CPU usage?
