Yii App Reaches Memory Limit On Linux Server - php

I have a really strange problem and I don't know how to solve it. My application reaches the memory limit from time to time and an internal server error 500 happens. I have a limit of 570 MB on shared hosting. I tried to debug my application, and the Yii Debug Toolbar shows that every page consumes about 10-12 MB of memory. I don't see where the problem is. On my local WAMP server, there are no problems.
Can anyone help me? At least tell me where to start looking for memory leaks, because I don't see any obvious ones.
The problem is unpredictable; it does not happen on any particular request.
I have already commented out the 'YII_DEBUG' line in index.php.
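For reference, the relevant line in a stock Yii 1.x index.php looks something like this once commented out (the exact contents may differ by version):

// defined('YII_DEBUG') or define('YII_DEBUG', true);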

If you've got a lot of AR records, you might also look into the brand-new CActiveDataProviderIterator that just landed on master. It's not part of a stable Yii release yet, and there is little documentation (I'm helping to work on that now, actually), but it could be one place where you're hitting memory limits.
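A rough sketch of how it is used (in stable Yii 1.1 releases the class is named CDataProviderIterator; the Post model and page size here are made up for the example):

// minimal sketch: iterate large result sets page by page
$dataProvider = new CActiveDataProvider('Post'); // hypothetical model
$iterator = new CDataProviderIterator($dataProvider, 100); // fetch 100 rows per page
foreach ($iterator as $post) {
    // only the current page of models is held in memory at a time
}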
And are you using GiiX by any chance? I've found it's fairly inefficient in some places, leading to the need to query more leanly ...

See the posting at http://www.yiiframework.com/forum/index.php/topic/15647-memory-usage/
It is a bit outdated, but the points are still relevant.
If you can, use some kind of caching software to complement the Active Record system (see the sketch after this list).
If you are using Active Record, ensure that the number of models being loaded is not too large.
Debugging takes additional memory - if you do not need it, disable it.
If the problem still persists, consider moving from Active Record to DAO, but this can be messy.
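A minimal sketch of the caching idea in Yii 1.1, assuming a configured cache component; the Post model, cache key, duration, and computeExpensiveStats() helper are all hypothetical:

// cache the result of an expensive AR query for 10 minutes (Yii 1.1.7+)
$posts = Post::model()->cache(600)->findAll();

// or cache arbitrary data through the application cache component
$stats = Yii::app()->cache->get('expensive-stats');
if ($stats === false) {
    $stats = computeExpensiveStats(); // hypothetical helper
    Yii::app()->cache->set('expensive-stats', $stats, 600);
}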
What version of Yii are you using? And what is the typical number of visitors on your site?

Related

Static caching for PHP (Apache)?

I am installing a pre-built PHP-based web application for a client. Unfortunately, the application performs very slowly because it compiles lots of data.
Page load times go up to 40 seconds.
I know about ob_caching, but I don't want to mess with the application unless it is absolutely necessary.
Are there any tools/scripts/Apache modules to cache the entire output of the application statically on the server and update it on a regular basis?
I am just looking for a middleware or something that builds static HTML pages from the PHP application. (BTW: I tried eAccelerator, but it didn't improve the situation.)
I would appreciate any tips.
Thanks in advance.
eAccelerator should have made a measurable difference, so are you sure it was installed correctly? You should see an eAccelerator section in phpinfo() showing that the cache is in use. You may also have had the cache set too small, etc. Alternatively, try APC instead. If neither shows any performance improvement, you may have a server issue.
In any case, 40 seconds is crazy slow for anything. Are you sure this is PHP and not poorly optimised SQL queries?
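A quick way to verify: drop a one-line script into the web root, load it in a browser, and look for an eAccelerator (or APC) section in the output (the filename is arbitrary; delete the file afterwards):

<?php
// info.php - check that the opcode cache extension is actually loaded
phpinfo();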
Looks like this should do the trick (but YMMV, depending on your application):
http://httpd.apache.org/docs/2.2/mod/mod_cache.html
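A minimal sketch of a disk-backed cache under Apache 2.2, assuming the modules are available and the cache directory exists (paths and expiry times are placeholders to adjust):

LoadModule cache_module modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so

<IfModule mod_disk_cache.c>
    # cache all URLs on disk, serving hits without invoking PHP
    CacheEnable disk /
    CacheRoot "/var/cache/apache2"
    # pages with no explicit expiry information are cached for one hour
    CacheDefaultExpire 3600
</IfModule>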

WordPress on IIS 7 php-cgi hogging CPU

Running WordPress on IIS 7 (Windows Server 2008) with WP-SuperCache as per IIS.net's guide.
Was running great, but recently we changed the permissions on some folders and the administrator password, and now we're getting huge spikes in our CPU usage from the php-cgi.exe processes.
This led me to believe it's not caching; however, the pages themselves have the "Cached with WP-SuperCache" comments at the bottom, so the caching seems to be working correctly.
What else could be the issue here?
I think I may have found a solution, or at least a workaround, to this problem; at least it seems to be working reliably for me.
Try setting the Max Instances setting, under IIS Server --> FastCGI Settings, to 1.
It seemed to me that only certain requests were causing a php-cgi.exe process to go rogue and hog the CPU, usually when updating a post. While reading other posts on this issue, one of them mentioned the Max Instances setting and that it defaults to 0, i.e. automatic. I wondered whether that default behaves badly when things aren't as they should be. I'm guessing (though this isn't quite my field of expertise) that if a certain request causes a process to lock up, FastCGI just creates another while leaving the first in place. Somehow, having only a single instance seems to let PHP move on from the lock-up, and the CPU stays under control.
For servers with high levels of requests, setting FastCGI to only a single instance may not be ideal, but it certainly beats the delays I was getting before. Used in combination with WP-SuperCache and WinCache, things seem to be nipping along nicely now.
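The same setting can be scripted with appcmd; a sketch, assuming your FastCGI application entry points at C:\php\php-cgi.exe (adjust the path to match yours):

%windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi /[fullPath='C:\php\php-cgi.exe'].maxInstances:1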
Looking at that Task Manager output, it looks like it's missing the cache on every request. Plus, that article dates to 2008, so it's difficult to say whether the directions as written would still work; something in WP-SuperCache could have changed.
I would recommend using W3 Total Cache. I've done extensive testing with it on Windows Server 2008 and IIS 7, and it works great. It is also compatible with and leverages the WinCache extension for PHP. It has some other great features too, if you're interested: minification, CDN support, etc. It's a really great performance plugin for WordPress. You can get the plugin here: http://wordpress.org/extend/plugins/w3-total-cache/
Some other things to check...
What size is the app pool? (# of processes?)
Make sure you are using PHP 5.3.
Make sure you are using WinCache.
Make sure to set MaxInstanceRequests to something less than PHP_FCGI_MAX_REQUESTS. Definitely do not allow PHP to handle recycling the app pool; the default is 10K requests. If you are seeing these results during a load test, then this might be the cause. Increase MaxInstanceRequests but keep it one less than PHP_FCGI_MAX_REQUESTS (see the sketch below).
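A sketch of that pairing via appcmd, again assuming C:\php\php-cgi.exe as the FastCGI application path and 10,000 requests as an arbitrary ceiling (note the IIS attribute is spelled instanceMaxRequests):

appcmd set config -section:system.webServer/fastCgi /[fullPath='C:\php\php-cgi.exe'].instanceMaxRequests:10000
appcmd set config -section:system.webServer/fastCgi /+"[fullPath='C:\php\php-cgi.exe'].environmentVariables.[name='PHP_FCGI_MAX_REQUESTS',value='10001']"

PHP_FCGI_MAX_REQUESTS is set one higher so that IIS recycles the process before PHP exits on its own.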
Hope that helps.

How to make Symfony faster

My Symfony projects work veeery slowly - between 4 and 8 seconds per page
(I'm working in the dev environment). I've tried to use PHP APC (with it, pages load even slower), I've tried to optimize my code, I've tried to explore the standard Symfony library, etc. But nothing has helped.
P.S. I have good hardware; I'm sure the problem isn't there.
Find where the bottleneck is in your application.
Most likely, this won't be a Symfony issue, but something you have done, or the way you have done it.
Install XDebug, profile your application, then analyze the results to figure out what is taking all the relative time to compute.
You will probably see something taking like 98% of the relative time, and it will probably be something obscure, like a request timing out.
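A minimal php.ini sketch for switching the profiler on, assuming Xdebug 2.x (the output directory is a placeholder; the cachegrind files it produces open in KCachegrind, WebGrind, or similar):

; profile every request and write cachegrind files
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = "/tmp/xdebug"
xdebug.profiler_output_name = "cachegrind.out.%p"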
Use the profiler to find the bottleneck. There is no way we can determine which part of your application is slow.
You will have to run some benchmarks yourself, logging the time at key execution points, to drill down into the code until you find the source of the slowness.
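A rough sketch of that kind of ad-hoc timing (the message and placement are illustrative):

$start = microtime(true);
// ... the block of code you suspect ...
error_log(sprintf('block took %.3f s', microtime(true) - $start));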

How to optimize this website. Need general advice

This is my first question here, and it is about optimizing a specific website.
A few months ago, we launched [site] for one of our clients; it is a kind of community website.
Everything works great, but the website is getting bigger and is showing some slowness when pages load.
The server specs:
PHP 5.2.1 (I think we need to upgrade to 5.3 to make use of the new garbage collector)
Apache 2.2
Quad-core Xeon processor @ 2.8 GHz and 4 GB DDR3 RAM.
XCache 1.3 (we added this a few months ago)
MySQL 5.1 (we are using InnoDB as the engine)
CodeIgniter framework
Here is what we have done so far and what we intend to do next:
Besides XCache, we don't really use a caching mechanism, because most of the content is served live; besides that, we didn't want to optimize prematurely, since we didn't know what to expect in terms of traffic.
On the other hand, we have installed memcached and we want to implement a cache system based on memcached.
Regarding the database structure, we have reached 3NF with most of our tables, and yes, we have some slow queries (which we plan to optimize). I think that is because the tables that produce slow queries are the ones for blog comments (~44,408 rows), user log tracking (~725,837 rows), user comments (~698,964 rows), etc., which are quite big. The entire database is 697.4 MB in size for now.
Also, here are some stats for January 2011:
Monthly unique visitors: 127,124
Monthly unique views: 4,829,252
Monthly unique visits: 242,708
Daily average:
Unique new visitors: 7,533
Unique new views: 179,680
Just let me know if you need more details.
Any advice is highly appreciated.
Thank you.
When it comes to performance issues, there is no golden rule or labelled sticky note saying the problem is in the database. What I can suggest is performance profiling; there are many free and paid tools on the Internet that let you do so.
Start with the web server layer: make sure everything there is configured correctly and optimized as far as possible.
Then move on to the next layer (which I assume is your database). Whenever someone mentions InnoDB/MySQL, we tend to assume that indexes have been created to speed up search operations. How the indexes are defined also matters, because you don't want to index the wrong thing and make things worse. My advice is to get a DBA-equivalent person to troubleshoot this in a staging environment, starting from the slow queries (see the sketch below).
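As a purely hypothetical illustration of the kind of index a slow comment query might need (table and column names are made up):

-- if comments are filtered by post, an index on the foreign key helps
CREATE INDEX idx_user_comments_post_id ON user_comments (post_id);

-- EXPLAIN shows whether a query actually uses the index
EXPLAIN SELECT * FROM user_comments WHERE post_id = 123;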
Another thing you could look at is the content, from web page output to database data: show and keep data only where it is needed, do not store unnecessary information in the database, and use a lean layout on the page. Shaving off a second or two can make a big difference in usability and response time.
It is very hard to go into detail here without in-depth information about your application, its architecture, and your environment, but the above are some common directions people take when troubleshooting such incidents.
Good luck!
This site has excellent resources http://www.websiteoptimization.com/
The books that are mentioned are excellent. There are just too many techniques to list here and we do not know what you have tried so far.
Sorry for the delay, guys; I have been very busy tracking down the issue, and I found it.
Well, the problem was mostly Apache: I had an access log of almost 300 GB which was parsed at midnight to generate Webalizer stats. While that was happening, the website was very, very slow. I disabled Webalizer for the domain, cleared the logs, and what do you know, it is very fast again, no matter what hour you access it.
I now have just a few slow queries left, which I intend to fix today.
I also updated to CI 2.0 Reactor as suggested and started to use the memcached driver.
Who would have known that Apache logs could be so problematic...
Based on the stats, I don't think you are hitting load problems... on a hunch, I would look to the database first. Database partitioning might be a good place to start.
But you should really do some profiling of your application first. How much time is spent in the application versus the database? Are there application methods that are using lots of time and just need some tweaking? Are database queries written inefficiently? Do you need more or better database indexes?
Everything looks pretty good - if upgrading CodeIgniter is an option, the new CodeIgniter 2.0 (Reactor) adds caching support (a new Cache driver with file system, APC and memcache support). Granted, you're already using XCache, but these new additions may be worth looking at.
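A minimal sketch of that Cache driver with the memcached adapter (the 'site_settings' key and settings table are hypothetical):

// inside a controller or model
$this->load->driver('cache', array('adapter' => 'memcached', 'backup' => 'file'));

$settings = $this->cache->get('site_settings');
if ($settings === FALSE) {
    // cache miss: query the database, then cache the result for an hour
    $settings = $this->db->get('settings')->result(); // hypothetical table
    $this->cache->save('site_settings', $settings, 3600);
}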
When cache objects weren't enough for our multi-domain platform that saw huge traffic, we went the route of throwing more hardware at it - RAM, servers, databases. Then we moved to database clustering to handle the forecasted heavy load of single accounts. And now we're switching from Apache to nginx... It's a never-ending battle, but what worked for us was being smart about what we cached, increasing server memory, and then distributing the load across servers...
1. Cache as many database calls as you can. In my CI application I have a settings table that rarely changes, so I cache all calls made to it, since I am constantly querying it.
2. Cache your views and even your controllers as well. I tend to cache as much as I can in my CI applications and then refresh the cache when a file changes.
3. Only autoload important libraries, models and helpers. I've seen people autoload up to 10 libraries, plus a few helpers and a model on top of that. You really only need to autoload the database and session libraries, if you are using them.
Regarding point number 3, are you autoloading many things in your config/autoload.php file, by any chance? It might help speed things up to load things only as you need them in your controllers, with the exception of course of the session and database libraries (a minimal autoload.php is sketched below).
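A sketch of a lean config/autoload.php, keeping just the essentials (extend the arrays only as a real need appears):

$autoload['libraries'] = array('database', 'session');
$autoload['helper'] = array('url');
$autoload['model'] = array();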

Httpd Process High memory usage and slow page loads

I am running wampserver on my windows vista machine. I have been doing this for a long time and it has been working great. I have completed loads of projects with this setup.
However, today, without my changing anything (no configuration changes, only PHP code changes), every page of my site that uses sessions or accesses the database is really slow to load - over 30 seconds, where they used to take 1 or 2 seconds.
When I look at the Task Manager, I can see that on page loads the httpd process jumps from 10 MB to 30 MB, 90 MB, 120 MB, 250 MB, and then back down again.
I have tested previous PHP projects and they all seem to be slow as well!
What is going on?
Thanks, all, for any help on this confusing issue!
Check the following:
Check whether the data-access library you use to talk to your database has been changed or updated lately (if you use one).
Just a guess, but did you change your antivirus/firewall (or its settings) since the last time you ran those previous projects? More aggressive security can slow things down a lot.
Did you change the Apache/PHP/MySQL version in the WampServer menu?
Maybe you can try to reinstall WampServer (do this last, and only if it's not a hassle for you, because I really doubt it will help; but it can in some really, really weird cases).
But from experience, and given the memory usage you describe in your question, it seems that your SQL queries take a long time to execute and/or return a really large data set.
Try to optimize your queries; that helps with speed but not really with memory usage (at least if the result set stays the same). For memory, maybe you can use LIMIT to reduce the returned data set (if your design allows it - and it should; see the sketch below).
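A generic sketch of the idea with a hypothetical posts table (page through rows instead of selecting everything):

SELECT id, title, created_at
FROM posts
ORDER BY created_at DESC
LIMIT 50 OFFSET 0;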
Since we don't really know what you do with your data, note that working with large data sets (for example, parsing large XML documents) can take a lot of time and memory (again, it depends a great deal on what you do with all that data).
Bottom line: if nothing in this post helps, post more information about your setup and what exactly you are doing (ideally with code samples) when it's slow.
Try checking the size of your wamp log files.
i.e.
C:\wamp\logs
Sometimes, when they get really big, they can cause Apache to slow down.
Have you recently changed your network configuration or upgraded your system? That may be causing this issue through your network config or anti-virus/security software. People have had issues with ZoneAlarm causing this in the past, for example.
Also, if you've recently switched from typing "127.0.0.1" to "localhost" or moved around networks, you may benefit from removing the IPV6 localhost setting from C:\Windows\System32\drivers\etc\hosts if you have one:
change the line with
::1 localhost
to
# ::1 localhost
I am surprised that no one has suggested this. You should always try to see why things are really slowing down:
Use Performance monitor to see where the bottleneck is, first. (perfmon.exe)
Are hard page faults actually the bottleneck? Is your hard drive busy reading from/writing to the pagefile? Check the length of the IO queue for the hard disk.
Are the CPUs busy?
If nothing looks busy, use procmon to see if your php process is blocked on some system calls.
Hope this helps.
Though not related to finding database bottlenecks, XDebug in combination with a cachegrind viewer (e.g. WebGrind, WinCacheGrind) can help you find the parts of your PHP code that take the longest to execute.
