PHP high memory usage

We have an old Facebook app, written in plain PHP, that has been running smoothly.
This month we decided to rewrite it in Zend Framework 2. Yesterday, after switching to the new app, it crashed our server with lots of out-of-memory errors, so we switched back to the old app.
I installed Xdebug to profile the app. Using the memory_get_peak_usage() function, I noticed high memory usage.
In the old app, a static page uses only 1 MB of memory, but the new one uses roughly 7-8 MB on the same page.
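For reference, the per-page figure can be captured with a minimal shutdown hook like this (a sketch; the log destination and placement are just examples):

<?php
// Registered at the very top of the front controller (e.g. public/index.php).
// The shutdown callback runs after the whole request has finished, so the
// peak figure covers bootstrapping, routing and rendering.
register_shutdown_function(function () {
    $peakMb = memory_get_peak_usage(true) / 1048576; // real allocated memory, in MB
    $uri    = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
    error_log(sprintf('%s peak memory: %.2f MB', $uri, $peakMb));
});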
Here are the top two rows from webgrind:
Function | Invocation Count | Total Self Cost | Total Inclusive Cost
Composer\Autoload\ClassLoader->loadClass | 224 | 23.31 | 47.20
Composer\Autoload\ClassLoader->findFile | 224 | 9.57 | 10.23
I also tried Apache's ab tool:
ab -n 50 -c 5 -C PHPSESSID=SESSIONID http://myhost.com
Result is:
Percentage of the requests served within a certain time (ms)
50% 368
66% 506
75% 601
80% 666
90% 1073
95% 1812
98% 2278
99% 2278
100% 2278 (longest request)
All these results are from the production server, not localhost.
Is 7-8 MB for a single page normal? If not, how can I reduce it? Should I look at ZF2 or at Composer?
I can give code samples if you need them. Thank you.

When you migrate a solution from plain PHP to Zend, you have to be aware of the way Zend works.
Zend is composed of a lot of classes, and memory usage increases when you use objects instead of native/lightweight structures.
To improve memory use, review your code and do the following:
Wrap code in functions; it helps the garbage collector remove unused objects from memory.
Don't store large lists of objects in arrays before printing them; just print them on the fly (see the sketch below).
Limit the creation of objects (calls to 'new') inside loops.
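As a rough illustration of the second point (a sketch; the class and variable names are made up):

<?php
// Memory-hungry: every row is hydrated into an object and kept in an array
// until the loop finishes, so peak usage grows with the size of the result.
$rows = array();
foreach ($resultSet as $row) {
    $rows[] = new UserViewModel($row); // hypothetical view-model class
}
foreach ($rows as $viewModel) {
    echo $viewModel->render();
}

// Lighter: render and discard each row immediately; only one object is alive
// at a time, so the garbage collector can reclaim it right away.
foreach ($resultSet as $row) {
    $viewModel = new UserViewModel($row);
    echo $viewModel->render();
    unset($viewModel);
}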
Hope this helps.

I spent a day figuring out the problem. I tried Xdebug and XHProf; there was no problem in the code.
We switched back to 2.0.0 and the problem was solved. I don't know what is wrong with the newer versions; for now we're sticking with 2.0.0.
Overall memory usage is around 4 MB, with no crashes.
composer.json:
"zendframework/zendframework": "2.0.0",

Related

Profiling a PHP 7 application with blackfire.io: why do I see different CPU times for the same code?

I support and rework a legacy PHP 7 application. This application doesn't use autoloading; instead it requires all classes in a single file, simply walking through every class directory. The old team thought this was a great optimization.
So I did some tests with Blackfire and found that this including can consume half of the CPU time. I experimented with autoloading and cut CPU time in half and memory consumption to a third. A great result.
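The autoloading experiment boiled down to something along these lines (a simplified sketch; the directory layout and naming convention are assumptions, not the app's real structure):

<?php
// Before: one bootstrap file that walks every class directory and
// require_once's everything, whether the request needs it or not.

// After: a class is loaded only when it is first referenced.
// Assumes App\Foo\Bar lives in lib/App/Foo/Bar.php; adjust to the real layout.
spl_autoload_register(function ($class) {
    $path = __DIR__ . '/lib/' . str_replace('\\', '/', $class) . '.php';
    if (is_file($path)) {
        require $path;
    }
});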
Then I continued experimenting with the old code without autoloading and found that in some cases the mega-include does not consume a lot of time (and it's a flat file with a lot of loops, no if-else statements).
In the Blackfire panel this shows up as the include file having very different numbers of callees. In one case the file has 21 callees and takes 132 ms; in another case it has 6 callees and takes 2.63 ms.
I don't understand the reason for such a difference. My assumption is that PHP 7 is very smart and can analyze which classes really need to be compiled to bytecode and which don't.
Does somebody know why this difference exists?
PS: I can't show the Blackfire reports; my PM doesn't allow it.
Without the code or the reports it is hard to guess, but I would take a look at this page. Maybe the things explained there are present in the code you are reviewing.
https://blackfire.io/docs/24-days/22-php-internals

PHP 5.5, under what circumstances will PHP cause very high committed memory

I am trying to figure out a situation where PHP is not consuming a lot of memory but instead causes a very high Committed_AS result.
Take this munin memory report for example:
As soon as I kick off our Laravel queue (10-30 workers), committed memory goes through the roof. We have 2 GB RAM + 2 GB swap on this VPS instance, and so far there is about 600 MB of unused memory (that's about 30% free).
If I understand Committed_AS correctly, it is meant to be a 99.9% guarantee of no out-of-memory issues given the current workload, and it seems to suggest we need to triple our VPS memory just to be safe.
I tried to reduce the number of queues from 30 to around 10, but as you can see the green line is still quite high.
As for the setup: Laravel 4.1 with PHP 5.5 and OPcache enabled. The upstart script we use spawns instances like the following:
instance $N
exec start-stop-daemon --start --make-pidfile --pidfile /var/run/laravel_queue.$N.pid --chuid $USER --chdir $HOME --exec /usr/bin/php artisan queue:listen -- --queue=$N --timeout=60 --delay=120 --sleep=30 --memory=32 --tries=3 >> /var/log/laravel_queue.$N.log 2>&1
I have seen a lot of cases where high swap use implies insufficient memory, but our swap usage is low, so I am not sure what troubleshooting step is appropriate here.
PS: we didn't have this problem prior to Laravel 4.1 and our VPS upgrade; here is an image to prove that.
Maybe I should rephrase my question as: how is Committed_AS calculated exactly, and how does PHP factor into it?
Updated 2014.1.29:
I have a theory about this problem: since the Laravel queue worker actually uses PHP's sleep() while waiting for a new job from the queue (in my case beanstalkd), the high Committed_AS estimate could be due to the relatively low workload combined with relatively high memory consumption.
This makes sense, as I see Committed_AS ~= avg. memory usage / avg. workload. While PHP sleep()s, little or no CPU is used, yet whatever memory the process has consumed is still reserved. The result is the server thinking: you use this much memory (on average) even when load is minimal (on average), so you had better be prepared for higher load (but in this case, higher load doesn't result in a higher memory footprint).
If anyone would like to test this theory, I will be happy to award the bounty to them.
Two things you need to understand about Committed_AS:
It is an estimate
It indicates how much memory you would need in a worst-case scenario (plus the swap). It depends on your server's workload at the time: if you have a lower workload, Committed_AS will be lower, and vice versa.
If this wasn't an issue with the prior iteration of the framework queue, and provided you haven't pushed any new code changes to the production environment, then you will want to compare the two iterations. Maybe spin up another box and run some tests. You can also profile the application with Xdebug or Zend Debugger to discover possible causes in the code itself. Another useful tool is strace.
All the best, you're going to need it!
I have recently found the root cause of this high committed memory problem: the PHP 5.5 OPcache settings.
It turns out that setting opcache.memory_consumption = 256 causes each PHP process to reserve much more virtual memory (visible in the VIRT column of top), which results in Munin estimating the potential committed memory to be much higher.
The number of Laravel queue workers we have running in the background only exaggerates the problem.
By putting opcache.memory_consumption back to the recommended 128 MB (we really weren't using all of those 256 MB effectively), we cut the estimate in half. Coupled with a recent RAM upgrade on our server, the estimate is now around 3 GB, which is much more reasonable and within our total RAM limit.
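For completeness, the relevant php.ini lines ended up looking roughly like this (a sketch; the values are what worked for us, not universal recommendations):

; php.ini - OPcache bundled with PHP 5.5
opcache.enable=1
; was 256: each worker reserved the full shared segment as virtual memory,
; which inflated the Committed_AS estimate
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=4000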
Committed_AS is the size that the kernel has actually promised to processes, and the queues run independently and have nothing to do with PHP or Laravel. In addition to what Rijndael said, I recommend installing New Relic, which can be used to track down the problem.
Tip: I've noticed a huge reduction in server load with the Nginx-HHVM combination. Give it a try.

What is a normal amount of memory for a Wordpress script to use?

I'm trying to troubleshoot a memory issue I've run into with WordPress, and rather than bore you with the whole problem I was hoping to get a nice compact answer to three parts of it:
Normal memory footprint. I know there is no real "normal" WordPress script, and yet I think it would be quite useful to hear what a typical WordPress script's memory footprint is. For the sake of argument, let's call "normal" an installation with very few plugins, a base theme like Twenty Twelve, and a script that does some DB retrieval but nothing monumental, maybe a typical blog-roll page. What I'm trying to understand is the baseline memory footprint (a range, not a discrete number) that a more complicated script would be starting from.
Memory ceiling versus memory_get_usage(). I have been adding lots of logging to my scripts that pulls out the memory usage using PHP's memory_get_usage(true) call. This seems like one of the few troubleshooting techniques for determining where the memory is being used, but what perplexes me is that I see memory usage ranging from 15 MB to 45 MB at the script level (note this is with the "true" parameter, so it includes the overhead of the memory manager), and yet in many instances I'll see a 27 MB script suddenly fall over with the message "Allowed memory size of 268435456 bytes exhausted". It is possible that there is one very large memory request that takes place after the logging, but I'm interested to hear whether other people have found differences between the memory limit and the memory reported by memory_get_usage().
New memory ceiling ignored. In a desperate attempt to get the site working again, and to buy myself time to troubleshoot, I thought I'd just raise the memory limit in php.ini to 512M, but doing this seems to have had no impact: the fatal error continues to mention the old 256M limit.
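A quick way to confirm which configuration the web server actually sees is a throwaway script along these lines (a diagnostic sketch, not WordPress-specific):

<?php
// Request this through the browser rather than the CLI: mod_php/FPM often
// read a different php.ini than the command line does, and a .htaccess
// php_value or host-level override can silently take precedence over an
// edit to the global file. The 268435456-byte figure in the fatal error
// is simply 256M, i.e. whatever limit is actually in effect.
echo 'Loaded php.ini: ' . php_ini_loaded_file() . "\n";
echo 'Extra .ini files: ' . php_ini_scanned_files() . "\n";
echo 'memory_limit: ' . ini_get('memory_limit') . "\n";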
Any help would be appreciated. Thanks in advance.
Hopefully someone can answer your question in more detail. For my part:
Q: What is a normal amount of memory for a Wordpress script to use?
A1. Since WP is a plugin-driven CMS, memory usage depends on those plugins. As you must know, there are some very badly coded ones, but an out-of-the-box WP install performs very well.
A2. To help you find bottlenecks, I recommend using BlackBox (a WordPress Debug Bar plugin):
... As for information you will find in profiler, these are: time passed since profiler was started and total memory WordPress was using when checkpoint was reached ...
I just found this interesting article:
WordPress Memory Usage & Website Outage Issues Resolved.
I ran a test of WordPress 4.4 with a clean install on a Windows 7 PC (a local install).
Memory Used / Allocated:
9.37 MB / 9.5 MB
Total Files: 89
Total File Size: 2923.38 KB
Ran in 1.27507 seconds
This was all done in the index file: timing starts before anything is called, and memory/file usage is measured after everything is 100% finished.
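The measurement itself was nothing fancy; something along these lines in index.php produces the same kind of numbers (a sketch, not the exact script used):

<?php
// Top of index.php: record the starting point before WordPress loads.
$start = microtime(true);

// ... the normal WordPress bootstrap (wp-blog-header.php) runs here ...

// Registered at the bottom of index.php: runs once everything has rendered.
register_shutdown_function(function () use ($start) {
    $files = get_included_files();
    $bytes = 0;
    foreach ($files as $file) {
        $bytes += filesize($file);
    }
    printf(
        "\n<!-- %.2f MB used / %.2f MB allocated, %d files (%.2f KB), ran in %.5f seconds -->",
        memory_get_peak_usage() / 1048576,
        memory_get_peak_usage(true) / 1048576,
        count($files),
        $bytes / 1024,
        microtime(true) - $start
    );
});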
I tried a few pages (category, archive, single post, etc.) and all were very similar (within 1% difference) in file count and memory usage.
I think it stands to reason that this is about the best possible performance, so adding plugins/content will only push these numbers up. A caching plugin might offer a little better performance, though.

Drupal site requires a higher memory limit after migration? Why?

We have a website which previously had a memory limit of 12 MB (12 MB in php.ini and 16 MB in settings.php) and worked fine.
After moving to a new server it started giving memory limit errors and displaying half-blank screens.
We increased the limit in both files (php.ini and settings.php) and now it works, but I don't understand how it is possible that it now needs a considerably larger amount of memory (it used to work with 12 MB; now it won't work with less than 20 MB).
I assume you did not change the OS in the process. Moving from Windows to Linux or vice versa is quite likely to change resource usage.
And this is a long shot, but perhaps you moved from a 32-bit system to a 64-bit one? This would slightly increase memory usage as addresses (pointers) are twice as large on 64-bit architectures, and code with lots of small objects uses plenty of pointers.
On the whole though, we can't tell you much without seeing what changed about the system.
12 MB is too low unless you run Drupal exactly as it ships. A higher limit is recommended; the more modules you install, the more you need. Usually 96 MB is enough, even with image processing...
12 MB is really very low; I would tend to ignore the discrepancy and move on.
Ideas what could have changed, though:
The old server could have had modules installed that reduced memory usage, e.g. memcache
The new server may have to rely on GD library for image processing, while the old server maybe had ImageMagick (which is an external tool and doesn't count towards the memory limit)

Find out where your PHP code is slowing down (Performance Issue)

Here's my first question on SO.
I have an internal application for my company which I've recently been asked to maintain. The application is built in PHP and is fairly well coded (OO, DB abstraction, Smarty), nothing WTF-ish.
The problem is that the application is very slow.
How do I go about finding out what's slowing the application down? I've optimized the code to make very few DB queries, so I know that it is the PHP code which is taking a while to execute. I need to get some tools which can help me with this, and I need to devise a strategy for checking my code.
I can do the checking/strategy work myself, but I need more PHP tools to figure out where my app is crapping up.
Thoughts?
I've used Xdebug profiling recently in a similar situation. It outputs a full profile report that can be read with many common profiling apps (I can't give you a list, though; I just used the one that came with Slackware).
As Juan mentioned, xDebug is excellent. If you're on Windows, WinCacheGrind will let you look over the reports.
Watch this presentation by Rasmus Lerdorf (creator of PHP). He goes into some good examples of testing PHP speed and what to look for as well as some internals that can slow things down. XDebug is one tool he uses. He also makes a very solid point about knowing what performance cost you're getting into with frameworks.
Video:
http://www.archive.org/details/simple_is_hard
Slides (since it's hard to see on the video):
http://talks.php.net/show/drupal08/1
There are many variables that can impact your application's performance. I recommend that you do not instantly assume PHP is the problem.
First, how are you serving PHP? Have you tried basic optimization of Apache or IIS itself? Is the server busy processing other kinds of requests? Have you taken advantage of a PHP code accelerator? One way to test whether the server is your bottleneck is to try running the application on another server.
Second, is performance of the entire application slow, or does it only seem to affect certain pages? This could give you an indication of where to start analyzing performance. If the entire application is slow, the problem is more likely in the underlying server/platform or with a global SQL query that is part of every request (user authentication, for example).
Third, you mentioned minimizing the number of SQL queries, but what about optimizing the existing queries? If you are using MySQL, are you taking advantage of the various strengths of each storage engine? Have you run EXPLAIN on your most important queries to make sure they are properly indexed? This is critical on queries that access big tables; the larger the dataset, the more you will notice the effects of poor indexing. Luckily, there are many articles, such as this one, that explain how to use EXPLAIN.
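For example, EXPLAIN can be run straight from the application while you are profiling (a sketch; the connection details and the query are placeholders):

<?php
// Quick index check: prefix a suspect query with EXPLAIN and dump the plan.
// A "type" of ALL combined with a large "rows" estimate usually means the
// query is scanning the whole table and is missing an index.
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$stmt = $pdo->query('EXPLAIN SELECT * FROM orders WHERE customer_id = 42');
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    print_r($row);
}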
Fourth, a common mistake is to assume that your database server will automatically use all of the resources available to the system. You should check to make sure you have explicitly allocated sufficient resources to your database application. In MySQL, for example, you'll want to add custom settings (in your my.cnf file) for things like key buffer, temp table size, thread concurrency, innodb buffer pool size, etc.
If you've double-checked all of the above and are still unable to find the bottleneck, a code profiler like Xdebug can definitely help. Personally, I prefer the Zend Studio profiler, but it may not be the best option unless you are already taking advantage of the rest of the Zend Platform stack. However, in my experience it is very rare that PHP itself is the root cause of slow performance. Often, a code profiler can help you determine with more precision which DB queries are to blame.
You could also use APD (Advanced PHP Debugger).
It's quite easy to make it work.
$ php apd-test.php
$ pprofp -l pprof.SOME_PID
Trace for /Users/martin/develop/php/apd-test/apd-test.php
Total Elapsed Time = 0.12
Total System Time = 0.01
Total User Time = 0.07
Real User System secs/ cumm
%Time (excl/cumm) (excl/cumm) (excl/cumm) Calls call s/call Memory Usage Name
--------------------------------------------------------------------------------------
71.3 0.06 0.06 0.05 0.05 0.01 0.01 10000 0.0000 0.0000 0 in_array
27.3 0.02 0.09 0.02 0.07 0.00 0.01 10000 0.0000 0.0000 0 my_test_function
1.5 0.03 0.03 0.00 0.00 0.00 0.00 1 0.0000 0.0000 0 apd_set_pprof_trace
0.0 0.00 0.12 0.00 0.07 0.00 0.01 1 0.0000 0.0000 0 main
There is a nice tutorial on how to compile APD and profile with it: http://martinsikora.com/compiling-apd-for-php-54
phpED (http://www.nusphere.com/products/phped.htm) also offers great debugging and profiling, and the ability to add watches, breakpoints, etc in PHP code. The integrated profiler directly offers a time breakdown of each function call and class method from within the IDE. Browser plugins also enable quick integration with Firefox or IE (i.e. visit slow URL with browser, then click button to profile or debug).
It's been very useful in pointing out where the app is slow so that coding effort can be concentrated there, and it avoids wasting time optimising already-fast code. Having tried Zend and Eclipse, I've now been sold on the ease of use of phpED.
Bear in mind that both Xdebug and phpED (with DBG) require an extra PHP module to be installed when debugging against a webserver. phpED also offers a local debugging option, though I haven't tried it.
Xdebug's profiler is definitely the way to go. Another tip: WinCacheGrind is good but hasn't been updated recently; http://code.google.com/p/webgrind/ runs in the browser and may be an easy and quick alternative.
Chances are, though, it's still the database. Check for relevant indexes, and that the database has sufficient memory to cache as much of the working data as possible.
If it's a large code base, try APC if you aren't using it already.
http://pecl.php.net/package/APC
You can also try using PHP's register_tick_function, which tells PHP to call a given function periodically throughout your code. You could then keep track of which function is currently running and the amount of time between calls, and see what's taking the most time.
http://www.php.net/register_tick_function
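A bare-bones version of that idea looks something like this (a sketch; the tick granularity and the reporting are just for illustration):

<?php
// A tick fires after every N low-level statements; the handler samples which
// function is executing at that moment. Crude, but it needs no extensions.
declare(ticks=100);

$samples = array();

function profiler_tick()
{
    global $samples;
    $trace = debug_backtrace();
    // Frame 0 is this handler itself; frame 1 is the function that was
    // running when the tick fired (missing when at the global scope).
    $name = isset($trace[1]['function']) ? $trace[1]['function'] : '{main}';
    if (!isset($samples[$name])) {
        $samples[$name] = 0;
    }
    $samples[$name]++;
}

register_tick_function('profiler_tick');

// ... run the slow code here ...

arsort($samples);
print_r($samples); // the most frequently sampled functions are the likely hot spots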
We use Zend Development Environment (on Windows). We resolved a memory usage spike yesterday by stepping through the debugger while running Process Explorer to watch the memory/CPU/disk activity as each line was executed.
Process Explorer: http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx.
ZDE includes a basic performance profiler that can show time spent in each function call during page requests.
I use a combination of PEAR Benchmark and log4php.
At the top of the scripts I want to profile, I create an object that wraps a Benchmark_Timer object. Throughout the code I add $object->setMarker("name"); calls, especially around suspect code.
The wrapper class has a destroy method that takes the logging information and writes it to log4php. I typically send this to syslog (many servers, aggregated into one log file on one server).
In debug, I can watch the log files and see where I need to improve things. Later on in production, I can parse the log files and do performance analysis.
It's not xdebug, but it's always on and gives me the ability to compare any two executions of the code.
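Stripped down, the wrapper is just a thin layer over Benchmark_Timer, something like this (a sketch; the class name is made up and log4php is swapped for error_log to keep it self-contained):

<?php
// A thin wrapper in the spirit described above.
require_once 'Benchmark/Timer.php'; // PEAR Benchmark package

class RequestProfiler
{
    private $timer;

    public function __construct()
    {
        $this->timer = new Benchmark_Timer();
        $this->timer->start();
    }

    public function setMarker($name)
    {
        $this->timer->setMarker($name);
    }

    public function __destruct()
    {
        $this->timer->stop();
        foreach ($this->timer->getProfiling() as $step) {
            error_log(sprintf('%s: %ss since previous marker, %.4fs total',
                $step['name'], $step['diff'], $step['total']));
        }
    }
}

// Usage in a script under investigation:
$profiler = new RequestProfiler();
$profiler->setMarker('after config load');
// ... suspect code ...
$profiler->setMarker('after report generation');
// The profile is written automatically when $profiler falls out of scope.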
You can also look at HAProxy or any other load-balancing solution if degraded server performance is the cause of the application's slow processing.
