In a system I am currently working on, there is one process that loads a large amount of data into an array for sorting/aggregating/whatever. I know this process needs optimising for memory usage, but in the short term it just needs to work.
Given the amount of data loaded into the array, we keep hitting the memory limit. It has been increased several times, and I am wondering: is there a point where increasing it becomes a generally bad idea? Or is it only a matter of how much RAM the machine has?
The machine has 2GB of RAM and the memory_limit is currently set at 1.5GB. We can easily add more RAM to the machine (and will anyway).
Have others encountered this kind of issue? And what were the solutions?
The configuration of memory_limit for PHP running as an Apache module to serve web pages has to take into consideration how many Apache processes you can have running at the same time on the machine -- see the MaxClients configuration option for Apache.
If MaxClients is 100 and you have 2,000 MB of RAM, a very quick calculation shows that you should not use more than 20 MB *(because 20 MB * 100 clients = 2 GB of RAM, i.e. the total amount of memory your server has)* for the memory_limit value.
And this is without considering that there are probably other things running on the same server, like MySQL, the system itself, ... And that Apache is probably already using some memory for itself.
Of course, this is also a "worst case scenario", which assumes that each PHP page is using the maximum amount of memory it can.
In your case, if you need such a big amount of memory for only one job, I would not increase the memory_limit for PHP running as an Apache module.
Instead, I would launch that job from the command line (or via a cron job), and specify a higher memory_limit specifically in this one and only case.
This can be done with the -d option of php, like this:
$ php -d memory_limit=1GB temp.php
string(3) "1GB"
Considering, in this case, that temp.php only contains:
<?php
var_dump(ini_get('memory_limit'));
In my opinion, this is way safer than increasing the memory_limit for the PHP module for Apache -- and it's what I usually do when I have a large dataset, or some really heavy stuff I cannot optimize or paginate.
If you need to define several values for the PHP CLI execution, you can also tell it to use another configuration file, instead of the default php.ini, with the -c option:
php -c /etc/phpcli.ini temp.php
That way, you have:
/etc/php.ini for Apache, with low memory_limit, low max_execution_time, ...
and /etc/phpcli.ini for batches run from command-line, with virtually no limit
This ensures your batches will be able to run -- and you'll still have security for your website (memory_limit and max_execution_time being security measures).
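As a sketch, the CLI configuration file could simply relax those two limits -- the values below are illustrative only:
; /etc/phpcli.ini -- illustrative values for batch jobs
memory_limit = -1
max_execution_time = 0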
Still, if you have the time to optimize your script, you should; for instance, in that kind of situation where you have to deal with lots of data, pagination is a must-have ;-)
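A minimal sketch of what such pagination could look like with PDO -- the table, columns, and connection details are all hypothetical:
<?php
// Process the dataset in fixed-size pages instead of loading it all at once.
// Table/column names and credentials are made up for illustration.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false); // bind LIMIT/OFFSET as real integers
$batchSize = 1000;
$offset = 0;

do {
    $stmt = $pdo->prepare('SELECT id, value FROM items ORDER BY id LIMIT :lim OFFSET :off');
    $stmt->bindValue(':lim', $batchSize, PDO::PARAM_INT);
    $stmt->bindValue(':off', $offset, PDO::PARAM_INT);
    $stmt->execute();
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    $count = count($rows);

    foreach ($rows as $row) {
        // fold each row into a compact running aggregate here
    }

    unset($rows);            // free this page before loading the next one
    $offset += $batchSize;
} while ($count === $batchSize);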
Have you tried splitting the dataset into smaller parts and processing only one part at a time?
If you fetch the data from a file on disk, you can use the fread() function to load smaller chunks, or some sort of unbuffered DB query in the case of a database.
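A minimal sketch of the chunked-read approach, assuming a hypothetical data file data.csv -- each chunk is processed and then discarded:
<?php
// Read the file in small chunks so only one chunk is in memory at a time.
$handle = fopen('data.csv', 'rb');
if ($handle === false) {
    die("Unable to open file\n");
}

while (!feof($handle)) {
    $chunk = fread($handle, 8192);   // 8 KB at a time
    // ... parse/aggregate this chunk here ...
    unset($chunk);                   // nothing accumulates between iterations
}

fclose($handle);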
I haven't kept up with PHP since v3.something, but you could also use a form of cloud computing. A 1 GB dataset seems big enough to be worth processing on multiple machines.
Given that you know that there are memory issues with your script that need fixing and you are only looking for short-term solutions, then I won't address the ways to go about profiling and solving your memory issues. It sounds like you're going to get to that.
So, I would say the main things you have to keep in mind are:
Total memory load on the system
OS capabilities
PHP is only one small component of the system. If you allow it to eat up a vast quantity of your RAM, then the other processes will suffer, which could in turn affect the script itself. Notably, if you are pulling a lot of data out of a database, then your DBMS might require a lot of memory in order to create result sets for your queries. As a quick fix, you might want to identify any queries you are running and free their results as soon as possible, to give yourself more memory for a long job run.
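As a quick illustration with mysqli (table, columns, and credentials are made up), free each result set the moment you are done with it:
<?php
// Free result sets as soon as possible so the rest of the long job has the memory.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

$result = $db->query('SELECT id, value FROM big_table');
while ($row = $result->fetch_assoc()) {
    // fold each row into a compact aggregate instead of keeping it
}
$result->free();   // release the result set immediately

// ... the rest of the long-running job now has that memory back ...
$db->close();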
In terms of OS capabilities, you should keep in mind that 32-bit systems, which you are likely running on, can only address up to 4GB of RAM without special handling. Often the limit can be much less depending on how it's used. Some Windows chipsets and configurations can actually have less than 3GB available to the system, even with 4GB or more physically installed. You should check to see how much your system can address.
You say that you've increased the memory limit several times, so obviously this job is growing larger and larger in scope. If you're up to 1.5 GB, then even installing 2 GB more RAM sounds like it will just be a short reprieve.
Have others encountered this kind of issue? And what were the solutions?
I think you probably already know that the only real solution is to break down and spend the time to optimize the script soon, or you'll end up with a job that will be too big to run.
Related
I am on a VPS server (quad core with 768 MB of RAM; I am running Apache and using APC as my default caching engine). I get around 2000 visitors per day with a maximum of 100 concurrent visitors.
According to CakePHP DebugKit, my most memory-intensive pages use around 16 MB and the other pages average around 10 MB.
Is this memory usage normal?
Is it normal for my memory to bottleneck at 2000 visitors per day?
Should I consider upgrading my plan to 2GB RAM?
I also noticed that the View rendering is taking up most of the memory, around 70% on most pages.
I am monitoring my resource usage; when I have around 50 or more concurrent users, I have 0 MB of free memory left.
Thank you
You should also check your other processes. From my experience, MySQL takes up more memory than anything else on any stack that I run. You should also implement better page caching so that PHP doesn't need to be touched when it isn't absolutely necessary. But Apache is also a memory hog that needs to be fine-tuned. If you want to stick with Apache, then run Varnish in front of it.
Also, keep APC, but add Memcached as well; it's much faster.
If your site has a spike-load that brings it to zero memory, then, if you can, consider launching extra instances of the server and doing a sort of round-robin (if the VPS is cloud-hosted and this is possible). If the load is constant, then definitely upgrade.
#burzum is completely right, however, that you should just switch to nginx. It's far, far better than Apache at this point. But, just to put you on the right track, quite a few people run nginx as a reverse proxy in front of Apache, and while that does speed up the server, it's entirely unnecessary because nginx can do pretty much anything you need it to do without Apache. Don't bother using Varnish in front of nginx either because nginx can act as its own reverse proxy.
Your best bet is to implement nginx with APCu (upgrade PHP to 5.5 if possible for better performance), use Memcached, and take advantage of nginx's native microcaching abilities. If your site is heavy on reads and light on writes, then you might notice that everything is taken care of by nginx just retrieving a cached copy from Memcached. While this helps with memory, it also helps with processing. My servers' CPUs usually sit at 3-5% usage during peaks.
Is this memory usage normal?
Yes, that doesn't seem high for a normal CakePHP application.
Is it normal for my memory to bottleneck at 2000 visitors per day?
I guess yes, I'm not a "server guy".
Should I consider upgrading my plan to 2GB RAM?
I would try switching to Nginx first. Apache is a memory-eating monster compared to Nginx; just search for a few benchmarks -- a random one I've picked via a quick search is from this page.
Overall, Nginx should give you much better performance. At some stage I would consider upgrading the memory, but try Nginx first.
I am trying to figure out a situation where PHP is not consuming a lot of memory but instead causes a very high Committed_AS result.
Take this munin memory report for example:
As soon as I kick off our Laravel queue (10-30 workers), committed memory goes through the roof. We have 2 GB RAM + 2 GB swap on this VPS instance, and so far there is about 600 MB of unused memory (that's about 30% free).
If I understand Committed_AS correctly, it's meant to be a 99.9% guarantee against out-of-memory issues given the current workload, and it seems to suggest we would need to triple our VPS memory just to be safe.
I tried reducing the number of queue workers from 30 to around 10, but as you can see the green line is still quite high.
As for the setup: Laravel 4.1 with PHP 5.5 and OPcache enabled. The upstart script we use spawns instances like the following:
instance $N
exec start-stop-daemon --start --make-pidfile --pidfile /var/run/laravel_queue.$N.pid --chuid $USER --chdir $HOME --exec /usr/bin/php artisan queue:listen -- --queue=$N --timeout=60 --delay=120 --sleep=30 --memory=32 --tries=3 >> /var/log/laravel_queue.$N.log 2>&1
I have seen a lot of cases where high swap use implies insufficient memory, but our swap usage is low, so I am not sure what troubleshooting step is appropriate here.
PS: we didn't have this problem prior to Laravel 4.1 and our VPS upgrade; here is an image to prove that.
Maybe I should rephrase my question as: how are Committed_AS calculated exactly and how does PHP factor into it?
Updated 2014.1.29:
I have a theory on this problem: since the Laravel queue worker actually uses PHP sleep() when waiting for a new job from the queue (in my case beanstalkd), the high Committed_AS estimation would be due to the relatively low workload and relatively high memory consumption.
This makes sense, as I see Committed_AS ~= avg. memory usage / avg. workload. While PHP sleep()s, little to no CPU is used, yet whatever memory it consumes is still reserved. This results in the server thinking: hey, you use this much memory (on average) even when load is minimal (on average), so you'd better be prepared for higher load (but in this case, higher load doesn't result in a higher memory footprint).
If anyone would like to test this theory, I will be happy to award the bounty to them.
There are two things you need to understand about Committed_AS:
It is an estimate
It indicates how much memory you would need in a worst-case scenario (plus swap). It depends on your server workload at the time: if you have a lower workload, then Committed_AS will be lower, and vice versa.
If this wasn't an issue with the prior iteration of the framework queue, and provided you haven't pushed any new code changes to the production environment, then you will want to compare the two iterations. Maybe spin up another box and run some tests. You can also profile the application with xdebug or zend_debugger to discover possible causal factors in the code itself. Another useful tool is strace.
All the best, you're going to need it!
I have recently found the root cause of this high committed memory problem: PHP 5.5 OPcache settings.
It turns out that setting opcache.memory_consumption = 256 causes each PHP process to reserve much more virtual memory (visible in the VIRT column of top), which results in Munin estimating the potential committed memory to be much higher.
The number of Laravel queue workers we have running in the background only exaggerates the problem.
By setting opcache.memory_consumption to the recommended 128 MB (we really weren't using all those 256 MB effectively), we cut the estimated value in half. Coupled with a recent RAM upgrade on our server, the estimate is now around 3 GB, which is much more reasonable and within our total RAM limit.
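For reference, the relevant php.ini lines look like this (128 is the value discussed above; tune it to what your scripts actually use):
; OPcache settings -- 128 MB was plenty for our scripts
opcache.enable=1
opcache.memory_consumption=128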
Committed_AS is the amount of memory the kernel has actually promised to processes, and the queues run independently and have nothing to do with PHP or Laravel. In addition to what Rijndael said, I recommend installing New Relic, which can be used to find out the problem.
Tip: I've noticed a huge reduction in server-load with NginX-HHVM combination. Give it a try.
My PHP application on Windows + Apache has stopped, showing “Out of memory (allocated 422313984) (tried to allocate 45792935 bytes)”.
I can’t understand why it stopped, because my machine has 4 GB of physical memory and I’ve set the memory_limit directive to -1 in the php.ini file. I’ve also restarted Apache.
I think 4 GB is enough to allocate more than 422313984 + 45792935 bytes of memory.
Is there another setting that limits memory for PHP or Apache?
I also checked the performance counters. They show that the maximum memory usage was 2 GB for the whole machine, and that the httpd process used 1.3 GB.
I can’t show the code, but essentially it fetches 30000 rows, 199 bytes each, from the DBMS and parses each one into XML using simplexml_load_string() in a loop.
The code finishes normally if the data is small or if I shorten the loop, e.g. from 30000 iterations to 1000.
Another observation: the first run after starting Apache succeeds.
I think some memory leak is happening.
Actually, I echoed PHP_INT_SIZE and PHP shows 4, so my PHP is probably a 32-bit version.
If the memory usage problem comes from this version of PHP, as Álvaro G. Vicario points out below, can I fix it by switching to a 64-bit version of PHP? And how can I get a 64-bit version of PHP for Windows? I can’t find it on http://windows.php.net
«Out of memory» messages (not to be confused with «Allowed memory size exhausted» ones) always indicate that the PHP interpreter literally ran out of memory. There's no PHP or Apache setting you can tweak -- the computer is just not able to feed PHP with more RAM. Common causes include:
Scripts that use too much memory.
Memory leaks or bugs in the PHP interpreter.
SimpleXML is by no means a lightweight extension. On the contrary, its ease of use and handy features come at a cost: high resource consumption. Even without seeing a single line of code, I can assure you that SimpleXML is totally unsuitable for creating an XML document with 30,000 items. A PHP script that uses 2 GB of RAM can only end up taking down the whole server.
Nobody likes changing a base library in the middle of a project, but you'll eventually need to do so. PHP provides a streaming writer called XMLWriter. It's really not much harder to use and it provides two benefits:
It's way less resource intensive, since it doesn't create the complex object that SimpleXML uses.
You can flush partial results to a file, or even write to the file directly.
With it, I'm sure your 2 GB script can run with a few MB.
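A minimal sketch of that streaming approach -- the $rows array and its keys are hypothetical stand-ins for the 30,000 database records:
<?php
// Stream the XML straight to a file with XMLWriter; memory stays flat
// because output is flushed to disk as it is produced.
$rows = array(                       // stand-in data for illustration
    array('id' => 1, 'value' => 'foo'),
    array('id' => 2, 'value' => 'bar'),
);

$writer = new XMLWriter();
$writer->openURI('output.xml');      // write directly to a file
$writer->startDocument('1.0', 'UTF-8');
$writer->startElement('rows');

foreach ($rows as $row) {
    $writer->startElement('row');
    $writer->writeElement('id', (string) $row['id']);
    $writer->writeElement('value', (string) $row['value']);
    $writer->endElement();           // </row>
    $writer->flush();                // push buffered output to disk
}

$writer->endElement();               // </rows>
$writer->endDocument();
$writer->flush();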
I have a problem with my VPS.
We are using a VPS to run our CMS and our websites. For now we have a 300 MB memory limit, and we are close to reaching it.
I would like to keep costs low (I know, increasing memory is not too expensive), but if I can find a way to optimize what we have, that would be better.
What can I do?
Thanks!
I would suggest increasing memory -- more memory, faster website :-)
But if speed is not important, reduce all cache sizes, set PHP's memory_limit to 8M, and disable opcode caching (APC, eAccelerator) -- see the php.ini sketch below.
Or try a Raspberry Pi as the server -- it now comes with 512 MB :-)
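A sketch of what those low-memory php.ini lines could look like (apc.enabled assumes APC is your opcode cache; adjust for your own setup):
; trade speed for memory on a tiny VPS
memory_limit = 8M
apc.enabled = 0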
I have a small 256 MB VPS which uses
Apache + PHP + MariaDB (mysqld)
I found memory was being gobbled by Apache on every request, 20 MB at a time for a simple WordPress page. It would settle, but over time it would eat into swap space and slow everything to a crawl. I suspect there are ways to fine-tune mpm_event and mpm_worker to stop this from entering swap, but I didn't figure out how.
When working in such tight environments it is important to know what is using what memory, and to reduce everything so that swapping is minimized.
A summary of what I did follows. I managed to get 100 MB of physical headroom for tweaking; this is not a heavily loaded server, but it needs to be accessible (and cheap):
Use Slackware (the OS and default services seem to use less memory than Debian; perhaps there is less loaded by default on my provider's image)
Switch off mysqld's InnoDB tables and use MyISAM
Configure mysqld to reduce cache sizes where relevant
Install the CMSes fresh so they are definitely using MyISAM, or ALTER all the tables
In Apache, use mpm_prefork rather than mpm_event or mpm_worker ("apachectl -V" will tell you which is being used)
Set sensible values for MaxClients and the other server limits (start from low values for the number of servers and requests and work up)
Test the server under load whilst running 'watch free' or 'top' over SSH
You should be able to see the memory and the processes being created and destroyed as the load increases
Adjust your httpd server settings and retest until happy
I like to see memory max out at no more than 90% and drop back down when the load is gone before I am convinced the server will stay up without grinding to a halt (i.e. without starting to use swap).
Check your php.ini memory settings, as stated in another answer.
I have also set up a cron job to email me when swap starts to get used significantly, so I can restart something, or even restart the whole server if I can't find what has eaten the memory; this should happen less and less the more you fine-tune.
As stated this is not an environment that will perform well under heavy load but the cost may be more important to you.
Just my two pence worth...
I would look at Nginx as concerto49 recommended. If you have just one website on it, also consider LiteSpeed (www.litespeedtech.com); they have a free version which may be sufficient to power your site.
If it's PHP based, then strip out everything you're not using. Use APC/XCache to speed up processing of every request. Nginx also has a caching module that could help, so you can avoid hitting PHP for every request while the content is still fresh.
What type of VPS is it? OpenVZ? Xen? KVM? If it is OpenVZ, does it have vSwap or burst memory?
What type of CMS/website are you running? Is it PHP based? Are you using Apache? If so, have you tried nginx? I would look at optimizing the web server component and removing unused processes/applications to reduce memory and increase performance.
We have a website which had a memory limit of 12 MB (12 MB in php.ini and 16 MB in settings.php) and it worked previously.
After moving to a new server it started giving memory limit errors and displaying a half-blank screen.
We increased the limit in both files (php.ini and settings.php) and now it works, but I don't understand how it is possible that it now needs a considerably larger amount of memory (it used to work with 12 MB; now it won't work with less than 20 MB).
I assume you did not change the OS in the process. Moving from Windows to Linux or vice versa is quite likely to change resource usage.
And this is a long shot, but perhaps you moved from a 32-bit system to a 64-bit one? This would slightly increase memory usage as addresses (pointers) are twice as large on 64-bit architectures, and code with lots of small objects uses plenty of pointers.
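A quick way to check which you are on, and to get a rough feel for the overhead (the array below is only an illustration):
<?php
// 4 = 32-bit build, 8 = 64-bit build
var_dump(PHP_INT_SIZE);

// Rough illustration of per-structure overhead: the same data
// costs noticeably more memory on a 64-bit build.
$before = memory_get_usage();
$items = array();
for ($i = 0; $i < 100000; $i++) {
    $items[] = array('id' => $i);    // many small structures -> many pointers
}
echo (memory_get_usage() - $before) . " bytes used\n";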
On the whole though, we can't tell you much without seeing what changed about the system.
12 MB is too low unless you use Drupal exactly as it comes, with nothing added. Higher is recommended; the more modules you install, the more you need. Usually 96 MB is enough, even with image processing...
12 MB is really very low. I would tend to ignore the discrepancy and move on.
Some ideas on what could have changed, though:
The old server could have had modules installed that reduced memory usage, e.g. memcache
The new server may have to rely on the GD library for image processing, while the old server may have had ImageMagick (which is an external tool and doesn't count towards the memory limit)