PHP memory issue

I set memory_limit to -1, but I am still getting out-of-memory errors.
I am working with a legacy system, which is poorly coded ( :) ). I ran ApacheBench to check how the system handles concurrent user access:
ab -n2000 -c100 http://......com/
In the log file I see many memory-related errors.
The code uses object buffering. Could this be the issue? Is object buffering related to memory_limit?

Raising (or removing) PHP's memory limit only stops the script from being killed when it passes a certain value. It does NOT physically give your hardware more memory (or swap). Ultimately, if PHP needs memory which you don't physically have, things will break.

Object buffering in PHP: I don't know what that means. If you mean output buffering with ob_start() and ob_end_flush()/ob_end_clean(), that is not related to objects and has no real impact on PHP's memory usage.
PHP's memory usage depends on the size of the objects created while building the response to the request. If you perform the same request several times, the memory usage of each PHP execution should be the same.
With no limit on memory usage, the only thing you achieve is avoiding a request crash caused by excessive memory usage. That means if your problem is memory usage on your index page, you can easily test it by setting this value and decreasing it until the page crashes (64 MB, 32 MB, 16 MB, 8 MB, etc.). You do not need ab for that.
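For example, a minimal probe (the exact value and the logging line are illustrative, not part of the original answer):

    <?php
    // Put this at the very top of index.php, then lower the value step by
    // step (64M, 32M, 16M, ...); the smallest value that does not crash
    // approximates the real memory footprint of the request.
    ini_set('memory_limit', '32M');

    // ... the rest of the page ...

    // Optionally log the measured peak at the end of the request.
    error_log('peak: ' . memory_get_peak_usage(true) . ' bytes');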
Now, when you use ab you make your Apache server respond to several parallel requests. For each PHP request a new Apache process is created, and this new process executes an independent PHP run; it takes the same amount of memory as the other processes doing the same thing (you request the same page, nothing is shared between PHP executions, and each PHP execution happens in one Apache process).
I assume you're using Apache with mpm_prefork and mod_php, not php-fpm or FastCGI PHP.
So if you have a memory problem in that situation, maybe you allow too many Apache processes. By default the limit is 150; if each process takes 30 MB of RAM (check that with top), that makes 30 MB × 150 = 4500 MB ≈ 4.4 GB. See the problem?
3 easy solutions:
1. Decrease the number of Apache processes (MaxClients), and set MinSpareServers, MaxSpareServers and StartServers to that same amount, so you won't lose time creating and destroying Apache processes (see the sketch after this list).
2. Limit the PHP application's memory usage; then you'll be able to handle more processes (well, not so easy, it can mean a long rewrite).
3. Use APC; it decreases memory usage (and speeds up execution).
After that, the other solutions are more complex:
- Use Apache in worker mode, or nginx, and get PHP out of the web server with php-fpm.
- Use a proxy cache like Varnish to catch requests that can be cached (pseudo-static content) and avoid hitting Apache & PHP too much.
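For the first option, the prefork tuning might look roughly like this (a sketch assuming ~30 MB per mod_php process and about 1 GB of RAM to spare for Apache; the numbers are illustrative):

    <IfModule mpm_prefork_module>
        # 1024 MB / 30 MB per process ≈ 34; rounded down for headroom.
        # Pinning start/spare counts to the same value avoids process churn.
        StartServers        30
        MinSpareServers     30
        MaxSpareServers     30
        MaxClients          30
        # Recycle children periodically so slow leaks cannot accumulate.
        MaxRequestsPerChild 500
    </IfModule>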


memory_get_peak_usage(true) PHP and Virtual Memory Size resource usage are very different

I have a PHP script which runs many http requests via curl - I use a variation on Rolling Curl (curl_multi) so that the requests can be run simultaneously. The script runs every few minutes with cron.
It is on a VPS and I received some 'Excessive resource usage' warnings from lfd (ConfigServer Security & Firewall) because the resource usage of the script went over the threshold of 512MB.
An example notification is:
Resource: Virtual Memory Size
Exceeded: 582 > 512 (MB)
Executable: /path/to/php-cgi
Command Line: /path/to/myscript.php
So I upped the threshold to 800MB and recorded the memory usage of the script using memory_get_peak_usage(true) every time the script runs.
However, the results from memory_get_peak_usage(true) are consistently 2MB... which is nowhere near the Virtual Memory usage as seen in the warning.
Note - only one instance of the script runs as multiple instances are prevented using flock.
So what am I missing?
Update - virtual memory usage also greater than php.ini memory_limit
After upping the threshold to 800MB I still get occasional notifications from lfd. I also checked the php.ini settings and memory_limit is set to 256MB - in theory the script wouldn't run if it was using more than this. From this I am guessing that either:
a) It is not PHP that is using the memory (could it be MySQL or cURL? Is the memory used by these included in the value from memory_get_peak_usage(true)?)
b) I'm not getting an accurate figure from lfd
Second Update - memory used by MySQL is not included in memory_get_peak_usage(true)
I suspect this is the issue - however, I'm not sure exactly what CSF includes in the resource figure. I will look into making the MySQL requests more efficient and see how that affects things.
PHP's memory_get_usage family of functions tracks the state of PHP's internal memory manager, which is responsible for all memory directly used by things like PHP variables. This is also what is limited by the memory_limit setting - after the limit is reached, the internal memory manager will refuse to claim more RAM from the OS.
However, not all the RAM used by your process is allocated via that memory manager. For instance, third party libraries like CURL or the MySQL connection library will have memory allocation completely independent of PHP, because they are effectively separate programs being executed by the PHP engine. Another example is that opening a file might cause it to be mapped into memory by the OS kernel, even if you never read its contents into a PHP variable.
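On a Linux box you can see the gap yourself by comparing PHP's internal counter with the kernel's view of the process (a hypothetical snippet, not from the original post; /proc is Linux-specific):

    <?php
    // What memory_limit actually polices: PHP's own memory manager.
    $phpPeak = memory_get_peak_usage(true);

    // The kernel's view of the whole process, which also covers cURL,
    // the MySQL client library, mapped files and the engine itself.
    $status = file_get_contents('/proc/self/status');
    preg_match('/VmSize:\s+(\d+)\s+kB/', $status, $m);
    $vmSize = (int) $m[1] * 1024;

    printf("PHP peak: %.1f MB, process VmSize: %.1f MB\n",
           $phpPeak / 1048576, $vmSize / 1048576);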
This talk by Julien Pauli at PHP UK a few years ago goes into more details on the different types of memory usage, and how to measure them.

Drawbacks of increasing PHP memory limit

Recently my app started throwing fatal errors about exhausting the maximum allowed memory. After some research, I found out that the limit is set in the .htaccess file, to 64M.
The attempted allocation is around 80 MB, and we are able to provide these resources, but I wanted to ask the community whether increasing the value of this variable is a good solution to the problem.
Thank you very much
The answer depends on what your script is doing :) It sounds like the increase in your case is quite small (from 64 to 80 MB; I'd recommend sticking to powers of two, though, and upping it to 128 MB), so it shouldn't make much of a difference on modern machines. But to know whether it was the right thing to do, you need to find out why you needed more memory.
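Since the limit lives in .htaccess here, the change itself is a one-liner (this works under mod_php; CGI/FastCGI setups ignore it and need the value in php.ini or the pool config instead):

    php_value memory_limit 128M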
If your processing simply requires more memory (e.g. you're processing uploaded files in memory, or decoding big JSON or XML structures, or doing something else memory-intensive), upping your limit is OK and common.
If, however, your application has memory leaks or is written inefficiently, then upping the memory limit just masks the problem instead of solving it. You'll likely keep running into the issue and end up raising the limit again and again, which is not sustainable.
If you don't know what caused the sudden increase in memory consumption, I'd recommend profiling your application, e.g. with xhprof. You can also look at the last few changes to your app and see what might have caused it. If you can justify it, give your script more memory; otherwise try optimising your code first.
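A profiling run with xhprof could look like this (a sketch assuming the xhprof extension is installed; run_suspect_code() is a placeholder for whatever code path you want to measure):

    <?php
    // Record memory and CPU deltas for every function call.
    xhprof_enable(XHPROF_FLAGS_MEMORY | XHPROF_FLAGS_CPU);

    run_suspect_code();   // placeholder for the code under test

    $profile = xhprof_disable();

    // Sort by 'pmu' (peak memory usage delta) to find the worst offenders.
    uasort($profile, function ($a, $b) { return $b['pmu'] - $a['pmu']; });
    print_r(array_slice($profile, 0, 10, true));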
PHP is typically deployed in a way that one PHP process serves multiple requests, one after another. During a single request a script can allocate memory; at the end, this memory is freed. So far so good. Now, most operating systems are built to keep memory bound to a process once it has been allocated, even after it is freed: the assumption is that a program which needed that much memory once will need it again, and it's cheaper to keep it available to the process than to take it back. Thus in a PHP deployment it can happen that one request takes a lot of memory, and that memory then stays bound to the process and is no longer available to the system. Additionally, a process taking far more memory than anticipated is a possible indication of a bug. For those two things the memory_limit serves as a safety net.
If your application needs more memory, it's generally fine to increase the limit. The absolute maximum value depends on the system (available RAM divided by the number of worker processes is a rough formula - rough because it doesn't include other memory needed; for example, 4 GB of RAM and 32 workers gives a ceiling of about 128 MB per process). Typically you should only increase by the amount actually needed.
Of course, when changing this you have to remember to carry the change over when moving to other systems. Also, less memory usage typically means faster execution, so you should try to see if you can optimise your code.
Side note: I purposely simplified the memory model above, ignoring virtual memory pages and how operating systems optimize there

php5-fpm children and requests

I have a question.
I own a 128 MB VPS with a simple blog that gets just a hundred hits per day.
I have nginx + php5-fpm installed. Considering the low traffic and the RAM, I decided to set fpm to static with 1 server running. While I was doing random tests, like running PHP scripts over HTTP that last over 30 minutes, I tried to open the blog on the same machine and noticed that the site was basically unreachable. So I went to the configuration and read this:
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes to be created when pm is set to 'dynamic'.
; **This value sets the limit on the number of simultaneous requests that will be
; served**
What shocked me the most is that I didn't know this, because I always assumed that a PHP child would handle hundreds of requests at the same time, like an HTTP server does!
Did I get it right?
If, for example, I launch 2 php-fpm children and run 2 "long scripts" at the same time, will all the sites using the same PHP backend be unreachable?? How is this usable?
You may think: "duh! a PHP script (web page) is usually processed in 100 ms" - no doubt about that, but what happens if you have pages that can run for about 10 seconds each, and I have 10 visitors, with php-fpm set to 5 servers so it accepts only 5 requests at a time? Will they all be queued, or will they experience timeouts?
I'm honestly used to running sites on Windows with Apache and mod_php, and I never experienced these issues, because apparently those limits don't apply there, it being a different way of using PHP.
This also raises another question. If I have file_1.php with sleep(20) and file_2.php with just an echo, and I run file_1 and then file_2, on the FastCGI machine the second file will request the creation of another server to handle the PHP request, using 4 MB more RAM. If I do the same with apache/mod_php, the second file will only use about 30 KB more RAM (in the Apache server). Considering this, why is mod_php considered the "bad guy" when the RAM used is actually less? I know I'm missing the big picture here.
You've basically got it right. You configured a static number of workers (and that number was "one"), so that's exactly what you got.
But you don't quite understand how things typically work, since you say:
I always assumed that a php children would handle hundreds of requests
at the same time like a http server would do!
I'm not really familiar with nginx, but consider the typical mod_php setup in Apache. If you're using mod_php, then you're using the prefork MPM, so every concurrent HTTP request is handled by a distinct httpd process (no threads). If you're tuning your apache/mod_php server for low memory, you're going to have to tweak the Apache settings to limit the number of processes it will spawn (in particular, MaxClients).
Failing to tune this stuff means that when you get a large traffic spike, apache starts spawning a huge number of heavy processes (remember, it's mod_php, so you have the whole PHP interpreter embedded in each httpd process), and you run out of memory, and then everything starts swapping, and your server starts emitting smoke.
Tuned properly (meaning: tuned so that you ignore requests instead of allocating memory you don't have for more processes), clients will time out, but when traffic subsides, things go back to normal.
Compare that with fpm and a smarter web-server architecture like apache-worker or nginx. Now you have a much larger pool of threads (still configurable!) to handle HTTP requests, and a separate pool of php-fpm processes to handle just the requests that need PHP. It's basically the same thing: if you don't set limits on how many processes/threads can be created, you are asking for trouble. But if you do tune, you come out ahead, since only a fraction of your requests use PHP. So essentially the average amount of memory needed per HTTP request is lower - thus you can handle more requests with the same amount of memory.
But setting the number to "1" is too extreme. At "1", it doesn't even matter if you choose static or dynamic, since either way you'll just have one php-fpm process.
So, to try to give explicit answers to particular questions:
You may think: "duh! a PHP script (web page) is usually processed in 100 ms" - no doubt about that, but what happens if you have pages that can run for about 10 seconds each, and I have 10 visitors, with php-fpm set to 5 servers so it accepts only 5 requests at a time? Will they all be queued, or will they experience timeouts?
Yes, they'll all queue and eventually time out. The fact that you regularly have scripts taking 10 seconds to run is the real culprit here, though. There are lots of ways to architect around that (caching, work queues, etc.), but the right solution depends entirely on what you're trying to do.
I'm honestly used to running sites on Windows with Apache and mod_php, and I never experienced these issues, because apparently those limits don't apply there, it being a different way of using PHP.
They do apply. You can set up an apache/mod_php server the same way as you have nginx/php-fpm now - just set Apache's MaxClients to 1!
This also raises another question. If I have file_1.php with sleep(20) and file_2.php with just an echo, and I run file_1 and then file_2, on the FastCGI machine the second file will request the creation of another server to handle the PHP request, using 4 MB more RAM. If I do the same with apache/mod_php, the second file will only use about 30 KB more RAM (in the Apache server). Considering this, why is mod_php considered the "bad guy" when the RAM used is actually less? I know I'm missing the big picture here.
Especially on Linux, lots of things that report memory usage can be very misleading. But think about it this way: that 30 KB is negligible, because most of PHP's memory was already allocated when the httpd process started.
A 128 MB VPS is pretty tight, but it should be able to handle more than one PHP process.
If you want to optimize, do something like this:
For PHP:
pm = static
pm.max_children = 4
For nginx, figure out how to control the process and thread count (whatever the equivalents of Apache's MaxClients, StartServers, MinSpareServers and MaxSpareServers are); a sketch follows below.
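Something along these lines could be a starting point (hypothetical values for a 128 MB VPS; tune while watching memory):

    worker_processes 1;            # nginx workers are event-driven; one is plenty here
    events {
        worker_connections 1024;   # concurrent connections handled per worker
    }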
Then figure out how to generate some realistic load (apachebench, siege, jmeter, etc.) and use vmstat, free and top to watch your memory usage; an example follows below. Adjust pm.max_children and the nginx settings to be as high as possible without causing any significant swapping (according to vmstat).
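For example (the hostname is a placeholder):

    ab -n 2000 -c 50 http://yourblog.example/   # generate load
    vmstat 1                                    # si/so columns reveal swap activity
    free -m                                     # overall memory picture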

PHP Application performance

First I want to say that I'm using Drupal as the CMS, and I know there is a separate Drupal Stack Exchange site. But my problem is not Drupal-specific; it's not at the user or advanced-user level. It's PHP- and server-related. OK, now the problem.
I have developed a website which is not launched yet. I am getting out-of-memory errors at random times, and sometimes the server crashes; rebooting helps. There are no other people using the app, so there is no heavy load. In particular, I am exceeding the privvmpages limit. I have tried some general things - increasing/decreasing the PHP memory limit, looking in the error logs, logging slow MySQL queries. Nothing... same.
I ran the 'top' Linux command. There are 4-5 Apache processes, depending on browser requests, whose memory usages (%MEM) are 10, 5, 4, 3 and 0.5. Two processes have been running for more than 10 hours.
After restarting Apache I got +40% free memory.
Here are some questions that are mysteries to me:
1. Why are those two processes running so long when there is no active request from a browser? And how can I prevent that?
2. Why did I get +40% free memory after restarting, when Apache was only using 10+5+4+3+0.5 percent? Shouldn't these be equal?
3. Can this be a memory leak? How can I detect one?
4. What techniques should I use to work down from the high level to the low level? Say I have a memory leak in one of my functions; how do I find it in the whole application?
5. How can I benchmark particular functions for memory and CPU usage?
6. Why is the server crashing? Even a basic httpd restart returns "fork: Cannot allocate memory". Can this be a symptom of a memory leak?
Please answer point by point.
It sounds like you may have an infinite loop somewhere, or you're not releasing resources when dealing with things such as GD (see the sketch below).
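For the GD case, releasing image resources explicitly looks like this (a hypothetical example; the file names are placeholders):

    <?php
    // A decoded bitmap is big: a 2000x1500 truecolor image holds roughly
    // 2000 * 1500 * 4 bytes ≈ 11 MB until it is destroyed.
    $src   = imagecreatefromjpeg('upload.jpg');
    $thumb = imagecreatetruecolor(200, 150);
    imagecopyresampled($thumb, $src, 0, 0, 0, 0,
                       200, 150, imagesx($src), imagesy($src));
    imagejpeg($thumb, 'thumb.jpg');

    imagedestroy($src);    // without these calls the bitmaps stay in memory
    imagedestroy($thumb);  // until the request (or a long-running loop) ends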
Linux keeps things in RAM while there is free RAM; if another process suddenly needs RAM and the held memory is not actually in use, Linux will free/swap it for the application in need. Check the output of "free" and you will notice a "cached" column that indicates how much is just cache and can be released at any time.

Up to date chatroom!

I am looking to create an AJAX-powered chatroom for my website.
I have used yshout and it seems very good, but it crashes when there are too many connections.
What is the best way to go about this using the minimum resources possible?
It's probably one of the following:
Exceeding the number of available threads. Depending on your configuration, there is a limit to how many requests can be served simultaneously. Since yshout maintains open connections for longer than normal requests do, you're far more likely to exhaust your thread/process limit. See the relevant Apache documentation for more info (assuming Apache, of course).
Exceeding PHP memory limits. For the reasons above, you're likely to need more memory to handle multiple long-running HTTP requests. Try putting more memory in your server and bumping up PHP's memory limit.
