SMF and PHP memory usage during file uploads

I'm helping out on a forum that runs on SMF. The site has been lagging recently, and our host tells us that file uploads are clogging the server's memory and that SMF uses server memory in a non-optimized way. There's probably one file upload an hour at most, so the load isn't that high.
Any thoughts on this? I don't know PHP well enough to argue against them.

If PHP is run as an Apache module, memory used by a script is not always returned to the system when the script ends. There are a couple of ways to address this:
Use less memory in your script (obviously).
Run your script as CGI instead of as an Apache module; this way the memory is returned on script exit (you can check which SAPI you are on with the snippet after this list).
Restart Apache when the memory needs to be reclaimed. This is not really a good solution, but we do it at Levonline twice a day...
Upgrade your hosting to your own server, where you don't have to think about the hosting provider's other customers and can use as much memory as you want.
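A quick way to see which situation you are in is a small test page like this (a minimal sketch; the figures it prints are per-request, so watch the Apache processes in top to see what is kept between requests):

    <?php
    // Which SAPI is PHP running under, and how much memory does this request hold?
    echo 'SAPI: ' . php_sapi_name() . "\n";   // e.g. "apache2handler" vs. "cgi-fcgi"
    echo 'Current: ' . round(memory_get_usage(true) / 1048576, 2) . " MB\n";
    echo 'Peak:    ' . round(memory_get_peak_usage(true) / 1048576, 2) . " MB\n";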

PHP Application performance

First I want to say that I'm using Drupal as a CMS, and I know there is a separate Drupal Stack Exchange site. But my problem is not Drupal-specific and not at the user or advanced-user level; it's PHP- and server-related. OK, now the problem.
I have developed a website that is not launched yet. I'm getting out-of-memory errors at random times, and sometimes the server crashes; rebooting helps. There are no other people using the app, so there is no heavy load. In particular, I'm exceeding the privvmpages limit. I have tried some general things: increasing/decreasing the PHP memory limit, looking in the error logs, logging slow MySQL queries. Nothing... same result.
I ran the Linux 'top' command. There are 4-5 Apache processes, depending on browser requests, whose MEM usage (%) is 10, 5, 4, 3 and 0.5. Two of the processes have been running for more than 10 hours.
After restarting Apache I had about 40% more free memory.
Here are some questions that are mysteries to me.
Why are those two processes running for so long when there is no active request from a browser, and how can I prevent that?
Why did I gain about 40% free memory after restarting, when Apache was only using 10+5+4+3+0.5 percent? Shouldn't those be equal?
Could this be a memory leak? How can I detect one?
What techniques should I use to work down from the high level to the low level? Say I have a memory leak in one of my functions; how do I find it in the whole application?
How can I benchmark particular functions for memory and CPU usage?
Why is the server crashing? Even a basic httpd restart returns "fork: Cannot allocate memory". Can this be a symptom of a memory leak?
Please answer point by point.
Sounds like you may have an infinite loop somewhere, or you're not releasing resources when dealing with things such as GD.
Linux keeps things in RAM while there is free RAM; if another process suddenly needs memory and the cached memory is not actually in use, Linux will free or swap it for the application in need. Check the output of "free" and you will notice a "cached" column showing how much is just cache and can be released at any time.
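For questions 4 and 5, a rough but workable approach is to wrap a suspect function with before/after measurements; a minimal sketch (doSomething() is just a stand-in for one of your own functions):

    <?php
    function doSomething()              // stand-in for the function you suspect
    {
        $data = range(1, 100000);       // allocates a temporary array
        return array_sum($data);
    }

    $memBefore  = memory_get_usage(true);
    $timeBefore = microtime(true);

    doSomething();

    echo 'Held after call: ' . (memory_get_usage(true) - $memBefore) . " bytes\n";  // stays high if the function leaks
    echo 'Peak so far:     ' . memory_get_peak_usage(true) . " bytes\n";            // high-water mark for the request
    echo 'Elapsed:         ' . round(microtime(true) - $timeBefore, 4) . " s\n";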

PHP Memory spikes on Production server

I've got two servers: my local server and a remote production server. They have basically the same configuration: Ubuntu 10.10, Apache 2, PHP 5.3, PHP-APC, MySQL, etc. I also have copies of a web app on both servers, and here's the problem with PHP:
On my local server the web app uses only ~4 MB of memory, but on the production server memory usage spikes up to 50 MB for no good reason. I used memory_get_peak_usage() to check memory at different stages of execution and found that on the production server memory jumps from 0.7 MB up to 49 MB on calls such as class_exists().
What could be the problem?
Thanks.
Hate to sound a bore, but have you verified that they have exactly the same Apache/PHP configuration? Config differences can easily be the source of this sort of discrepancy.
Also, are they experiencing the same sort of load? Code running on a server under load can behave very differently from code running with ample, uncontested resources.
Are there any other differences, in terms of other running applications, that could be affecting things?
It may be worth profiling the code on both servers to see if there are any per-request differences. XHProf [1] is a great tool for this, and it can safely be run in production (as long as you read the instructions).
[1] http://phpadvent.org/2010/profiling-with-xhgui-by-paul-reinheimer
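A minimal sketch of such a profiling run, assuming the XHProf extension is installed (the output path is just an example; the article above covers storing and browsing results with XHGui):

    <?php
    // Profile one request with XHProf, collecting memory and CPU data.
    xhprof_enable(XHPROF_FLAGS_MEMORY | XHPROF_FLAGS_CPU);

    // ... the normal front controller / application code runs here ...

    $profile = xhprof_disable();   // per-function call counts, memory and CPU
    file_put_contents('/tmp/xhprof-' . uniqid() . '.json', json_encode($profile));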
OK, I've found where the problem was. There is a class that creates a cache file containing information about the user's browser (in order to recognize them later). Apparently there was a problem with that file and/or its parser, so it was using too much memory. I've cleared the cache files for now, and if the situation repeats I'll ditch that class altogether.
Thanks to all who answered/commented on the problem.

PHP memory issue

I set memory_limit to -1, but I am still getting out-of-memory issues.
I am working with a legacy system, which is poorly coded ( :) ). I ran Apache Bench to check concurrent user access to the system:
ab -n2000 -c100 http://......com/
In the log file I see many memory-related issues.
The code uses object buffering. Could that be the issue? Is object buffering related to memory_limit?
Raising the PHP memory limit stops a script from being killed when it passes a certain value. However, it does NOT physically give your hardware more memory (or swap). Ultimately, if the script needs memory you don't physically have, things will break.
Object buffering in PHP: I don't know what that means. If you mean output buffering with ob_start() and ob_end_flush(), it has nothing to do with objects and has no real impact on PHP's memory usage.
PHP's memory usage depends on the size of the objects created while building the response to a request. If you perform the same request several times, the memory usage of each PHP execution should be roughly the same.
With 'no limit' on memory usage, the only thing you achieve is avoiding a crash of requests that use too much memory. If your problem is memory usage on your index page, you can easily test it by setting this value and decreasing it until the page crashes (64 MB, 32 MB, 16 MB, 8 MB, etc.). You do not need ab for that.
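For example, something like this at the top of the page under test (a sketch; lower the limit step by step and note when the page starts failing):

    <?php
    // Step the limit down (64M, 32M, 16M, ...) until the page crashes; the last
    // working value tells you roughly how much one request really needs.
    ini_set('memory_limit', '32M');

    register_shutdown_function(function () {
        error_log('Peak memory: ' . round(memory_get_peak_usage(true) / 1048576, 2) . ' MB');
    });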
Now, when you use ab you make your Apache server respond to several parallel requests. For each PHP request a new Apache process is used, and each of these processes executes an independent PHP run that takes the same amount of memory as the others doing the same thing (you request the same page, nothing is shared between PHP executions, and each PHP execution happens inside one Apache process).
I assume you're using Apache with mpm_prefork and mod_php, not php-fpm or FastCGI PHP.
So if you have a memory problem in that situation, maybe you allow too many Apache processes. By default MaxClients is 150; if each process takes 30 MB of RAM (check that with top), that makes 30 * 150 = 4500 MB, roughly 4.4 GB. See the problem?
Three easy solutions:
Decrease the number of Apache processes (MaxClients), and set MinSpareServers, MaxSpareServers and StartServers to that same amount, so you won't lose time creating and destroying Apache processes (see the example configuration after this list).
Limit the PHP application's memory usage, so that you can handle more processes (well, not so easy; it can mean a long rewrite).
Use APC; it decreases memory usage (and speeds up execution).
After that, the other solutions are more complex:
Use Apache in worker mode, or nginx, and move PHP out of the web server with php-fpm.
Use a caching proxy like Varnish to catch requests that can be cached (pseudo-static content) and avoid hitting Apache and PHP too often.
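For the first of the easy solutions, the relevant prefork directives look something like this (the numbers are only an illustration; size MaxClients from the per-process memory you see in top and the RAM you actually have):

    <IfModule mpm_prefork_module>
        # e.g. 20 processes * 30 MB is roughly 600 MB of RAM.
        StartServers         20
        MinSpareServers      20
        MaxSpareServers      20
        MaxClients           20
        # Recycle workers so slow leaks don't accumulate.
        MaxRequestsPerChild 500
    </IfModule>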

Any documentation about how Apache handles file uploads?

I've spent hours googling, as well as searching the Apache site, and I can't find any documentation about how Apache handles file uploads — particularly large ones. I've read anecdotal reports that PHP isn't involved until the upload is complete, which is what I'd expect. But as far as what Apache does during the upload, I can't find anything.
The reason I'm hot for documentation is that Apache is storing uploads entirely in memory instead of streaming them to disk. httpd will use every byte of available memory on the server I'm using until it crashes. Typically the physical memory consumed is about 3x the size of the file being uploaded, and it grows at around 5 MB/s (nowhere near my upload speed).
I've tested this same request on another LAMP stack I'm using, and Apache memory usage doesn't change at all throughout the course of the upload.
Can anyone explain to me how Apache could handle the same upload so differently on two different servers? Any thoughts greatly appreciated.
Technically, PHP is handling the upload on behalf of Apache, buffering the file in RAM until it completes. However, your script will not gain control until after the upload completes (or aborts). Apache by itself won't buffer out to disk unless it has to. Think of it as an invisible "handle_upload()" call transparently inserted as the very first thing in your script.
Back in the "everything is a CGI script" days, when language interpreters like PHP weren't embedded in the web server process, POST data was sent to the CGI script via standard input. The file would pass through Apache directly to the CGI process and could be read byte by byte as it came in.
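For reference, the PHP-visible part only starts once the whole request body has been received; a minimal handler looks something like this (the field name and target directory are assumptions):

    <?php
    // By the time this script runs, PHP has already received the full upload and
    // written it to upload_tmp_dir; we are only handed the finished temporary file.
    if (isset($_FILES['upload']) && $_FILES['upload']['error'] === UPLOAD_ERR_OK) {
        $target = '/var/www/uploads/' . basename($_FILES['upload']['name']);
        if (move_uploaded_file($_FILES['upload']['tmp_name'], $target)) {
            echo 'Stored ' . $_FILES['upload']['size'] . " bytes\n";
        }
    }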
The answer is unsatisfying. I never found any documentation.
I continued poking around in the dark, finally stumbling on a mod_fcgid upgrade (from 2.2 to 2.3.6) that did the trick. Perhaps there was a bug in 2.2.
The memory usage still goes up in 2.3.6, but far less dramatically. Only a few megabytes for a ~100 MB file. (However, when the upload finishes and the file is moved, memory usage instantly shoots up ~100-200 MB, but then seems to be immediately released.)
This might help you, because the WAMP server has Apache in it.
http://www.wampserver.com/phorum/read.php?2,39439

jmeter multiple users problem

We are using JMeter to test our PHP application running on the Apache 2 web server. I can load up JMeter with 25 or 50 threads and the load on the server does not increase, but the response time from the server does. The more threads, the slower the response time. It seems like JMeter or Apache is queuing the requests. I have changed the MaxClients value in the Apache configuration file, but this does not change the problem. While JMeter is running I can use the application and get respectable response times. What gives? I would expect to be able to tax my server down to 0% idle by increasing the number of threads. Can anyone help point me in the right direction?
Update: I found that if I remove sessions from my application, I am able to simulate a full load on the server. I have tried re-enabling sessions and using an HTTP Cookie Manager for each thread, but it does not seem to make an impact.
You need to identify where the bottleneck is occurring, and then attempt to remediate the problem.
The JMeter client should run on a well-equipped machine. I prefer a Solaris/Unix server running the JVM, but for <200 threads a modern Windows machine will do just fine. JMeter can itself become a bottleneck, and you won't get any meaningful results once it does. Additionally, it should run on a separate machine from the one you're testing, preferably on the same network; WAN latency can become a problem if your test rig and server are far apart.
The second thing to check is your Apache workers. Apache has a module, mod_status, which shows you the state of every worker. It's possible to have your pool size set too low. From mod_status you'll be able to see how many workers are in use. Too few, and Apache won't have workers available to process requests, so requests queue up. Too many, and Apache may exhaust the memory on the box it's running on.
Next, you should check your database. If it's on a separate machine, the database could have an IO or CPU shortage.
If you're hitting a bottleneck and the server and database are on the same machine, you'll generally hit a CPU, RAM, or IO limit. I listed those in the order in which they are easiest to identify. If the app is CPU-bound, you can easily see CPU usage go to 100%. If you run out of RAM, the machine will start swapping; on both Windows and Unix it's fairly easy to see the available free RAM. Lastly, you may be IO-bound. That too can be monitored with various tools or stats, but it's not as obvious as CPU.
Lastly, and specific to your question, the one thing that stands out is that a huge number of session files can accumulate in a single directory. PHP often stores session information in files, and if that directory gets large, it takes an increasingly long time for PHP to find a session. If you ran your test with cookies turned off, the PHP app may have created thousands of session files, one for each request. A Windows server will slow down faster than a Unix server here, due to differences in how the two operating systems store directories.
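A quick way to check whether that is what is happening is to count the session files from PHP itself (a minimal sketch; it assumes the default file-based session handler):

    <?php
    // Thousands of sess_* files in one directory slow down every session_start(),
    // especially on Windows.
    $dir   = session_save_path() ?: sys_get_temp_dir();
    $files = glob($dir . '/sess_*') ?: array();
    echo count($files) . " session files in $dir\n";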
Are you using a Constant Throughput Timer? If JMeter can't service the throughput with the threads allocated to it, you'll see this queueing and blowouts in the response time. To figure out whether this is the problem, try adding more threads.
I have also seen reports of this happening when there are JavaScript calls inside the script. In that case, try moving the JavaScript calls to the Test Plan element at the top of the script, or look for ways to pre-calculate the value.
Try requesting a static file served by Apache rather than by PHP, to see whether the problem lies in the Apache configuration or the PHP configuration.
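For example, run the same kind of ab test against a file PHP never touches (the file name here is just a placeholder):

    ab -n2000 -c100 http://......com/static-test.html

If response times stay flat on the static file, the queueing is coming from PHP (sessions, in this case) rather than from Apache itself.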
Also check your network connections and configuration. Our JMeter testing was progressing nicely until it hit a wall; we eventually realized we only had a 100 Mb connection and it was saturated. Going to gigabit fixed it. Your network cards or switch may be running at a lower speed than you think, especially if their speed setting is "auto".
