Laravel 4 memory consumption concerns - php

Case
Currently I am developing an application using Laravel 4. I installed a profiler to see stats about my app. This is the screenshot:
Questions
You can see that it consumes 12.25 MB of memory for each request (a very simple page) on my Vagrant box (Ubuntu 64-bit + Nginx + PHP 5.3.10 + MySQL). Do you think this is too much? It means that if I have 100 concurrent connections, the memory consumption will be about 1 GB. I think this is too much, but what do you think?
It loads 237 files for each request. Do you think this is too much?
When I deploy this app to my server (CentOS 6.4 with Apache + PHP 5.5.3 with Zend OPcache + MySQL), the memory consumption decreases dramatically. This is the screenshot from the server:
What do you think about this difference between my Mac and the server?

No, you don't really need to worry about this.
12MB is not really a large amount for a PHP program. And 100 concurrent connections is a lot.
To put it into context: assume your PHP page takes half a second to run. To keep 100 connections busy all the time at that speed, you would need 200 requests per second, which is 12,000 page loads per minute. That's a lot more traffic than any of my sites get, I can tell you that.
Of course, if your page takes longer than half a second to load, this number will come down quickly, and your 100 concurrent connections can become a possibility much more easily.
This is one reason why it's a really good idea to focus on performance‡ -- the quicker your program can finish running, the quicker it can free up its memory for the next visitor. In fact unless you have a really major memory usage problem (which you don't), performance is probably more important in this context than the amount of memory being used.
In any case, if you do have 100 concurrent connections, you're likely to get issues with your server software before you have them with PHP. Apache has a default limit on the maximum number of simultaneous connections, and out of the box it is not far above 100. (You can raise it, of course, but if you really are getting that kind of traffic, you'll likely be wanting more servers anyway.)
As for the 12M memory usage, you're not really ever likely to get much less than that for a PHP program. PHP needs a chunk of memory just in order to run in the first place, and the framework will need a chunk too, so most of your 12M will be due to that. This means that although your small program may be using 12M, it does not follow that a larger program would use twice as much. So you probably don't need to worry too much about it.
If you do have high traffic, and performance issues as a result, there are various ways you can mitigate the problem. The main one is caching. PHP 5.5 comes with an OpCache module built in, which caches the compiled bytecode of your scripts so that PHP doesn't have to re-read and recompile all those files on every request. For some systems, this can have a dramatic impact on performance.
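If you want to check that OPcache is actually switched on and being hit, a quick sanity check along these lines will do (just a sketch; it assumes the Zend OPcache extension is loaded for the SAPI you're testing):

<?php
// Print whether OPcache is enabled, how many scripts it has cached and its hit rate.
// opcache_get_status() only exists when the Zend OPcache extension is loaded.
if (function_exists('opcache_get_status') && ($status = opcache_get_status(false))) {
    $stats = $status['opcache_statistics'];
    printf(
        "OPcache enabled: %s, cached scripts: %d, hit rate: %.2f%%\n",
        $status['opcache_enabled'] ? 'yes' : 'no',
        $stats['num_cached_scripts'],
        $stats['opcache_hit_rate']
    );
} else {
    echo "OPcache is not available for this SAPI.\n";
}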
There are also other layers of caching you can use, such as a server-level HTTP cache like Varnish, which can cache whole pages so that PHP doesn't even need to be invoked if the page content hasn't changed.
(‡ of course there are other reasons for focussing on performance too, like keeping your visitors happy)

Related

Optimal memory for page loading in hosted environment?

In a PHP shared-hosting environment, what is an optimal memory consumption to load a page? My current PHP script is consuming 3,183,440 bytes of memory. What should I consider good memory usage if I want to serve, say, 10,000 users in parallel?
Please be detailed, as I am a novice at optimization.
Thanks in advance
3 MB isn't that bad - keep in mind that parts of PHP are shared, and depending on which server is used (IIS, nginx, Apache, etc.) you can also set up pools and clusters when you have to scale up.
But the old adage "testing is knowledge" applies well here: run load tests on the site with 10 -> 100 -> 1000 concurrent connections and look at the performance metrics; it will give you more insight into how much memory is required.
For comparison, the site I normally work on has an average of 300+ users concurrently online and the memory usage is just under 600 MB; however, when I run certain processes locally it will easily use up 16 MB.

Apache server slow under high HTTP API call volume

I am running an HTTP API which should handle more than 30,000 calls per minute, many of them simultaneous.
Currently I can call it 1,200 times per minute. At 1,200 calls per minute, all the requests complete and get a response immediately.
But if I call it 12,000 times per minute, it takes 10 minutes to complete all the requests. And during those 10 minutes, I cannot browse any webpage on the server. It is very slow.
I am running CentOS 7
Server Specification
CPU: Intel® Xeon® E5-1650 v3 Hexa-Core Haswell
RAM: 256 GB DDR4 ECC
Hard drive: 2 x 480 GB SSD (Software RAID 1)
Connection: 1 Gbit/s
API: a simple PHP script that echoes the timestamp:
<?php echo time();
I checked with the top command; there is no load on the server.
Please help me with this.
Thanks
Sounds like a congestion problem.
It doesn't matter how quick your script/page handling is: if the next request arrives within the execution time of the previous one, it is going to use resources (CPU, RAM, disk, network traffic and connections) and make everything running in parallel slower.
There are multiple things you could do, but you need to figure out what exactly the problem is for your setup and decide if the measure produces the desired result.
If the core problem is that resources get hogged by parallel processes, you could lower connection limits so more connections go into wait mode, which keeps more resources available for actually handing out a page instead of congesting everything even more.
Take a look at this:
http://oxpedia.org/wiki/index.php?title=Tune_apache2_for_more_concurrent_connections
If the server accepts connections quicker than it can handle them, you are going to have a problem whichever setting you change; it should start dropping connections at some point. If you cram French baguettes down its throat quicker than it can open its mouth, it is going to suffocate either way.
If the system gets overwhelmed on the network side of things (transfer speed limit, the maximum possible number of concurrent connections for the OS, etc.), then you should consider using a load balancer. Only after the load balancer confirms the server has the capacity to actually take care of the page request will it send the user on.
This usually works well when you do any kind of processing which slows down page loading (server side code execution, large volumes of data etc).
Optimise performance
There are many ways to execute PHP code on a webserver, and I assume you use Apache. I am no expert, but there are modes like CGI and FastCGI, for example, which can greatly enhance execution speed, and tweaking the settings connected to these can also show you what is happening. It could, for example, be that you are using too few PHP workers to handle that number of concurrent connections.
Have a look at something like this for example
http://blog.layershift.com/which-php-mode-apache-vs-cgi-vs-fastcgi/
There is no 'best fit for all' solution here. To fix it, you need to figure out what the bottleneck for the server is, and act accordingly.
12,000 calls per minute == 200 calls a second.
You could limit your test case to a multiple of those 200 and increase/decrease it while changing settings. Your goal is to dish that number of requests out in the shortest amount of time possible, thus ensuring the congestion never occurs.
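For example, one rough way to fire a fixed batch of concurrent requests from PHP itself (the URL and the batch size of 200 are placeholders for your own endpoint and test target; dedicated tools like ab or siege will do this better) is curl_multi:

<?php
// Fire $concurrency parallel GET requests at the API and time the whole batch.
$url = 'http://localhost/api.php';   // placeholder endpoint
$concurrency = 200;

$mh = curl_multi_init();
$handles = [];
for ($i = 0; $i < $concurrency; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

$start = microtime(true);
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh);   // wait for activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);

printf("%d requests finished in %.3f s\n", $concurrency, microtime(true) - $start);

foreach ($handles as $ch) {
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);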
That said: consequences.
When you implement changes to optimise for the maximum number of page loads you want to achieve, you are inadvertently going to introduce other conditions. For example, if maximum RAM usage by Apache were the problem, then upping that limit will give better performance, but it heightens the chance that the OS runs out of memory when other processes also want to claim more memory.
Adding a load balancer adds another possible layer of failure and possible slowdowns. Yes, you prevent congestion, but is it worth the slowdown caused by the rerouting?
Upping performance will increase the load on the system, making it possible to accept more concurrent connections, so somewhere along the line a different bottleneck will pop up. High traffic on any given process can always end in that process crashing. Apache is a very well-built web server, so it should in theory protect you against that problem; however, tweaking settings wrongly could still cause crashes.
So experiment with care and test before you use it live.

Website under huge traffic PHP + MySQL

Let me show what problem I'm dealing with:
website powered by Apache 2.2 + PHP 5.x + MySQL 5.1.x
peak traffic = 2,000 unique visitors/min = 5-8k pageviews/min
normal traffic = 2,000 unique visitors/day
website works well while under normal traffic
website lags while under peak traffic
my server CPU load is pretty high under peak traffic (because of MySQL/PHP processes), so my website is lagging.
Normal state: server responds in 0.1-0.4 sec/pageview. The PHP code is optimized to fetch and process all data from the database and output the HTML within this time (call it the server response).
Peak traffic state: server responds in 2-5 sec/pageview, and that's a longer response than I'm happy with. I don't want my visitors to wait so long for the requested page.
What I'm doing now: my way of dealing with this problem is a local cache system. I create a local cache file (stored on disk) that holds cached SQL results for about 10 minutes, so I don't have to run the same SQL query with every request.
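For illustration, a minimal sketch of that kind of file cache (the helper name, the PDO usage and the 10-minute TTL below are my own placeholders, not the site's actual code) could look like this:

<?php
// Serve SQL results from a serialized file on disk if it is younger than $ttl,
// otherwise run the query against MySQL and refresh the cache file.
function cached_query(PDO $db, string $sql, int $ttl = 600): array
{
    $file = sys_get_temp_dir() . '/sqlcache_' . md5($sql) . '.dat';

    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file));          // cache hit
    }

    $rows = $db->query($sql)->fetchAll(PDO::FETCH_ASSOC);      // cache miss
    file_put_contents($file, serialize($rows), LOCK_EX);

    return $rows;
}

// Example call (hypothetical table name):
// $articles = cached_query($pdo, 'SELECT * FROM articles ORDER BY created DESC LIMIT 20');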
My website is http://www.lechaton.cz/.
Is there any better way to deal with peak traffic or to optimize CPU utilization?
Thanks all for your time and advice!
I've been working through the whole weekend, testing and testing and comparing methods and solutions.
Nginx solution (replace LAMP with LNMP)
I'm still using Apache 2.2, but even using nginx (thanks to Bondye) there is not a big difference.
I've tried LNMP on Debian Wheezy, but without a big difference from LAMP.
For static files, nginx is in fact faster.
Nginx with PHP-FPM is twice as fast, which could be a solution for some cases, but it doesn't solve my issues 100%.
Tune-up your MySQL settings and MySQL queries
Tune up your MySQL server with better caching and buffering. Also check your max connections and memory usage.
But the most important thing is to optimize your queries for best performance. Even if your queries perform well with the MySQL query cache, bigger traffic will bring your server down within several minutes. YOU HAVE TO CACHE your output!!
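As a rough illustration of what output caching can mean in practice (the cache path, the 10-minute TTL and the dummy page body are placeholders, not my production code), a whole-page cache can be as simple as:

<?php
// If a fresh cached copy of this URL exists, send it and stop; otherwise render
// the page once, store the HTML on disk and send it.
$cacheDir  = __DIR__ . '/cache';
$cacheFile = $cacheDir . '/' . md5($_SERVER['REQUEST_URI']) . '.html';
$ttl = 600; // 10 minutes

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
    readfile($cacheFile);
    exit;
}

if (!is_dir($cacheDir)) {
    mkdir($cacheDir, 0775, true);
}

ob_start();
// ... your normal PHP + MySQL page generation goes here ...
echo '<h1>Rendered at ' . date('H:i:s') . '</h1>';
$html = ob_get_clean();

file_put_contents($cacheFile, $html, LOCK_EX);
echo $html;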
Tune-up Apache
Tune up your Apache mpm-prefork settings:
Reduce KeepAliveTimeout to max 5 seconds (default=30)
Keep MaxClients set to a correct number (it depends on your RAM and on the max-processes and max-servers settings directives)
Final conclusion
1) content cache = rule #1
I've found the best solution is: cache as much content as you can. There is no reason why every user should generate the same output when there is the option to display cached content. It's much faster and it saves your resources.
2) nginx for static content
You can use nginx to serve static files and content; it performs best there, with much lower CPU load across multiple processes.
With PHP-FPM your code speeds up roughly twofold (maybe a bit more), but I can't consider it the final solution.
3) test your website with benchmark tools. I've used siege, Apache Benchmark (ab) and mysqlslap.
These steps helped me reduce the CPU load on my brand new server, speed up the server response and balance my peak traffic during big events.
Hope someone will find it helpful.

PHP/MySQL/jQuery script vs RAM

Hi guys, I have a question about a server's RAM and a PHP/MySQL/jQuery script.
Can a script kill RAM when the script doesn't take extra RAM? (I know it can happen when RAM usage grows to the maximum or because of the memory limit, but that isn't the case here.)
I'm testing the script, but every time I do, free RAM goes down quickly.
The script doesn't show a memory-limit error and it loads all the data correctly. When I'm not testing the script, RAM stays down.
The database holds only a few records: maybe 350 records across 9 tables (the biggest table has 147 records).
(I don't have any logs, just a simple (really simple) graph for the running server.)
Thanks for your time.
If you're not getting errors in your PHP error log about failing to allocate memory, and you're not seeing other problems with your server running out of RAM (such as extreme performance degradation due to memory pages being written to disk for demand paging) you probably don't need to really worry about it. Any use case where a web server uses up that much memory in a single request is going to be pretty rare.
As for trying to profile the actual memory usage, trying to profile it by watching something like the task manager is going to be pretty unreliable. Most PHP scripts are going to complete in milliseconds, which isn't enough time for the memory allocations to really even register in the task manager.
Even if you have a more reliable method of profiling the memory usage (I don't recall if PHP has built-in functions for this, but it probably does), bear in mind that memory usage is going to fluctuate tremendously for reasons that may be hard to understand. PHP in particular is very high level: you can open a database connection, which involves everything down to the OS opening network sockets, creating internal data structures, caching things, and much more, all in a single line of code. The script may allocate many megabytes of memory for such a thing for a single database row, but may then deallocate it a millisecond later.
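For what it's worth, PHP does have built-in functions for this: memory_get_usage() and memory_get_peak_usage(). A rough sketch of wrapping a piece of code with them (the range() call is just a stand-in workload):

<?php
// Measure how much extra memory a block of code holds on to, plus the peak so far.
// By default these report bytes used by the script; pass true to see the larger
// blocks PHP has actually requested from the OS.
$before = memory_get_usage();

$rows = range(1, 100000);   // stand-in for the code you want to profile

printf(
    "delta: %.2f MB, peak: %.2f MB\n",
    (memory_get_usage() - $before) / 1048576,
    memory_get_peak_usage() / 1048576
);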
Those database sizes are pretty negligible. Depending on the row sizes it's possibly under a megabyte of data, which is a tiny drop in the bucket for memory on anything remotely modern. Don't worry about memory usage for something like that. Only if you see your scripts failing and your error log reports running out of memory should you really worry about it.

PHP Application performance

First I want to say that I'm using Drupal as the CMS, and I know there is a separate Drupal Stack Exchange site. But my problem is not Drupal specific; it's not at the User or Advanced User level. It's PHP and server related. OK, now the problem.
I have developed a website which is not launched yet. I'm getting out-of-memory errors at random times, and sometimes the server crashes; rebooting helps. There are no other people using the app, so there is no heavy load. In particular, I am exceeding the privvmpages limit. I have tried some general things - increasing/decreasing the PHP memory limit, looking in error logs, logging slow MySQL queries. Nothing... Same.
I have run the 'top' Linux command. There are 4-5 Apache processes, depending on browser requests, whose MEM usage (%) is 10, 5, 4, 3, 0.5. Two processes have been running for more than 10 hours.
After restarting Apache I got +40% free memory.
Here are some questions and mysteries for me.
Why are those two processes running for so long when there is no active request from a browser? And how can I prevent that?
Why did I get +40% free memory after restarting when Apache was only using 10+5+4+3+0.5 percent? Shouldn't these be equal?
Can this be a memory leak? How can I detect it?
What techniques should I use to step down from the higher levels to the lower levels? Imagine I have a memory leak in one of my functions; how do I find it in the whole application?
How can I benchmark particular functions of mine for memory and CPU usage?
Why is the server crashing? Even a basic httpd restart returns "fork: Cannot allocate memory". Can this be a symptom of a memory leak?
Please answer point by point.
Sounds like you may have an infinite loop somewhere, or you're not releasing resources when dealing with things such as GD.
Linux keeps things in RAM while there is free RAM; if another process suddenly needs RAM and that memory is not actually in use, Linux will free/swap it for the application in need. Check the output of "free" and you will notice a cached column that indicates how much is just cached and can be released at any time.
