Up to date chatroom - PHP

I am looking to create an ajax powered chatroom for my website.
I have used yshout and it seems very good but crashes when there are too many connections.
What is the best way to go about doing this using the minimum resources possible?

Probably one of the following:
Exceeding the number of available threads. Depending on your configuration, you'll have a limit on how many requests can be served simultaneously. Since yshout maintains open connections for longer than normal requests, you're far more likely to exhaust your thread/process limit. See the relevant Apache documentation for more info (assuming Apache, of course).
Exceeding PHP memory limits. For the reasons above, you're likely to need more memory to handle the multiple long-running HTTP requests. Try adding more memory to your server and bumping up PHP's memory limit.
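For example, you can check the current ceiling and raise it at runtime (a minimal sketch; the 256M figure is purely illustrative and your host must allow ini_set to override php.ini):
<?php
// Print the current limit from php.ini, then raise it for this script only.
echo ini_get('memory_limit'), "\n";
ini_set('memory_limit', '256M');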

Related

Apache server slow under a high volume of HTTP API calls

I am running an HTTP API which needs to handle more than 30,000 calls per minute, many of them simultaneous.
Currently I can call it 1,200 times per minute. At 1,200 calls per minute, all the requests complete and get a response immediately.
But if I make 12,000 calls per minute simultaneously, it takes 10 minutes to complete all the requests, and during those 10 minutes I cannot browse any webpage on the server. It is very slow.
I am running CentOS 7
Server Specification
Intel® Xeon® E5-1650 v3 Hexa-Core Haswell,
RAM: 256 GB DDR4 ECC,
Hard Drive: 2 x 480 GB SSD (Software RAID 1),
Connection: 1 Gbit/s
API: a simple PHP script that echoes the timestamp
<?php echo time();
I checked with the top command; there is no load on the server.
Please help me with this.
Thanks
Sounds like a congestion problem.
It doesn't matter how quick your script/page handling is; if the next request arrives within the execution time of the previous one:
It is going to use resources (cpu, ram, disk, network traffic and connections).
And make everything parallel to it slower.
There are multiple things you could do, but you need to figure out what exactly the problem is for your setup and decide if the measure produces the desired result.
If the core problem is that resources get hogged by parallel processes, you could lower the connection limits so that more connections go into wait mode, which keeps more resources available for actually handing out a page instead of congesting everything even more.
Take a look at this:
http://oxpedia.org/wiki/index.php?title=Tune_apache2_for_more_concurrent_connections
If the server accepts connections quicker than it can handle them, you are going to have a problem whichever setting you change. It should start dropping connections at some point. If you cram French baguettes down its throat quicker than it can open its mouth, it is going to suffocate either way.
If the system gets overwhelmed at the network side of things (transfer speed limits, the maximum number of concurrent connections the OS allows, etc.) then you should consider using a load balancer. Only after the load balancer confirms that the server has the capacity to actually take care of the page request will it send the user on.
This usually works well when you do any kind of processing which slows down page loading (server side code execution, large volumes of data etc).
Optimise performance
There are many ways to execute PHP code on a webserver, and I assume you use Apache. I am no expert, but there are modes like CGI and FastCGI, for example, which can greatly improve execution speed, and tweaking the settings connected to these can also show you what is happening. It could, for example, be that you have too few PHP worker processes to handle that number of concurrent connections.
Have a look at something like this for example
http://blog.layershift.com/which-php-mode-apache-vs-cgi-vs-fastcgi/
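If you do go the FastCGI route, the Apache side of a PHP-FPM setup can look roughly like this (a sketch only: it assumes Apache 2.4.10 or newer with mod_proxy_fcgi loaded and a PHP-FPM pool listening on 127.0.0.1:9000, which may not match your distribution's defaults):
# vhost sketch - hand .php requests off to a local PHP-FPM pool
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>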
There is no 'best fit for all' solution here. To fix it, you need to figure out what the bottleneck for the server is, and act accordingly.
12000 Calls per minute == 200 calls a second.
You could limit your test case to a multiple of those 200 and increase/decrease it while changing settings. Your goal is to dish out that number of requests in the shortest amount of time possible, thus ensuring the congestion never occurs.
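For instance, something along the lines of the following ab run approximates that rate (the URL is a placeholder; vary -n and -c as you change settings):
ab -n 12000 -c 200 http://your-server.example/api.php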
That said: consequences.
When you implement changes to optimise for the maximum number of page loads, you are inadvertently going to introduce other conditions. For example, if maximum RAM usage by Apache were the problem, then upping that limit will improve performance, but it heightens the chance that the OS runs out of memory when other processes also want to claim more memory.
Adding a load balancer adds another possible layer of failure and possible slowdowns. Yes, you prevent congestion, but is it worth the slowdown caused by the rerouting?
Upping performance will increase the load on the system by making it possible to accept more concurrent connections, so somewhere along the line a different bottleneck will pop up. High traffic on different processes can always end in said process crashing. Apache is a very well built web server, so it should in theory protect you against that problem, but tweaking settings wrongly can still cause crashes.
So experiment with care and test before you use it live.

Laravel 4 memory consumption concerns

Case
Currently I am developing an application using Laravel 4. I installed a profiler to see stats about my app. This is the screenshot:
Questions
You can see that it consumes 12.25 MB of memory for each request (a very simple page) in my Vagrant box (Ubuntu 64-bit + Nginx + PHP 5.3.10 + MySQL). Do you think this is too much? It means that if I have 100 concurrent connections, the memory consumption will be about 1 GB. I think this is too much, but what do you think?
It loads 237 files for each request. Do you think this is too much?
When I deploy this app to my server (CentOS 6.4 with Apache + PHP 5.5.3 with Zend OPcache + MySQL), the memory consumption decreases dramatically. This is the screenshot from the server:
What do you think about this difference between my Mac and the server?
No, you don't really need to worry about this.
12MB is not really a large amount for a PHP program. And 100 concurrent connections is a lot.
To put it into context, assume your PHP page takes half a second to run, that would mean you'd need to have 12000 page loads per minute to achieve a consistent 100 concurrent connections. That's a lot more traffic than any of my sites get, I can tell you that.
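To make that arithmetic explicit: 100 concurrent connections / 0.5 seconds per request = 200 requests per second, and 200 x 60 = 12,000 page loads per minute.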
Of course, if your page takes longer than half a second to load, this number will come down quickly, and your 100 concurrent connections can become a possibility much more easily.
This is one reason why it's a really good idea to focus on performance‡ -- the quicker your program can finish running, the quicker it can free up its memory for the next visitor. In fact unless you have a really major memory usage problem (which you don't), performance is probably more important in this context than the amount of memory being used.
In any case, if you do have 100 concurrent connections, you're likely to run into issues with your server software before you have them with PHP. Apache has a default limit on the maximum number of simultaneous connections, and depending on your distribution's defaults it may not be far above 100. (You can raise it, of course, but if you really are getting that kind of traffic, you'll likely want more servers anyway.)
As for the 12M memory usage, you're not really ever likely to get much less than that for a PHP program. PHP needs a chunk of memory just in order to run in the first place, and the framework will need a chunk too, so most of your 12M will be due to that. This means that although your small program may be using 12M, it does not follow that a larger program would use twice as much. So you probably don't need to worry too much about it.
If you do have high traffic, and performance issues as a result, there are various ways you can mitigate the problem. The main one is caching. PHP 5.5 comes with an OPcache module built in, which caches your compiled scripts so that PHP doesn't have to re-read and recompile all the files on every request. For some systems, this can have a dramatic impact on performance.
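If it is not already enabled, a typical php.ini starting point looks roughly like this (a sketch with illustrative values, assuming PHP 5.5+ on Linux with the bundled Zend OPcache):
; php.ini - enable the bundled OPcache (values are illustrative, not tuned)
zend_extension=opcache.so
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
opcache.revalidate_freq=60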
There are also other layers of caching you can use, such as a server-level page cache like Varnish, which will cache your static pages so that PHP doesn't even need to be called if the page content hasn't changed.
(‡ of course there are other reasons for focussing on performance too, like keeping your visitors happy)

What can be causing an "exceeded process limit" error?

I launched a website about a week ago and I sent out an email blast to a mailing list telling everyone the website was live. Right after that the website went down and the general error log was flooded with "exceeded process limit" errors. Since then, I've tried to really clean up a lot of the code and minimize database connections. I will still see that error about once a day in the error log. What could be causing this error? I tried to call the web host and they said it had something to do with my code but couldn't point me in any direction as to what was wrong with the code or which page was causing the error. Can anyone give me any more information? Like for instance, what is a process and how many processes should I have?
Wow. Big question.
Obviously, you're maxing out your Apache child worker processes. To get a rough idea of how many you can create, use top to get the rough memory footprint of one httpd process. If you are using WordPress or another CMS, it can easily be 50-100 MB each (if you're using the PHP module for Apache). Then, assuming the machine is only used for web serving, take your total memory, subtract a chunk for OS use, and divide that by 100 MB (in this example). That's the maximum number of worker processes you can have. Set it in your httpd.conf. Once you do this and restart Apache, monitor top and make sure you don't start swapping memory. If you do, you have set the number of workers too high.
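A hypothetical worked example: on a box with 8 GB of RAM, reserving 1 GB for the OS and assuming roughly 100 MB per Apache child, (8192 - 1024) / 100 ≈ 70, so you would cap the workers (MaxClients on Apache 2.2, MaxRequestWorkers on 2.4) at around 70.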
If there is anything else running, like a MySQL server, make space for that before you compute the number of workers you can have. If this number is small, to roughly quote a great man, 'you're gonna need a bigger boat'. Just kidding. You might see really high memory usage for an httpd process, like over 100 MB. You can tweak MaxRequestsPerChild lower to shorten the life of an httpd process, which can help clean up bloated workers.
Another area to look at is the response time for a request: how long does each request take? For a quick check, use the Firebug plugin for Firefox and look at the 'Net' tab to see how long it takes for your initial request to respond (not images and such). If for some reason requests are taking more than 1 or 2 seconds to respond, that's a big problem, as you get a sort of logjam. The cause could be PHP code or MySQL queries taking too long to respond. To address this, if you're using WordPress, make sure to use a good caching plugin to lower the stress on MySQL.
Honestly, though, unless you're simply under-utilizing memory by having too few workers, optimizing your Apache isn't something easily addressed in a short post without detail on your server (memory, CPU count, etc.) and your httpd.conf settings.
Note: if you don't have server access you'll have a hard time figuring out memory usage.
The process limit is typically something enforced by shared webhost providers, and generally has to do with the number of processes executing under your account. This will typically equate to the number of connections made to your server at once (assuming one PHP process per connection).
There are many factors that come into play. You should find out what that limit is from your hosting provider, and then find a new host that can handle your load.

PHP memory issue

I set memory_limit to -1. Still I am getting out-of-memory issues.
I am working with a legacy system, which is poorly coded ( :) ). I ran Apache Benchmark to check concurrent user access to the system:
ab -n2000 -c100 http://......com/
In the log file I see many memory-related issues.
In the code they use object buffering. Could this be the issue? Is object buffering related to memory_limit?
Changing the memory limit on PHP stops it being killed when it goes past a certain value. However, it does NOT physically give your hardware more memory (or swap). Ultimately, if it needs memory which you don't physically have then things will break.
Object buffering in PHP: I don't know what that means. If you mean output buffering with ob_start() and ob_end_flush()/ob_end_clean(), it is not related to objects and does not really have an impact on the memory usage of PHP.
The memory usage of PHP depends on the size of the objects created while you build the response to the request. If you perform the same request several times, the memory usage of each PHP execution should be the same.
With 'no limit' on memory usage, the only thing you achieve is avoiding a request crash because of too much memory usage. That means if your problem is memory usage on your index page, you can easily test it by setting some values for memory_limit and decreasing until it crashes (64M, 32M, 16M, 8M, etc.). You do not need ab for that.
Now, when you use ab you make your Apache server respond to several parallel requests. Each PHP request is handled by its own Apache process; each of those processes runs an independent PHP execution, and it will take the same amount of memory as the other processes doing the same thing (you request the same page, nothing is shared between different PHP executions, and each PHP execution is done in one Apache process).
I assume you're using Apache with mpm_prefork and mod_php, not PHP-FPM or FastCGI PHP.
So if you have a memory problem in that situation, it may be that you allow too many processes for Apache. By default it's 150; if each process takes 30 MB of RAM (check that with top), that makes 30 x 150 = 4,500 MB, about 4.5 GB. See the problem?
3 easy solutions
Decrease the number of Apache processes (MaxClients), and set MinSpareServers, MaxSpareServers and StartServers to that same amount so you won't lose time creating and destroying Apache processes (see the sketch after this list).
Limit the PHP application's memory usage; then you'll be able to handle more processes (well, not so easy, it can mean a long rewrite).
Use APC; it decreases memory usage (and speeds up execution).
and after that the other solutions are more complex
Use Apache in worker mode or nginx, and get PHP out of the webserver with PHP-FPM.
Use a proxy cache like Varnish to catch requests that can be cached (pseudo-static content), and avoid hitting Apache and PHP too often.
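As a rough sketch of the first suggestion, for Apache running mpm_prefork (the numbers are purely illustrative; derive your own ceiling from per-process memory as described above):
# httpd.conf (mpm_prefork) - illustrative values only
StartServers      40
MinSpareServers   40
MaxSpareServers   40
MaxClients        40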

How do I tell how much memory / resources my PHP script is using up?

I am debugging my application here, and basically, in a nutshell, the application is dying on my online server, or maybe it's my server dying. But I have checked this application on three different servers and all exhibited similar results: the application would run for a while, but all of a sudden, once I'd be opening more and more requests, I'd get a network error or the site would fail to load.
I suspect it's my code, so I need to find out how I can make it less resource-intensive; in fact, I don't know why it is doing this in the first place. It runs OK on my localhost machine though.
Or is it because I'm hosting it on a shared host? Should I look for specialised hosting for this application? There are a lot of complex database queries and AJAX requests in my application.
As far as checking how much memory your script is using, you can periodically call memory_get_usage(true) at points in your code to identify which parts of your script are using the memory. memory_get_peak_usage(true) returns the maximum amount of memory that was used.
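A minimal sketch of that kind of instrumentation (the range() call is just a stand-in for whatever heavy work your own script does):
<?php
// Log memory at a few checkpoints to see which step is responsible.
echo 'start: ', memory_get_usage(true), " bytes\n";

$rows = range(1, 100000);   // stand-in for a heavy query or data load
echo 'after data load: ', memory_get_usage(true), " bytes\n";

unset($rows);
echo 'after cleanup: ', memory_get_usage(true), " bytes\n";
echo 'peak: ', memory_get_peak_usage(true), " bytes\n";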
You say your application runs OK for a while. Is this a single script which is running all this time, or many different page requests / visitors? There is usually a max_execution_time for each script (often defaulting to 30 seconds). This can be changed in code on a per-script basis by calling set_time_limit().
There is also an inherent memory_limit as set in php.ini. This could be 64M or lower on a shared host.
"...once I'd be opening more and more requests..." - There is a limit to the number of simultaneous (ajax) requests a client can make with the server. Browsers could be set at 8 or even less (this can be altered in Firefox via about:config). This is to prevent a single client from swamping the server with requests. A server could be configured to ban clients that open too many requests!
A shared host can be restrictive. However, provided the host isn't hosting too many sites, they can be quite powerful servers, giving you access to a lot of power for a short time. Emphasis on short time: it's in the interest of the host to control scripts that consume too many resources on a shared server, as other customers would be affected.
Should I look for specialised hosting for hosting an application?
You'll have to be more specific. Most websites these days are 'applications'. If you are doing more than simply serving webpages and are constantly running intensive scripts that run for a period of time then you may need to go for dedicated hosting. Not just for your benefit, but for the benefit of others on the shared server!
The answer is probably the fact that your hosting company has a fairly restrictive php.ini configuration. They could, for example, limit the amount of time that a script can run for, or limit the amount of memory that a script could use.
What does your code attempt to do?
You might consider making use of memory_get_usage and/or memory_get_peak_usage.
