I am creating a web application which will support more than 2000 users.
But 2000 concurrent connections will create problems in Apache.
So I googled and found that you can create an HTTP request queue on the server and handle the requests one by one.
But how should I achieve this using Apache and PHP?
I suggest using NGINX or another event-driven server, as it will do what you want without reinventing the wheel by building an HTTP request queue. Of course, if you really want to scale properly, you might think about a load balancer with more than one web server behind it. 2000 concurrent connections is quite a lot, and really isn't necessary for 2000 users, as not all users will be sending requests simultaneously. A "connection" only lasts as long as it takes to serve up a page.
You can also use Apache Benchmark (http://httpd.apache.org/docs/2.2/programs/ab.html) to do some quick, preliminary load testing. I believe you'll find that you don't need anywhere near the resources you think you do.
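For example, to have ab simulate 100 concurrent clients making 1,000 requests in total (the URL is just a placeholder):

    ab -n 1000 -c 100 http://www.example.com/index.php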
We use nginx and php-fpm for our game server.
We want to make sure that requests from one player are processed one by one, which greatly reduces multithreading bugs in our game.
However, we don't know how to configure nginx that way.
Thank you.
As far as I know, the web server itself cannot be run in single-threaded mode.
I think there is a solution to this problem: you need a queue to process each player's requests.
There are two options for creating a thread-safe queue.
One is to write a PHP interface to a thread-safe queue application that resides in the server's memory. PHP simply adds requests to this thread-safe app, and the app then runs them in order.
Or you can simply store the requests in a database (databases support simultaneous insertion) and then run a program that reads the requests from the database and executes them in order.
However, this will add overhead to the execution process.
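As a rough sketch of the second option (table, column and function names here are all made up for illustration): the web-facing script only enqueues, and a single long-running worker drains the queue, so all requests, including each player's, are executed strictly in insertion order.

    <?php
    // enqueue.php - called by nginx/php-fpm; just records the request.
    $pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');
    $stmt = $pdo->prepare(
        'INSERT INTO request_queue (player_id, payload) VALUES (?, ?)'
    );
    $stmt->execute([$_POST['player_id'], file_get_contents('php://input')]);

    <?php
    // worker.php - a single long-running process, so queued requests
    // are executed one by one, in the order they were inserted.
    $pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');
    while (true) {
        $row = $pdo->query(
            'SELECT id, player_id, payload FROM request_queue ORDER BY id LIMIT 1'
        )->fetch(PDO::FETCH_ASSOC);
        if (!$row) { usleep(100000); continue; }  // queue empty; wait 100 ms
        handleRequest($row['player_id'], $row['payload']); // hypothetical game logic
        $pdo->prepare('DELETE FROM request_queue WHERE id = ?')
            ->execute([$row['id']]);
    }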
I have a Lighttpd (1.4.28) web server running on CentOS 5.3 and PHP 5.3.6 in FastCGI mode.
The server itself is a quad core with 1 GB of RAM and is used to record viewing statistics for a video platform.
Each request consists of a very small bit of xml being posted and the receiving php script performs a simple INSERT or UPDATE mysql query. The php returns a very small response to acknowledge the request.
These requests are performed very frequently, and I need the system to handle as many concurrent connections as possible at a high rate of requests per second.
I have disabled keep-alive as only single requests will be made, so I don't need to keep connections open.
One of the things that concerns me is that in server-status I am seeing a lot of connections in the 'read' state. I take it this is controlled by server.max-read-idle, which is set to 60 by default? Is it OK to change this to something like 5, as I am seeing the majority of connections being kept open for long periods of time?
Also, what else can I do to optimise lighttpd so it can serve lots of small requests?
This is my first experience setting up lighttpd as I thought it would be more suitable than apache in this case.
Thanks
Irfan
I believe the problem is not in the web server but in your PHP application, especially the MySQL part.
I would replace lighty with Apache + mod_php, and MySQL with a NoSQL store such as Redis, which can queue the INSERT requests destined for the database. Then I would write a daemon or cron job that inserts the data into MySQL.
We had such a setup before, but instead of Redis we created TXT files in one directory.
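A rough sketch of that pattern using the phpredis extension (the key name, table name and connection details are invented for illustration):

    <?php
    // tracker.php - answers the HTTP request instantly; no MySQL involved.
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);
    $redis->rPush('stats_queue', file_get_contents('php://input'));
    echo 'OK';

    <?php
    // drain.php - run from cron or as a daemon; moves queued items into MySQL
    // in batches, so the database sees steady load instead of request spikes.
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);
    $pdo  = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');
    $stmt = $pdo->prepare('INSERT INTO views (xml) VALUES (?)');
    while (($item = $redis->lPop('stats_queue')) !== false) {
        $stmt->execute([$item]);
    }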
I have a web service written in PHP/MySQL. The script involves fetching data from other websites like Wikipedia, Google, etc. The average execution time for the script is 5 seconds (currently running on one server). Now I have been asked to scale the system to handle 60 requests/second. Which approach should I follow?
- Split functionality between servers (I create one server to fetch data from Wikipedia, another to fetch from Google, etc., and a main server.)
- Split load between servers (I create one main server which round-robins each request entirely to one of its child servers, with each child processing one complete request. What about sharing the MySQL database between the child servers here?)
I'm not sure what you would really gain by splitting the functionality between servers (option #1). You can use Apache's mod_proxy_balancer to accomplish your second option. It has a few different algorithms to determine which server would be most likely to be able to handle the request.
http://httpd.apache.org/docs/2.1/mod/mod_proxy_balancer.html
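A minimal balancer setup might look something like this (the hostnames are placeholders):

    <Proxy balancer://mycluster>
        BalancerMember http://child1.example.com
        BalancerMember http://child2.example.com
        BalancerMember http://child3.example.com
    </Proxy>
    ProxyPass / balancer://mycluster/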
Apache/PHP should be able to handle multiple requests concurrently by itself. You just need to make sure you have enough memory and configure Apache correctly.
Your script is not a server; it's acting as a client when it makes requests to other sites. The rest of the time it's merely a component of your server.
Yes, running multiple clients (instances of your script - you don't need more hardware) concurrently will be much faster than running them sequentially. However, if you need to fetch the data synchronously with the incoming request to your script, coordinating the results of the separate instances will be difficult. Instead, take a look at the curl_multi_* functions, which allow you to batch up several requests and run them concurrently from a single PHP thread.
Alternatively, if you know in advance what the incoming requests to your web service will be, you should think about implementing scheduling and caching of the fetches so the data is already available when a request arrives.
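A rough sketch of the curl_multi approach (the URLs are placeholders):

    <?php
    // Fetch several URLs concurrently from a single PHP process.
    $urls = [
        'https://en.wikipedia.org/wiki/PHP',
        'https://www.example.com/other-source',
    ];

    $mh = curl_multi_init();
    $handles = [];
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_multi_add_handle($mh, $ch);
        $handles[] = $ch;
    }

    // Run all transfers until none remain active.
    do {
        $status = curl_multi_exec($mh, $active);
        if ($active) {
            curl_multi_select($mh); // wait for activity instead of busy-looping
        }
    } while ($active && $status == CURLM_OK);

    foreach ($handles as $ch) {
        $body = curl_multi_getcontent($ch);
        // ... process $body ...
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);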
Can you tell me how a server handles different HTTP requests at the same time? If 10 users are logged in to a site and request a page at the same time, what will happen?
Usually, each of the users sends an HTTP request for the page. The server receives the requests and delegates them to different workers (processes or threads).
Depending on the URL given, the server reads a file and sends it back to the user. If the file is a dynamic file such as a PHP file, the file is executed before it's sent back to the user.
Once the requested file has been sent back, the server usually closes the connection after a few seconds.
For more, see: HowStuffWorks Web Servers
HTTP uses TCP which is a connection-based protocol. That is, clients establish a TCP connection while they're communicating with the server.
Multiple clients are allowed to connect to the same destination port on the same destination machine at the same time. The server just opens up multiple simultaneous connections.
Apache (and most other HTTP servers) have a multi-processing module (MPM). This is responsible for allocating Apache threads/processes to handle connections. These processes or threads can then run in parallel on their own connection, without blocking each other. Apache's MPM also tends to keep open "spare" threads or processes even when no connections are open, which helps speed up subsequent requests.
The program ab (short for ApacheBench) which comes with Apache lets you test what happens when you open up multiple connections to your HTTP server at once.
Apache's configuration files will normally set a limit for the number of simultaneous connections it will accept. This will be set to a reasonable number, such that during normal operation this limit should never be reached.
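For example, with the prefork MPM the relevant directives in httpd.conf look like this (the numbers shown are illustrative, not recommendations):

    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients          150
        MaxRequestsPerChild   0
    </IfModule>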
Note too that the HTTP protocol (from version 1.1) allows for a connection to be kept open, so that the client can make multiple HTTP requests before closing the connection, potentially reducing the number of simultaneous connections they need to make.
More on Apache's MPMs:
Apache itself can use a number of different multi-processing modules (MPMs). Apache 1.x normally used a module called "prefork", which creates a number of Apache processes in advance, so that incoming connections can often be sent to an existing process. This is as I described above.
Apache 2.x normally uses an MPM called "worker", which uses multithreading (running multiple execution threads within a single process) to achieve the same thing. The advantage of multithreading is that threads are a lot more lightweight than separate processes, and may even use a bit less memory. It's very fast.
The disadvantage of multithreading is that you can't run things like mod_php. When you're multithreading, all your add-in libraries need to be "thread-safe" - that is, they need to be aware of running in a multithreaded environment. It's harder to write a multithreaded application. Because threads within a process share some memory/resources, this can easily create race-condition bugs, where one thread reads or writes memory while another thread is in the middle of writing to it. Getting around this requires techniques such as locking. Many of PHP's built-in libraries are not thread-safe, so those wishing to use mod_php cannot use Apache's "worker" MPM.
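For reference, the worker MPM's threading is tuned with directives like these (values are illustrative):

    <IfModule mpm_worker_module>
        StartServers         2
        MaxClients         150
        MinSpareThreads     25
        MaxSpareThreads     75
        ThreadsPerChild     25
    </IfModule>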
Apache 2 has two different modes of operation. One is running as a threaded server; the other uses a mode called "prefork" (multiple processes).
The requests will be processed simultaneously, to the best ability of the HTTP daemon.
Typically, the HTTP daemon will spawn either several processes or several threads, and each one will handle one client request. The server may keep spare threads/processes so that when a client makes a request, it doesn't have to wait for a thread/process to be created. Each thread/process may be mapped to a different processor or core so that requests can be processed more quickly. In most circumstances, however, what holds up requests is network I/O, not a lack of raw computing power, so there is frequently no slowdown from having a number of processors/cores significantly lower than the number of requests handled at one time.
The server (Apache) is multi-threaded, meaning it can run multiple programs at once. A few years ago, a single CPU could switch back and forth quickly between multiple threads, giving the appearance that two things were happening at once. These days, computers have multiple processors, so the computer can actually run two threads of code simultaneously. That being said, threads aren't really mapped to processors in any simple way.
With that ability, a PHP program can be thought of as a single thread of execution. If two requests reach the server at the same time, two threads can be used to process the request simultaneously. They will probably both get about the same amount of CPU, so if they are doing the same thing, they will complete at approximately the same time.
One of the most common issues with multi-threading is "race conditions", where two requests are doing the same thing ("racing" to do the same thing); if they contend for a single resource, one of them is going to win. If they both insert a record into the database, they can't both get the same id - one of them will win. So you need to be careful when writing code to realize that other requests are going on at the same time and may modify your database, write files or change globals.
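For example, MySQL hands out auto-increment ids per insert, so on that particular point two simultaneous requests are safe (the table and connection details below are placeholders):

    <?php
    // Two requests can run this at the same moment; each connection still
    // receives its own auto-increment id, so they never "win" the same one.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $pdo->exec("INSERT INTO comments (body) VALUES ('hello')");
    echo $pdo->lastInsertId(); // unique to this request's insert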
That being said, the programming model allows you to mostly ignore this complexity.