How to make nginx process requests sequentially by IP or session? - PHP

We use nginx and php-fpm for our game server.
We want to make sure requests from one player are processed one by one; that would greatly reduce multi-threading bugs in our game.
We do not know how to configure nginx that way.
Thank you.

The web server itself cannot be run in single-threaded mode, as far as I know.
I think there is a solution to this problem: you need a queue for each player's requests.
There are two options for creating such a queue.
One is to write a PHP interface to a thread-safe queue application that resides in the server's memory. PHP simply adds requests to this application, and the application runs them in order.
Or you can simply store requests in a database (databases support simultaneous insertion) and then run a program that reads the requests from the DB and executes them in order, as in the sketch below.
However, this will add overhead to the processing of each request.
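
A minimal sketch of the database approach, assuming a MySQL table named request_queue and a single worker process; the table, its columns, and the handleGameRequest() helper are illustrative, not part of any framework:

    <?php
    // Schema (illustrative): request_queue(id INT AUTO_INCREMENT PRIMARY KEY,
    //   player_id INT, payload TEXT, processed TINYINT NOT NULL DEFAULT 0)
    $pdo = new PDO('mysql:host=127.0.0.1;dbname=game', 'user', 'pass');

    // ---- web side: enqueue the action instead of executing it directly ----
    $stmt = $pdo->prepare('INSERT INTO request_queue (player_id, payload) VALUES (?, ?)');
    $stmt->execute([$playerId, json_encode($requestData)]); // values come from the HTTP request

    // ---- worker side: a single CLI process drains the queue in id order ----
    while (true) {
        $rows = $pdo->query(
            'SELECT id, player_id, payload FROM request_queue
             WHERE processed = 0 ORDER BY id LIMIT 100'
        )->fetchAll(PDO::FETCH_ASSOC);

        foreach ($rows as $row) {
            // hypothetical game logic entry point
            handleGameRequest((int)$row['player_id'], json_decode($row['payload'], true));
            $pdo->prepare('UPDATE request_queue SET processed = 1 WHERE id = ?')
                ->execute([$row['id']]);
        }
        usleep(100000); // don't busy-wait on an empty queue
    }

Because one worker processes the queue in insertion order, per-player ordering falls out for free; the trade-off is the extra round trip through the database on every request.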

Related

Having a process run until completion after the browser is closed

I have a website, created using PHP and running on Apache. I want a subscriber to be able to log in and start a process on the server. They can then log out or close the browser without interrupting the process. Later they can log in and see the progress or see the results of the original process. What is the best way to accomplish this (having the process run until completion, after the browser is closed)?
Just looking for someone to point me in the right direction. A few people mentioned Gearman.
Gearman would be an ideal candidate, and I would use it for exactly the purpose you describe. It has everything you need out of the box to meet your requirements: "backgrounding" a long-running, CPU-bound process (e.g. video encoding) to another machine.
There is a Gearman PHP library, but you can write your worker code in a different language if it's better suited to doing the work.
For reporting progress information, I recommend having the worker write to Redis or Memcached - some kind of temporary storage that your web server can also access.
Check out the simple PHP example on the Gearman site. For learning, I recommend setting up a lab environment with three separate VMs: one for your web server (the client), one for the Gearman job queue (the server), and one for processing jobs (the workers).
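
A rough sketch of both halves using the pecl/gearman extension; the job name encode_video, the payload, and the host/port details are illustrative:

    <?php
    // ---- client side: runs inside the web request ----
    $client = new GearmanClient();
    $client->addServer('127.0.0.1', 4730);          // default gearmand port
    $handle = $client->doBackground('encode_video', // returns immediately
        json_encode(['src' => '/tmp/in.avi']));
    // Persist $handle (session or DB) so a later login can look up progress.

    <?php
    // ---- worker side: long-running CLI process, possibly on another machine ----
    $worker = new GearmanWorker();
    $worker->addServer('127.0.0.1', 4730);
    $worker->addFunction('encode_video', function (GearmanJob $job) {
        $params = json_decode($job->workload(), true);
        // ... do the heavy work here, periodically writing progress to
        // Redis/Memcached so the web tier can display it ...
    });
    while ($worker->work());                        // block, handling jobs forever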

How to process multiple parallel requests from one client to one PHP script

I have a webpage that when users go to it, multiple (10-20) Ajax requests are instantly made to a single PHP script, which depending on the parameters in the request, returns a different report with highly aggregated data.
The problem is that a lot of the reports require heavy SQL calls to get the necessary data, and in some cases, a report can take several seconds to load.
As a result, because one client is sending multiple requests to the same PHP script, you end up seeing the reports load on the page one at a time. In other words, the generation of the reports is not done in parallel, which causes the page to take a while to fully load.
Is there any way to get around this in PHP and make it possible for all the requests from a single client to a single PHP script to be processed in parallel so that the page and all its reports can be loaded faster?
Thank you.
As far as I know, it is possible to do multi-threading in PHP.
Have a look at the pthreads extension.
What you could do is run the report-generation part/function of the script in parallel. This will make sure that each function is executed in a thread of its own and will retrieve your results much sooner. Also, set the maximum number of concurrent threads to 10 or fewer so that it doesn't become a resource hog.
Here is a basic tutorial to get you started with pthreads.
And a few more examples which could be of help (notably the SQLWorker example in your case).
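
A rough sketch of a thread pool with pthreads (which requires a thread-safe/ZTS build of PHP); the ReportWork class and the runReportQuery() helper are illustrative names, not part of the extension:

    <?php
    class ReportWork extends Threaded
    {
        private $params;

        public function __construct(array $params)
        {
            $this->params = $params;
        }

        public function run()
        {
            // Executed in its own thread; run the heavy SQL for one report here.
            $result = runReportQuery($this->params); // hypothetical helper
            // hand $result back, e.g. by writing it to shared storage
        }
    }

    $pool = new Pool(10);                   // cap concurrency at 10 threads
    foreach ($reportParamSets as $params) { // one work unit per requested report
        $pool->submit(new ReportWork($params));
    }
    $pool->shutdown();                      // wait for all submitted work to finish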
Server setup
This is more of a server configuration issue and depends on how PHP is installed on your system: if you use php-fpm, you have to increase the pm.max_children option. If you use PHP via (F)CGI, you have to configure the web server itself to use more children.
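
For php-fpm that might look like the following pool settings (file location and numbers are illustrative and vary by distro):

    ; php-fpm pool configuration (sketch)
    pm = dynamic
    pm.max_children = 50        ; hard cap on concurrent PHP worker processes
    pm.start_servers = 10
    pm.min_spare_servers = 5
    pm.max_spare_servers = 15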
Database
You also have to make sure that your database server allows that many concurrent processes to run. It won’t do any good if you have enough PHP processes running but half of them have to wait for the database to notice them.
In MySQL, for example, the setting for that is max_connections.
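
In configuration terms that is a one-liner (the number is illustrative):

    # my.cnf / mysqld.cnf
    [mysqld]
    max_connections = 200    # must cover your PHP worker count plus headroom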
Browser limitations
Another problem you're facing is that browsers won't make 10-20 parallel requests to the same host. It depends on the browser, but to my knowledge modern browsers will only open 2-6 connections to the same host (domain) simultaneously. So any further requests will just get queued, regardless of server configuration.
Alternatives
If you use MySQL, you could try to merge all your calls into one request and run parallel SQL queries via mysqli::poll().
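
A rough sketch of that approach with MYSQLI_ASYNC (credentials and queries are illustrative; note that each asynchronous query needs its own connection):

    <?php
    $queries = [
        'SELECT COUNT(*) FROM orders',
        'SELECT AVG(total) FROM orders',
    ];

    $pending = [];
    foreach ($queries as $sql) {
        $link = new mysqli('127.0.0.1', 'user', 'pass', 'db');
        $link->query($sql, MYSQLI_ASYNC);   // returns immediately
        $pending[] = $link;
    }

    while ($pending) {
        $read = $error = $reject = $pending;
        if (mysqli::poll($read, $error, $reject, 1) < 1) {
            continue;                        // nothing finished within 1 second
        }
        foreach ($read as $link) {
            if ($result = $link->reap_async_query()) {
                var_dump($result->fetch_row());
                $result->free();
            }
            $idx = array_search($link, $pending, true);
            if ($idx !== false) {
                unset($pending[$idx]);       // this connection is done
            }
        }
    }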
If that's not possible, you could try calling child processes or forking within your PHP script.
Of course PHP can execute multiple requests in parallel if it runs behind a web server like Apache or nginx. PHP's built-in development server is single-threaded, but it should only be used for development anyway. If you are using PHP's file-based sessions, however, access to the session is serialized: only one script can have the session file open at any time. Solution: fetch the information you need from the session at script start, then close the session.
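
In code, that looks roughly like this (the user_id key is illustrative):

    <?php
    // Release the session lock early so parallel Ajax requests from the
    // same user aren't serialized behind this one.
    session_start();
    $userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null; // copy values out first
    session_write_close();  // lock released; other requests can now open the session
    // ... run the slow report query using $userId ...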

Is threading in PHP efficient or not?

I'm writing PHP code for a web server where it's required to do some heavy-duty processing when requested, before returning the results to the users.
My question is: does the Apache server create a separate thread/process for each client, or should I use multi-threading to separate them?
The processing includes executing other applications via the command line and downloading files to the server.
Well, every request to the web server is a separate process which will try to use a free CPU core, and if there isn't a free one currently, it will go into a queue and wait.
You can't have multithreading in PHP with Apache within a single web request. You simply can't. Usually Apache forks a new OS process for each request.
This is configurable, but it is typically chosen when working with PHP, since many methods of the PHP standard library are not thread-safe.
When I had to handle heavy computation, I always chose to make the user request asynchronous and let a separate daemon process do the actual computation in the background. In this case, after the initial request, I let the client poll the daemon (through other web requests) to find out when the computation is done.
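
The polling side can be a tiny endpoint; here is a sketch assuming the daemon records its state under a "job:<id>" key in Redis (the key layout is illustrative):

    <?php
    // status.php - polled by the client via repeated web requests
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);

    $jobId = isset($_GET['job']) ? $_GET['job'] : '';
    $state = $redis->get("job:$jobId"); // e.g. 'queued', 'running', 'done'

    header('Content-Type: application/json');
    echo json_encode(['job' => $jobId, 'state' => $state ?: 'unknown']);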

How does nginx handle long running requests like file downloads?

From my limited understanding of nginx, I know that nginx separates itself from Apache by using a single thread that handles all requests, instead of Apache, which throws threads at the problem. In theory, with a bunch of small requests, it's faster. But what about long-running requests?
Let's say a user is downloading a large file, or there's some long-running PHP script that's slow because something it depends on (disk IO, the database) is slow. With Apache, everything has its own thread, so while PHP is waiting for a response from the database, another request can come in and be processed simultaneously. With nginx, however, wouldn't something like that lock the thread and therefore the whole server? I know that you can have multiple nginx processes, but creating more processes just for file downloads seems like trying to recreate Apache.
I know I'm missing something here, as nginx handles situations like this, but what? How does nginx do this with its threading model?
And before you say it, this isn't a duplicate of this question, as it only talks about incoming connections.
Worker processes in nginx can handle multiple incoming and outgoing requests simultaneously: each worker runs an event loop with non-blocking I/O, so a slow download or a slow upstream response simply parks that one connection until it is ready again instead of blocking the worker. The answer to the question you linked (3436808) is also applicable to this question.
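
The relevant configuration is small; a sketch (values illustrative):

    # nginx.conf
    worker_processes  auto;          # typically one worker per CPU core

    events {
        worker_connections  1024;    # connections multiplexed per worker; a slow
                                     # client parks its connection, not the worker
    }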

Load balancing in PHP

I have a web service written in PHP/MySQL. The script involves fetching data from other websites like Wikipedia, Google, etc. The average execution time for a script is 5 seconds (currently running on one server). Now I have been asked to scale the system to handle 60 requests/second. Which of these approaches should I follow?
- Split functionality between servers (I create one server to fetch data from Wikipedia, another to fetch from Google, etc., and a main server.)
- Split load between servers (I create one main server which round-robins each request entirely to its child servers, with each child processing one complete request. What about sharing the MySQL database between child servers here?)
I'm not sure what you would really gain by splitting the functionality between servers (option #1). You can use Apache's mod_proxy_balancer to accomplish your second option. It has a few different algorithms to determine which server would be most likely to be able to handle the request.
http://httpd.apache.org/docs/2.1/mod/mod_proxy_balancer.html
Apache/PHP should be able to handle multiple requests concurrently by itself. You just need to make sure you have enough memory and configure Apache correctly.
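
A minimal balancer definition might look like this (hostnames are illustrative; the exact proxy modules you must enable vary slightly by Apache version):

    # Round-robin balancer for option #2
    <Proxy balancer://appcluster>
        BalancerMember http://app1.internal:80
        BalancerMember http://app2.internal:80
        ProxySet lbmethod=byrequests
    </Proxy>
    ProxyPass        /service balancer://appcluster
    ProxyPassReverse /service balancer://appcluster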
Your script is not a server; it's acting as a client when it makes requests to other sites. The rest of the time it's merely a component of your server.
Yes, running multiple clients (instances of your script - you don't need more hardware) concurrently will be much faster than running them sequentially. However, if you need to fetch the data synchronously with the incoming request to your script, then coordinating the results of the separate instances will be difficult - instead, you might take a look at the curl_multi_* functions, which allow you to batch up several requests and run them concurrently from a single PHP thread (sketched below).
Alternatively, if you know in advance what the incoming requests to your web service will be, you should think about implementing scheduling and caching of the fetches so the data is already available when a request arrives.
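
Here is a rough sketch of the curl_multi approach (the URLs are illustrative):

    <?php
    $urls = [
        'https://en.wikipedia.org/wiki/PHP',
        'https://www.google.com/search?q=php',
    ];

    $mh = curl_multi_init();
    $handles = [];
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_multi_add_handle($mh, $ch);
        $handles[$url] = $ch;
    }

    // Drive all transfers concurrently until every handle is finished.
    do {
        $status = curl_multi_exec($mh, $running);
        if ($running) {
            curl_multi_select($mh); // wait for activity instead of spinning
        }
    } while ($running && $status === CURLM_OK);

    foreach ($handles as $url => $ch) {
        $body = curl_multi_getcontent($ch);
        echo $url, ' -> ', strlen($body), " bytes\n";
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);

The total wall-clock time is roughly that of the slowest fetch rather than the sum of all of them.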
