ZeroMQ apache module - php

Does anyone know if there is already a ZMQ module for Apache? If there is, please share a link or any reference. My scenario is as follows:
Server Config:
Apache 2.4.12, with prefork
PHP 5.5
ZMQ 4.0.X
My problem is that whenever I create a ZMQ socket (PUB) connection from my application to a separate service (SUB), with a streamer device in between, a new socket is created every time the application is initialized, because my Apache runs in prefork mode and spawns a new child process for every request. How can I create a single context/socket to which any number of PHP requests from subsequent Apache child processes can send data? That would avoid creating multiple sockets and exhausting system resources, and I believe it would also reduce the overhead of creating new sockets and make things faster.
As an alternative, is it possible to create an Apache module whose functions and resources I can access from the PHP application and use just to send data, where the context and socket are created only once and persist for the lifetime of Apache?

Short answer - you can't. The problem here is Apache and how it works - it shuts down the PHP process after the request finishes. Also, you can't share a context or a socket created in one Apache process with other PHP processes.
I don't know exactly what you're trying to do or why you're exhausting system resources (quite odd), but if I were you I'd use a more advanced server that uses ZeroMQ internally for its transport layer: Mongrel2. You could also create a PHP extension, serve PHP via FPM and then have Apache proxy requests to your PHP-FPM, which can then pool the already existing ZMQ connections. However, I would expand the question with how the resources get exhausted that fast.
If that's all too much, then you can consider this:
PHP processes spawned by Apache accept the data and fill some sort of storage (database, file, shared memory)
Once the makeshift queue has been populated, before exiting, the PHP scripts send SIGUSR2 to a daemon process that reads the queue
You have a daemon running that reads the queue: it wakes up on SIGUSR2 and sends the data via a ZMQ socket - now you have a single process that uses ZMQ and multiple PHP processes that interact with it
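The steps above can be sketched roughly like this. This is a hedged illustration, not a production implementation: the function names, the file-based queue, and the queue/PID file paths are all made up, and the daemon half requires the pcntl, posix and zmq extensions (CLI only).

```php
<?php
// Producer side: runs inside each Apache/PHP request.
// Appends one JSON line to a shared queue file, then signals the daemon.
function enqueue(string $queueFile, array $payload): void
{
    // LOCK_EX serializes concurrent PHP children appending to the file.
    file_put_contents($queueFile, json_encode($payload) . "\n", FILE_APPEND | LOCK_EX);
}

function notifyDaemon(string $pidFile): void
{
    // The daemon's PID was written to $pidFile when it started (assumption).
    if (is_file($pidFile) && function_exists('posix_kill')) {
        posix_kill((int) file_get_contents($pidFile), SIGUSR2);
    }
}

// Daemon side (CLI only): wakes on SIGUSR2, drains the queue, and pushes
// every message out over a single long-lived ZMQ socket.
function runDaemon(string $queueFile, ZMQSocket $pub): void
{
    $wake = false;
    pcntl_signal(SIGUSR2, function () use (&$wake) { $wake = true; });
    while (true) {
        pcntl_signal_dispatch();
        if ($wake && is_file($queueFile)) {
            $wake = false;
            foreach (file($queueFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
                $pub->send($line);                 // one socket, reused for every request
            }
            file_put_contents($queueFile, '');     // truncate the drained queue
        }
        usleep(100000);                            // avoid a busy loop between signals
    }
}
```

The important property is that only `runDaemon()` ever touches ZMQ, so the whole server has exactly one context and one socket no matter how many Apache children come and go.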
Since your requirement is a bit unclear, it's quite possible that all I wrote is beside the point, so please expand your question with a bit more info if you can.

Related

Can single thread in php handle simultaneous client connections using mpm prefork module?

I was studying PHP vs Node.js, i.e. blocking vs non-blocking architecture, and got stuck at one point. With the Apache MPM prefork module, a new process is spawned to serve each request. On the other side there are libraries such as Ratchet, Elephant.io, phpsocket.io, phpDaemon etc. with which you can create a Node.js-like server and easily build a chat app on top of it.
But if each request lands on a different process (in the case of prefork) or a different thread (in the case of worker or event), then how do these libraries actually work? Are they using IPC for communication between processes or threads? What is actually going on behind the scenes?
It's driving me crazy; I need some explanation of this.
phpsocket.io doesn't use processes or lightweight processes (threads). It's an event-based server.
The server executes as a single thread and waits for new connections / closed connections / activity on the current set of open connections. When one of these occurs, it responds appropriately and goes back to wait for the next event.
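The single-threaded wait-and-react pattern described above can be sketched with nothing but PHP's stream functions. This is an illustrative skeleton (the address is made up, and real libraries like phpsocket.io add protocol handling on top), but the core loop is the same: one process, no threads, `stream_select()` reporting which sockets have activity.

```php
<?php
// A minimal single-process event loop: accept connections and echo data back.
$server  = stream_socket_server('tcp://127.0.0.1:8000', $errno, $errstr);
$clients = [];

while (true) {
    $read   = $clients;
    $read[] = $server;                       // also watch for new connections
    $write  = $except = null;
    // Block until at least one socket is readable.
    if (stream_select($read, $write, $except, null) === false) {
        break;                               // select failed, bail out
    }
    foreach ($read as $sock) {
        if ($sock === $server) {
            $clients[] = stream_socket_accept($server);   // new client
        } elseif (($data = fread($sock, 8192)) === false || $data === '') {
            unset($clients[array_search($sock, $clients, true)]);
            fclose($sock);                   // client disconnected
        } else {
            fwrite($sock, $data);            // activity: echo it back
        }
    }
}
```

No IPC is needed because every connection lives inside the same process; the loop simply never blocks on any single client.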

Async/Thread on PHP7 with FPM

I found that pthreads does not work in a web environment. I use PHP 7.1 with FPM on Debian Linux, along with Symfony 3.2. All I want to do is, for example:
User made a request and PUT a file (which is 1GB)
PHP Server receives the file and process it.
Immediately return true to the user (as a JSON response) without waiting for the uploaded file to be processed
Later, when processing the file is finished (move, copy, duplicate, whatever you want), add an event or do a callback from the background and notify the user.
For this I created a console command. I execute Process('bin/console my:command')->start() in the background and do my processing there. But to me this is killing a fly with a bazooka, and I have to pass many variables to this command.
All I want is to create another thread and return to the user without waiting for the processing.
You may say this is a duplicate and point to pthreads. But the pthreads documentation states that it is only intended for the CLI, and the last version of pthreads doesn't work with Symfony (fatal error).
I am stuck at this point and am unsure whether I should stay with creating a process for each uploaded file or move to Python/Django.
You don't want threads. You want a job queue. Have a look at Gearman or similar things.
Gearman provides a generic application framework to farm out work to other machines or processes that are better suited to do the work. It allows you to do work in parallel, to load balance processing, and to call functions between languages. It can be used in a variety of applications, from high-availability web sites to the transport of database replication events. In other words, it is the nervous system for how distributed processing communicates.
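With the pecl/gearman extension, the job-queue approach looks roughly like the sketch below. The job name `process_upload`, the server address, and the payload shape are all illustrative; the API calls (`GearmanClient::doBackground`, `GearmanWorker::addFunction`) are the extension's real ones, and a running gearmand is assumed.

```php
<?php
// --- client side: inside the controller handling the PUT request ---
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);          // default gearmand port
// doBackground() returns immediately; the worker picks the job up later.
$client->doBackground('process_upload', json_encode(['path' => '/tmp/upload.bin']));
// ... now return the JSON "true" response to the user without waiting ...

// --- worker side: a long-running CLI process (e.g. bin/worker.php) ---
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('process_upload', function (GearmanJob $job) {
    $params = json_decode($job->workload(), true);
    // move/copy/transcode $params['path'] here, then notify the user
});
while ($worker->work()) {
    // blocks waiting for jobs; one job at a time, forever
}
```

The worker replaces the `Process('bin/console my:command')->start()` trick: instead of passing many variables on a command line, you serialize them once into the job payload.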

Inter-process communication in PHP

I'm working on a PHP platform that gives developers features like cron jobs, events and WebSocket communication. To do that I run three different daemons written in PHP, so admins can disable a specific feature. When I start them, after forking, the daemon starter saves the PID in my database and then includes the daemon's PHP file. I need to allow developers to easily communicate with these daemons through a dedicated PHP class. I've seen that many different methods exist for communicating with processes; for example the proc_open function, but it looks like that has to run a new command to communicate with. I'm looking for something like PHP sockets, but which lets me open a socket to a PID, and without using a port (if possible) to avoid conflicts with other daemons' sockets. What is the best way to do this with PHP's native facilities?
One more detail: these daemons must be able to handle a pretty big load of connections; events are also propagated to clients through WebSocket or AJAX polling, so the event and WebSocket daemons communicate with each other.
With a process-based approach that reuses the same process (presumed from your explanation), communicating without sockets at all would be difficult. If you are not bothered about scalability beyond a single server, that is fine. You will have to at least use a socket (network or unix): make the process bind and listen on a random port or a unix socket path, and save the port number or path in the database along with the PID.
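A unix-domain socket avoids ports entirely: the daemon binds to a filesystem path, and that path (plus the PID) is what you would store in the database. The sketch below shows both halves in one script for illustration; the socket path and message format are made up.

```php
<?php
// Daemon side: bind a unix-domain socket. No TCP port is involved,
// so there is nothing to conflict with other daemons.
$path = sys_get_temp_dir() . '/events-daemon.sock';
@unlink($path);                                   // clean up a stale socket file
$server = stream_socket_server("unix://$path", $errno, $errstr);

// Client side (inside a web request): connect by path, send one command.
$conn = stream_socket_client("unix://$path", $errno, $errstr, 1.0);
fwrite($conn, json_encode(['cmd' => 'emit', 'event' => 'user.created']) . "\n");

// Back in the daemon: accept the pending connection and read the line.
$peer = stream_socket_accept($server);
$msg  = json_decode(fgets($peer), true);

fclose($peer);
fclose($conn);
fclose($server);
unlink($path);
</imports>
```

In a real setup the daemon's accept/read would of course sit inside its event loop rather than run once.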
Another (old-fashioned) option would be to make use of xinetd: have your daemons started and managed by xinetd. There you are really rewiring stdin and stdout through sockets, by outsourcing that work to the xinetd daemon.

how are concurrent requests handled in PHP (using - threads, thread pool or child processes)

I understand that PHP supports handling multiple concurrent connections, and that depending on the server it can be configured as mentioned in this answer.
How does the server manage multiple connections: does it fork a child process for each request, handle them using threads, or use a thread pool?
The linked answer says a process is forked, but then the author says in a comment "threads or processes", which makes it confusing: are requests served using child processes, threads, or a thread pool?
As far as I know, every web server has its own way of handling multiple simultaneous requests.
Usually Apache 2 forks a child process for each new request, but you can configure this behaviour as mentioned in your linked Stack Overflow answer.
Nginx, for example, handles every request in one thread (it processes new connections asynchronously, like Node.js does) and sometimes uses caching (as configured; Nginx can also be used as a load balancer or HTTP proxy). It's a matter of choosing the right web server for your application.
Apache 2 can be a very good web server, but you need more load balancing when you want to use it in production. It also performs well with many short-lived connections, or with documents that don't change at all (or when using caching).
Nginx is very good if you expect many long-lasting connections with fairly long processing times; you don't need as much load balancing then.
I hope, I was able to help you out with this ;)
Sources:
https://httpd.apache.org/docs/2.4/mod/worker.html
https://anturis.com/blog/nginx-vs-apache/
I recommend you to also look at: What is thread safe or non-thread safe in PHP?
I think the answer depends on how the web server and the CGI layer are deployed.
In my company we use Nginx as the web server and php-fpm as the CGI layer, so concurrent requests are handled as processes by php-fpm, not threads.
We configure the maximum number of processes, and each request is handled by a single PHP process; if more requests come in than the maximum number of processes, they wait.
So I believe PHP itself can support all of these models; which one you get depends on how it is deployed.
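The process limit described above is a plain pool setting in php-fpm. The numbers below are illustrative, not recommendations; the directives are the real ones from a pool file such as www.conf.

```ini
; With pm = static, exactly pm.max_children worker processes exist;
; each handles one request at a time, and extra requests queue up.
pm = static
pm.max_children = 50

; With pm = dynamic, the pool grows and shrinks between bounds instead:
; pm = dynamic
; pm.start_servers = 10
; pm.min_spare_servers = 5
; pm.max_spare_servers = 20
```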
After doing some research I ended up with below conclusions.
It is important to consider how PHP servers are set up to get insight into this. For setting up the server and PHP on your own, there are three possibilities:
1) Using PHP as module (For many servers PHP has a direct module interface (also called SAPI))
2) CGI
3) FastCGI
Considering case #1, PHP as a module: here the module is integrated with the web server itself, so it is entirely up to the web server how it handles requests - forking processes, using threads, thread pools, etc.
As a module, Apache's mod_php appears to be very commonly used, and Apache itself handles requests using processes and threads in two models, as mentioned in this answer:
Prefork MPM uses multiple child processes with one thread each; each process handles one connection at a time.
Worker MPM uses multiple child processes with many threads each; each thread handles one connection at a time.
Obviously, other servers may take other approaches, but I am not aware of them.
For #2 and #3, the web server and the PHP part run in different processes, and how the web server handles the request and how it is then processed by the application (the PHP part) vary independently. For example, Nginx may handle the request using asynchronous non-blocking I/O while Apache may handle it using threads, but how the request is processed by the CGI or FastCGI application is a separate aspect, described below. Both aspects - how the web server handles requests and how the PHP part is processed - matter for a PHP server's performance.
Considering #2, the CGI protocol makes the web server and the application (PHP) independent of each other. CGI requires the application and the web server to run in separate processes, and the protocol does not provide for reuse of the same process, which means a new process is required to handle each request.
Considering #3, the FastCGI protocol overcomes this limitation of CGI by allowing process reuse. As the IIS FastCGI page puts it: "FastCGI addresses the performance issues that are inherent in CGI by providing a mechanism to reuse a single process over and over again for many requests. FastCGI maintains compatibility with non-thread-safe libraries by providing a pool of reusable processes and ensuring that each process handles only one request at a time."
That said, in the case of FastCGI the server maintains a process pool and uses it to handle incoming client requests; since the processes in the pool need no thread-safety checks, this gives good performance.
PHP does not handle requests. The web server does.
For Apache HTTP Server, the most popular is "mod_php". This module is actually PHP itself, but compiled as a module for the web server, and so it gets loaded right inside it.
Since with mod_php PHP gets loaded right into Apache, if Apache is going to handle concurrency using its Worker MPM (that is, using threads), then PHP itself has to be thread safe.
For Nginx, PHP lives entirely outside of the web server, in multiple PHP (FPM) processes.
That is why you are sometimes given the choice between thread-safe and non-thread-safe PHP builds.
But the setlocale() function (where supported) actually modifies the state of the whole operating-system process, and it is not thread safe.
You should remember this when you are not sure how legacy code works.
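A small illustration of why setlocale() is process-wide and therefore a hazard under a threaded SAPI. The "C" locale is guaranteed to exist everywhere, which keeps this snippet deterministic; the German locale in the comment is hypothetical.

```php
<?php
// setlocale() mutates the C runtime's locale for the *entire process*,
// not just for the current request or thread.
setlocale(LC_ALL, 'C');
$before = sprintf('%.2f', 3.14159);   // "3.14": the C locale uses a decimal point

// If another thread in the same process now ran, say,
//     setlocale(LC_NUMERIC, 'de_DE.UTF-8');
// this very sprintf() call would start producing "3,14" in *our* thread
// too - that cross-thread interference is what "not thread safe" means.
```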

PHP - Calling pcntl_fork from web app (not cli) - Why not?

I am familiar with the various methods available within php for spawning new processes, forking, etc... Everything I have read urges against using pcntl_fork from within a web-accessible app. Can anyone tell me why this is not recommended?
At a fundamental level, I can see how if you are not careful, things could quickly get out of hand. But what if you are careful? In my case, I would like to pcntl_fork my parent script into a new child, run a short series of specific functions, and then close the child. Seems pretty straightforward, right? Would it still be dangerous for me to try this?
On a related note, can anyone talk about the overhead involved in doing this a different way... Calling proc_open() to launch an entirely new PHP process? Will I lose any possible speed increase by having to launch the new process?
Background: Consider a site with roughly 2,000 concurrent users running fastcgi.
Have you considered gearman for 'forking' new processes? It's also described as 'a distributed forking mechanism' so your workers do not need to be on the same machine.
Synchronous and asynchronous calls are also available.
You will find it here: http://gearman.org/ and it might be a candidate solution to the problem.
I would like to propose another possibility... Tell me what you think about this.
What if I created a pool of web servers whose sole job was to respond to job requests from the master application server? I would have something like this:
Master Application Server (Apache, PHP - FastCGI)
Application Worker Server (Apache, PHP - FastCGI)
Application Worker Server (Apache, PHP - FastCGI)
Application Worker Server (Apache, PHP - FastCGI)
Application Worker Server (Apache, PHP - FastCGI)
Instead of spawning new PHP processes on my master application server, I would send job requests to my "workers" using asynchronous sockets. The workers would then run these jobs in real time and send the results back to the main application server.
Has anyone tried this? Do you foresee any problems? It seems to me that this might work wonderfully.
The problem is not that the app is web-accessible.
The problem is that the web server (or here the FastCGI module) may not handle forks very well. Just try it yourself.
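For comparison, this is what the pattern looks like where it *is* safe: a CLI script. It forks, runs a short task in the child, and reaps the child in the parent. Under mod_php or FastCGI the forked child would inherit the SAPI's listening sockets and any open database handles, which is exactly why the same code is discouraged there. The temp-file "task" is illustrative.

```php
<?php
// CLI-only sketch of fork + short task + reap (requires the pcntl extension).
$tmp = tempnam(sys_get_temp_dir(), 'job');

$pid = pcntl_fork();
if ($pid === -1) {
    exit("fork failed\n");
} elseif ($pid === 0) {
    // child: run the short series of functions, then exit -
    // never fall back into the parent's control flow
    file_put_contents($tmp, 'done');
    exit(0);
}

// parent: reap the child so it doesn't linger as a zombie
pcntl_waitpid($pid, $status);
```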
