I'm working on a PHP platform which gives developers features like cron jobs, events and WebSocket communications. To do that I run three different daemons written in PHP, so admins can disable a specific feature. When I start them, after the fork, the daemon starter saves the PID in my database and then includes the daemon's PHP file. I need to allow developers to easily communicate with these daemons through a specific PHP class. I've seen that many different methods exist for communicating with processes; for example the proc_open function, but it looks like it has to run a new command in order to talk to it. I'm looking for something like PHP sockets, but which lets me open a socket to a PID and, if possible, without using a port, to avoid conflicts with other daemons' sockets. What is the best way to do that with PHP's native tools?
One more detail: these daemons have to be able to manage a pretty big load of connections, and events are also propagated to clients through WebSocket or AJAX polling, so the event and WebSocket daemons communicate with each other as well.
With a process-based approach that reuses the same process (as I presume from your explanation), communicating with it without sockets would be difficult. If you are not bothered about scaling beyond a single server, that is fine. You will have to use at least a socket (network or Unix): make the process bind and listen on a random port or Unix socket path, and save the port number or path in the database along with the PID.
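For example, the daemon could bind a Unix domain socket instead of a TCP port and record the socket path next to its PID; the PHP class the developers use then simply connects to that path. This is only a minimal sketch under those assumptions - the socket path, the save_to_database() helper and the one-JSON-message-per-line protocol are all made up:

// daemon side - run after the fork, instead of binding a TCP port
$path = "/var/run/myplatform/events.sock";            // hypothetical path
@unlink($path);                                        // remove a stale socket from a previous run
$server = stream_socket_server("unix://$path", $errno, $errstr);
if ($server === false) {
    die("Cannot bind $path: $errstr\n");
}
save_to_database(getmypid(), $path);                   // hypothetical helper: store PID + socket path
while (true) {
    $conn = @stream_socket_accept($server, 60);        // wait up to 60s for a client, then loop again
    if ($conn === false) {
        continue;
    }
    $message = json_decode(fgets($conn), true);
    // ... hand $message to the daemon's event loop ...
    fwrite($conn, "ok\n");
    fclose($conn);
}

// client side - inside the PHP class the developers call
$client = stream_socket_client("unix://$path", $errno, $errstr, 5);
fwrite($client, json_encode(['event' => 'user.registered']) . "\n");
$reply = fgets($client);
fclose($client);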
Another, old-fashioned, option would be to make use of xinetd: have your daemons started and managed by xinetd. Here you are essentially rewiring stdin and stdout over sockets by outsourcing that work to the xinetd daemon.
I am not very familiar with PHP threads. While searching for options for using threads in PHP, the most suitable tool I could find is pthreads. Though it is very convenient to use, it requires ZTS, and the documentation clearly states that this tool cannot be used in a web server environment:
Warning: The pthreads extension cannot be used in a web server environment. Threading in PHP is therefore restricted to CLI-based applications only.
So I was wondering: what is the best way to use threads or multi-threading in a web server environment in PHP?
Yaba daba don't.
Why not? Because other languages do threads better.
What you want to do can be done with a worker that consumes events from a queue. This is how we do things in the PHP world.
Essentially you run another PHP process somewhere else (from a cron job, for example) that does your background processing. The web-related (fpm) workers should be as lightweight as possible and only submit these tasks or events to the queue.
Multi-threading (not actually threading) with PHP is achieved by an HTTP server sending multiple requests to a php-fpm daemon (or mod_php, if you are so inclined) and by running schedulers or workers in the background as separate, independent processes.
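A minimal sketch of that split, assuming a plain database table acts as the queue (the table name, columns and credentials are invented); the fpm worker only INSERTs a row, and a separate CLI process started from cron does the heavy lifting:

// worker.php - started from cron or a supervisor, never from the web server
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');    // hypothetical DSN

while (true) {
    // grab the oldest pending job
    $job = $pdo->query("SELECT id, payload FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1")
               ->fetch(PDO::FETCH_ASSOC);
    if ($job === false) {
        sleep(1);                                                      // nothing to do, wait a bit
        continue;
    }
    $pdo->prepare("UPDATE jobs SET status = 'running' WHERE id = ?")->execute([$job['id']]);
    // ... do the actual background processing with $job['payload'] ...
    $pdo->prepare("UPDATE jobs SET status = 'done' WHERE id = ?")->execute([$job['id']]);
}

The web request then only runs something like INSERT INTO jobs (payload, status) VALUES (?, 'pending') and returns immediately. With more than one worker you would need locking (SELECT ... FOR UPDATE) or a real queue server, but the shape of the solution stays the same.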
Does anyone have an idea whether there is already a ZMQ module for Apache? If there is, please share the link or any reference. My scenario is as follows:
Server Config:
Apache 2.4.12, with prefork
PHP 5.5
ZMQ 4.0.X
My problem is that whenever I try to create a ZMQ socket (PUB) connection from my application to a separate service (SUB), with a streamer device in between, it creates a new socket every time the application is initialized, because my Apache runs in prefork mode and creates a new instance (child) on every request. How can I create a single context/socket to which any number of PHP requests from subsequent Apache child processes can send data, which will avoid creating multiple sockets and exhausting system resources? This will also, I believe, reduce the overhead caused by creating new sockets and make it faster.
As an alternative, is it possible to create an Apache module whose functions and resources I can access from the PHP application and use just to send data, where the context and socket are created only once and persist for the lifetime of Apache?
Short answer - you can't. Your problem here is Apache and how it works - it shuts down the PHP process after the request finishes. Also, you can't share a context or a socket created in an Apache process between PHP processes.
I don't know what you're trying to do or why you are even exhausting system resources (quite odd), but if I were you I'd use a more advanced server that uses ZeroMQ internally for its transport layer: Mongrel2. You could also create a PHP extension, serve PHP via FPM and then have Apache proxy requests to your PHP-FPM, which can then pool the already existing ZMQ connections. However, I would suggest expanding the question with how the resources are exhausted that fast.
If that's all too much, then you can consider this:
PHP processes spawned by Apache accept the data and fill some sort of storage (database, file, shared memory).
Once the makeshift queue has been populated, the PHP scripts raise SIGUSR2 for a daemon process before exiting.
You have a daemon running that reads the queue, wakes up on SIGUSR2 and sends the data via a ZMQ socket - so you now have a single process that uses ZMQ and multiple PHP processes that interact with it (see the sketch below).
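A rough sketch of what that daemon could look like, assuming the php-zmq and pcntl extensions are available; the endpoint, the /tmp/queue file and its one-message-per-line format are invented for illustration:

// queue-daemon.php - the single long-running process that owns the ZMQ socket
$context = new ZMQContext();
$socket  = new ZMQSocket($context, ZMQ::SOCKET_PUB);
$socket->connect("tcp://127.0.0.1:5555");              // hypothetical streamer endpoint

$wakeUp = false;
pcntl_signal(SIGUSR2, function () use (&$wakeUp) { $wakeUp = true; });

while (true) {
    pcntl_signal_dispatch();                            // deliver any pending SIGUSR2
    if (!$wakeUp) {
        usleep(100000);                                 // idle until a web process signals us
        continue;
    }
    $wakeUp = false;
    foreach (file('/tmp/queue') as $line) {             // makeshift queue filled by the web processes
        $socket->send(trim($line));
    }
    file_put_contents('/tmp/queue', '');                // crude truncation; a real queue needs locking
}

The Apache-spawned scripts append their data to /tmp/queue and then call posix_kill($daemonPid, SIGUSR2) with the PID they read from wherever the daemon stored it.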
Since your requirement is a bit unclear, it's quite possible that all I wrote is for nothing, so please expand your question with a little more info if you can.
I am familiar with the various methods available within PHP for spawning new processes, forking, etc. Everything I have read urges against using pcntl_fork from within a web-accessible app. Can anyone tell me why this is not recommended?
At a fundamental level, I can see how if you are not careful, things could quickly get out of hand. But what if you are careful? In my case, I would like to pcntl_fork my parent script into a new child, run a short series of specific functions, and then close the child. Seems pretty straightforward, right? Would it still be dangerous for me to try this?
On a related note, can anyone talk about the overhead involved in doing this a different way... Calling proc_open() to launch an entirely new PHP process? Will I lose any possible speed increase by having to launch the new process?
Background: Consider a site with roughly 2,000 concurrent users running fastcgi.
Have you considered Gearman for 'forking' new processes? It's also described as 'a distributed forking mechanism', so your workers do not need to be on the same machine.
Synchronous and asynchronous calls are also available.
You will find it here: http://gearman.org/ and it might be a candidate solution to the problem.
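With the pecl/gearman extension the split looks roughly like this; the task name, the payload and the single local gearmand on the default port are assumptions for the sketch:

// client side - inside the web request; returns immediately
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);                 // default gearmand port
$client->doBackground('resize_image', json_encode(['id' => 42]));

// worker side - a separate long-running CLI process, possibly on another machine
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('resize_image', function (GearmanJob $job) {
    $params = json_decode($job->workload(), true);
    // ... do the slow work here ...
});
while ($worker->work());                               // block and process jobs forever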
I would like to propose another possibility... Tell me what you think about this.
What if I created a pool of web servers whose sole job was to respond to job requests from the master application server? I would have something like this:
Master Application Server (Apache, PHP - FastCGI)
Application Worker Server (Apache, PHP - FastCGI)
Application Worker Server (Apache, PHP - FastCGI)
Application Worker Server (Apache, PHP - FastCGI)
Application Worker Server (Apache, PHP - FastCGI)
Instead of spawning new PHP processes on my master application server, I would send out job requests to my "workers" using asynchronous sockets. The workers would then run these jobs in real time and send the results back to the main application server.
Has anyone tried this? Do you foresee any problems? It seems to me that this might work wonderfully.
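A sketch of what the master's fan-out could look like with plain stream sockets; the worker host names, the /job.php endpoint and the idea of reading whole HTTP/1.0 responses are all invented for illustration (curl_multi_* would be the more robust way to do the same over HTTP):

$workers = ['worker1.internal', 'worker2.internal', 'worker3.internal'];  // hypothetical hosts
$streams = [];

// fire one HTTP request per worker without waiting for the responses
foreach ($workers as $i => $host) {
    $s = stream_socket_client("tcp://$host:80", $errno, $errstr, 2);
    if ($s === false) {
        continue;                                      // skip unreachable workers
    }
    fwrite($s, "GET /job.php?task=$i HTTP/1.0\r\nHost: $host\r\n\r\n");
    $streams[(int) $s] = $s;
}

// collect the responses in whatever order the workers finish
while ($streams) {
    $read = array_values($streams);
    $write = $except = null;
    if (stream_select($read, $write, $except, 10) === 0) {
        break;                                         // overall timeout
    }
    foreach ($read as $s) {
        $response = stream_get_contents($s);           // HTTP/1.0: the worker closes when done
        // ... merge this worker's result into the master's response ...
        fclose($s);
        unset($streams[(int) $s]);
    }
}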
The problem is not that the app is web-accessible.
The problem is that the web server (or here, the FastCGI module) may not handle forks very well. Just try it yourself.
My app takes a long list of URLs and splits it into X chunks (where X = $threads) so that I can start a thread.php for each chunk and hand it its share of the URLs. Each thread then does GET and POST requests to retrieve data.
I am using this:
for ($x = 1; $x <= $threads; $x++) {
    $pid[] = exec("/path/bin/php thread.php <options> > /dev/null & echo \$!");
}
For "threading" (I know its not really threading, is it forking or what?), I save the pids into a file for later checking if N thread is running and to stop them.
Now I want to move away from PHP; I was thinking about using Python because I'd like to learn more about it.
How can I achieve this kind of "threading" with Python (or Ruby)?
Or is there a better way to launch multiple background threads in Python or Ruby that run in parallel (at the same time)?
The threads don't need to communicate with each other or with a main thread; they are independent. They do HTTP requests and interact with a MySQL database, and they may need to access/modify the same table entries (I haven't thought about this or how I will solve it yet).
The app works with "projects", each project has a "max threads" variable and I use a web interface to control it (so I could still use php for the interface [starting/stopping threads] in the new app).
I wanted to use
from threading import Thread
in Python, but I've been told those threads won't run in parallel, only one at a time.
The app is intended to run on linux web servers.
Any suggestion will be appreciated.
For Python 2.6+, consider the multiprocessing module:
multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.
For Python 2.5, the same functionality is available via pyprocessing.
In addition to the example at the links above, here are some additional links to get you started:
multiprocessing Basics
Communication between processes with multiprocessing
You don't want threading. You want a work queue like Gearman that you can send jobs to asynchronously.
It's worth noting that this is a cross-platform, cross-language solution. There are bindings for many languages (including Python and PHP) provided officially, and many more unofficially with a bit of work with Google.
The original intent is effectively load balancing, but it works just as well with only one machine. Basically, you can create one or more Workers that listen for Jobs. You can control the number of Workers and the types of Jobs they can listen for.
If you insert five Jobs into the queue at the same time, and there happen to be five Workers waiting, each Worker will be handed one of the Jobs. If there are more Jobs than Workers, the Jobs get handled sequentially. Your Client (the thing that submits Jobs) can either wait for all of the Jobs it's created to complete, or it can simply place them in the queue and continue on.
I know that PHP isn't multithreaded, but I talked with a friend about this: if I have a large algorithmic problem I want to solve with PHP, isn't the solution simply to use the curl_multi_xxx interface and start n HTTP requests against the same server? This is what I would call PHP-style multithreading.
Are there any problems with this in a typical web server environment? The master request, which is just waiting on curl_multi_exec, shouldn't count much time against its maximum runtime or memory limit.
I have never seen this promoted anywhere as a solution to prevent a script from being killed by overly restrictive admin settings for PHP.
If I add this as a feature to a popular PHP system, will server admins hire a Russian mafia hitman to get revenge for this hack?
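For reference, the pattern the question describes would look roughly like this; the worker.php script, the chunk parameter and the fixed count of 8 requests are invented, and this is a sketch of the idea rather than an endorsement of it:

// fan out n requests to the same server and wait for all of them
$mh = curl_multi_init();
$handles = [];
for ($i = 0; $i < 8; $i++) {
    $ch = curl_init("http://localhost/worker.php?chunk=$i");   // hypothetical worker script
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// drive all transfers; the master request mostly sleeps inside curl_multi_select()
do {
    while (($status = curl_multi_exec($mh, $running)) === CURLM_CALL_MULTI_PERFORM);
    if ($status !== CURLM_OK) {
        break;
    }
    if ($running) {
        curl_multi_select($mh, 1.0);
    }
} while ($running);

foreach ($handles as $ch) {
    $result = curl_multi_getcontent($ch);              // partial result computed by one "thread"
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
    // ... combine $result ...
}
curl_multi_close($mh);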
If I add this as a feature to a popular PHP system, will server admins hire a Russian mafia hitman to get revenge for this hack?
No, but it's still a terrible idea, if for no other reason than that PHP is supposed to render web pages, not run big algorithms. I see people trying to do this in ASP.NET all the time. There are two proper solutions:
1. Have your PHP script spawn a process that runs independently of the web server and updates a common data store (probably a database) with information about the progress of the task that your PHP scripts can access.
2. Have a constantly running daemon that checks for jobs in a common data store that the PHP scripts can issue jobs to and view the progress of currently running jobs.
By using curl, you are adding a network timeout dependency into the mix. Ideally you would run everything from the command line to avoid timeout issues.
PHP does support forking (pcntl_fork). You can fork some processes and then monitor them with something like pcntl_waitpid. You end up with one "parent" process that monitors the children it spawned.
Keep in mind that while one process can start up, load everything, then fork, you can't share things like database connections. So each forked process should establish its own. I've used forking for up to 50 processes.
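A minimal sketch of that fork-and-wait pattern for a CLI script; do_work() and the connection details are placeholders:

$children = [];
for ($i = 0; $i < 10; $i++) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    }
    if ($pid === 0) {
        // child: open its OWN database connection, do the work, then exit
        $db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');   // placeholder DSN
        do_work($i, $db);                                                   // placeholder
        exit(0);
    }
    $children[] = $pid;                 // parent: remember the child PID
}

// parent: wait for every child to finish
foreach ($children as $pid) {
    pcntl_waitpid($pid, $status);
}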
If forking isn't available for your install of PHP, you can spawn a process as Spencer mentioned. Just make sure you spawn the process in such a way that it doesn't stop processing of your main script. You also want to get the process ID so you can monitor the spawned processes.
exec("nohup /path/to/php.script > /dev/null 2>&1 & echo $!", $output);
$pid = $output[0];
You can also use the above exec() setup to spawn a process started from a web page and get control back immediately.
Out of curiosity - what is your "large algorithmic problem" attempting to accomplish?
You might be better off writing it as an Amazon EC2 service, then selling access to the service rather than the package itself.
Edit: you now mention "mass emails". There are already services that do this, they're generally known as "spammers". Please don't.
Lothar, as far as I know, PHP doesn't work with services the way its competitors do, so there is no way for PHP to know how much time has passed unless you constantly interrupt the process to check the elapsed time. So, IMO, no, you can't do that in PHP :)