I have a problem with my PHP script communicating through ZeroMQ with a PHP daemon running in the backend and waiting for messages. If the daemon happens to be down, the requesting PHP script waits forever. If I reload the page, Firefox ends up in an endless loop and I have to restart apache2 to kill the running request. Especially during development, when the daemon isn't finished, this is really annoying. Does somebody know how I can set a timeout, or simply skip sending the request when the daemon is not reachable (then notify the admin that the server is down and return an error)?
I tried something like this:
$context = new ZMQContext(1);
$req = new ZMQSocket($context, ZMQ::SOCKET_REQ);
$req->connect("tcp://localhost:5557");

$read = $write = array();

// Poll the socket for writability, with a 3 second timeout
$poll = new ZMQPoll();
$poll->add($req, ZMQ::POLL_OUT);
$events = $poll->poll($read, $write, 3000);
$errors = $poll->getLastErrors();

if (count($errors) > 0) {
    echo "No connection";
} else {
    echo "connection";
}

... $data = ....
$req->send(json_encode($data));
2nd question: I use PHP-Daemon from shaneharter. Sometimes, when the daemon does not start correctly because of errors, or when I shut it down with Ctrl+C, ZeroMQ still reserves the address, and when I restart the daemon it throws an exception that the address is already in use.
Can I easily destroy all ZeroMQ connections?
You don't need to poll just to send a simple message. I think a PUSH socket will serve you better. Set a reasonable linger value and this will attempt to send a message to the PULL socket counterpart, whether it's listening or not.
$context = new ZMQContext();
$socket = new ZMQSocket($context, ZMQ::SOCKET_PUSH);
// Keep unsent messages around for at most 2 seconds after close, then drop them
$socket->setSockOpt(ZMQ::SOCKOPT_LINGER, 2000);
$socket->connect("tcp://localhost:5557");
$socket->send($data);
I use Swoole as a WebSocket server. Once per second I need to broadcast a message to all connected WS clients.
Naive approach: I set a server timer with $server->tick() prior to launching the server:
$this->server->tick(1000, function () {
    $message = 'hello';
    foreach ($this->server->connections as $fd) {
        $this->server->push($fd, $message);
    }
});
Got errors:
[2020-05-05 12:23:56 #21985.2] ERROR swServer_tcp_send (ERRNO 9009)
can't send data to the connections in master process
What is the correct way to push WebSocket messages not from a Master, but from a Worker process?
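A common pattern (a sketch of my own, not from the original thread, and untested against your setup) is to register the timer inside the WorkerStart event instead of before start(), so both the tick and the push run inside a worker process. Guarding on the worker id keeps only one worker broadcasting, and isEstablished() skips connections that haven't completed the WebSocket handshake:
// Sketch: register the broadcast timer in a worker, not in the master process.
$this->server->on('WorkerStart', function (Swoole\WebSocket\Server $server, int $workerId) {
    // Only one (event) worker should own the broadcast timer.
    if ($workerId !== 0 || $server->taskworker) {
        return;
    }
    $server->tick(1000, function () use ($server) {
        $message = 'hello';
        foreach ($server->connections as $fd) {
            if ($server->isEstablished($fd)) {   // skip fds that aren't WebSocket clients
                $server->push($fd, $message);
            }
        }
    });
});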
I have a function which creates a socket connection and listens on a port number for HL7 messages sent by a laboratory machine via TCP.
If the lab machine is not sending anything, my listen function keeps listening. Is there a way to specify that it should listen only for say 10 seconds and then if there are no messages, an error should be thrown?
$address = '0.0.0.0';
$port = 5600;

// Create a TCP stream socket
$sock = socket_create(AF_INET, SOCK_STREAM, 0);

// Bind the socket to an address/port
$bind = socket_bind($sock, $address, $port);

// Start listening for connections
socket_listen($sock);
$client = socket_accept($sock);

// Read the input from the client
$input = socket_read($client, 2024);

// Split the HL7 message into fields on the pipe delimiter
$segs = explode("|", $input);

// Close the master socket
$close = socket_close($sock);
This is the solution:
socket_set_option($sock, SOL_SOCKET, SO_RCVTIMEO, array("sec" => 10, "usec" => 0)); // blocking receive calls on this socket will now time out after 10 seconds
This looks like the XY problem.
That the thing you want to measure acts as a client rather implies that you might want to do more than just detect an open TCP connection in your script, e.g. capture some data. Further, the underlying OS has a lot of complex, well tested, reliable and tunable mechanisms for tracking the state of connections.
While you could do as stefo91 suggests and try to manipulate the receive timeout, I'm not sure whether this applies while waiting for the initial connection. A better solution would be to set the socket to non-blocking. Don't forget to either:
inject some calls to sleep()/usleep() or
use socket_select()
unless you want your script to burn a lot of resources doing nothing; a minimal sketch of the socket_select() route follows below.
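Here is that sketch (my own illustration, reusing the address, port and read size from the question): the listening socket is made non-blocking and socket_select() bounds the wait for an incoming connection to 10 seconds.
$address = '0.0.0.0';
$port = 5600;

$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_bind($sock, $address, $port);
socket_listen($sock);
socket_set_nonblock($sock);              // accept() will no longer block

$read = array($sock);
$write = $except = array();

// Wait at most 10 seconds for an incoming connection.
$ready = socket_select($read, $write, $except, 10);

if ($ready === false) {
    die("select failed: " . socket_strerror(socket_last_error()));
} elseif ($ready === 0) {
    die("No connection within 10 seconds");
}

$client = socket_accept($sock);          // guaranteed not to block now
socket_set_block($client);               // accepted sockets may be non-blocking on some platforms
$input = socket_read($client, 2024);
$segs = explode("|", $input);

socket_close($client);
socket_close($sock);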
But depending on a lot of information you've not told us about, the right solution might be to run one script as a server, and a second as a monitor. The second could be polling/parsing the output of netstat to check the connection.
I created a cache server and client in PHP (no need to ask why, just for fun). The implementation works, but a problem occurs every time when:
Server starts
Client connects
Server stores data
Client disconnects
When I stop the running server process after a client disconnects and try to restart the server, socket_bind throws an error that the address is already in use. The client always closes the connection after the data has been sent or received, and when I check whether the port is in use via sudo netstat, the port is not listed. The server looks like this:
public function run()
{
    $this->socket = $this->ioHandler->createServerSocket();
    while ($this->running) {
        while ($connection = @socket_accept($this->socket)) {
            socket_set_nonblock($connection);
            $this->maintainer->maintainBucket($this->bucket);
            $this->maintainer->checkBackup(time(), $this->bucket);
            try {
                $dataString = $this->ioHandler->readFromSocket($connection);
                $data = unserialize($dataString);
                ($this->actionHandler)($data, $this->bucket, $this->ioHandler, $connection);
            } catch (Exception $ex) {
                $this->ioHandler->writeToSocket($connection, self::NACK);
                $this->ioHandler->closeSocket($connection);
            }
        }
    }
}
I think the problem might be that the server shuts down via a simple Ctrl+C or a start-stop-daemon --stop --pidfile, the socket is still open, and some automated mechanism later cleans up the dead socket. How can I properly shut down the server? Can I somehow terminate the process by sending input via STDIN to kill the server socket? Or am I wrong about the issue? The full code is available on GitHub: https://github.com/dude920228/php-cache
Perhaps it's related to the TIME_WAIT state. Long story short: socket_close() may close the socket, but there may still be data to send. Until that data is sent, the port will not be available.
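If TIME_WAIT is indeed the culprit, a common mitigation (my own addition, and an assumption about your setup since createServerSocket() isn't shown) is to set SO_REUSEADDR on the listening socket before binding, and to close the socket from a SIGINT handler so Ctrl+C shuts the server down cleanly:
// Sketch: SO_REUSEADDR lets bind() succeed while the old socket is still in
// TIME_WAIT, and a SIGINT handler releases the port on Ctrl+C.
$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($sock, SOL_SOCKET, SO_REUSEADDR, 1);
socket_bind($sock, '0.0.0.0', 9999);     // example port
socket_listen($sock);

// Requires the pcntl extension (CLI only).
pcntl_async_signals(true);
pcntl_signal(SIGINT, function () use ($sock) {
    socket_close($sock);                 // release the listening port
    exit(0);
});

// ... accept loop as in run() above ...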
I am using CloudMQTT as an MQTT broker in my pub-sub based application. I use my publisher to publish data to the CloudMQTT server over a topic, and I plan to subscribe to the broker on my webpage to receive the transmitted information.
I am using this procedure to create a Client(subscriber): https://www.cloudmqtt.com/docs-php.html
Code goes as follows:
// subscribe.php
require("phpMQTT.php");

$host = "hostname";
$port = port;
$username = "username";
$password = "password";

$mqtt = new phpMQTT($host, $port, "ClientID" . rand());
if (!$mqtt->connect(true, NULL, $username, $password)) {
    exit(1);
}

// currently subscribed topics
$topics['topic'] = array("qos" => 0, "function" => "procmsg");
$mqtt->subscribe($topics, 0);

while ($mqtt->proc()) {
}

$mqtt->close();

function procmsg($topic, $msg) {
    echo "Msg Received: $msg";
}
Here is the phpMQTT.php file: https://github.com/bluerhinos/phpMQTT/blob/master/phpMQTT.php
However, the issue in this case is that it receives data only while the webpage is open. I want to keep the connection alive even if the webpage is not open, so it always receives published messages. How can I do that?
EDIT: I might be open to using some other technology on the server to handle this subscription process, if anyone can recommend some alternatives.
PHP's typical mode of operation is to start a process, wait for an HTTP connection, handle the request and then start a new process. This doesn't fit well with the typical MQTT mode of having a long-running process; hence the MQTT connection closes when you close the web page.
It is possible to subscribe to a MQTT topic in a long-running CLI PHP script, but you will have to have some other mechanism to keep the process running. There are a lot of different ways of doing this, depending on your preferences and operating system:
a script started using /etc/rc.local at system startup
using an init.d script
using a process manager, such as DJB's daemontools or runit
If you are using Ubuntu, then upstart is a popular mechanism
Searching stackoverflow finds the following related question and several answers:
Run php script as daemon process
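As an illustration (my own sketch, reusing the phpMQTT calls from the question; the reconnect loop and back-off interval are assumptions), the subscriber can be turned into a long-running CLI script and then kept alive by whichever of the mechanisms above you pick:
// daemon_subscribe.php — run with `php daemon_subscribe.php` under a supervisor.
require("phpMQTT.php");

$host = "hostname";
$port = 1883;            // example port, adjust to your CloudMQTT instance
$username = "username";
$password = "password";

function procmsg($topic, $msg) {
    // Replace with whatever should happen to each message (log, DB insert, ...).
    echo "Msg Received on $topic: $msg\n";
}

while (true) {
    $mqtt = new phpMQTT($host, $port, "ClientID" . rand());
    if ($mqtt->connect(true, NULL, $username, $password)) {
        $mqtt->subscribe(array('topic' => array("qos" => 0, "function" => "procmsg")), 0);
        while ($mqtt->proc()) {
            // proc() returns false once the connection drops.
        }
        $mqtt->close();
    }
    sleep(5);             // back off before reconnecting
}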
I am working on a realtime multiplayer game which was built using Ratchet 0.3.3, Laravel 5 and PHP 5.5.9. The server OS is Ubuntu. The server sends approximately 500 bytes of data in each cycle to every user via WebSocket (hundreds of users).
It looks like WebSocket is buffering the requests and sends 5 to 6 requests at once (500 bytes each).
We have 30 millisecond cycles. Is there a way to manually set the WebSocket buffer settings, so my requests can be sent with no delays?
I found a solution for that problem. First, create an IoServer class that extends Ratchet's IoServer:
use Ratchet\MessageComponentInterface;
use React\EventLoop\Factory as LoopFactory;
use React\Socket\Server as Reactor;

class IoServer extends \Ratchet\Server\IoServer {
    public static function factory(MessageComponentInterface $component, $port = 80, $address = '0.0.0.0') {
        $loop = LoopFactory::create();
        $socket = new Reactor($loop);
        $socket->listen($port, $address);

        // Grab the underlying stream resource and disable Nagle's algorithm
        $sock = socket_import_stream($socket->master);
        socket_set_option($sock, SOL_TCP, TCP_NODELAY, true);

        return new static($component, $socket, $loop);
    }
}
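The subclass is then booted the same way as Ratchet's stock factory. A usage sketch (MyGameComponent is a placeholder for your actual application class):
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

$server = IoServer::factory(
    new HttpServer(
        new WsServer(
            new MyGameComponent()   // placeholder for your MessageComponentInterface implementation
        )
    ),
    8080
);
$server->run();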
Azure itself does not place any particular restrictions on the buffer settings for the Ubuntu VM. First, I would check the current buffer setting values using the commands listed here - How to find the socket buffer size of linux.
Secondly, if you want to configure or tune them, you can refer to http://www.cyberciti.biz/faq/linux-tcp-tuning/
Not sure if this is the answer you are looking for, but as Ghedipunk pointed out, it might be an issue with Nagle's algorithm, in which case you might want to try adding the TCP_NODELAY option to the tunnel configuration:
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
though I personally have never worked with Ratchet.
PS: I wanted to add this as a comment, as I am not sure this is the answer you are looking for, but the SO forum is not allowing me to add a comment.