I've successfully implemented the following scheme: a PHP socket server script that broadcasts messages to all connected client scripts, and those client scripts are listened to by an EventSource on the front-end script. So, basically, the scheme is: server.php -> client.php -> front.php.
But I've been struggling with this part of client.php:
while(TRUE)
{
    // Wait (up to 10 s) until the server socket has something to read
    $read   = array($client_socket);
    $write  = NULL;
    $except = NULL;
    if( socket_select($read, $write, $except, 10) === FALSE ){
        $errorcode = socket_last_error();
        $errormsg  = socket_strerror($errorcode);
        file_put_contents("select_result.log", "Could not listen on socket : [$errorcode] $errormsg".PHP_EOL);
    }
    // Read whatever the server broadcast (errors suppressed with @, checked below)
    if( !($client_message = @socket_read($client_socket, 1024)) ){
        $errorcode = socket_last_error();
        $errormsg  = socket_strerror($errorcode);
        file_put_contents("read_result.log", "Couldn't read socket: [$errorcode] $errormsg".PHP_EOL, FILE_APPEND);
    }
    // Nothing read: send an SSE keep-alive event instead
    if($client_message == FALSE) {
        $client_message = "event: keep_alive" . PHP_EOL
                        . "data: keep_alive" . PHP_EOL . PHP_EOL;
    }
    echo $client_message;
    if( ob_get_level() > 0 ) ob_flush();
    flush();
}
When I open the front.php script (which uses EventSource) in the browser, it works fine, BUT server.php also somehow detects a disconnect when I close or reload front.php in the browser window. Here is the part of server.php that detects the disconnect:
if( !($client_input = @socket_read($client, 1024))) {
    $errorcode = socket_last_error();
    $errormsg  = socket_strerror($errorcode);
    echo "$errorcode".PHP_EOL;
}
if ( $client_input === FALSE ) {
    $socket_key = array_search($client, $client_sockets);
    socket_getpeername($client, $client_address, $client_port);
    socket_close($client);//?!!!
    unset($client_sockets[$socket_key]);
    echo "[$client_address : $client_port disconnected]".PHP_EOL;
}
As you can see, this happens when data cannot be read from the client. Theoretically, when I close front.php, client.php should keep working in its endless loop in the background, as there are no loop breaks, but - surprise! - it stops. Experimentally, I've found that this happens because of the ob_flush() call. When it is commented out, server.php does not detect a disconnect (browser window close/reload). How can that be? ob_flush() does not return any value, according to the manual. No errors are reported either... I don't know what to think, and I can't find out why this is happening. Please help.
I am assuming you are using Apache or something in front of client.php. What happens when your front.php runs var es = new EventSource('client.php') is that a dedicated TCP/IP socket is created between the browser and Apache. Apache runs PHP, telling it to load and run client.php. (And client.php then creates a socket to listen to messages from server.php.)
When the browser is closed (or you call es.close(), or the socket connection between browser and Apache is lost for some reason), Apache will immediately(*) shut down the PHP process (the one running client.php). And when that PHP process disappears, the socket between client.php and server.php gets closed (either immediately, or the next time server.php tries to read from or write to it).
*: usually "immediately". Sometimes if a socket is not closed cleanly it can hang around for a little while (a matter of seconds, never more than a minute).
I think your ob_flush() observation is a bit of a red herring; my guess is that by not calling ob_flush() something stays stuck in a buffer, which means Apache keeps the PHP process alive until some time-out expires. Incidentally, I use the @ob_flush();@flush() idiom; the @ before ob_flush() does basically the same thing as your ob_get_level() check (though I vaguely remember hearing that there is a situation where that check is not always reliable).
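For what it's worth, if you want client.php itself to notice the browser going away and shut its server socket down cleanly (instead of relying on Apache killing the process), one option is to check connection_aborted() after each flush. A minimal sketch, reusing the same $client_socket as in your loop; the courtesy message sent to server.php is purely illustrative:
while (TRUE)
{
    // ... socket_select()/socket_read() and echo the SSE event as before ...
    if (ob_get_level() > 0) ob_flush();
    flush();
    // flush() is what lets PHP learn the browser is gone;
    // connection_aborted() then returns non-zero
    if (connection_aborted()) {
        @socket_write($client_socket, "client gone" . PHP_EOL); // hypothetical notice for server.php
        socket_close($client_socket);
        exit;
    }
}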
Flushing the output buffer to the browser no longer works as expected with recent versions of Chrome/Firefox.
You would need some kind of register_shutdown_function(), which does not work reliably either for what you want.
It's generally very unstable to run a client/server loop in a PHP script loaded through the browser, due to the design of the HTTP protocol. It is not designed to keep connections alive and push messages the way, for example, IRC does. HTTP connections can close for many reasons, mostly after 30-60 seconds.
I suggest you use the PHP command-line version for the server and JavaScript on the client side.
Also, the way you load PHP (mod_php in Apache?) affects the whole thing: how many threads/forks of PHP/Apache you open, how many connections each process holds, and so on.
Some web servers kill scripts a few seconds after the page has finished loading; it's hard to debug and hard to get stable.
Try WebRTC / WebSockets in JavaScript; it's much better suited for this and integrates better into any HTML5 site.
I want my PHP request to keep running even if the Ajax session is closed from the user's side. I managed to do this on Apache by adding ignore_user_abort(TRUE) to my PHP file, but it is not working on IIS 10 with FastCGI. I've tried to google a solution, with no luck. I'm using PHP v7.4 and IIS v10 on Windows Server 2019.
Below is just a test script that writes to a file and logs the connection status. It works perfectly on Apache, but on IIS it stops writing once the session is closed.
<?php
ignore_user_abort ( TRUE );
echo "hello\n";
$x = 0;
$log_filename = ".\\test.txt";
while (@ob_end_flush()); // flush and disable all output buffers
while ($x < 100)
{
    echo($x . "\n");
    $status = connection_aborted();
    file_put_contents($log_filename, $x . " - " . $status . "\n", FILE_APPEND);
    $x++;
    @flush();
    sleep(1);
}
?>
You could try fastcgi_finish_request(). This function flushes all response data to the client and finishes the request. This allows time-consuming tasks to be performed without leaving the connection to the client open.
https://www.php.net/manual/en/install.fpm.php
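A minimal sketch of how the asker's test script could use it, assuming the FastCGI SAPI you run under actually exposes the function (it is available under PHP-FPM; the function_exists() guard covers setups where it is not):
<?php
ignore_user_abort(TRUE);
echo "hello\n";
if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request(); // response is sent, connection released, script keeps running
}
$log_filename = ".\\test.txt";
for ($x = 0; $x < 100; $x++) {
    file_put_contents($log_filename, $x . " - " . connection_aborted() . "\n", FILE_APPEND);
    sleep(1);
}
?>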
When running PHP, and you want it to immediately return HTML to the browser, close the connection (ish), and then continue processing...
The following works when the connection is HTTP/1.1, but does not when using Apache 2.4.25, with mod_http2 enabled, and you have a browser that supports HTTP/2 (e.g. Firefox 52 or Chrome 57).
What happens is the Connection: close header is not sent.
<?php
function http_connection_close($output_html = '') {
apache_setenv('no-gzip', 1); // Disable mod_gzip or mod_deflate
ignore_user_abort(true);
// Collect (and close) any open output buffers
while (ob_get_level() > 0) {
$output_html = ob_get_clean() . $output_html;
}
$output_html = str_pad($output_html, 1023); // Prompt server to send packet.
$output_html .= "\n"; // For when the client is using fgets()
header('Connection: close');
header('Content-Length: ' . strlen($output_html));
echo $output_html;
flush();
}
http_connection_close('<html>...</html>');
// Do stuff...
?>
For similar approaches to this problem, see:
close a connection early
Continue processing after closing connection
Continue php script after connection close
And as to why the connection header is removed, the documentation for the nghttp2 library (as used by Apache) states:
https://github.com/nghttp2/nghttp2/blob/master/doc/programmers-guide.rst
HTTP/2 prohibits connection-specific header fields. The
following header fields must not appear: "Connection"...
So if we cannot tell the browser to close the connection via this header, how do we get this to work?
Or is there another way of telling the browser that it has everything for the HTML response, and that it shouldn't keep waiting for more data to arrive?
How to return HTTP response to the user and resume PHP processing
This answer works only when the web server communicates with PHP over the FastCGI protocol.
To send the reply to the user (via the web server) and resume processing in the background, without involving OS calls, invoke the fastcgi_finish_request() function.
Example:
<?php
echo '<h1>This is a heading</h1>'; // Output sent
fastcgi_finish_request(); // "Hang up" with web-server, the user receives what was echoed
while(true)
{
// Do a long task here
// while(true) is used to indicate this might be a long-running piece of code
}
What to look out for
Even if the user does receive the output, the php-fpm child process will stay busy and unable to accept new requests until it is done with this long-running task.
If all available php-fpm child processes are busy, your users will experience a hanging page. Use with caution.
Both nginx and Apache know how to speak the FastCGI protocol, so there should be no need to swap Apache out for nginx.
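If you do go this route, it may also be worth sizing the pool for it. A sketch of the relevant php-fpm pool settings (file name and values are illustrative only, not recommendations):
; www.conf (or whatever your pool file is called)
pm = dynamic
; hard cap on simultaneous children; busy long-running requests count against it
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
; 0 = never terminate a request just for running too long
request_terminate_timeout = 0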
You can serve your slow PHP scripts over HTTP/1.1 using a dedicated subdomain.
All you need to do is set up a second VirtualHost that responds with HTTP/1.1 only, using Apache's Protocols directive: https://httpd.apache.org/docs/2.4/en/mod/core.html#protocols
The big advantage of this technique is that your slow scripts can keep sending data to the browser long after everything else has been sent through the HTTP/2 stream.
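A rough sketch of what such a VirtualHost could look like (the subdomain, DocumentRoot and certificate paths are placeholders):
<VirtualHost *:443>
    ServerName slow.example.com
    DocumentRoot /var/www/slow
    # Offer only HTTP/1.1 on this vhost, so "Connection: close" works again
    Protocols http/1.1
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key
</VirtualHost>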
I want to write my own small website for controlling my GPS trackers. The problem is that they send data (over GPRS) using UDP, not HTTP. Can anyone give me any advice on how to receive such data and put it into a MySQL database?
I'm looking for something exactly like what is described in this answer to that question. The only problem is that the site mentioned in that answer has expired and the script is no longer available.
All I need is advice or an example of how to receive a UDP packet/datagram containing coordinates, speed, date etc. and put this data into a MySQL database. How do I write such a gateway as simply as possible? All the rest I can handle myself.
I could do this without problems on Windows, as I'm a former Delphi developer and writing a gateway between UDP and MySQL isn't a hard job there. But I need to run this solution (gateway) on a small, weak Linux-based server which isn't able to run Kylix (Delphi for Linux) programs, so that way is a dead end.
Can this be done using PHP, JavaScript or a Bash script? I was thinking about node.js, which has a similar example on its home page (and probably many more out on the Internet). But I'm not too familiar with node.js, so I don't know whether there are better/easier ways to do this.
It's possible to read data from a UDP port using PHP. Here is example code that reads data from a UDP port:
<?php
error_reporting(E_ALL | E_STRICT);
$socket = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
socket_bind($socket, '127.0.0.1', 1223);
$from = '';
$port = 0;
socket_recvfrom($socket, $buf, 12, 0, $from, $port);
echo "Received $buf from remote address $from and remote port $port" . PHP_EOL;
?>
And to insert that data into a MySQL database you may need to run the listener as a daemon; have a look at this link:
http://phpdaemon.net/
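A minimal sketch of a combined receive-and-insert loop using PDO (the table, columns and packet format are made up; adapt the parsing to whatever your trackers actually send):
<?php
// Hypothetical schema: positions(device_id, lat, lng, speed, received_at)
$pdo  = new PDO('mysql:host=127.0.0.1;dbname=gps', 'user', 'password');
$stmt = $pdo->prepare('INSERT INTO positions (device_id, lat, lng, speed, received_at) VALUES (?, ?, ?, ?, NOW())');

$socket = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
socket_bind($socket, '0.0.0.0', 1223);

while (true) {
    socket_recvfrom($socket, $buf, 1024, 0, $from, $port);
    // Assume a comma-separated packet like "device42,52.2297,21.0122,63.5"
    $parts = explode(',', trim($buf));
    if (count($parts) === 4) {
        $stmt->execute($parts);
    }
}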
Maybe socket_recvfrom will interest you?
Here is what I found out myself.
General
As Venkat wrote, you can write a simple listener in pure PHP. You just have to run it via SSH, using PHP's CLI SAPI, not via the browser, as in the browser it would fail with a timeout after about 3-5 minutes.
To run it in CLI mode you need to know the full path to PHP and call it with the proper switch. For example:
/mnt/ext/opt/apache/bin/php -f /share/Web/projects/gps/gateway.php
In my case, PHP CLI did not show echo output on stdout (I don't know where it goes). So replace every echo with writing the values to a file or database, to see the actual effects of your listener working.
You may need to call set_time_limit(0) for endless, uninterrupted execution; but it has been reported (see the user contributed notes here) that the limit is hardcoded to 0 for the CLI SAPI, so setting it may not be necessary.
After starting your script in CLI mode, you can stop it with Ctrl+C.
Listener example
Here is an example of a listener that dumps everything it receives into a 'dump.txt' file in the same directory as the script file:
error_reporting(E_ALL | E_STRICT);
$file = './dump.txt';
$socket = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
socket_bind($socket, '0.0.0.0', 12345);
while(TRUE)
{
$buf = '';
$from = '';
$port = 0;
socket_recvfrom($socket, $buf, 1024, 0, $from, $port);
$momentum = time();
$entry = $momentum.' -- received "'.trim($buf).'" from '.$from.' on port '.$port.PHP_EOL;
file_put_contents($file, $entry, FILE_APPEND | LOCK_EX);
}
Things you should keep in mind:
This script uses an infinite loop, so the only way to stop it after starting it is to press Ctrl+C.
Use the 0.0.0.0 IP address in socket_bind to listen to all sources (IP addresses), or 127.0.0.1 to limit it to localhost only.
Carefully select the third parameter of socket_recvfrom -- the maximum number of bytes that will be received -- to make sure that the data you're waiting for will not be truncated.
Use full paths for the files you write to -- that is why the code says $file = './dump.txt' and not $file = 'dump.txt'. Without the full path it may only work when run via the web browser.
Using a database
If you decide to store received UDP packets in a database and you choose SQLite for this purpose, you not only have to provide the full path to the database file, but an absolute path! So:
$dbhandle = new SQLiteDatabase('/share/Web/projects/gps/data.db');
not:
$dbhandle = new SQLiteDatabase('data.db');
or even:
$dbhandle = new SQLiteDatabase('./data.db');
The second and third attempts will fail on some systems (depending on the PHP configuration), and in that case you'll see a warning that the table you're looking for does not exist in the database file.
Logoff problem fix
If you don't have direct access to the machine on which you'll be running the listener and you connect via SSH, keep in mind that your listener will probably be shut down once you log off.
To fix this problem, you can either run your PHP script in the background (by adding & at the end of the command):
/mnt/ext/opt/apache/bin/php -f /share/Web/gps/gateway.php&
Or make use of the screen command and run the non-daemonized version of your listener in a "virtual" terminal.
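For example (the session name is arbitrary; the PHP path is the one from the earlier examples):
screen -S gps_listener
/mnt/ext/opt/apache/bin/php -f /share/Web/gps/gateway.php
# detach with Ctrl+A then D; the listener keeps running after you log off
screen -r gps_listener   # re-attach later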
I'm using the PHP sockets extension (basically a wrapper around the socket(2)-related Linux syscalls) and would like to re-use sockets opened while serving one request in the subsequent ones. Performance is a critical factor.
The sockets I open all go to the same IP, which rules out functions like pfsockopen() (because it reuses the same single socket every time, and I need several at a time).
The question
If I deliberately leave the sockets opened while serving one request open (I don't call socket_close() or socket_shutdown()) and connect a socket with the exact same parameters to the same IP while serving the next request, will Linux re-use the previously opened socket / file descriptor?
What I want to do, in the end, is avoid TCP handshakes on every request.
Additional information:
I use the Apache worker MPM - which means that different requests can be, but are not necessarily, served by different processes. For the sake of simplicity, let's assume that all requests are served by the same process.
I can get the file-descriptor ID of an open and connected socket in PHP. I can open, read and write to /dev/fd/{$id}, yet to no purpose - it's not communicating with the remote server (maybe this is a naïve approach). If anybody knew how to make this work, I'd consider that an acceptable answer too.
If you want to reuse the socket in the same process, simply leave the connection open. That is actually your only option for avoiding TCP handshakes. Make sure keepalives are on; with the PHP sockets extension that is:
socket_set_option($socket, SOL_SOCKET, SO_KEEPALIVE, 1);
If you want to spawn new processes and pass the connection to them, then yes, they will be able to write to /dev/fd/{$id} and this will send the data over the network. Just make sure that the sockets are not closed during exec (learn about SOCK_CLOEXEC).
Passing the socket to an unrelated process is not possible. You would have to use some form of inter-process communication to accomplish that, and I am not sure that the overhead of a TCP handshake on an intranet or the internet is enough to justify the complexity and other overhead associated with that.
If I leave the sockets I open serving one request deliberately open,
(I don't call socket_close() or socket_shutdown()) and connect a
socket with the exact same parameters to the same IP serving the next
request; will linux re-use the previously opened socket /
file-descriptor?
No, but you could always keep using the original one if you stay in the same process. What you are talking about is really connection pooling.
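Within a single process/request that can look roughly like this (the helper name is made up; note that, as the answer below demonstrates, such a pool does not survive into the next request in PHP):
function get_pooled_socket($ip, $port)
{
    // Reuse an already-connected socket instead of reconnecting every time
    static $pool = array();
    $key = "$ip:$port";
    if (!isset($pool[$key])) {
        $sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
        socket_set_option($sock, SOL_SOCKET, SO_KEEPALIVE, 1);
        socket_connect($sock, $ip, $port);
        $pool[$key] = $sock;
    }
    return $pool[$key];
}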
While the answer given by Jirka Hanika is correct for most systems, I have come to the conclusion that it regretfully does not apply to PHP; re-using sockets via the PHP sockets extension cannot be achieved from user space.
The code that led to this conclusion is:
function statDescriptors() {
foreach (glob('/proc/self/fd/*') as $sFile) {
$r = realpath($sFile);
// ignore local file descriptors
if($r === false) {
echo `stat {$sFile}`, "\n";
}
}
}
header('Content-Type: text/plain');
statDescriptors();
$oSocket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($oSocket, SOL_SOCKET, SO_KEEPALIVE, 1);
socket_set_option($oSocket, SOL_SOCKET, SO_REUSEADDR, 1);
socket_connect($oSocket, '173.194.35.33', 80); // Google IP
socket_write($oSocket, 'GET / HTTP/1.0' . "\r\n");
socket_write($oSocket, 'Connection: Keep-Alive' . "\r\n\r\n");
socket_read($oSocket, 1024 * 8);
// socket_close($oSocket); // toggle this line comment during test
echo 'PID is: ', getmypid(), "\n";
statDescriptors();
This code stat()s the current process's open socket file descriptors at the start and at the end of its execution. In between, it opens a socket with SO_KEEPALIVE set, writes a request to it and reads a response. Then it optionally closes the socket (toggle the line comment) and echoes the current process's PID (to make sure you're in the same process).
You will see that, regardless of whether you close the socket or not, the file descriptor created while serving the previous request no longer exists at the beginning of the next execution, and a completely new socket is created and connected.
I was unable to test SOCK_CLOEXEC since it's not (yet?) implemented in the extension.
(This was tested using PHP 5.4.0)
The answers I've found to this question (such as here, here, and here) all involve pfsockopen(), which seems geared towards non-local socket connections. However, the code I've written so far uses PHP to access a C++ server through a local (Unix domain) socket. I want this connection to be persistent (so that I can use it for Comet, incidentally). Here's my non-persistent version:
<?php
session_start();
...
if (($sock = socket_create(AF_UNIX, SOCK_STREAM,0)) === false)
{
echo "socket_create() failed: reason: " . socket_strerror(socket_last_error()) . "\n";
exit();
}
$sess_id = $_SESSION['sess_id'];
$sock_str = '/tmp/sockdir/' . $sess_id; //The socket is named after the php session, not important
if (socket_connect($sock, $sock_str) === false)
{
echo "socket_connect() to " . $sock_str . " failed: reason: " . socket_strerror(socket_last_error($sock)) . "\n";
socket_close($sock);
exit();
}
$msg = $_GET['message'];
// ... do things with $msg
socket_close($sock);
?>
Now, I've found that I can't simply save $sock as a $_SESSION variable and access it each time this script is called. Any tips on what I can do to turn this into a persistent connection?
As the links you provided point out, PHP is not a persistent language and there is no way to have persistence across sessions (i.e. page loads). You can create a middle ground, though, by running a second PHP script as a daemon, and having your main script (i.e. the one the user hits) connect to that (yes - over a socket...) and get data from it.
If you were to do that and want to avoid the hassle of WebSockets, try the HTML5 EventSource (Server-Sent Events) API, as it gives you the best of both worlds: a Comet-like infrastructure without the hackiness of long-polling or the need for a dedicated WebSockets server.
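The daemon half of that middle ground could look roughly like this; a minimal CLI sketch in which the socket path and the tiny get/set protocol are made up for illustration:
<?php
// Run this from the CLI, not through the web server; it never exits.
$path = '/tmp/app_daemon.sock';
@unlink($path);
$server = socket_create(AF_UNIX, SOCK_STREAM, 0);
socket_bind($server, $path);
socket_listen($server);
$state = 0; // persists between page loads because this process stays alive
while (true) {
    $conn = socket_accept($server);
    $cmd  = trim(socket_read($conn, 1024));
    if ($cmd === 'get') {
        socket_write($conn, $state . "\n");
    } elseif (strpos($cmd, 'set ') === 0) {
        $state = (int) substr($cmd, 4);
        socket_write($conn, "ok\n");
    }
    socket_close($conn);
}
Your page script then connects to /tmp/app_daemon.sock with socket_create(AF_UNIX, ...) and socket_connect(), much like your non-persistent version, except that the daemon keeps its own state (and any upstream connections) alive between your page loads.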
If you need to keep the connection open, you need to keep the PHP script alive. Commonly PHP is just invoked and then shut down after the script has run (CGI, CLI), or it's a mixture (mod_php in Apache, FCGI) in which the PHP interpreter sometimes stays in memory after your script has finished (so everything the OS associated with that process, such as a socket handle, would still remain).
However, this is never safe. Instead you need to make PHP a daemon which can keep your PHP scripts in memory. An existing solution for that is Appserver-In-PHP. It will keep your code in memory until you restart the server. Like the code, you can also preserve variables between requests, e.g. a connection handle.