I am writing a PHP application using the TCP stream wrapper. When using streams you're able to switch the connection to non-blocking mode:
stream_set_blocking($stream, 0);
stream_set_timeout($stream, 0);
I also noticed that there is a stream_select (which takes arrays of streams by reference, not a single stream):
$read = $write = $except = array($stream);
stream_select($read, $write, $except, 0);
Following normal Linux select() usage, I would not bother with the calls above and would just use select() with a zero timeout to make sure nothing blocks. The application "seems to work", but I'm not sure whether that's because it's tested locally, where the connection is very fast, or whether the stream really is non-blocking and I'm just wasting cycles by also calling select().
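For reference, a minimal sketch of how I understand the pieces fit together (host and port are placeholders):
$stream = stream_socket_client('tcp://127.0.0.1:8080', $errno, $errstr, 5);
stream_set_blocking($stream, false);

$read = array($stream);
$write = null;
$except = null;
// A timeout of 0 polls and returns immediately; null would block until activity.
if (stream_select($read, $write, $except, 0) > 0) {
    $data = fread($stream, 8192); // select reported readable, so this won't block
}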
Related
I want to transfer messages from a PHP-based web frontend to a backend service on a Linux server.
I am sending the messages with file_put_contents. The interface works well when the backend service listens and reads the pipe created with mkfifo mypipe.
However, I would like to be prepared for a situation in which the backend service fails. In this case, the user of the frontend should be notified and given alternative options.
Currently, when the backend is not running, the frontend becomes unresponsive, because file_put_contents blocks.
I tried various things to solve the problem, including opening the pipe with fopen first, using stream_context_create, and setting a timeout with ini_set('default_socket_timeout', 10);
or
$context = stream_context_create(array('http' => array(
    'timeout' => 10
)));
if (file_put_contents("mypipe", $data, FILE_APPEND | LOCK_EX, $context) === false) {
    error_log("Could not write to pipe.");
} else {
    echo "Sent message";
}
I also tried the PHP function is_writable("mypipe"), but, as expected, it returns true regardless of whether the receiver is listening.
How can I check whether the pipe would block, and avoid the frontend becoming unresponsive?
Use fopen, stream_set_blocking, fwrite, and fclose instead of file_put_contents. That way you can detect when a write would block, because fwrite will simply report that it wrote nothing instead of blocking.
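A minimal sketch of that recipe, reusing $data from the question. One assumption worth testing: fopen() with mode 'w' on a FIFO itself blocks until a reader attaches, so 'r+' is used here, at the cost that a write can succeed into the pipe buffer even before a reader shows up.
$fp = fopen('mypipe', 'r+');
if ($fp === false) {
    error_log('Could not open pipe.');
} else {
    stream_set_blocking($fp, false);
    $written = fwrite($fp, $data);
    if ($written === false || $written < strlen($data)) {
        // The write would have blocked: the buffer is full because nothing reads it.
        error_log('Could not write to pipe.');
    } else {
        echo 'Sent message';
    }
    fclose($fp);
}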
I'm using the PHP sockets extension (basically a wrapper around the socket(2)-related Linux syscalls) and would like to re-use sockets I open while serving one request in subsequent ones. Performance is a critical factor.
The sockets I open all go to the same IP, which rules out functions like pfsockopen() (it reuses the same single socket every time, and I need several at a time).
The question
If I leave the sockets I open while serving one request deliberately open (I don't call socket_close() or socket_shutdown()) and connect a socket with the exact same parameters to the same IP while serving the next request, will Linux re-use the previously opened socket / file descriptor?
What I want to do in the end is to avoid TCP-handshakes on every request.
Additional information:
I use the Apache worker MPM, which means that different requests can be, but are not necessarily, served by different processes. For the sake of simplicity, let's assume that all requests are served by the same process.
I can get the file-descriptor ID of an open and connected socket in PHP. I can open, read, and write to /dev/fd/{$id}, yet to no purpose: it does not communicate with the remote server (maybe this is a naïve approach). If anybody knew how to make this work, I'd consider that an acceptable answer too.
If you want to reuse the socket in the same process, simply leave the connection open. That is actually your only option for avoiding TCP handshakes. Make sure keepalives are on:
socket_set_option($socket, SOL_SOCKET, SO_KEEPALIVE, 1);
If you want to spawn new processes and pass the connection to them, yes, they will be able to write to /dev/fd/{$id} and this will send the data over network. Just make sure that the sockets are not closed during exec (learn about SOCK_CLOEXEC).
Passing the socket to an unrelated process is not possible. You would have to use some form of interprocess communication to accomplish that, and I am not sure that the overhead of a TCP handshake, on an intranet or the internet, is enough to justify the complexity and other overhead associated with that.
If I leave the sockets I open while serving one request deliberately open (I don't call socket_close() or socket_shutdown()) and connect a socket with the exact same parameters to the same IP while serving the next request, will Linux re-use the previously opened socket / file descriptor?
No, but you could always keep using the original one, if you are in the same process. What you are talking about is really connection pooling.
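For illustration, a minimal pooling sketch with the sockets extension (the IP and port are placeholders, and the single-socket-per-destination key is a simplification; extend the key if you need several sockets per IP). This only helps for as long as the same process keeps running:
function pooled_socket($ip, $port) {
    static $pool = array();
    $key = "$ip:$port";
    if (!isset($pool[$key])) {
        $s = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
        socket_set_option($s, SOL_SOCKET, SO_KEEPALIVE, 1);
        socket_connect($s, $ip, $port);
        $pool[$key] = $s; // keep it open; later calls skip the handshake
    }
    return $pool[$key];
}

$sock = pooled_socket('173.194.35.33', 80); // connects once
$same = pooled_socket('173.194.35.33', 80); // reuses the connected socket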
While the answer given by Jirka Hanika is correct for most systems, I have come to the conclusion that it regrettably does not apply to PHP: re-using sockets through the PHP sockets extension is impossible to achieve from user space.
The code that led to this conclusion is:
function statDescriptors() {
    foreach (glob('/proc/self/fd/*') as $sFile) {
        $r = realpath($sFile);
        // realpath() fails for non-file descriptors such as sockets,
        // so this skips local files and stats everything else
        if ($r === false) {
            echo `stat {$sFile}`, "\n";
        }
    }
}
header('Content-Type: text/plain');
statDescriptors();
$oSocket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($oSocket, SOL_SOCKET, SO_KEEPALIVE, 1);
socket_set_option($oSocket, SOL_SOCKET, SO_REUSEADDR, 1);
socket_connect($oSocket, '173.194.35.33', 80); // Google IP
socket_write($oSocket, 'GET / HTTP/1.0' . "\r\n");
socket_write($oSocket, 'Connection: Keep-Alive' . "\r\n\r\n");
socket_read($oSocket, 1024 * 8);
// socket_close($oSocket); // toggle this line comment during test
echo 'PID is: ', getmypid(), "\n";
statDescriptors();
This code stat()s the current process' open socket file descriptors at the start and end of its execution. In between, it opens a socket with SO_KEEPALIVE set, writes a request to it, and reads a response. Then it optionally closes the socket (toggle the line comment) and echoes the current process' PID (to make sure you're in the same process).
You will see that, regardless of whether you close the socket or not, the file descriptor created while serving the previous request no longer exists at the beginning of this cycle's execution, and a completely new socket is created and connected.
I was unable to test SOCK_CLOEXEC since it's not (yet?) implemented in the extension.
(This was tested using PHP 5.4.0)
Almost all examples of TELNET implementations in PHP use sockets (fsockopen). This does not work for me, because it takes an unacceptable amount of time (~60 seconds).
I have tried fsockopen for other purposes and found it slow in contrast to cURL.
Question #1: Why are sockets that slow?
Update: I found that we need to call stream_set_timeout, which lets us control how long the socket waits. I'm curious how to set a proper timeout, or how to make it "stop waiting" once the response has been received.
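For illustration, a sketch of what I mean ($host and $cmd are placeholders, and the 0.5-second per-read timeout is a guess):
$fp = fsockopen($host, 23, $errno, $errstr, 5); // 5 s connect timeout
stream_set_timeout($fp, 0, 500000);             // 0.5 s per read
fwrite($fp, $cmd . "\r\n");

$out = '';
while (!feof($fp)) {
    $chunk = fgets($fp, 1024);
    $meta = stream_get_meta_data($fp);
    if ($chunk === false || $meta['timed_out']) {
        break; // nothing arrived within 0.5 s: assume the reply is complete
    }
    $out .= $chunk;
}
fclose($fp);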
I can't get the same thing implemented with cURL. Where should I put the commands I need to send to the telnet server? Is CURLOPT_CUSTOMREQUEST the proper option? I'm doing something like this:
class TELNETcURL {
    public $errno;
    public $errstr;
    private $curl_handle;
    private $curl_options = array(
        CURLOPT_URL => "telnet://XXX.XXX.XXX.XXX:<port>",
        CURLOPT_TIMEOUT => 40,
        CURLOPT_RETURNTRANSFER => TRUE,
        CURLOPT_HEADER => FALSE,
        CURLOPT_PROTOCOLS => CURLPROTO_TELNET
    );

    function __construct() {
        $this->curl_handle = curl_init();
        curl_setopt_array($this->curl_handle, $this->curl_options);
    }

    public function exec_cmd($query) {
        curl_setopt($this->curl_handle, CURLOPT_CUSTOMREQUEST, $query . "\r\n");
        $output = curl_exec($this->curl_handle);
        return $output;
    }

    function __destruct() {
        curl_close($this->curl_handle);
    }
}
And then something similar to this:
$telnet = new TELNETcURL();
print_r($telnet->exec_cmd("<TELNET commands go here>"));
I am getting "Max execution time exceeded 30 seconds" on curl_exec command.
Question #2: What is wrong with the cURL implementation?
What you need to do is use non-blocking I/O and then poll for the response. What you are doing now is waiting for a response that never comes, hence the timeout.
Personally, I've written a lot of socket apps in PHP and they work great. I detest cURL as buggy, cumbersome, and highly insecure; just read its bug list and you should be appalled.
Go read the excellent PHP manual, complete with many examples of how to do polled I/O; it even gives you an example telnet server and client.
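A minimal sketch of polled, non-blocking I/O with the stream functions (host, port, command, and the five-second budget are placeholders):
$fp = fsockopen('XXX.XXX.XXX.XXX', 23, $errno, $errstr, 5);
stream_set_blocking($fp, false);
fwrite($fp, "<TELNET command>\r\n");

$response = '';
$deadline = microtime(true) + 5; // overall polling budget
while (microtime(true) < $deadline) {
    $read = array($fp);
    $write = $except = null;
    // Wait at most 200 ms for the socket to become readable, then loop.
    if (stream_select($read, $write, $except, 0, 200000) > 0) {
        $chunk = fread($fp, 8192);
        if ($chunk === '' || $chunk === false) {
            break; // remote side closed the connection
        }
        $response .= $chunk;
    }
}
fclose($fp);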
Sockets are not slow. Sockets are the basis of communication; cURL itself uses sockets to open a connection to the remote server. Everything works on sockets (I think).
I don't think you can use cURL to drive a telnet service. Well, that's not entirely true; I guess you can connect and send a single command. cURL was designed with the HTTP protocol in mind, which is stateless (you open a connection, send a request, wait for a reply and then close the connection).
Sockets are the only option.
I am getting "Max execution time exceeded 30 seconds" on curl_exec command.
My guess is that the remote server is the culprit. Check whether it works using a regular terminal client, or increase max_execution_time in php.ini.
UPDATE
It seems it is possible to use cURL for telnet; check this:
http://www.cs.sunysb.edu/documentation/curl/
But I still think you are better off using sockets.
Use pfsockopen instead of fsockopen; it's much faster and keeps the connection alive the whole time.
My flush mechanism stopped working, and I'm not sure why.
I'm trying to run a simple flush example now, with no luck:
echo "before sleep";
flush();
sleep(5);
echo "after sleep";
After doing some reading, and learning that nginx had been installed on my server recently, I asked for it to be disabled for my domain (the server admin said he disabled it for this specific domain).
I also tried disabling gzip by adding these lines to .htaccess:
SetOutputFilter DEFLATE
SetEnv no-gzip dont-vary
I also tried adding these to my PHP file:
ini_set('output_buffering','on');
ini_set('zlib.output_compression', 0);
Nothing helps. It sleeps 5 seconds and then displays all the content at once.
I've used flushing before, including through the output buffer (ob_start, ob_flush, etc.); now I'm just trying to make the simplest example work.
"Stopped working" is a pretty high level. You should actually take a look what works or not to find out more.
This can be done by monitoring the network traffic. You will see how much of the response is already done and in which encoding it's send.
If the response is getting compressed, most compression functions need a certain number of bytes before they can compress them. So even you do a flush() to signal PHP to flush the output buffer, there still can be a place either within PHP output filtering or the server waiting for more to do the compression. So next to compression done by apache, check if your PHP configuration does compression as well and disable it.
If you don't want to monitor your network traffic, the curl command-line utility is doing a pretty well job to display what's going on as well and it might be easier to use it instead of network monitoring.
curl -Ni --raw URL
Make sure you use the -N switch, which disables buffering by curl, so you see your script's/server's output directly.
Please see the section Inspecting HTTP Compression Problems with Curl in a previous answer of mine; it shows some curl commands for looking into the output of a request made with compression.
curl can show you possibly compressed data in uncompressed form, and you can disable compression per request, so regardless of the server's or PHP's output-compression settings, you can test in a more differentiated way.
<?php
ini_set('zlib.output_handler', '');
ini_set('zlib.output_compression', 0);
ini_set('output_handler', '');
ini_set('output_buffering', false);
ini_set('implicit_flush', true);
apache_setenv('no-gzip', '1');

for ($i = 0; $i < 5; $i++) {
    // pad the output with null bytes so Apache considers the packet
    // big enough to be sent out immediately
    echo str_repeat(chr(0), 4096);
    echo "$i<br/>";
    flush();
    sleep(1);
}
?>
I'm trying to make a PHP script; I have the script finished, but it takes about 10 minutes to finish the process it is designed to do. This is not a problem; however, I presume I have to keep the page loaded all this time, which is annoying. Can I have it so that I start the process and then come back 10 minutes later and just view the log file it has generated?
Well, you can use "ignore_user_abort(true)"
So the script will continue to work (keep an eye on script duration, perhaps add "set_time_limit(0)")
But a warning here: You will not be able to stop a script with these two lines:
ignore_user_abort(true);
set_time_limit(0);
The only way out is to access the server directly and kill the process there! (Been there, done an endless loop that called itself over and over, brought the server to a screeching stop, got shouted at...)
Sounds like you should have a queue and an external script for processing the queue.
For example, your PHP script should put an entry into a database table and return right away. Then, a cron running every minute checks the queue and forks a process for each job.
The advantage here is that you don't tie up an Apache thread for 10 minutes.
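A minimal sketch of that setup (the table, its columns, the DSN, $jobData, and the process() helper are all assumptions):
// In the web request: enqueue and return right away.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->prepare('INSERT INTO job_queue (payload, status) VALUES (?, ?)')
    ->execute(array(json_encode($jobData), 'pending'));
echo 'Job queued.';

// worker.php, run from cron (e.g. * * * * * php /path/to/worker.php):
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$jobs = $pdo->query("SELECT id, payload FROM job_queue WHERE status = 'pending'");
foreach ($jobs as $job) {
    $pdo->prepare("UPDATE job_queue SET status = 'running' WHERE id = ?")
        ->execute(array($job['id']));
    process($job['payload']); // hypothetical stand-in for the 10-minute task
    $pdo->prepare("UPDATE job_queue SET status = 'done' WHERE id = ?")
        ->execute(array($job['id']));
}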
I had lots of issues with this sort of process under Windows; my situation was a little different in that I didn't care about the response of the "script": I wanted the script to start and allow other page requests to go through while it was busy working away.
For some reason, I had issues with it either hanging other requests or timing out after about 60 seconds (both Apache and PHP were set to time out after about 20 minutes). It also turns out that Firefox times out after 5 minutes (by default) anyway, so past that point you can't tell what's going on through the browser without changing Firefox's settings.
I ended up using popen and pclose to launch PHP in CLI mode, like so:
pclose(popen("start php myscript.php", "r"));
This uses start to launch the PHP process and then kills the start process, leaving PHP running for however long it needs; you'd have to kill that process to shut it down manually. It doesn't require setting any timeouts, and the page that called it can continue and output some more details.
The only issue is that if you need to send the script any data, you either do it via another source or pass it along the command line as parameters, which is less secure.
Worked nicely for what we needed though and ensures the script always starts and is allowed to run without any interruptions.
I think the shell_exec command is what you are looking for.
However, it is disabled in safe mode.
The PHP manual article about it is here: http://php.net/shell_exec
There is an article about it here: http://nsaunders.wordpress.com/2007/01/12/running-a-background-process-in-php/
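A hedged sketch of the idea (paths are placeholders). Redirecting the output and appending & are what let shell_exec return immediately instead of waiting for the command to finish:
// Launch the long script in the background; its output lands in a log
// file that can be inspected ten minutes later.
shell_exec('php /path/to/longtask.php >> /tmp/longtask.log 2>&1 &');
echo 'Started; check /tmp/longtask.log later.';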
There is another option: run the script via the CLI. It will run in the background, and you can even run it as a cronjob if you want.
e.g.:
#!/usr/bin/php -q
<?php
//process logs
?>
This can be set up as a cronjob and will execute with no time limitation. This example is for Unix-based operating systems, though.
FYI, I have a PHP script with an infinite loop that does some processing; it has been running non-stop for the past 3 months.
You could use ignore_user_abort(); that way the script will continue to run even if you close your browser or go to a different page.
Think about Gearman
Gearman is a generic application framework for farming out work to multiple machines or processes. It allows applications to complete tasks in parallel, to load balance processing, and to call functions between languages. The framework can be used in a variety of applications, from high-availability web sites to the transport of database replication events.

This extension provides classes for writing Gearman clients and workers.

Source: PHP manual
Official website of Gearman
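A minimal sketch using the pecl gearman extension ('resize' is an assumed task name and the payload is made up for illustration):
// Client side (the web request): returns immediately.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('resize', json_encode(array('id' => 42)));

// Worker side (a long-running CLI process):
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('resize', function (GearmanJob $job) {
    $params = json_decode($job->workload(), true);
    // ... the actual slow processing would go here ...
});
while ($worker->work());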
In addition to bastiandoeen's answer, you can combine ignore_user_abort(true); with a cURL request.
Fake a request abort by setting a low CURLOPT_TIMEOUT_MS, and keep processing after the connection is closed:
function async_curl($background_process = '') {
    //-------------get curl contents----------------
    $ch = curl_init($background_process);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_NOSIGNAL => 1,    // to time out immediately if the value is < 1000 ms
        CURLOPT_TIMEOUT_MS => 50, // the maximum number of milliseconds to allow cURL functions to execute
        CURLOPT_VERBOSE => 1,
        CURLOPT_HEADER => 1
    ));
    $out = curl_exec($ch);
    //-------------parse curl contents----------------
    //$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
    //$header = substr($out, 0, $header_size);
    //$body = substr($out, $header_size);
    curl_close($ch);
    return true;
}

async_curl('http://example.com/background_process_1.php');
async_curl('http://example.com/background_process_1.php');
NB: If you want cURL to time out in less than one second, you can use CURLOPT_TIMEOUT_MS, although there is a bug/"feature" on "Unix-like systems" that causes libcurl to time out immediately if the value is < 1000 ms, with the error "cURL Error (28): Timeout was reached". The explanation for this behavior is:

[...]

The solution is to disable signals using CURLOPT_NOSIGNAL.
pros
No need to switch methods (compatible with Windows and Linux)
No need to implement connection handling via headers and buffers (independent of browser and PHP version)
cons
Requires the curl extension
Resources
curl timeout less than 1000ms always fails?
http://www.php.net/manual/en/function.curl-setopt.php#104597
http://php.net/manual/en/features.connection-handling.php
I'm pretty sure this will work:
<?php
pclose(popen('php /path/to/file/server.php &', 'r'));
echo "Server started. [OK]";
?>
The '&' is important. It tells the shell not to wait for the process to exit.
Also, you can use this code in your PHP (as "bastiandoeen" said):
ignore_user_abort(true);
set_time_limit(0);
In your server stop command:
<?php
$output = array();
exec('ps aux | grep -ie /path/to/file/server.php | awk \'{print $2}\' | xargs kill -9', $output);
echo "Server stopped. [OK]";
?>
Just call StartBuffer() before any output, and EndBuffer() when you want the client to close the connection. The code after the EndBuffer() call will be executed on the server without a client connection.
private function StartBuffer(){
    @ini_set('zlib.output_compression', 0);
    @ini_set('implicit_flush', 1);
    @ob_end_clean();
    @set_time_limit(0);
    @ob_implicit_flush(1);
    @ob_start();
}

private function EndBuffer(){
    $size = ob_get_length();
    header("Content-Length: $size");
    header('Connection: close');
    ob_flush();
    ob_implicit_flush(1);
    flush(); // push the buffered response through the SAPI to the client
}