Proper and fast way to TELNET in PHP: sockets or cURL?

Almost all examples of TELNET implementations in PHP use sockets (fsockopen). This does not work for me because it takes an unacceptable amount of time (~60 seconds).
I have tried fsockopen for other purposes and found it slow in contrast to cURL.
Question #1: Why are sockets that slow?
Update: I found that we need to call the stream_set_timeout function, which lets us control the socket's execution time. I'm still curious how to set a proper timeout, or how to make it "stop waiting" as soon as the response has been received.
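Here is roughly the shape I have in mind (a minimal sketch; the host, port, command, and "$" prompt string are placeholders for my actual device):

$fp = fsockopen('192.0.2.1', 23, $errno, $errstr, 5); // 5-second connect timeout
if (!$fp) {
    die("Connect failed: $errstr ($errno)");
}
stream_set_timeout($fp, 2);                 // give up on any single read after 2 seconds
fwrite($fp, "show version\r\n");

$output = '';
while (($line = fgets($fp)) !== false) {
    $output .= $line;
    if (strpos($line, '$') !== false) {     // prompt seen: response is complete, stop waiting
        break;
    }
    $meta = stream_get_meta_data($fp);
    if ($meta['timed_out']) {               // nothing more arrived within 2 seconds
        break;
    }
}
fclose($fp);
echo $output;

Reading until the prompt (or until the per-read timeout trips) is what makes it "stop waiting" as soon as the response is in, instead of sitting out the full socket timeout.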
I can't get the same thing working with cURL. Where should I put the commands that I need to send over telnet? Is CURLOPT_CUSTOMREQUEST the proper option? I'm doing something like this:
class TELNETcURL {
    public $errno;
    public $errstr;
    private $curl_handle;
    private $curl_options = array(
        CURLOPT_URL => "telnet://XXX.XXX.XXX.XXX:<port>",
        CURLOPT_TIMEOUT => 40,
        CURLOPT_RETURNTRANSFER => TRUE,
        CURLOPT_HEADER => FALSE,
        CURLOPT_PROTOCOLS => CURLPROTO_TELNET
    );

    function __construct() {
        $this->curl_handle = curl_init();
        curl_setopt_array($this->curl_handle, $this->curl_options);
    }

    public function exec_cmd($query) {
        curl_setopt($this->curl_handle, CURLOPT_CUSTOMREQUEST, $query."\r\n");
        $output = curl_exec($this->curl_handle);
        return $output;
    }

    function __destruct() {
        curl_close($this->curl_handle);
    }
}
And then something similar to this:
$telnet = new TELNETcURL();
print_r($telnet->exec_cmd("<TELNET commands go here>"));
I am getting "Max execution time exceeded 30 seconds" on the curl_exec call.
Question #2: What is wrong with the cURL implementation?

What you need to do is use non-blocking IO and then poll for the response. What you are doing now is waiting/hanging for a response that never comes, hence the timeout.
Personally, I've written a lot of socket apps in PHP and they work great. I detest cURL as buggy, cumbersome, and highly insecure; just read their bug list and you should be appalled.
Go read the excellent PHP manual, complete with many examples of how to do polled IO; it even gives you an example telnet server and client.
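For example, a bare-bones polled telnet exchange looks something like this (a sketch only; the host, port, command, and 2-second quiet window are placeholders you'd tune):

$fp = stream_socket_client('tcp://192.0.2.1:23', $errno, $errstr, 5);
if (!$fp) {
    die("Connect failed: $errstr ($errno)");
}
stream_set_blocking($fp, false);            // reads will never hang
fwrite($fp, "show version\r\n");

$output = '';
while (true) {
    $read = array($fp);                     // stream_select() modifies these arrays,
    $write = $except = null;                // so rebuild them on every pass
    $ready = stream_select($read, $write, $except, 2); // sleep until data or 2 s pass
    if ($ready === false || $ready === 0) {
        break;                              // error, or 2 s of silence: assume done
    }
    $chunk = fread($fp, 8192);              // will not block after select() says readable
    if ($chunk === '' || $chunk === false) {
        break;                              // remote end closed the connection
    }
    $output .= $chunk;
}
fclose($fp);
echo $output;

The point is that stream_select() returns the moment data arrives, so you never sit out a full fixed timeout the way a blocking read does.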

Sockets are not slow; sockets are the basis of network communication. cURL itself uses sockets to open a connection to the remote server. Everything works on sockets (I think).
I don't think you can use cURL to drive a telnet service. Well, that's not entirely true; I guess you can connect and send a single command. cURL was designed with the HTTP protocol in mind, which is stateless (you open a connection, send a request, wait for a reply, and then close the connection).
Sockets are the only option.
I am getting "Max execution time exceeded 30 seconds" on the curl_exec call.
My guess is that the remote server is the culprit. Check whether it works using a regular terminal client, or increase max_execution_time in php.ini.
UPDATE
It seems it is possible to use cURL for telnet; check this:
http://www.cs.sunysb.edu/documentation/curl/
But I still think you are better off using sockets.

Use pfsockopen instead of fsockopen; it's much faster and keeps the connection alive all the way.
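For illustration, the persistent variant looks the same as the fsockopen version; only the open call changes (host, port, and command are placeholders):

$fp = pfsockopen('192.0.2.1', 23, $errno, $errstr, 5);
if (!$fp) {
    die("Connect failed: $errstr ($errno)");
}
stream_set_timeout($fp, 2);
fwrite($fp, "show version\r\n");
echo fread($fp, 8192);
// No fclose(): the whole point is that the connection stays open,
// so the next request skips the connect overhead.

Note that persistent connections live at the process level, so this pays off mainly under PHP-FPM or mod_php, where processes are reused between requests.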

Related

PHP SOAP awfully slow: Response only after reaching fastcgi_read_timeout

The constructor of the SoapClient in my WSDL web service is awfully slow; I get the response only after the fastcgi_read_timeout value in my Nginx config is reached. It seems as if the remote server is not closing the connection. I also need to set it to a minimum of 15 seconds, otherwise I get no response at all.
I already read similar posts here on SO, especially this one,
PHP SoapClient constructor very slow, and its linked threads, but I still cannot find the actual cause of the problem.
This is the part which takes 15+ seconds:
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl");
It seems it is only slow when called from my PHP script, because the file opens instantly when accessed from any of the following:
wget from my server which is running the script
SoapUI or Postman (but I don't know if they had cached it before)
opening the URL in a browser
Ports 80 and 443 in the firewall are open. Following the suggestion from another thread, I found two workarounds:
Loading the wsdl from a local file => fast
Enabling the wsdl cache and using the remote URL => fast
But I'd still like to know why it doesn't work with the original URL.
It seems as if the web service does not close the connection; in other words, I get the response only after reaching the timeout set in my server config. I tried setting keepalive_timeout 15; in my Nginx config, but it does not work.
Is there any SOAP/PHP parameter which forces the server to close the connection?
I was able to reproduce the problem, and found a solution to the issue (it works, though maybe it's not the best) in the accepted answer of a question linked from the one you referenced:
PHP: SoapClient constructor is very slow (takes 3 minutes)
As per the answer, you can adjust the HTTP headers using the stream_context option.
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl", array(
    'stream_context' => stream_context_create(
        array('http' =>
            array(
                'protocol_version' => '1.0',
                'header' => 'Connection: Close'
            )
        )
    )
));
More information on the stream_context option is available at http://php.net/manual/en/soapclient.soapclient.php
I tested this using PHP 5.6.11-1ubuntu3.1 (cli)

Do Nonblocking PHP streams need a stream_select

I am writing a PHP application using the TCP stream wrapper. When using streams, you can switch the connection to non-blocking mode:
stream_set_blocking($stream, 0);
stream_set_timeout($stream, 0);
I also noticed that there is a stream_select, which takes arrays of streams by reference:
$read = array($stream);
$write = $except = null;
stream_select($read, $write, $except, 0);
Following normal Linux select semantics, I would not bother with the calls above and would just use select with a zero timeout to make sure it's non-blocking. I noticed the application "seems to work", but I'm not sure whether that's because it's tested locally (so the connection is super fast), or whether, since the stream is already set to non-blocking, I am wasting cycles by calling select as well.
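To make the comparison concrete, the loop I'm weighing looks roughly like this (a sketch; the host and the 1-second wait are placeholders):

$stream = stream_socket_client('tcp://192.0.2.1:80', $errno, $errstr, 5);
stream_set_blocking($stream, false);

$data = '';
while (true) {
    $read = array($stream);                 // select() rewrites these arrays,
    $write = $except = null;                // so reset them each iteration
    // Timeout 0 makes select() a pure poll (returns at once, can spin and
    // waste cycles); a finite or null timeout lets it sleep until data is ready.
    $ready = stream_select($read, $write, $except, 1);
    if ($ready === false || $ready === 0) {
        break;                              // error, or nothing within 1 second
    }
    $chunk = fread($stream, 8192);          // safe: select() said it's readable
    if ($chunk === '' || $chunk === false) {
        break;                              // connection closed
    }
    $data .= $chunk;
}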

PHP cURL timeout ignored

Using curl_setopt() I have set CURLOPT_CONNECTTIMEOUT_MS to 1000 (1 second), and I have set up another script that sleeps for 5 seconds and then responds 200 OK (using sleep()), which I call for testing purposes. My script always waits for the response, even though it should result in a cURL timeout error.
How do I make the timeout work as expected and interrupt the request?
$ch = curl_init($url);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => TRUE,
    CURLOPT_FOLLOWLOCATION => TRUE,
    CURLOPT_NOBODY => TRUE,
    CURLOPT_PROTOCOLS => CURLPROTO_HTTP | CURLPROTO_HTTPS,
    CURLOPT_CONNECTTIMEOUT_MS => 1000,
    CURLOPT_MAXREDIRS => 5,
    CURLOPT_USERAGENT => 'Linkit/2.x Drupal/7.x',
));
$document = curl_exec($ch);
I have also tried CURLOPT_TIMEOUT_MS and also the variants without the _MS suffixes.
I'm using PHP 5.3.4 with cURL 7.19.7 on OS X 10.6, XAMPP.
CURLOPT_CONNECTTIMEOUT and CURLOPT_CONNECTTIMEOUT_MS define the maximum amount of time cURL may take to connect to the server, but in your case the connection succeeds, so that timeout no longer applies.
You need to use CURLOPT_TIMEOUT or CURLOPT_TIMEOUT_MS, which define the maximum amount of time cURL may execute for.
For a complete list of options supported by PHP, look at the curl_setopt documentation.
The curl library hands the request to the system and operates independently of PHP (side note: that's why it is possible to run transfers in parallel with cURL even though PHP itself doesn't support threading). So if you make the cURL call and the other script sleep()s, cURL keeps running.
Also, the timeout setting is for how long to wait for the request to time out, not for your script. For instance, if I make a cURL request to google.com and google.com takes forever to respond, the timeout setting lets me tell cURL how long to sit around and wait for google.com to respond.
edit:
Okay, so you have a cURL script that makes a request to another script, and that script has the sleep() in it. The CURLOPT_CONNECTTIMEOUT (or _MS) setting tells cURL how long to wait for a connection to the requested server to be made. In your case a connection is made, and the sleep() is then just delaying the output. So basically, "wait for a connection" is not the same as "how long to let the cURL execution run".
What you want to use is CURLOPT_TIMEOUT or CURLOPT_TIMEOUT_MS
Well, I had the same problem and wasted so much time looking for the solution, but found a working one in the end.
I thought I should share it here, as it might be helpful for someone in the future.
I simply used both options, with 4 seconds and 8 seconds respectively:
curl_setopt($curl_session, CURLOPT_CONNECTTIMEOUT, 4);
curl_setopt($curl_session, CURLOPT_TIMEOUT, 8);

Check if host computer is online with PHP?

I've been having some issues with my Internet connection and I was wondering what is the fastest, error-less and most reliable way to check if the host computer is connected to the Internet.
I'm looking for something like is_online() that returns true when online and false when not.
I've benchmarked some solutions (file_get_contents with a HEAD request, gethostbynamel, checkdnsrr), and the following one seems to be more than 100 times faster than all the others:
function is_online()
{
    return checkdnsrr('google.com', 'ANY')
        && checkdnsrr('yahoo.com', 'ANY')
        && checkdnsrr('microsoft.com', 'ANY');
}
It takes about one microsecond per host, while file_get_contents, for instance, takes more than one second per host (when offline).
You could send a ping to a host that is probably up (e.g. Google).
There seems to be no PHP built-in for this, so you'd have to resort to shell commands. The return value of ping on *nix tells you whether a reply was received.
Update: ping -c1 -q -w1 <host> should be the right command on Linux. It gives exit code 0 if a reply was received and a non-zero code otherwise, and it times out after one second.
Hence, something like this (warning, my PHP is rusty) should do the trick:
function is_online() {
    $retval = 0;
    // Exit code 0 means a reply was received within the 1-second window.
    system("ping -c1 -q -w1 google.com", $retval);
    return $retval == 0;
}
Why don't you make a number of HTTP GET (or, better still, HTTP HEAD for speed) requests to popular web sites? Use majority voting to decide on the answer.
You can sometimes rely on ping too (through a system call in PHP), but note that not all web sites respond to ICMP (ping) requests.
Note that increasing the number of ping/HTTP requests you make before drawing a conclusion improves the confidence of the answer, but it can't be error-free in the worst of cases.
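A rough sketch of that idea (the three hosts, the 1-second timeout, and the 2-of-3 majority are arbitrary choices):

function is_online() {
    $hosts = array('www.google.com', 'www.bing.com', 'www.yahoo.com');
    $up = 0;
    foreach ($hosts as $host) {
        $fp = @fsockopen($host, 80, $errno, $errstr, 1); // 1-second connect timeout
        if ($fp) {
            fwrite($fp, "HEAD / HTTP/1.1\r\nHost: $host\r\nConnection: close\r\n\r\n");
            if (fgets($fp) !== false) {      // any HTTP status line counts as "up"
                $up++;
            }
            fclose($fp);
        }
    }
    return $up >= 2;                         // majority vote: at least 2 of 3
}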
Don't forget this assumes that your server will respond to ICMP requests. If that's the case, then I agree: Net_Ping is probably the way to go. Failing that, you could use the Net_Socket package, also on PEAR, to attempt a connection to some port that you know will get a response, perhaps port 7 or port 80 depending on what services you have running.

PHP Background Processes

I'm trying to make a PHP script. I have the script finished, but it takes about 10 minutes to finish the process it is designed to do. This is not a problem; however, I presume I have to keep the page loaded all this time, which is annoying. Can I have it so that I start the process and then come back 10 minutes later and just view the log file it has generated?
Well, you can use ignore_user_abort(true), so the script will continue to work (keep an eye on the script duration; perhaps add set_time_limit(0)).
But a warning here: You will not be able to stop a script with these two lines:
ignore_user_abort(true);
set_time_limit(0);
Except by directly accessing the server and killing the process there! (Been there, done an endless loop calling itself over and over again, made the server come to a screeching stop, got shouted at...)
Sounds like you should have a queue and an external script for processing the queue.
For example, your PHP script should put an entry into a database table and return right away. Then, a cron running every minute checks the queue and forks a process for each job.
The advantage here is that you don't tie up an Apache thread for 10 minutes.
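A minimal sketch of that split (the jobs table, its columns, and the PDO credentials are made up for illustration):

// In the web request: enqueue and return immediately.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->prepare("INSERT INTO jobs (payload, status) VALUES (?, 'pending')")
    ->execute(array(json_encode(array('task' => 'import'))));
echo "Job queued.";

// worker.php, run by cron every minute (e.g. * * * * * php /path/to/worker.php):
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
foreach ($pdo->query("SELECT id, payload FROM jobs WHERE status = 'pending'") as $job) {
    $pdo->prepare("UPDATE jobs SET status = 'running' WHERE id = ?")->execute(array($job['id']));
    // ... the actual 10-minute processing goes here ...
    $pdo->prepare("UPDATE jobs SET status = 'done' WHERE id = ?")->execute(array($job['id']));
}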
I had lots of issues with this sort of process under Windows. My situation was a little different in that I didn't care about the response of the "script": I wanted the script to start and allow other page requests to go through while it was busy working away.
For some reason, it either hung other requests or timed out after about 60 seconds (both Apache and PHP were set to time out after about 20 minutes). It also turns out that Firefox times out after 5 minutes by default anyway, so past that point you can't tell what's going on through the browser without changing Firefox's settings.
I ended up using the process open and process close functions to launch PHP in CLI mode, like so:
pclose(popen("start php myscript.php", "r"));
This (using start) opens the PHP process and then kills the start process, leaving PHP running for however long it needs; again, you'd need to kill the process to shut it down manually. It doesn't require setting any timeouts, and the page that called it can continue and output some more details.
The only issue with this is that if you need to send the script any data, you'd either do it via another source or pass it along the command line as parameters, which isn't so secure.
Worked nicely for what we needed though and ensures the script always starts and is allowed to run without any interruptions.
I think the shell_exec command is what you are looking for.
However, it is disabled in safe mode.
The PHP manual article about it is here: http://php.net/shell_exec
There is an article about it here: http://nsaunders.wordpress.com/2007/01/12/running-a-background-process-in-php/
There is another option you can use: run the script from the CLI. It will run in the background, and you can even run it as a cronjob if you want. e.g.
#!/usr/bin/php -q
<?php
// process logs
?>
This can be set up as a cronjob and will execute with no time limitation. This example is for Unix-based operating systems, though.
FYI: I have a PHP script with an infinite loop which does some processing, and it has been running non-stop for the past 3 months.
You could use ignore_user_abort() - that way the script will continue to run even if you close your browser or go to a different page.
Think about Gearman
Gearman is a generic application framework for farming out work to
multiple machines or processes. It allows applications to complete
tasks in parallel, to load balance processing, and to call functions
between languages. The framework can be used in a variety of
applications, from high-availability web sites to the transport of
database replication events.
This extension provides classes for writing Gearman clients and
workers.
- Source: PHP manual
Official website of Gearman
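For a feel of the API, the minimal client/worker pair with the pecl gearman extension looks like this (the server address and the 'process_logs' function name are placeholders):

// Client side (your web request): hand the job off and return at once.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);       // default gearmand port
$client->doBackground('process_logs', json_encode(array('source' => 'upload')));

// Worker side: a separate long-running CLI process.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('process_logs', function (GearmanJob $job) {
    $params = json_decode($job->workload(), true);
    // ... the 10-minute processing goes here ...
});
while ($worker->work());                     // blocks, handling jobs forever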
In addition to bastiandoeen's answer, you can combine ignore_user_abort(true); with a cURL request.
Fake a request abort by setting a low CURLOPT_TIMEOUT_MS and keep processing after the connection closes:
function async_curl($background_process = ''){
    //-------------get curl contents----------------
    $ch = curl_init($background_process);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_NOSIGNAL => 1,      // to timeout immediately if the value is < 1000 ms
        CURLOPT_TIMEOUT_MS => 50,   // the maximum number of milliseconds to allow cURL functions to execute
        CURLOPT_VERBOSE => 1,
        CURLOPT_HEADER => 1
    ));
    $out = curl_exec($ch);

    //-------------parse curl contents----------------
    //$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
    //$header = substr($out, 0, $header_size);
    //$body = substr($out, $header_size);
    curl_close($ch);
    return true;
}
async_curl('http://example.com/background_process_1.php');
NB: If you want cURL to timeout in less than one second, you can use CURLOPT_TIMEOUT_MS, although there is a bug/"feature" on "Unix-like systems" that causes libcurl to timeout immediately if the value is < 1000 ms, with the error "cURL Error (28): Timeout was reached". The explanation for this behavior is:
[...]
The solution is to disable signals using CURLOPT_NOSIGNAL.
Pros:
No need to switch methods (compatible with Windows & Linux)
No need to implement connection handling via headers and buffer (independent of browser and PHP version)
Cons:
Needs the curl extension
Resources
curl timeout less than 1000ms always fails?
http://www.php.net/manual/en/function.curl-setopt.php#104597
http://php.net/manual/en/features.connection-handling.php
I'm pretty sure this will work:
<?php
pclose(popen('php /path/to/file/server.php &'));
echo "Server started. [OK]";
?>
The '&' is important. It tells the shell not to wait for the process to exit.
Also, you can use this code in your PHP (as "bastiandoeen" said):
ignore_user_abort(true);
set_time_limit(0);
And in your server stop command:
<?php
$output = array();
exec('ps aux | grep -ie /path/to/file/server.php | awk \'{print $2}\' | xargs kill -9 ', $output);
echo "Server stopped. [OK]";
?>
Just call StartBuffer() before any output, and EndBuffer() when you want the client to close the connection. The code after the EndBuffer() call will be executed on the server without a client connection.
private function StartBuffer(){
    ini_set('zlib.output_compression', 0); // compression would delay the flush
    ini_set('implicit_flush', 1);
    ob_end_clean();                        // discard any buffer PHP opened itself
    set_time_limit(0);
    ob_implicit_flush(1);
    ob_start();                            // collect everything destined for the client
}

private function EndBuffer(){
    $size = ob_get_length();
    header("Content-Length: $size");       // tells the client when it has the full response
    header('Connection: close');
    ob_flush();
    flush();                               // push the buffer out so the client can disconnect
}
