PHP - Catch timeout exception from ldap_connect()

I've written a little monitoring script in PHP which should monitor a virtual directory and its Active Directory servers. Everything works fine, but when the virtual directory service freezes, my ldap_connect() is not able to connect and also doesn't get an error back, so my whole script stands still. I think ldap_connect() runs into a timeout situation (like when you try to ping an IP that isn't reachable).
That's my connect command:
$connection = ldap_connect($hostname, $port) or die("Could not connect to {$hostname}");
And I haven't found anything in the manual for ldap_connect() about a time limit parameter that would define how long the function should try to connect before it aborts.
However, I wasn't able to come up with a solution using try/catch or anything like that. I also didn't want to use the set_time_limit() function, because my script needs to run through to the end.
I appreciate any help :)
Thanks and greetings
Tim

http://www.php.net/manual/en/function.ldap-set-option.php
in particular the following options:
LDAP_OPT_NETWORK_TIMEOUT
LDAP_OPT_TIMELIMIT
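A minimal sketch of how these could be applied (the 5- and 10-second values are illustrative assumptions). Note that ldap_connect() only initializes the handle; the network timeout actually takes effect on the first network operation, such as ldap_bind():
$connection = ldap_connect($hostname, $port);
// bound the TCP connect / first network operation (seconds)
ldap_set_option($connection, LDAP_OPT_NETWORK_TIMEOUT, 5);
// cap server-side search processing time (seconds)
ldap_set_option($connection, LDAP_OPT_TIMELIMIT, 10);
if (!@ldap_bind($connection)) { // a frozen service now fails here instead of hanging
    die("Could not connect to {$hostname}");
}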

http://www.php.net/manual/en/function.ldap-set-option.php
try setting LDAP_OPT_REFERRALS to 0
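For example (a one-line sketch, assuming $connection comes from ldap_connect() as above):
ldap_set_option($connection, LDAP_OPT_REFERRALS, 0); // don't chase referrals, which can hang against AD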

If you don't want your PHP program to wait XXX seconds before giving up when one of your corporate DCs has failed, and since ldap_connect() has no mechanism to time out after a user-specified interval, this is my workaround, which shows excellent practical results.
function serviceping($host, $port = 389, $timeout = 1)
{
    $op = @fsockopen($host, $port, $errno, $errstr, $timeout);
    if (!$op) {
        return 0; // DC is N/A
    } else {
        fclose($op); // explicitly close open socket connection
        return 1;    // DC is up & running, we can safely connect with ldap_connect
    }
}
// ##### STATIC DC LIST, if your DNS round robin is not set up
//$dclist = array('10.111.222.111', '10.111.222.100', '10.111.222.200');
// ##### DYNAMIC DC LIST, reverse DNS lookup sorted by round-robin result
$dclist = gethostbynamel('domain.name') ?: array(); // empty list if the DNS lookup fails
$dc = 0; // stays 0 unless a reachable DC is found
foreach ($dclist as $k => $dc) {
    if (serviceping($dc) == true) {
        break;
    } else {
        $dc = 0;
    }
}
// after this loop there is either at least one DC which is available at present,
// or $dc is 0 (falsy) and the next line stops the program from further execution
if (!$dc) exit("NO DOMAIN CONTROLLERS AVAILABLE AT PRESENT, PLEASE TRY AGAIN LATER!"); // user being notified
// now ldap_connect will certainly connect successfully to the DC tested previously and no timeout will occur
$ldapconn = ldap_connect($dc) or die("DC N/A, PLEASE TRY AGAIN LATER.");
With this approach you also get really nice failover behaviour. Take for example a company with a dozen DCs distributed across distant sites: your PHP program will keep working as long as at least one DC is currently active.

You'll need to use an API that supports time-outs. Connection time-outs are not supported in a native fashion by LDAP (the protocol). The timelimit is a client-requested parameter that refers to how long the directory will spend processing a search request, and is not the same as a "connect time-out".

Related

Multi-Threaded pinging hundreds of hosts solution for fsockopen in php

I'm running a website which monitors some servers, and more servers are added daily.
Sadly my current solution is very slow (about 10 seconds load time for 31 servers). I'm using fsockopen, which checks the IP and port. Because I parse an XML file with the entries (host and port), I had to create a function and use it in the parser so visitors of the website can see the online or offline status of each server.
My current "checkserver" function looks like this:
function checkServer($ip, $port)
{
    $fsockopen = @fsockopen($ip, $port, $errorNo, $errorStr, 1);
    if (!$fsockopen)
        return false;
    else
        return true;
}
And in the parser the "if" rule for the server status looks like this:
if (checkServer((string)$server->host, (string)$server->port))
{
    echo "SERVER ONLINE!";
}
else
{
    echo "SERVER OFFLINE!";
}
Where $server is every single listed server in the XML <serverlist></serverlist> tag.
I already tried to change the timeout of fsockopen from 1 down to 0.1, but then some servers appear offline and the load time is still at 8-10 seconds.
How could I speed up the load time? Could someone please help me with this? The project is very important to me. Thank you! I really appreciate every helpful answer!
First of all I would suggest caching. I am not sure how many users will open the page, but if you have multiple users per second opening it you will have a lot of traffic to handle, which in the long term could create issues.
Beyond that, you have a few options:
Using asynchronous events can allow you to do what you wish. There are some libraries out there doing this that can help you; I have used none so far, so I can't say which is best.
Using a library that cheats and uses exec and command lines: https://github.com/oliverde8/PHP-AsynchronousJobs
Using the pthreads library; this isn't a library coded in PHP, so it's a binary extension you need to add: http://pthreads.org/. You can add another library on top to make the usage easier.
Finally, using JavaScript: you open your page, then some AJAX calls your PHP individually for each server and asks for its status. A sketch of the plain-PHP asynchronous approach follows below.
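For illustration, here is a minimal sketch of the asynchronous idea in plain PHP, using non-blocking connects and stream_select() to probe many servers in roughly the time of the single slowest one. The host list and the one-second budget are assumptions:
// hypothetical host:port pairs to probe
$servers = ['10.0.0.1:80', '10.0.0.2:443', '10.0.0.3:8080'];
$pending = [];
foreach ($servers as $hostPort) {
    $s = @stream_socket_client("tcp://$hostPort", $errno, $errstr, 1,
        STREAM_CLIENT_CONNECT | STREAM_CLIENT_ASYNC_CONNECT);
    if ($s === false) {
        echo "$hostPort OFFLINE\n"; // failed immediately (e.g. bad address)
    } else {
        $pending[$hostPort] = $s;
    }
}
$deadline = microtime(true) + 1.0; // overall one-second budget for all probes
while ($pending && microtime(true) < $deadline) {
    $read = null;
    $except = null;
    $write = array_values($pending);
    // a pending socket becomes writable once its TCP handshake finishes
    if (stream_select($read, $write, $except, 0, 200000) > 0) {
        foreach ($pending as $hostPort => $s) {
            if (in_array($s, $write, true)) {
                // a refused connect can also select writable; asking for the
                // peer name distinguishes a live connection from a dead one
                $up = stream_socket_get_name($s, true) !== false;
                echo "$hostPort " . ($up ? "ONLINE" : "OFFLINE") . "\n";
                fclose($s);
                unset($pending[$hostPort]);
            }
        }
    }
}
foreach (array_keys($pending) as $hostPort) {
    echo "$hostPort OFFLINE\n"; // never finished connecting within the budget
}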

Detecting peer disconnect (EOF) with PHP socket module

I'm having a weird issue with PHP's sockets library: I do not seem to be able to detect/distinguish server EOF, and my code is helplessly going into an infinite loop as a result.
Further explanation below; first of all, some context (there's nothing particularly fancy going on here):
<?php
$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, '127.0.0.1', 8081);
for (;;) {
    $read = [$socket];
    $except = NULL;
    $write = [];
    print "Select <";
    $n = socket_select($read, $write, $except, NULL);
    print ">\n";
    if (count($read)) {
        print "New data: ";
        #socket_recv($socket, $data, 1024, NULL);
        $data = socket_read($socket, 1024);
        print $data."\n";
    }
    print "Socket status: ".socket_strerror(socket_last_error())."\n";
}
The above code simply connects to a server and prints what it reads. It's a cut-down version of what I have in the small socket library I'm writing.
For testing, I'm currently using ncat -vvklp 8081 to bind a socket and be a server. With that running, I can fire up the code above and it connects and works - eg, I can type in the ncat window, and PHP receives it. (Sending data from PHP is working too, but I've excluded that code as it's not relevant.)
However, the moment I ^C ncat, the code above enters a hard infinite loop - and PHP says there's no error on the socket.
I am trying to figure out where the button is that whacks PHP upside the head and makes it realize that the peer has disconnected.
socket_get_status() is a great misnomer - it's an alias for stream_get_meta_data(), and it doesn't actually work on sockets!
feof() similarly spouts Warning: feof(): supplied resource is not a valid stream resource.
I can't find a socket_* function for detecting peer EOF.
One of the PHP manual notes for socket_read() initially dissuaded me from using that function so I used socket_recv() instead, but I eventually tried it just in case - but no dice; switching the receive call has no effect.
I have discovered that watching the socket for writing and then attempting to write to it will suddenly make PHP go "oh, wait, right" and start returning Broken pipe - but I'm not interested in writing to the server, I want to read from it!
Finally, regarding the commented-out part - I would far prefer to use PHP's built-in stream functionality, but the stream_* functions do not provide any means of handling asynchronous connect events (which I want to do, as I'm making multiple connections). I can do stream_socket_client(... STREAM_CLIENT_ASYNC_CONNECT ...) but then cannot find out when the connection has been established (six-year-old PHP bug #52811).
Okay, I figure I might as well turn the comments above into an answer. All credit goes to Ryan Vincent for helping my thick head figure this out :)
socket_recv will return 0 specifically if the peer has disconnected, or FALSE if any other network error has occurred.
For reference, in C, recv()'s return value is the length of the new data you've just received (which can be 0), or -1 to indicate an error condition (the value of which can be found in errno).
Using 0 to indicate an error condition (and just one arbitrary type of error condition, at that) is not standard and is unique to PHP in all the wrong ways. Other network libraries don't work this way.
You need to handle it like this:
$r = socket_recv($socket, $buf, $len, 0); // the flags argument is required
if ($r === FALSE) {
    // Find out what just happened with socket_last_error()
    // (there's a great list of error codes in the comments at
    // http://php.net/socket_last_error - considering/researching
    // the ramifications of each condition is recommended)
} elseif ($r === 0) {
    // The peer closed the connection. You need to handle this
    // condition and clean up.
} else {
    // You DO have data at this point.
    // While unlikely, it's possible the remote peer has
    // sent you data of 0 length; remember to use strlen($buf).
}

Redis availability check

Before using Redis I want to check its availability: if Redis is not available I'll use MySQL, and if Redis is available I'll use it. How can I do that if I use the Predis client?
My first method was:
/**
 * @return bool
 */
public function check(){
    if (!@fsockopen($server['host'], $server['port'], $errno, $errstr, 3)) {
        Debug::instance()->log('Redis connect error host: ' . $server['host'] . ' port: ' . $server['port']);
        return false;
    }
    return true;
}
But it was a very bad idea, because it ties up free sockets. Now I'm trying to find a better method.
This should do the trick ;)
// Redis configuration
$vm = array(
    'host'    => '127.0.0.1',
    'port'    => 6379,
    'timeout' => 0.8 // seconds allowed for connecting to the Redis server before an exception is thrown
);
$redis = new Predis\Client($vm);
try {
    $redis->ping();
} catch (Exception $e) {
    // LOG that redis is down: $e->getMessage();
}
if (isset($e)) {
    // use MySQL
} else {
    // use Redis
}
Use the Predis client and the PING command:
return true if PONG is received as the response;
return false on a CommunicationException.
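A minimal sketch of that check, assuming Predis is installed via Composer (Predis\CommunicationException is the base class its network errors derive from):
function redisAvailable(Predis\Client $redis)
{
    try {
        // PING answers PONG whenever the server is reachable
        return (string) $redis->ping() === 'PONG';
    } catch (Predis\CommunicationException $e) {
        return false; // connection refused, timed out, or dropped
    }
}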
Redis can handle far more requests than MySQL, which is why it is so often used as a cache for it. On laptop-level hardware I have seen it handle over 1M requests per second - a single instance with no special server-side tuning.
First try to pull the data from Redis. If the connection fails, go to MySQL. If you get a null back, go to MySQL, add the data to Redis (optionally with a timeout, depending on your criteria) and return the data to the client.
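For illustration, a minimal sketch of that fallback flow; the key scheme, users table, PDO handle, and 300-second TTL are assumptions, not anything prescribed above:
function getUser($id, Predis\Client $redis, PDO $pdo)
{
    try {
        $cached = $redis->get("user:$id");
        if ($cached !== null) {
            return json_decode($cached, true); // cache hit
        }
    } catch (Predis\CommunicationException $e) {
        $redis = null; // Redis is down; fall through to MySQL only
    }
    $stmt = $pdo->prepare('SELECT * FROM users WHERE id = ?');
    $stmt->execute([$id]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC) ?: null;
    if ($row !== null && $redis !== null) {
        // repopulate the cache with a TTL so stale entries expire
        $redis->setex("user:$id", 300, json_encode($row));
    }
    return $row;
}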
Please do not do a ping before every command. If you have a valid connection, try to get the data and handle getting nothing back; getting null back means there was no data to pull. Performing a ping before every command is wasteful: you spend a round trip on every call, and the command to get the data is often just about as fast as the ping itself, especially once you add the cost of testing for the PONG. Since you will also be testing whether you got data back anyway, the ping-before-every-command model is not wise.
Open your connection in your long-term process outside the command, or open the connection, query, and then close your socket each time. I prefer the former, but realize that isn't always possible depending on the framework you are using.
And as someone who runs infrastructure with many hundreds of MySQL servers and thousands of DBs: no, you cannot always assume MySQL is there, just as you shouldn't assume Redis will always be there. Crashes happen, networks fail, servers get tripped up or mistakenly bounced.
But you can follow the flow of "is the TCP connection alive" and "then give me data" followed by validating you got the data. As long as you account for failed connections or timed out requests, and handle them, you are fine.
As far as how to actually code using predis, I recommend starting with the docs such as are found at the Predis github page

Options to lessen open database connections?

I have a free application that is heavily used, and I get around 500 to 1000 concurrent users from time to time.
It is a desktop application that communicates with my website API to receive data every 5-15 minutes, as well as send back minimal data (about 3 selects tops) every 15 minutes.
Since users can turn the application on and off as they wish, the timer for each one of them to query my API may vary, and as such I have been hitting the max connection limit available on my hosting plan.
Not wanting to upgrade it for financial reasons, and because it is a non-profitable application for the moment, I am searching for other options to reduce the number of connections and cache some information that can be cached.
The first thing that came to my mind was to use FastCGI with Perl. I have tested it for some time now and it seems to work great, but I have two problems while using it:
1. If for whatever reason the application goes idle for 60 seconds, the server kills it, and for the next few requests it replies with error 500 until the script is respawned, which takes about 3+ minutes (yes, it takes that long; I have tried my code locally on my own test server and it comes up instantly, so I am sure it is a server issue with my hosting company, but they don't seem to want to resolve it).
2. The kill timeout, which is set to 300 seconds, will kill/restart the script after that period, which results in the same respawn problem described in 1.
Given that, I am now looking for alternatives that are not based on FastCGI, if there are any.
Also, due to the limitations of the shared host, I can't make my own daemon, and my ability to compile anything is very limited.
Are there any good options to achieve this with either Perl or PHP?
Mainly, reduce the open database connections to a minimum and still be able to cache some select queries for returning data... The main job of the application is inserting/updating data anyway, so there isn't much to cache.
This was the simple code I was using for testing it:
#!/usr/bin/perl -w
use CGI::Simple; # Can't use CGI as it doesn't clear the data for the
                 # next request; haven't investigated further, but needed
                 # something working to test, and using CGI::Simple was
                 # the fastest solution found.
use DBI;
use strict;
use warnings;
use lib qw( /home/my_user/perl_modules/lib/perl/5.10.1 );
use FCGI;

my $dbh = DBI->connect('DBI:mysql:mydatabase:mymysqlservername',
                       'username', 'password',
                       {RaiseError=>1, AutoCommit=>1}
          ) || die &dbError($DBI::errstr);

my $request = FCGI::Request();

while ($request->Accept() >= 0)
{
    my $query  = new CGI::Simple;
    my $action = $query->param("action");
    my $id     = $query->param("id");
    my $server = $query->param("server");
    my $ip     = $ENV{'REMOTE_ADDR'};
    print $query->header();
    if ($action eq "exp")
    {
        my $sth = $dbh->prepare(qq{
            INSERT INTO
                my_data (id, server) VALUES (?,INET_ATON(?))
            ON DUPLICATE KEY UPDATE
                server = INET_ATON(?)});
        my $result = $sth->execute($id, $server, $server)
                     || die print($dbh->errstr);
        $sth->finish;
        if ($result)
        {
            print "1";
        }
        else
        {
            print "0";
        }
    }
    else
    {
        print "0";
    }
}

$dbh->disconnect || die print($DBI::errstr);
exit(0);

sub dbError
{
    my ($txt_erro) = @_;
    my $query = new CGI::Simple;
    print $query->header();
    print "$txt_erro";
    exit(0);
}
Run a proxy. Perl's DBD::Proxy should fit the bill. The proxy server wouldn't be under your host's control, so its 60-seconds-of-inactivity rule shouldn't apply there.
Alternatively, install a cron job that runs more often than the FastCGI timeout, simply to wget some "make activity" page on your site, and discard the output. Some CRMs do this to force a "check for updates" for example, so it's not completely unusual, though somewhat of an annoyance here.
FWIW, you probably want to look at CGI::Fast instead of CGI::Simple to resolve your CGI.pm not dealing in the expected manner with persistent variables...

PHP fsockopen() painfully slow

I'm using fsockopen() to call a number of connections in a list to see the online status of various ip/host and ports ...
<?php
$socket = @fsockopen($row[2], $row[3], $errnum, $errstr, 1);
if ($errnum >= 1) { $status = 'offline'; } else { $status = 'online'; }
if ($socket) fclose($socket); // only close the socket if it actually opened
It works, I'm not complaining about that, but I have approximately 15 ip/ports that I'm retrieving in a list (a PHP for() loop). I was wondering if there is a better way to do this? This way is VERY slow!?! It is taking about 1-2 minutes for the server to come back with a response for all of them..
Update:
<?php
$socket = @fsockopen("lounge.local", "80", $errnum, $errstr, 30);
if ($errnum >= 1) { $status = 'offline'; } else { $status = 'online'; }
?>
It will display in a list: "ReadyNAS AFP readynas.local:548 online"
I don't know what more I can tell you? It just takes forever to load the collection of results...
From my own experience:
This code:
$sock=fsockopen('www.site.com', 80);
is slower compared to:
$sock=fsockopen(gethostbyname('www.site.com'), 80);
Tested in PHP 5.4. If doing many connections at the same time, one could keep the host resolution result and re-use it to further reduce script execution time, for example:
function myfunc_getIP($host) {
    if (isset($GLOBALS['my_cache'][$host])) {
        return $GLOBALS['my_cache'][$host];
    }
    return $GLOBALS['my_cache'][$host] = gethostbyname($host);
}
$sock = fsockopen(myfunc_getIP('www.site.com'), 80);
If you plan to "ping" some URL, I would advise doing it with curl. Why? You can use curl to send pings in parallel; have a look at http://www.php.net/manual/en/function.curl-multi-init.php. In a previous project that was supposed to feed real-time data to our server, we used to ping hosts to see if they were alive or not, and curl was the only option that helped us.
This is just advice and may not be the right solution for your problem; a sketch follows below.
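For illustration, a minimal curl_multi sketch; the URL list and the 1- and 2-second timeouts are assumptions:
// hypothetical targets; any list of URLs works here
$urls = ['http://10.0.0.1/', 'http://10.0.0.2/'];
$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_NOBODY         => true, // HEAD-style probe, skip the body
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CONNECTTIMEOUT => 1,    // seconds to establish the connection
        CURLOPT_TIMEOUT        => 2,    // overall cap per probe
    ]);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}
do {
    curl_multi_exec($mh, $running); // drive all transfers at once
    if ($running) {
        curl_multi_select($mh, 0.2); // wait for activity instead of spinning
    }
} while ($running);
foreach ($handles as $url => $ch) {
    // a response code of 0 means the connection never completed
    $code = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
    echo $url, $code > 0 ? " online\n" : " offline\n";
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);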
The last parameter to fsockopen() is the timeout; set this to a low value to make the script complete faster, like this:
fsockopen('192.168.1.93', 80, $errNo, $errStr, 0.01)
Have you compared the results of fsockopen(servername) versus fsockopen(ip-address)? If the timeout parameter does not change a thing, the problem may be in your name server. If fsockopen with an IP address is faster, you'll have to fix your name server, or add the domains to the /etc/hosts file.
I would recommend doing this a bit differently.
Put these hosts in a table in a database, something like:
+------+------+--------+-----------+
| host | port | status | timestamp |
+------+------+--------+-----------+
Then move the status-checking part into a cron script that runs once every 5 minutes, or however often you want; a sketch of such a script follows below.
This script will check each host:port and update the status and timestamp of each record, and your page will just do a DB query and show each host, its status, and when it was last checked (something like: 1 minute ago, etc...).
This way your page will load fast.
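A minimal sketch of that cron script; the DSN, credentials, and the servers table name are assumptions, with columns following the layout shown above:
<?php
// cron checker: probe each host:port and record the result
$pdo = new PDO('mysql:host=localhost;dbname=monitor', 'user', 'pass');
$update = $pdo->prepare(
    'UPDATE servers SET status = ?, `timestamp` = NOW() WHERE host = ? AND port = ?'
);
foreach ($pdo->query('SELECT host, port FROM servers') as $row) {
    $sock = @fsockopen($row['host'], (int) $row['port'], $errno, $errstr, 1);
    $status = $sock ? 'online' : 'offline';
    if ($sock) {
        fclose($sock);
    }
    $update->execute([$status, $row['host'], $row['port']]);
}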
According to the php manual, there's a timeout parameter. Try setting it to a lower value.
Edit: To add to Daniel's answer, nmap might be the best tool to use. Set it up with a cron job to scan and update your records every X minutes. Something like
$ for ip in $(seq 6 8);
do
port_open=$(nmap -oG - -p 80 10.1.0.$ip|grep open|wc -l);
echo "10.1.0.$ip:$port_open";
done
10.1.0.6:1
10.1.0.7:1
10.1.0.8:0
I had an issue where fsockopen requests were slow, but wget was really snappy. In my case, it was happening because the hostname had both an ipv4 and ipv6 address, but ipv6 was down. So it took 20 or so seconds on each request for the ipv6 to time out.
