I'm using fsockopen() to open a number of connections from a list to check the online status of various IPs/hosts and ports...
<?php
$socket = @fsockopen($row[2], $row[3], $errnum, $errstr, 1);
if ($errnum >= 1) { $status = 'offline'; } else { $status = 'online'; }
if ($socket) { fclose($socket); } // only close a successfully opened socket
?>
It works, I'm not complaining about that, but I have approximately 15 IPs/ports that I'm retrieving in a list (a PHP for() loop). I was wondering if there is a better way to do this? This way is VERY slow: it takes about 1-2 minutes for the server to come back with a response for all of them.
Update:
<?php
$socket = @fsockopen("lounge.local", "80", $errnum, $errstr, 30);
if ($errnum >= 1) { $status = 'offline'; } else { $status = 'online'; }
?>
It will display in a list: "ReadyNAS AFP readynas.local:548 online"
I don't know what more I can tell you. It just takes forever to load the collection of results...
From my own experience:
This code:
$sock=fsockopen('www.site.com', 80);
is slower compared to:
$sock=fsockopen(gethostbyname('www.site.com'), 80);
Tested in PHP 5.4. If you are making many connections at the same time, you can keep the host resolution result and re-use it to further reduce script execution time, for example:
function myfunc_getIP($host) {
    if (isset($GLOBALS['my_cache'][$host])) {
        return $GLOBALS['my_cache'][$host];
    }
    return $GLOBALS['my_cache'][$host] = gethostbyname($host);
}

$sock = fsockopen(myfunc_getIP('www.site.com'), 80);
If you plan to "ping" some URLs, I would advise doing it with cURL. Why? Because you can use cURL to send requests in parallel; have a look at this -> http://www.php.net/manual/en/function.curl-multi-init.php. In a previous project that was supposed to feed real-time data to our server, we used to ping hosts to see if they were alive or not, and cURL was the only option that helped us.
It's just advice; it may not be the right solution for your problem.
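To illustrate, here is a minimal sketch of parallel checks with curl_multi for HTTP(S) targets; the URLs and timeout values are illustrative, and for raw TCP ports (like AFP on 548) you would still need sockets:

<?php
$urls = array('http://lounge.local/', 'http://www.example.com/'); // illustrative hosts

$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);        // header-only request, no body needed
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);   // give up connecting after 2 seconds
    curl_setopt($ch, CURLOPT_TIMEOUT, 3);          // total cap per transfer
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Run all transfers concurrently: total time is roughly the slowest host,
// not the sum of all of them.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);

foreach ($handles as $url => $ch) {
    $status = (curl_errno($ch) === 0) ? 'online' : 'offline';
    echo "$url: $status\n";
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
?>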
The last parameter to fsockopen() is the timeout; set it to a low value to make the script complete faster, like this:
fsockopen('192.168.1.93', 80, $errNo, $errStr, 0.01)
Have you compared the results of fsockopen(servername) versus fsockopen(ip-address)? If the timeout parameter does not change a thing, the problem may be in your name server. If fsockopen with an IP address is faster, you'll have to fix your name server or add the domains to your /etc/hosts file.
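A quick sketch for that comparison (the hostname is illustrative); if the first probe is noticeably slower, DNS resolution is your bottleneck:

<?php
$start = microtime(true);
@fsockopen('www.site.com', 80, $errNo, $errStr, 1);
printf("by name: %.3fs\n", microtime(true) - $start);

$start = microtime(true);
@fsockopen(gethostbyname('www.site.com'), 80, $errNo, $errStr, 1);
printf("by IP:   %.3fs\n", microtime(true) - $start);
?>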
I would recommend doing this a bit different.
Put these hosts in a table in a database, something like:
+------+------+--------+-----------+
| host | port | status | timestamp |
+------+------+--------+-----------+
And move the status-checking part into a cron script that runs once every 5 minutes, or however often you want.
This script will check each host:port and update the status and timestamp for each record; in your page you then just do a DB query and show each host, its status, and when it was last checked (something like: 1 minute ago, etc...).
This way your page will load fast.
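A minimal sketch of such a cron script, assuming a servers table with the columns above (the PDO DSN and credentials are illustrative):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=monitor', 'user', 'pass');

$update = $pdo->prepare(
    'UPDATE servers SET status = ?, timestamp = NOW() WHERE host = ? AND port = ?'
);

foreach ($pdo->query('SELECT host, port FROM servers') as $row) {
    // The same one-second probe as before, but nobody is waiting on a page load.
    $socket = @fsockopen($row['host'], (int)$row['port'], $errNo, $errStr, 1);
    $status = $socket ? 'online' : 'offline';
    if ($socket) {
        fclose($socket);
    }
    $update->execute(array($status, $row['host'], $row['port']));
}
?>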
According to the php manual, there's a timeout parameter. Try setting it to a lower value.
Edit: To add to Daniel's answer, nmap might be the best tool to use. Set it up with a cron job to scan and update your records every X minutes. Something like
$ for ip in $(seq 6 8); do
    port_open=$(nmap -oG - -p 80 10.1.0.$ip | grep open | wc -l)
    echo "10.1.0.$ip:$port_open"
done
10.1.0.6:1
10.1.0.7:1
10.1.0.8:0
I had an issue where fsockopen requests were slow but wget was really snappy. In my case it happened because the hostname had both an IPv4 and an IPv6 address, but IPv6 was down, so it took 20 or so seconds on each request for the IPv6 attempt to time out.
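A hedged workaround for that situation: gethostbyname() resolves to an IPv4 address only, so passing its result to fsockopen() skips the dead IPv6 attempt entirely (the hostname is illustrative):

<?php
$ipv4 = gethostbyname('host.example');                // A record (IPv4) only
$socket = @fsockopen($ipv4, 80, $errNo, $errStr, 1);  // no AAAA lookup, no IPv6 stall
?>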
Related
I'm running a website which monitors some servers and there are more servers added daily.
Sadly my current solution is very slow (about 10 seconds load time for 31 servers). I'm using fsockopen to check each IP and port. Because I parse an XML file with the entries (host and port), I had to create a function and use it in the parser, so the visitors of the website can see the online or offline status of each server.
My current "checkserver" function looks like this:
function checkServer($ip, $port)
{
    $fsockopen = @fsockopen($ip, $port, $errorNo, $errorStr, 1);
    if (!$fsockopen)
        return false;
    fclose($fsockopen); // close the probe socket before reporting success
    return true;
}
And in the parser the "if" rule for the server status looks like this:
if (checkServer((string)$server->host, (string)$server->port))
{
    echo "SERVER ONLINE!";
}
else
{
    echo "SERVER OFFLINE!";
}
Where $server is every single listed server in the XML <serverlist></serverlist> tag.
I already tried changing the timeout of fsockopen from 1 to 0.1, but then some servers appear offline and the load time is still 8-10 seconds.
How can I speed up the load time? Could someone please help me with this? The project is very important to me. Thank you! I really appreciate every helpful answer!
First of all I would suggest caching. I am not sure how many users will open the page, but if you have multiple users per second opening it, you will have a lot of traffic to handle, which in the long term could create issues.
You have two options:

Using asynchronous events can allow you to do what you wish. There are some libraries out there doing this that can help you; I have used none so far, so I can't say which is best:
- a library that "cheats" and uses exec and command lines: https://github.com/oliverde8/PHP-AsynchronousJobs
- the pthreads extension; this isn't a library coded in PHP, so it is a binary you need to add to your PHP installation: http://pthreads.org/. You can add another library on top to make usage easier.

Finally, using JavaScript: you open your page, then some AJAX calls your PHP individually for each server and asks for its status (a sketch of such an endpoint follows below).
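A minimal sketch of that per-server endpoint, assuming a status.php called with host and port query parameters (all names are illustrative):

<?php
// status.php - answers 'online' or 'offline' for one host:port pair.
$host = isset($_GET['host']) ? $_GET['host'] : '';
$port = isset($_GET['port']) ? (int)$_GET['port'] : 0;

$socket = @fsockopen($host, $port, $errNo, $errStr, 1);
if ($socket) {
    fclose($socket);
    echo 'online';
} else {
    echo 'offline';
}
?>

Each server then resolves independently: the page itself renders immediately, and a slow host only delays its own status cell.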
Colleagues, I'm trying to find a solution for slow rendering of the results from my PHP function when I call it many times for different hosts.
I want to make a pinging system based on the return value of the exec() function (it's alive or it's not). I think the problem comes from calling the exec() function once per host: the results take over 15 seconds before the server returns them all, and only then does the browser render the information. It would be great if I could get this rendering time down to 3 seconds max.
I use Chrome and XAMPP for Windows.
I tried to do it with this code for the exec() function:
<?php
function Pinger($host) {
    exec("ping -n 1 " . $host, $output, $result);
    if ($result == 0)
        return '<img src="up.jpg">';
    else
        return '<img src="down.jpg">';
}
?>
And call it from another PHP file with HTML like this:
<div id="hostname">
<?php echo Pinger('host'); ?>
</div>
The number of hosts is above 100.
Maybe I should use another, faster function, or something else?
Thank you!
The ping command has a lot of options to speed up the response, e.g. setting the timeout (-w, -W) or the number of retries (-c). Try
man ping
on the UNIX command line or see the UNIX manual page online.
Additionally, if you know the IP address you can turn off the DNS lookup (-n), which gains some milliseconds as well.
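The options above are the UNIX ones; on Windows (the question uses XAMPP on Windows) the equivalents are -n 1 for a single echo request and -w for the reply timeout in milliseconds. A sketch of Pinger() with those flags (the 500 ms value is illustrative):

<?php
function Pinger($host) {
    // -n 1: one echo request; -w 500: wait at most 500 ms for the reply.
    exec("ping -n 1 -w 500 " . escapeshellarg($host), $output, $result);
    if ($result == 0)
        return '<img src="up.jpg">';
    else
        return '<img src="down.jpg">';
}
?>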
I'm trying to make a PayPal IPN system; this is PayPal's system for automatically checking money transfers. They provide a basic script to do it.
The system is simple: your script receives the $_POST[] data, then opens a socket to PayPal, and they respond with a valid or invalid word on the socket.
My problem is that when opening the socket, 50% of the time I get a lost connection. When the script connects, I don't have any problem. So I changed it to 20 tries instead of 1:
<?php
//...
mail("mi@mail.com", "subject", "executing", "some headers"); // mail me when this executes
$try = 20;
do {
    $fp = @fsockopen('ssl://www.paypal.com', 443, $errno, $errstr, 15);
    $try--;
} while ($try > 0 && !$fp);

if (!$fp) { // HTTP ERROR
    mail("mi@mail.com", "subject", "error_message_not_connecting", "some headers");
} else {
    mail("mi@mail.com", "subject", "connected_reading_socket", "some headers");
    //fputs(...); and the loop reading the response.
}
?>
In my tests it now works 100% over several tries. But with real transfers it fails 20-30% of the time: I get the first mail, but never the second one in the cases that fail.
I'm thinking: if PayPal only keeps the connection to my server open for 1 second, can the PHP script be stopped after some tries and not keep going? Or any idea what is wrong here?
Sending the mail can fail too, especially if you have network issues. You should log the failure conditions for both mail() and your fsockopen, so you can revisit them afterwards.
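For instance, a minimal sketch of logging both failure paths (the log file path is illustrative):

<?php
if (!$fp) {
    error_log(date('c') . " fsockopen failed: [$errno] $errstr\n", 3, '/tmp/ipn.log');
}
if (!mail("mi@mail.com", "subject", "executing", "some headers")) {
    error_log(date('c') . " mail() returned false\n", 3, '/tmp/ipn.log');
}
?>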
Also, your fsockopen can get stuck. You have a 15-second timeout and you try 20 times, so your script could run for 20*15 = 300 seconds = 5 minutes, which is probably longer than your PHP script timeout, so PHP would abort your script mid-process. Max execution time is only 30 seconds by default in PHP.
A PHP script can be stopped with exit;.
You can pause the PHP script's processing with sleep(nr_sec).
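Putting those together, a sketch of the retry loop with a pause between attempts, so the tries don't run back-to-back (the counts and timeouts are illustrative):

<?php
$fp = false;
for ($try = 0; $try < 5 && !$fp; $try++) {
    $fp = @fsockopen('ssl://www.paypal.com', 443, $errno, $errstr, 5);
    if (!$fp) {
        sleep(2); // wait 2 seconds before the next attempt
    }
}
// 5 tries x (5 s timeout + 2 s pause) stays well under a 60 s max_execution_time.
?>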
I used to get similar problems: strange behavior when using sockets.
Better to use cURL instead; it's more stable.
http://leepeng.blogspot.com/2006/04/standard-paypal-php-integration.html
I found the error. A PHP script can be stopped when the user closes the connection to the server (usually by clicking the stop button in the browser, or in this case a socket closed by PayPal).
There are 3 ways a script can be stopped:
1. by the script finishing
2. by the user closing the connection to the server
3. by timeout
I used the function ignore_user_abort(true), and I don't have any more problems.
http://php.net/manual/en/function.ignore-user-abort.php
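A minimal sketch of that fix at the top of the IPN handler (the time limit value is illustrative, sized to cover the 20 x 15 s retry loop):

<?php
ignore_user_abort(true); // keep running even if PayPal closes the connection early
set_time_limit(330);     // leave room for 20 tries x 15 s plus some slack
?>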
I've written a little monitoring script in PHP which should monitor a virtual directory and its active directories. Everything works fine, but when the virtual directory service freezes, my ldap_connect() is not able to connect but also doesn't get an error back, so my whole script stands still. I think the ldap_connect function runs into a timeout (like when you try to ping an IP that's not reachable).
That's my connect command:
$connection = ldap_connect($hostname, $port) or die("Could not connect to {$hostname}");
And I haven't found anything in the manual for ldap_connect() about a time-limit parameter that defines how long the function should try to connect before it aborts.
However, I wasn't quite able to come up with a solution using try/catch or something like that. I also didn't want to use the set_time_limit() function because my script needs to run to the end.
I appreciate every help :)
Thanks and greetings
Tim
http://www.php.net/manual/en/function.ldap-set-option.php
In particular the following options:
LDAP_OPT_NETWORK_TIMEOUT
LDAP_OPT_TIMELIMIT
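A minimal sketch of applying those options (the 5-second values are illustrative). Note that ldap_connect() itself doesn't open the socket, so the timeout takes effect on the first real operation, such as ldap_bind():

<?php
$connection = ldap_connect($hostname, $port);
ldap_set_option($connection, LDAP_OPT_NETWORK_TIMEOUT, 5); // cap connect/IO waits at 5 s
ldap_set_option($connection, LDAP_OPT_TIMELIMIT, 5);       // cap server-side search time
if (!@ldap_bind($connection)) {
    die("Could not connect to {$hostname}");
}
?>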
http://www.php.net/manual/en/function.ldap-set-option.php
Try setting LDAP_OPT_REFERRALS to 0.
If you don't want your PHP program to wait XXX seconds before giving up when one of your corporate DCs has failed, and since ldap_connect() does not have a mechanism to time out after a user-specified interval, this is my workaround, which shows excellent practical results.
function serviceping($host, $port = 389, $timeout = 1)
{
    $op = fsockopen($host, $port, $errno, $errstr, $timeout);
    if (!$op) {
        return 0; // DC is N/A
    } else {
        fclose($op); // explicitly close open socket connection
        return 1;    // DC is up & running, we can safely connect with ldap_connect
    }
}
// ##### STATIC DC LIST, if your DNS round robin is not set up
//$dclist = array('10.111.222.111', '10.111.222.100', '10.111.222.200');
// ##### DYNAMIC DC LIST, forward DNS lookup returning all A records in round-robin order
$dclist = gethostbynamel('domain.name');

foreach ($dclist as $k => $dc) {
    if (serviceping($dc) == true) break; // use the first DC that answers
    else $dc = 0;
}
// After this loop either $dc holds a DC which is available at present,
// or it is 0 and the next line stops the program from further execution.
if (!$dc) exit("NO DOMAIN CONTROLLERS AVAILABLE AT PRESENT, PLEASE TRY AGAIN LATER!"); // user notified
// Now ldap_connect will connect successfully to the DC tested above, with no timeout.
$ldapconn = ldap_connect($dc) or die("DC N/A, PLEASE TRY AGAIN LATER.");
Also, with this approach you get really nice failover functionality. Take for example a company with a dozen DCs distributed across distant sites. This way your PHP program will always have high availability if at least one DC is active at present.
You'll need to use an API that supports time-outs. Connection time-outs are not supported natively by LDAP (the protocol). The time limit is a client-requested parameter that refers to how long the directory will spend processing a search request; it is not the same as a "connect time-out".
I have a free application that is heavily used, and I get around 500 to 1000 concurrent users from time to time.
It is a desktop application that communicates with my website API to receive data every 5-15 minutes, as well as sending back minimal data (about 3 SELECTs tops) every 15 minutes.
Since users can turn the application on and off as they wish, the timer for each of them to query my API may vary, and as such I have been hitting the maximum connection limit available on my hosting plan.
Not wanting to upgrade the plan, both for financial reasons and because the application is non-profit for the moment, I am searching for other options to reduce the number of connections and to cache whatever information can be cached.
The first thing that came to my mind was to use FastCGI with Perl. I have tested it for some time now and it seems to work great, but I have two problems while using it:
1. If for whatever reason the application goes idle for 60 seconds, the server kills it, and for the next few requests it will reply with error 500 until the script is respawned, which takes about 3+ minutes (yes, it takes that long; I have tried my code locally on my own test server and it comes up instantly, so I am sure it is a server issue with my hosting company, but they don't seem to want to resolve it).

2. The kill timeout, which is set to 300 seconds, will kill/restart the script after that period, which results in the same respawn issue described in 1).
Given that, I am now looking for alternatives that are not based on FastCGI, if there are any.
Also due to the limitations of the shared host I can't make my own daemon and my access to compile anything is very limited.
Are there any good options to achieve this with either Perl or PHP?
Mainly, reduce the open database connections to a minimum and still be able to cache some SELECT queries for returning data. The main task of the application is inserting/updating data anyway, so there isn't much to cache.
This was the simple code I was using for testing it:
#!/usr/bin/perl -w
use CGI::Simple; # Can't use CGI as it doesn't clear the data for the
                 # next request; haven't investigated further, but I needed
                 # something working to test, and using CGI::Simple was
                 # the fastest solution found.
use DBI;
use strict;
use warnings;
use lib qw( /home/my_user/perl_modules/lib/perl/5.10.1 );
use FCGI;

my $dbh = DBI->connect('DBI:mysql:mydatabase:mymysqlservername',
                       'username', 'password',
                       {RaiseError => 1, AutoCommit => 1}
          ) || die &dbError($DBI::errstr);

my $request = FCGI::Request();

while ($request->Accept() >= 0)
{
    my $query  = new CGI::Simple;
    my $action = $query->param("action");
    my $id     = $query->param("id");
    my $server = $query->param("server");
    my $ip     = $ENV{'REMOTE_ADDR'};

    print $query->header();

    if ($action eq "exp")
    {
        my $sth = $dbh->prepare(qq{
            INSERT INTO
                my_data (id, server) VALUES (?, INET_ATON(?))
            ON DUPLICATE KEY UPDATE
                server = INET_ATON(?)});
        my $result = $sth->execute($id, $server, $server)
                     || die print($dbh->errstr);
        $sth->finish;

        if ($result)
        {
            print "1";
        }
        else
        {
            print "0";
        }
    }
    else
    {
        print "0";
    }
}

$dbh->disconnect || die print($DBI::errstr);
exit(0);

sub dbError
{
    my ($txt_erro) = @_;
    my $query = new CGI::Simple;
    print $query->header();
    print "$txt_erro";
    exit(0);
}
Run a proxy. Perl's DBD::Proxy should fit the bill. The proxy server wouldn't be under your host's control, so its 60-seconds-of-inactivity rule shouldn't apply there.
Alternatively, install a cron job that runs more often than the FastCGI timeout, simply to wget some "make activity" page on your site and discard the output. Some CRMs do this to force a "check for updates", for example, so it's not completely unusual, though it is somewhat of an annoyance here.
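A sketch of such a crontab entry, with an illustrative keep-alive URL (running every minute stays well inside the 60-second idle kill):

* * * * * wget -q -O /dev/null http://example.com/cgi-bin/keepalive.fcgi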
FWIW, you probably want to look at CGI::Fast instead of CGI::Simple to resolve CGI.pm not dealing with persistent variables in the expected manner...