How can I get the time of a remote server? - php

I am currently working on a webserver that will provide a quick diagnostics dashboard for other servers.
My requirement is to display the time of these remote servers; it seems that NTP is causing some issues and I would like to see that.
I currently have a bat file on my desktop that simply sends
net time \\SRV*******
I have also tried:
echo exec('net time \\\\SRV****');
=> result is '0'
But I would like to find a better solution in PHP so that anybody on the team can read it on a webpage.
Any idea how I could do this?
Note: this is not related to How to get date and time from server as I want to get the time of a REMOTE server and not the local server.

You can use the Time protocol (RFC 868; the code below queries it on TCP port 37, not NTP proper) to retrieve the datetime from a remote server.
Try this code:
error_reporting(E_ALL ^ E_NOTICE);
ini_set("display_errors", 1);
date_default_timezone_set("Europe/...");

function query_time_server($timeserver, $socket)
{
    # parameters: server, port, error code, error text, timeout
    $fp = fsockopen($timeserver, $socket, $err, $errstr, 5);
    if ($fp) {
        fputs($fp, "\n");
        $timevalue = fread($fp, 49);
        fclose($fp); # close the connection
    } else {
        $timevalue = " ";
    }
    $ret = array();
    $ret[] = $timevalue;
    $ret[] = $err;    # error code
    $ret[] = $errstr; # error text
    return $ret;
}

$timeserver = "10.10.10.10"; # server IP or host
$timercvd = query_time_server($timeserver, 37);

// if no error from query_time_server
if (!$timercvd[1]) {
    $timevalue = bin2hex($timercvd[0]);
    // avoid 32-bit integer overflow when converting the unsigned value
    $timevalue = abs(hexdec('7fffffff') - hexdec($timevalue) - hexdec('7fffffff'));
    $timestamp = $timevalue - 2208988800; # convert to UNIX epoch time stamp
    $datum = date("Y-m-d (D) H:i:s", $timestamp - date("Z", $timestamp)); /* incl. time zone offset */
    $doy = date("z", $timestamp) + 1;
    echo "Time check from time server ", $timeserver, " : [<font color=\"red\">", $timevalue, "</font>]";
    echo " (seconds since 1900-01-01 00:00.00).<br>\n";
    echo "The current date and universal time is ", $datum, " UTC. ";
    echo "It is day ", $doy, " of this year.<br>\n";
    echo "The unix epoch time stamp is $timestamp.<br>\n";
    echo date("d/m/Y H:i:s", $timestamp);
} else {
    echo "Unfortunately, the time server $timeserver could not be reached at this time. ";
    echo "$timercvd[1] $timercvd[2].<br>\n";
}
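If the remote machines expose a real NTP service (UDP port 123) instead of the old Time protocol, a minimal SNTP query works too. This is only a sketch under that assumption; pool.ntp.org is a placeholder for your own server:
$sock = fsockopen('udp://pool.ntp.org', 123, $errno, $errstr, 5);
if ($sock) {
    // NTP request header: LI=0, VN=3, Mode=3 (client) => first byte 0x1B
    fwrite($sock, "\x1b" . str_repeat("\0", 47));
    $response = fread($sock, 48);
    fclose($sock);
    if (strlen($response) === 48) {
        // the transmit timestamp (seconds since 1900) sits at byte offset 40
        $data = unpack('N', substr($response, 40, 4));
        echo date('Y-m-d H:i:s', $data[1] - 2208988800); // convert to Unix epoch
    }
}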

You can use a PowerShell script along with WMI (assuming your servers are Windows machines).
This will be way faster than the old net time style.
The script needs to be executed as a Domain Admin (or any other user that has WMI query permissions):
$servers = 'dc1', 'dc2', 'dhcp1', 'dhcp2', 'wsus', 'web', 'file', 'hyperv1', 'hyperv2'
ForEach ($server in $servers) {
    $time = "---"
    $time = ([WMI]'').ConvertToDateTime((gwmi win32_operatingsystem -computername $server).LocalDateTime)
    $server + ', ' + $time
}
You can run this with a scheduled task every 5 minutes.
If you want to display the time with seconds/minutes precision (I wouldn't query each server too often, as time does not change THAT fast), you could add a little maths, as sketched below:
When executing the script, don't store the actual time, but just the offset of each server compared to your webserver's time (i.e. +2.2112, -15.213).
When your users visit the dashboard, load these offsets and display the webserver's current time +/- the offset per server. (It could be "outdated" by some microseconds then, but do you care?)
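A minimal sketch of the display side, assuming the scheduled task stored the offsets in a JSON file (offsets.json and its layout are assumptions, not part of the setup above):
// offsets.json might look like: {"dc1": 2.2112, "dhcp1": -15.213}
$offsets = json_decode(file_get_contents('offsets.json'), true);
foreach ($offsets as $server => $offset) {
    // show the webserver's own clock shifted by each server's stored offset
    echo $server . ': ' . date('H:i:s', (int) round(microtime(true) + $offset)) . "<br>\n";
}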

Why so complicated?
I just found that this works:
exec('net time \\\\SRV****', $output);
echo '<pre>';
print_r($output);
echo '</pre>';

Related

Response to http request from Android application to PHP code with Nginx is too slow when the response is big

On a Linux server with Nginx as the web server,
the response to an HTTP request from an Android application to PHP code is too slow when the response is big.
The point is that the response from the server is sent as follows:
echo "{$mResponse}";
When mResponse is small (say, 500 bytes), there is no problem and the response is quick.
When mResponse is big (say, 200K bytes), there is a problem and the response is too slow.
I suspected echo, searched a lot, and found claims that echo may be too slow.
There were some solutions, like chunking the big data into smaller chunks (4096 bytes) and then echoing,
or ob_start and so on.
I tested them and finally found out the problem is somewhere else, because when I used the following code, the time of the echo was OK:
$time_start = microtime(true);
$this->echobig("###{$mCommand}70{$mUserID}{$mResponse}{$mCheckSum}###");
echo "\nThe first script took: " . (microtime(true) - $time_start) . " sec";

$time_start = microtime(true);
ob_start();
$this->echobig("###{$mCommand}70{$mUserID}{$mResponse}{$mCheckSum}###");
ob_end_flush();
echo "\nThe second script took: " . (microtime(true) - $time_start) . " sec";

public function echobig($string)
{
    $splitString = str_split($string, 4096);
    foreach ($splitString as $chunk) {
        echo $chunk;
    }
}
In both of the above scripts the time was near 0.0005 sec, which is OK.
But the Android application receives the response in 13 seconds.
As I said, when the response is small, the Android application receives the response quickly (in 2 seconds).
Now I suspect the Nginx settings or the PHP settings (maybe a buffer limit somewhere),
but I don't know which parameter is problematic.
Sorry, it has nothing to do with Nginx or PHP.
The problem is the following code, which calculates the checksum of the mentioned big response. Each mb_substr() call has to scan the string from the beginning to find the $i-th character, so the loop is quadratic in the response size:
$cs = 0;
$content_length = strlen($string);
for ($i = 0; $i < $content_length; $i++) {
    $K = mb_substr($string, $i, 1, 'UTF-8');
    $A = mb_convert_encoding(mb_substr($string, $i, 1, 'UTF-8'), "UTF-8");
    $B = $this->mb_ord_WorkAround($A);
    $cs = $cs + $B;
}
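A linear-time rewrite is possible by splitting the string into characters once, so each character is visited directly. This is only a sketch, not the poster's actual fix; mb_ord() needs PHP 7.2+, and older versions would keep using a workaround like the mb_ord_WorkAround() above:
$cs = 0;
// split once into an array of UTF-8 characters instead of calling
// mb_substr($string, $i, 1), which rescans the string on every iteration
foreach (preg_split('//u', $string, -1, PREG_SPLIT_NO_EMPTY) as $char) {
    $cs += mb_ord($char, 'UTF-8');
}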

How I can process files in a directory all at one time?

I have a directory with hundreds of xlsx files. What I want to do is convert all these files into PDF at one time, or several at a time. The conversion process is working fine at the moment with foreach and cron, but it can only convert one file at a time, which increases the waiting time at the user end, where someone is waiting for the PDF files.
I am thinking about parallel processing at this time but don't know how to implement it.
Here is my current code
$files = glob("/var/www/html/conversions/xlxs_files/*");
if (!empty($files)) {
    $now = time();
    $i = 1;
    foreach ($files as $file) {
        if (is_file($file) && $i <= 8) {
            echo $i.'-----'.basename($file).'----'.date('m/d/Y H:i:s', @filemtime($file));
            echo '<br>';
            $path_parts = pathinfo(basename($file));
            $xlsx_file_name = basename($file);
            $pdf_file_name = $path_parts['filename'].'.pdf';
            echo '<br>';
            try {
                $result = ConvertApi::convert('pdf', ['File' => $common_path.'xlxs_files/'.$xlsx_file_name], 'xlsx');
                echo $log = 'conversion start for '.basename($file).' on '.date('d-M-Y h:i:s');
                echo '<br>';
                $result->getFile()->save($common_path.'pdf_files/'.$pdf_file_name);
                echo $log = 'conversion end for '.basename($file).' on '.date('d-M-Y h:i:s');
                echo '<br>';
                mail('amit.webethics@gmail.com', 'test', 'test');
                unlink($common_path.'xlxs_files/'.$xlsx_file_name);
            } catch (Exception $e) {
                $log_file_data = createAlogFile();
                $log = 'There is an error with your file '.$xlsx_file_name.' -- '.$e->getMessage();
                file_put_contents($log_file_data, $log."\n", FILE_APPEND);
                continue;
            }
            $i++;
        }
    }
} else {
    echo 'nothing to process';
}
Any help will be highly appreciated. Thanks
Q: I am thinking about parallel processing at this time but don't know how to implement this.
Fact #1: this is not a kind of true-[PARALLEL] orchestration of the flow of processing.
Fact #2: a standard GNU parallel (for all details kindly read man parallel) will help you maximise the performance of your processing pipeline, given the list of all files to convert, tweaking the other parameters, such as the number of CPU cores used and the RAM resources you may reserve/allocate, to perform this batch conversion as fast as possible.
ls _files_to_convert.mask_ | parallel --jobs _nCores_ \
                                      --load 99% \
                                      --block _RAMblock_ \
                                      ... \
                                      --dry-run \
                                      _converting_process_
might serve as an immediate appetiser for what GNU parallel is capable of.
All credits and thanks are to go to Ole Tange.
You could start multiple PHP scripts at a time. A detailed answer on how to do that is here: https://unix.stackexchange.com/a/216475/91593
I would go for this solution:
N=4
(
for thing in a b c d e f g; do
((i=i%N)); ((i++==0)) && wait
task "$thing" &
done
)
Another way is to try to use PHP itself for that. There is an in-depth answer to this question here: https://stackoverflow.com/a/36440644/625521
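For illustration, a minimal PHP sketch of that idea; convert_one.php is a hypothetical worker script that converts a single file and is not part of the question's code:
$files = glob('/var/www/html/conversions/xlxs_files/*');
$batch = array_slice($files, 0, 8); // cap the number of concurrent workers
foreach ($batch as $file) {
    // the trailing & detaches each worker, so the loop does not wait for it
    exec('php convert_one.php ' . escapeshellarg($file) . ' > /dev/null 2>&1 &');
}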

Using time() inside a while loop

For a school assignment we need to write a PHP script that counts for 1 second.
The following code I wrote should, I thought, do exactly that:
$startTijd = time();
$teller = 0;
while ($startTijd == time()) {
    echo 'Iteratie: ' . $teller . '<br>';
    $teller++;
}
However, every time I run this, or any similar PHP script that uses the time() function inside a while loop, I get a 502 Bad Gateway error from the server when I try to visit the page.
Your code as it is would not work (it would not count one second exactly), because time() has a granularity of one second and you have no guarantee that you landed on your page exactly at the tick of a second. So you need to synchronize.
To be clear, imagine calling time() several times, and let's suppose time() outputs in HH:MM:SS instead of Unix timestamps for legibility's sake:
Code
print time()
print time()
print time()
...
Output:
01:15:17
01:15:17
01:15:17
...
01:15:18
01:15:18
01:15:18
...
Your program probably does not work correctly at the moment because even in the little time that the loop runs, it generates a fantastic quantity of output (as can be seen above, time() remains "valid" for up to a whole second, and in that time a loop can execute lots of times). If there is some sort of resource limit on the PHP process, it is possible that this drives the process over its quota, resulting in the 502 error. You can check that by removing the echo from the loop and just adding echo "Done."; at the end.
You want to count between the instant in time in which time() transitions from 01:15:17 to 01:15:18, up to the instant when it again transitions to 01:15:19. Those instants will be separated by exactly one second.
What you would need to do is:
$startTijd = time() + 1; // It is now maybe 01:15:17.93. We want to start
                         // counting at 01:15:18, so we need to wait for it.
while ($startTijd !== time()) {
    // Do nothing except maybe a very brief sleep to save CPU
    usleep(5); // this is optional anyway
}
// It is now 01:15:18.000003 and $startTijd is 01:15:18
$teller = 0;
// time() will remain equal to 01:15:18 for one second,
// while wall clock time increases from 01:15:18.000003 to 01:15:18.999999
while ($startTijd == time()) {
    // Don't output anything here
    $teller++;
}
// New transition detected.
// It is now e.g. 01:15:19.000137 and time() says 01:15:19.
echo 'Iteratie: ' . $teller . '<br>';
Alternatively you can use microtime(true):
$teller = 0;
$endTijd = microtime(true) + 1.0;
while ($endTijd >= microtime(true)) {
    // Don't output anything here
    $teller++;
}
echo 'Iteratie: ' . $teller . '<br>';
Your code makes no sense: your while statement is only true if your computer is fast enough.
$startTijd = 100; # here you store the current time, represented as a number
while (100 == time()) { # a moment later time() already returns 101, so the condition is false
So after at most a second your while loop stops, which does not make much sense. Then use
while (true) {
and stop the while with a condition inside the while loop.

Slow query using MongoDB and PHP

I have two collections. The first one, $bluescan, has 345 documents and some ['company'] values missing. The second one, $maclistResults, has 20285 documents.
I'm running $bluescan against $maclistResults to fill in the missing ['company'] values.
I created three indexes for $maclistResults using the following PHP code:
$db->$jsonCollectionName->ensureIndex(array('mac' => 1));
$db->$jsonCollectionName->ensureIndex(array('company' => 1));
$db->$jsonCollectionName->ensureIndex(array('mac'=>1, 'company'=>1));
I'm using a MongoLab free account for the DB and running my PHP application locally.
The code below demonstrates the process. It works and does what I need, but it takes almost 64 seconds to perform the task.
Is there anything I can do to improve this execution time?
Code:
else
{
    $maclist = 'maclist';
    $time_start = microtime(true);
    foreach ($bluescan as $key => $value)
    {
        $maclistResults = $db->$maclist->find(array('mac' => substr($value['mac'], 0, 8)), array('mac' => 1, 'company' => 1));
        foreach ($maclistResults as $value2)
        {
            if (empty($value['company']))
            {
                $bluescan[$key]['company'] = $value2['company'];
            }
        }
    }
}
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "<pre>";
echo "maclistResults execution time: ".$time." seconds";
echo "</pre>";
Echo output from $time:
maclistResults execution time: 63.7298750877 seconds
Additional info: PHP version 5.6.2 (MAMP PRO on OSX Yosemite)
As stated by SolarBear, you're actually executing find as many times as there are elements in $bluescan.
You should first fetch the list from the server and then iterate over that result set instead, basically the exact opposite order of your current code. See the sketch below.
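A sketch of that inversion, reusing the question's variable names and assuming the mac field in maclist holds the same 8-character prefix that the question's substr() call extracts:
// fetch the whole maclist once and build an in-memory lookup table
$map = array();
foreach ($db->maclist->find(array(), array('mac' => 1, 'company' => 1)) as $doc) {
    $map[$doc['mac']] = $doc['company'];
}
// fill in the missing companies without any further round trips
foreach ($bluescan as $key => $value) {
    $prefix = substr($value['mac'], 0, 8);
    if (empty($value['company']) && isset($map[$prefix])) {
        $bluescan[$key]['company'] = $map[$prefix];
    }
}
This trades a single query plus some memory (about 20285 small documents) for the 345 per-document queries, which removes almost all of the network round-trip latency.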

Slow performance for rabbitmq amqp_login (PHP client)

I'm using php-amqp to read and write from a local rabbitmq server. This is for a high traffic website. Following the example at http://code.google.com/p/php-amqp/, I haven't found a way to avoid calling amqp_login with every web request. The call to amqp_login is, by far, the slowest in the sequence. Is there an easy way to bypass the need to call this with every web request? We're using Apache on SuSE linux.
$time = microtime(true);
$connection = amqp_connection_popen('localhost', 5672);
print "connect: ".(microtime(true) - $time) . "\n";
$time = microtime(true);
amqp_login($connection, 'guest', 'guest', '/');
print "login: ".(microtime(true) - $time) . "\n";
$time = microtime(true);
$channel = 1;
amqp_channel_open($connection, $channel);
print "channel open: ".(microtime(true) - $time) . "\n";
$time = microtime(true);
$exchange = 'amq.fanout';
amqp_basic_publish($connection, $channel, $exchange, $routing_key, 'junk', $mandatory = false, $immediate = false, array());
print "publish: ".(microtime(true) - $time) . "\n";
Example Results:
connect: 0.00019311904907227
login: 0.041217088699341
channel open: 0.00034213066101074
publish: 5.6028366088867E-5
Is there an easy way to bypass the need to call this with every web request?
I'm not specifically familiar with this package, however...
It's surprising that it allows persistent connections but not persistent authentication. One way to fix this would be to key the connection pool on server+port+username+password rather than just server+port, which wouldn't require too much change to the code.
If you don't feel like modifying the C code / arguing with the maintainers, then you could run a proxy daemon which maintains an authenticated connection.
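As an aside, the newer PECL amqp extension (a different API from the php-amqp package used above) supports persistent connections that stay authenticated across requests. A minimal sketch, assuming switching extensions is an option for you:
$conn = new AMQPConnection(array(
    'host'     => 'localhost',
    'port'     => 5672,
    'login'    => 'guest',
    'password' => 'guest',
    'vhost'    => '/',
));
$conn->pconnect(); // reuses an existing authenticated connection when available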
