Network Monitoring - PHP

Image : http://i40.tinypic.com/2hodx55.png
I have built a network interface monitor using PHP and SNMP, but when I execute it on localhost the graph keeps dropping back to the origin (0) again and again (please see the image), and the speed on the Y axis is wrong: at times it jumps into the millions.
Can anyone tell me what the problem is in the code below?
<?php
$int = "wlan0";
session_start();

// First sample: ifInOctets / ifOutOctets for interface index 3
$rx0 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.10.3');
$tx0 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.16.3');
sleep(5);
// Second sample, 5 seconds later
$rx1 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.10.3');
$tx1 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.16.3');

// Strip the leading "Counter32: " prefix from the SNMP responses
$rx0 = substr($rx0, 11);
$tx0 = substr($tx0, 11);
$rx1 = substr($rx1, 11);
$tx1 = substr($tx1, 11);

// Delta of the octet counters over the polling interval
$tBps = $tx1 - $tx0;
$rBps = $rx1 - $rx0;
$round_rx = $rBps;
$round_tx = $tBps;

// JavaScript-style timestamp (milliseconds)
$time = date("U") . "000";
$_SESSION['rx'][] = "[$time, $round_rx]";
$_SESSION['tx'][] = "[$time, $round_tx]";

$data['label'] = $int;
$data['data'] = $_SESSION['rx'];

// Keep at most 60 samples
if (count($_SESSION['rx']) > 60) {
    $x = min(array_keys($_SESSION['rx']));
    unset($_SESSION['rx'][$x]);
}

echo '{"label":"' . $int . '","data":[' . implode(",", $_SESSION['rx']) . ']}';
?>

What you are seeing here is a classic case of polling a counter faster than its refresh interval. It is often the case that counters (in this case, interface counters) are updated every few seconds (10-15 seconds is a common value).
If the counter updates every 15 seconds, and you ask for data every 5 seconds, then you will receive the same value once or twice in a row (depending on latency, processing time, etc.). If you receive the same value twice, then you will see a zero value for the delta (which is what your image shows).
There are two ways to get around this:
1. Ask for data less frequently than the counters are updated (30-second polling usually works fine). Obviously, if you can find out the exact refresh interval, then you can use that.
2. Modify the configuration of your equipment to refresh its counters faster. Sometimes this is possible, sometimes it is not; it just depends on the manufacturer, the software, and what has been implemented.
For Net-SNMP "snmpd" daemons, you can walk NET-SNMP-AGENT-MIB::nsCacheTable (1.3.6.1.4.1.8072.1.5.3) for more information about its internal caching of counters.
For example:
snmpwalk -v2c -cpublic localhost 1.3.6.1.4.1.8072.1.5.3 | grep .1.3.6.1.2.1.2.2
NET-SNMP-AGENT-MIB::nsCacheTimeout.1.3.6.1.2.1.2.2 = INTEGER: 3
NET-SNMP-AGENT-MIB::nsCacheStatus.1.3.6.1.2.1.2.2 = INTEGER: cached(4)
Here, you can see that my particular box is caching IF-MIB::ifTable (.1.3.6.1.2.1.2.2), which is the table that you're using, every three seconds. In my case, I would not ask for data any more often than every three seconds. NET-SNMP-AGENT-MIB::nsCacheTimeout (.1.3.6.1.4.1.8072.1.5.3.1.2) is marked as read-write, so you might be able to issue a "set" command to change the caching duration.
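Separately, the Y-axis issue is partly in the script itself: it plots the raw counter delta rather than a rate, never dividing by the 5-second interval, and a Counter32 that wraps past 2^32 produces a bogus (negative or enormous) delta. A minimal sketch of a rate helper, assuming the "Counter32: " prefix has already been stripped as in the script above (the name counterRate is mine, not part of the original code):
<?php
// Turn two raw Counter32 readings into a bytes-per-second rate.
function counterRate($first, $second, $intervalSeconds) {
    $delta = $second - $first;
    // Counter32 values wrap around at 2^32; correct for a single wrap.
    if ($delta < 0) {
        $delta += 4294967296; // 2^32
    }
    // Divide by the elapsed time so the Y axis shows a rate, not a raw delta.
    return $delta / $intervalSeconds;
}

// Usage with the values from the script above:
// $rBps = counterRate($rx0, $rx1, 5);
?>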

Related

PHP, Add gmdate to gmdate

I have code that looks like this:
$dagtidhelg = gmdate('H:i', $diffMorning) . "\n";
$kvallstidhelg = gmdate('H:i', $diffNight);
This code runs several times per page, since it is run every time a row is loaded from MySQL.
It can return time values, e.g. 08:15 and 09:30. These are the lengths of two work sessions.
That works great, but now I'm stuck: I want to display the total of all the work sessions at the bottom. I have tried this:
$dagtidhelgtotal = $dagtidhelgtotal + $dagtidhelg;
$kvalltidhelgtotal = $kvalltidhelgtotal + $kvallstidhelg;
But that only adds the hours together; it won't even display the colon.
So I'm guessing that I'm doing this totally wrong.
How can I add these times together? Maybe convert them to minutes, then add them all together?
Adding durations together is simple, but you must keep in mind that a duration and a date are two completely different things. You can write 100:08 as a duration, but not as a date. If your purpose is to keep a duration counter across one or more pages, you need to build a system based on a $_SESSION variable.
To add two durations, you can proceed like this:
function addDuration($duration1, $duration2) {
    // Split the "HH:MM" strings and add hours and minutes column-wise.
    list($h1, $m1) = explode(':', $duration1);
    list($h2, $m2) = explode(':', $duration2);
    $minutes = (int)$m1 + (int)$m2;
    // Carry any full hours out of the minutes column.
    $hours = (int)$h1 + (int)$h2 + (int)($minutes / 60);
    $minutes = $minutes % 60;
    return sprintf('%02d:%02d', $hours, $minutes);
}
and the usage in your case could be:
$dagtidhelgtotal = addDuration($dagtidhelgtotal,$dagtidhelg);
if we suppose that
$dagtidhelgtotal ==='100:09' && $dagtidhelg === '08:08'
then after the addition
$dagtidhelgtotal will be equal to '108:17';
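As the question itself suggests, another option is to convert each duration to whole minutes, add plain integers, and only format back to HH:MM for display. A minimal sketch (the helper names toMinutes and toDuration are mine):
// Convert "HH:MM" to a plain number of minutes.
function toMinutes($duration) {
    list($h, $m) = explode(':', $duration);
    return (int)$h * 60 + (int)$m;
}

// Format a number of minutes back to "HH:MM".
function toDuration($minutes) {
    return sprintf('%02d:%02d', (int)($minutes / 60), $minutes % 60);
}

echo toDuration(toMinutes('100:09') + toMinutes('08:08')); // 108:17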

How to break out of the loop if the PHP script takes more than 53 seconds to execute

In short: how can I break out of the loop after the code has been running for more than 53 seconds? Something like this:
while (5 < 200)
{
// do some work here, which will take around 1-6 seconds
if (code this loop is running for more than 53 seconds)
break;
}
If you want to know why I want to do this:
OK, this is what I am doing: I am copying data from 7 JSON pages on the web and inserting it into my MySQL database.
The code is something like this:
$start_time = microtime(TRUE); // Helps to count PHP script execution time
connect to database
$number = get inserted number into database
while ($number > ($number+20) )
{
Open the 7 JSON page links, like this - "example.com/$number?api=xxxyyzz"
Use Prepared PDO statements to insert data into MySQL
}
// Code to count PHP script execution time
$end_time = microtime(TRUE);
$time_taken = $end_time - $start_time;
$time_taken = round($time_taken,5);
echo '<p>Page generated in '.$time_taken.' seconds.</p>';
In my case, it takes around 5.2 seconds to complete one whole loop iteration when adding all the data, but some JSON files are empty, and an iteration takes only 1.4 seconds for those.
Like that, I want to complete millions of iterations (adding data from millions of JSON files), so even if my code runs 24/7, it will take about a month to complete the task.
But after the code runs for 90 seconds, I get a timed-out error.
I am using a cron job to do the task, and it looks like the server gives the same error to the cron job.
So I want the cron job to run every minute, so that I do not get the timed-out error.
But I am afraid of this: what if the script has added data for half of the rows when the minute runs out, and never adds it to the other half? Then, when the new minute starts, the code will start from the next $number.
So I want to break; out of the loop after 53 seconds (if the code starts another iteration at 52 seconds, breaking at the end of it will land around 58-59 seconds).
I mean, I will put the break; just before the loop's closing brace (}), so that I never exit the loop while data has been inserted for only half of the rows.
I would guess that your PHP max_execution_time is set to 90 seconds. You can change it with set_time_limit(), but I don't think that is a good approach for this.
Give pcntl or pthreads a try instead; either could save you a lot of time.
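That said, the literal 53-second cut-off the question asks for only needs microtime(). A minimal sketch, with the loop body from the question elided:
<?php
$start_time = microtime(TRUE);

while (true) {
    // ... open the JSON page and insert its rows with prepared
    //     PDO statements, as in the question ...

    // Check the elapsed time only after a full iteration has finished,
    // so a partially inserted batch is never abandoned mid-way.
    if (microtime(TRUE) - $start_time > 53) {
        break;
    }
}
?>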

Sphinx: How can I keep the connection active even if there is no activity for a long time?

I was doing bulk inserts into a real-time index using PHP, with AUTOCOMMIT disabled, e.g.
// sphinx connection
$sphinxql = mysqli_connect($sphinxql_host.':'.$sphinxql_port,'','');
//do some other time consuming work
//sphinx start transaction
mysqli_begin_transaction($sphinxql);
//do 50k updates or inserts
// Commit transaction
mysqli_commit($sphinxql);
I kept the script running overnight, and in the morning I saw:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate
212334 bytes) in
When I checked the nohup.out file closely, I noticed these lines:
PHP Warning: mysqli_query(): MySQL server has gone away in /home/script.php on line 502
Warning: mysqli_query(): MySQL server has gone away in /home/script.php on line 502
Memory usage before these lines was normal, but after them it started to increase until it hit the PHP memory limit, gave the fatal error above, and died.
In script.php, line 502 is:
mysqli_query($sphinxql,$update_query_sphinx);
So my guess is that the Sphinx server closed the connection after some minutes or hours of inactivity.
I have tried setting this in sphinx.conf:
client_timeout = 3600
and restarted searchd with:
systemctl restart searchd
but I am still facing the same issue.
So how can I keep the Sphinx server from dying on me when there is no activity for a long time?
More info:
I am getting data from MySQL in 50k chunks at a time and using a while loop to fetch each row and update it in the Sphinx RT index, like this:
// 6 mil rows are updated in MySQL first, which takes around 18-20 minutes; then this part follows.
$subset_count = 50000;
$total_count_query = "SELECT COUNT(*) as total_count FROM content WHERE enabled = '1'";
$total_count = mysqli_query($conn, $total_count_query);
$total_count = mysqli_fetch_assoc($total_count);
$total_count = $total_count['total_count'];
$current_count = 0;

while ($current_count <= $total_count) {
    $get_mysql_data_query = "SELECT record_num, views, comments, votes FROM content WHERE enabled = 1 ORDER BY record_num ASC LIMIT $current_count, $subset_count";

    // sphinx: start transaction
    mysqli_begin_transaction($sphinxql);

    if ($result = mysqli_query($conn, $get_mysql_data_query)) {
        // fetch each row as an associative array
        while ($row = mysqli_fetch_assoc($result)) {
            // escape the whole row for sphinx (custom helper, not a PHP built-in)
            $escaped_sphinx = mysqli_real_escape_array($sphinxql, $row);
            // update the data in the sphinx index
            $update_query_sphinx = "UPDATE $sphinx_index
                SET
                    views = " . $escaped_sphinx['views'] . ",
                    comments = " . $escaped_sphinx['comments'] . ",
                    votes = " . $escaped_sphinx['votes'] . "
                WHERE
                    id = " . $escaped_sphinx['record_num'];
            mysqli_query($sphinxql, $update_query_sphinx);
        }
        // free result set
        mysqli_free_result($result);
    }

    // commit transaction
    mysqli_commit($sphinxql);
    $current_count = $current_count + $subset_count;
}
So there are a couple of issues here, both related to running big processes.
MySQL server has gone away - This usually means that MySQL has timed out, but it could also mean that the MySQL process crashed due to running out of memory. In short, it means that MySQL has stopped responding, and didn't tell the client why (i.e. no direct query error). Seeing as you said that you're running 50k updates in a single transaction, it's likely that MySQL just ran out of memory.
Allowed memory size of 134217728 bytes exhausted - means that PHP ran out of memory. This also lends credence to the idea that MySQL ran out of memory.
So what to do about this?
The initial stop-gap solution is to increase the memory limits for PHP and MySQL. That's not really solving the root cause, and depending on the amount of control (and knowledge) you have of your deployment stack, it may not be possible.
As a few people mentioned, batching the process may help. It's hard to say the best way to do this without knowing the actual problem that you're working on solving. If you can process, say, 10,000 or 20,000 records in a batch instead of 50,000, that may solve your problems. If that's going to take too long in a single process, you could also look into using a message queue (RabbitMQ is a good one that I've used on a number of projects), so that you can run multiple processes at the same time, each handling smaller batches.
If you're doing something that requires knowledge of all 6 million+ records to perform the calculation, you could potentially split the process up into a number of smaller steps, cache the work done "to date" (as such), and then pick up the next step in the next process. How to do this cleanly is difficult (again, something like RabbitMQ could simplify that by firing an event when each process is finished, so that the next one can start up).
So, in short, these are your two best options:
Throw more resources/memory at the problem everywhere that you can.
Break the problem down into smaller, self-contained chunks (see the sketch below).
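As a rough illustration of the second option, here is a minimal sketch of batched commits against the SphinxQL connection from the question; the batch size and the elided per-row update are assumptions:
<?php
$batch_size = 10000; // commit every 10k rows instead of holding 50k in one transaction
$i = 0;

mysqli_begin_transaction($sphinxql);
while ($row = mysqli_fetch_assoc($result)) {
    // ... build and run the UPDATE for this row, as in the question ...

    if (++$i % $batch_size === 0) {
        // Flush the current transaction and start a fresh one.
        mysqli_commit($sphinxql);
        mysqli_begin_transaction($sphinxql);
    }
}
mysqli_commit($sphinxql); // commit the final, partial batch
?>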
You need to reconnect or restart the DB session just before mysqli_begin_transaction($sphinxql), something like this:
<?php
// Reconnect to sphinx if it was disconnected due to a timeout or whatever, or force a reconnect.
function sphinxReconnect($force = false) {
    global $sphinxql_host;
    global $sphinxql_port;
    global $sphinxql;
    if ($force) {
        mysqli_close($sphinxql);
        $sphinxql = @mysqli_connect($sphinxql_host . ':' . $sphinxql_port, '', '') or die('ERROR');
    } elseif (!mysqli_ping($sphinxql)) {
        mysqli_close($sphinxql);
        $sphinxql = @mysqli_connect($sphinxql_host . ':' . $sphinxql_port, '', '') or die('ERROR');
    }
}

// 10 mil+ rows are updated in MySQL first (around 18-20 minutes), then this part follows.

// Reconnect to sphinx
sphinxReconnect(true);

// sphinx: start transaction
mysqli_begin_transaction($sphinxql);

// ... do your other stuff ...

// Commit transaction
mysqli_commit($sphinxql);
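A note on the design: calling sphinxReconnect(false) inside the batch loop is the cheaper variant, since mysqli_ping() only re-opens the connection when it has actually gone away, while sphinxReconnect(true) tears the connection down unconditionally.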

How can I check if $max_viewers is greater than $current_viewers?

I have a script that updates the information of various livestream channels while they are active. I want to check if $max_viewers is greater than $current_viewers.
In this case I do not need to take action, however if $current_viewers is larger I want to update the max_viewers field in the database.
I have tried several ways and methods from my research, but my PHP is limited and self-taught, and I think I am misunderstanding the outcome of my statements.
I have tried:
$current_viewers > $max_viewers
$current_viewers >= $max_viewers
But these seem to either always update the max_viewers count, or never when reversed. Hence I think I am misunderstanding how these comparisons work and what they return.
You should have something like:
$max_viewers = 4;
$current_viewers = 2;

if ($current_viewers > $max_viewers) {
    // Current viewers has exceeded the maximum
    echo 'exceeded';
} else {
    // Current viewers is either less than or equal to the maximum
    echo 'not exceeded';
}
In the above example, not exceeded would be shown. For your example, you probably want to replace echo 'exceeded'; with your call to update the database record.
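To then persist the new maximum, a minimal sketch using PDO; the table and column names (channels, max_viewers, id) are placeholders for whatever your schema actually uses:
<?php
// Only write when the current count is higher than the stored maximum.
if ($current_viewers > $max_viewers) {
    $stmt = $pdo->prepare('UPDATE channels SET max_viewers = :viewers WHERE id = :id');
    $stmt->execute([':viewers' => $current_viewers, ':id' => $channel_id]);
}
?>
If several processes can update the same channel, pushing the comparison into SQL (e.g. SET max_viewers = GREATEST(max_viewers, :viewers)) avoids a read-then-write race.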

PHP querying MySQL is very slow. What could be causing it?

I have a PHP page that queries a MySQL database and returns about 20,000 rows. However, the browser takes over 20 minutes to present them. I have added an index to my database and the query does use it; the query time on the command line is about 1 second for the 20,000 rows, but in the web application it takes far longer. Does anyone know what is causing this problem, and a better way to improve it? Below is my PHP code to retrieve the data:
$query1 = "SELECT * FROM table WHERE Date BETWEEN '2010-01-01' AND '2010-12-31'";
$result1 = mysql_query($query1) or die('Query failed: ' . mysql_error());

while ($line = mysql_fetch_assoc($result1)) {
    echo "\t\t<tr>\n";
    $Data['Date'] = $line['Date'];
    $Data['Time'] = $line['Time'];
    $Data['Serial_No'] = $line['Serial_No'];
    $Data['Department'] = $line['Department'];
    $Data['Team'] = $line['Team'];
    foreach ($Data as $col_value) {
        echo "\t\t\t<td>$col_value</td>\n";
    }
    echo "\t\t</tr>\n";
}
Try adding an index to your date column.
Also, it's a good idea to learn about the EXPLAIN command.
As mentioned in the comments above, 1 second is still pretty long for your results.
You might consider putting all your output into a single variable and then echoing the variable once the loop is complete.
Also, browsers wait for tables to be completely formed before showing them, so that will slow your results (at least slow the process of building the results in the browser). A list may work better - or better yet a paged view if possible (as recommended in other answers).
It's not PHP that's causing it to be slow, but the browser itself rendering a huge page. Why do you have to display all that data anyway? You should paginate the results instead.
Try constructing a static HTML page with 20,000 table elements. You'll see how slow it is.
You can also improve that code:
while ($line = mysql_fetch_assoc($result1)) {
    echo "\t\t<tr>\n";
    foreach ($line as $col_value) {
        echo "\t\t\t<td>$col_value</td>\n";
        flush(); // optional, but gives your program a sense of responsiveness
    }
    echo "\t\t</tr>\n";
}
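If you do go the pagination route, a minimal sketch in the question's legacy mysql_* style (the page size and the page parameter are assumptions):
// Fetch one page of rows at a time instead of all 20,000.
$per_page = 100;
$page = isset($_GET['page']) ? max(1, (int)$_GET['page']) : 1;
$offset = ($page - 1) * $per_page;

$query1 = "SELECT * FROM table
           WHERE Date BETWEEN '2010-01-01' AND '2010-12-31'
           LIMIT $offset, $per_page";
$result1 = mysql_query($query1) or die('Query failed: ' . mysql_error());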
You could time the individual steps of the script by echoing the time before and after connecting to the database, running the query, and outputting the result.
This will tell you how long the different steps take. You may find out that it is indeed the traffic causing the delay, and not the query.
On the other hand, when you have a table with millions of records, retrieving 20,000 of them can take a long time, even with an index. 20 minutes is extreme, though...
