Is this normal memory behaviour? - php

I have a PHP script that processes large XML files and saves data from them into a database. I use several classes to process and store the data before saving it to the database, and I read the XML node by node to preserve memory. Basically, a loop in my file looks like this:
while ($Reader->read()) {
    $parsed++;
    if (time() >= $nexttick) {
        $current = microtime(true) - $ses_start;
        $eta = (($this->NumberOfAds - $parsed) * $current) / $parsed;
        $nexttick = time() + 3;
        $mem_usage = memory_get_usage();
        echo "Parsed $parsed # $current secs \t | ";
        echo " (mem_usage: " . $mem_usage . " \t | ";
        echo "ETA: $eta secs\n";
    }
    $node = $Reader->getNode();
    // OMITTED PART: $node is an array; I do some processing and check that everything
    // I need in the following section exists in the array
    $Ad = new Ad($node); // creating an Ad object from the node
    // OMITTED PART: some additional SQL queries to check the integrity of the data
    // before uploading it to the database
    if (!$Ad->update()) {
        // ad wasn't inserted successfully; save a row in a second database table to log this
    } else {
        // ad successfully inserted; save a row in a second database table to log this
    }
}
As you can see, the first part of the loop is a little output tool that prints the progress of the import every 3 seconds, along with the script's memory usage. I added it because I ran into a memory problem the last time I tried to upload a file, and I wanted to figure out what was eating away the memory.
The output of this script looked something like this when I ran it:
Parsed 15 # 2.0869598388672 secs | (mem_usage: 1569552 | ETA: 1389.2195994059 secs
Parsed 30 # 5.2812438011169 secs | (mem_usage: 1903632 | ETA: 1755.1333565712 secs
Parsed 38 # 8.4330480098724 secs | (mem_usage: 2077744 | ETA: 2210.7901124829 secs
Parsed 49 # 11.377414941788 secs | (mem_usage: 2428624 | ETA: 2310.5440017496 secs
Parsed 59 # 14.204828023911 secs | (mem_usage: 2649136 | ETA: 2393.3931421304 secs
Parsed 69 # 17.032008886337 secs | (mem_usage: 2831408 | ETA: 2451.3750760901 secs
Parsed 79 # 20.359696865082 secs | (mem_usage: 2968656 | ETA: 2556.8171214997 secs
Parsed 87 # 23.053930997849 secs | (mem_usage: 3102360 | ETA: 2626.8231951916 secs
Parsed 98 # 26.148546934128 secs | (mem_usage: 3285096 | ETA: 2642.0705279769 secs
Parsed 107 # 29.092607021332 secs | (mem_usage: 3431944 | ETA: 2689.8426286172 secs
Now, I know for certain that my MySQL object has a runtime cache, which stores the results of some basic SELECT queries in an array for quick access later. This is the only variable in the script (that I know of) that grows in size throughout the whole run, so I tried turning this option off. The memory usage dropped, but only by a tiny bit, and it kept rising throughout the whole script.
My questions are the following:
Is a slow rise in memory usage throughout a long-running script normal behaviour in PHP, or should I search through the whole code and try to find out what is eating up my memory?
I know that by using unset() on a variable I can free the memory it takes up, but do I need to use unset() even if I am overwriting the same variable throughout the whole file?
A slight rephrasing of my second question with an example:
Do the following two code blocks produce the same result regarding memory usage, and if not, which one is more optimal?
BLOCK1
$var = str_repeat("Hello", 4242);
$var = str_repeat("Good bye", 4242);
BLOCK2
$var = str_repeat("Hello", 4242);
unset($var);
$var = str_repeat("Good bye", 4242);

If you install the Xdebug module on your development machine, you can get it to do a function trace, which will show the memory usage for each line: https://xdebug.org/docs/execution_trace
That will probably help you identify where the memory is going; you can then try unset($var) etc. and see if it makes any difference.
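If a full trace is too heavy, tracing can also be switched on around just the parsing loop; a minimal sketch, assuming Xdebug is installed with tracing enabled (xdebug.mode=trace on Xdebug 3):
xdebug_start_trace('/tmp/xml_import_trace'); // hypothetical path; Xdebug writes a .xt trace file
// ... run the XML parsing loop here ...
xdebug_stop_trace(); // stop tracing and inspect the file for per-call memory figures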


How to get out of the loop, if the PHP script takes more than 53 seconds to execute

In short: how do I break out of the loop after the code has been running for more than 53 seconds? Something like this:
$loop_start = time();
while (true)
{
    // do some work here, which will take around 1-6 seconds

    if (time() - $loop_start > 53) {
        break;
    }
}
In case you want to know why I want to do this: I am copying data from 7 JSON pages on the web and inserting it into my MySQL database.
The code is something like this:
$start_time = microtime(TRUE); // helps to count PHP script execution time
// connect to the database
// $number = the last number inserted into the database
while (/* there are more JSON pages to process */)
{
    // open the 7 JSON file links, like "example.com/$number?api=xxxyyzz"
    // use prepared PDO statements to insert the data into MySQL
}
// Code to count PHP script execution time
$end_time = microtime(TRUE);
$time_taken = $end_time - $start_time;
$time_taken = round($time_taken,5);
echo '<p>Page generated in '.$time_taken.' seconds.</p>';
So in my case, it takes around 5.2 seconds to complete one whole loop that adds all the data. Some JSON files are empty, though, so a loop takes only 1.4 seconds when they are.
I want to run millions of these loops (adding data from millions of JSON files), so if my code runs 24/7, it will take about a month to complete the task.
But after the code has run for 90 seconds, I get an error.
I am using a CRON job to do the task, and it looks like the server gives the same error to the CRON job.
So I want the CRON job to run every minute, so that I do not get the timeout error.
But I am afraid of this: what if the script has added data to half of the rows when the minute runs out, and it doesn't add data to the other half? Then, when the new minute starts, the code would start from the next $number.
So I want to break; out of the loop after 53 seconds (if the code starts another iteration at 52 seconds, it breaks at the end of it, which will be around 58-59 seconds).
I mean, I will put the break; just before the end of the loop (before the closing }), so that I do not exit the loop while the data has been inserted into only half of the rows.
I guess that your PHP max_execution_time is set to 90 seconds. You can change it with set_time_limit(), but I don't think that is a good approach for this.
Give pcntl or pthreads a try; they could save you a lot of time.
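To sketch the 53-second break the question asks for (process_one_number() and get_last_inserted_number() are hypothetical stand-ins for the JSON fetch and PDO inserts):
$number = get_last_inserted_number(); // hypothetical helper, as in the question
$deadline = microtime(TRUE) + 53;     // leave headroom under the 90-second limit
while (true) {
    process_one_number($number); // hypothetical: fetch the JSON pages and insert the rows via PDO
    $number++;
    if (microtime(TRUE) >= $deadline) {
        break; // break only after a full iteration, so no half-inserted rows
    }
}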

PHP - Optimising preg_match of thousands of patterns

So I wrote a script to extract data from raw genome files; here's what a raw genome file looks like:
# rsid chromosome position genotype
rs4477212 1 82154 AA
rs3094315 1 752566 AG
rs3131972 1 752721 AG
rs12124819 1 776546 AA
rs11240777 1 798959 AG
rs6681049 1 800007 CC
rs4970383 1 838555 AC
rs4475691 1 846808 CT
rs7537756 1 854250 AG
rs13302982 1 861808 GG
rs1110052 1 873558 TT
rs2272756 1 882033 GG
rs3748597 1 888659 CT
rs13303106 1 891945 AA
rs28415373 1 893981 CC
rs13303010 1 894573 GG
rs6696281 1 903104 CT
rs28391282 1 904165 GG
rs2340592 1 910935 GG
The raw text file has hundreds of thousands of these rows, but I only need specific ones, about 10,000 of them. I have a list of rsids, and I just need the genotype from each matching line. So I loop through the rsid list and use preg_match to find each line I need:
$rawData = file_get_contents('genome_file.txt');
$rsids = $this->get_snps();
while ($row = $rsids->fetch_assoc()) {
    $searchPattern = "~rs{$row['rsid']}\t(.*?)\t(.*?)\t(.*?)\n~i";
    if (preg_match($searchPattern, $rawData, $matchedGene)) {
        $genotype = $matchedGene[3];
        // Do something with genotype
    }
}
NOTE: I stripped out a lot of code to show just the regexp extraction I'm doing. I'm also inserting each row into a database as I go along. Here's the code with the database work included:
$rawData = file_get_contents('genome_file.txt');
$rsids = $this->get_snps();
$query = "INSERT INTO wp_genomics_results (file_id,snp_id,genotype,reputation,zygosity) VALUES (?,?,?,?,?)";
$stmt = $ngdb->prepare($query);
$stmt->bind_param("iissi", $file_id, $snp_id, $genotype, $reputation, $zygosity);
$ngdb->query("START TRANSACTION");
while ($row = $rsids->fetch_assoc()) {
    $searchPattern = "~rs{$row['rsid']}\t(.*?)\t(.*?)\t(.*?)\n~i";
    if (preg_match($searchPattern, $rawData, $matchedGene)) {
        $genotype = $matchedGene[3];
        $stmt->execute();
        $insert++;
    }
}
$stmt->close();
$ngdb->query("COMMIT");
$snps->free();
$ngdb->close();
Unfortunately, my script runs very slowly: 50 iterations take 17 seconds, so you can imagine how long 18,000 iterations are going to take. I'm looking into ways to optimise this.
Is there a faster way to extract the data I need from this huge text file? What if I explode it into an array of lines and use preg_grep(); would that be any faster?
Something I tried is combining all 18,000 rsids into a single expression (i.e. (rs123|rs124|rs125)), like this:
$rsids = get_rsids();
$rsid_group = implode('|', $rsids);
$pattern = "~({$rsid_group})\t(.*?)\t(.*?)\t(.*?)\n~i";
preg_match($pattern, $rawData, $matches);
Unfortunately, it gave me an error message about exceeding the PCRE expression limit; the needle was far too big. Another thing I tried is adding the S modifier to the expression; I read that it analyses the pattern in order to increase performance, but it didn't speed things up at all. Maybe my pattern isn't compatible with it?
The second thing I need to try to optimise is the database inserts. I added a transaction hoping that would speed things up, but it didn't. So I'm thinking maybe I should group the inserts together, so that I insert multiple rows at once rather than inserting them individually.
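For illustration, a grouped INSERT might look like this sketch; $rows is a hypothetical array of matched values, batched 100 rows per statement and escaped manually since a prepared statement can't easily take a variable row count:
$sql_prefix = "INSERT INTO wp_genomics_results (file_id,snp_id,genotype,reputation,zygosity) VALUES ";
$batch = array();
foreach ($rows as $r) { // $rows: hypothetical array of matched rows
    $batch[] = sprintf("(%d,%d,'%s',%d,%d)",
        $file_id, $r['snp_id'], $ngdb->real_escape_string($r['genotype']),
        $r['reputation'], $r['zygosity']);
    if (count($batch) == 100) { // send 100 rows per INSERT statement
        $ngdb->query($sql_prefix . implode(',', $batch));
        $batch = array();
    }
}
if ($batch) { // flush the remainder
    $ngdb->query($sql_prefix . implode(',', $batch));
}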
Another idea I read about is using LOAD DATA INFILE to load rows from a text file. In that case I just need to generate the text file first; I wonder whether that would work out faster in this case.
EDIT: It seems like what's taking up most of the time is the regular expressions. Running that part of the program by itself takes a really long time: 10 rows take 4 seconds.
This is slow because you're searching a vast string of data over and over again.
It looks like you have a text file, not a dbms table, containing lines like these:
rs4477212 1 82154 AA
rs3094315 1 752566 AG
rs3131972 1 752721 AG
rs12124819 1 776546 AA
It looks like you have some other data structure containing a list of values like rs4477212. I think that's already in a table in the dbms.
I think you want exact matches for the rsxxxx values, not prefix or partial matches.
I think you want to process many different files of raw data, and extract the same batch of rsxxxx values from each of them.
So here's what you do, in pseudocode. Don't load the whole raw data file into memory; process it line by line.
Read your rows of rsid values from the dbms, just once, and store them in an associative array.
for each file of raw data...
    for each line of data in the file...
        split the line of data to obtain the rsid. In PHP, $array = explode(" ", $line, 2); will yield your rsid in $array[0], and do it fast.
        look in your array of rsid values for this value. In PHP, if (array_key_exists($array[0], $rsid_array)) { ... will do this.
        if the key does exist, you have a match:
            extract the last column from the raw text line ('GC' or whatever)
            write it to your dbms.
Notice how this avoids regular expressions and processes your raw data line by line; you only have to touch each line of raw data once. That's good, because your raw data is also your largest quantity of data. It exploits PHP's associative arrays to do the matching. All of that will be much faster than your method.
To speed up the process of inserting tens of thousands of rows into a table, read this: Optimizing InnoDB Insert Queries.
+1 to @Ollie Jones' answer. He posted while I was working on mine, so here's some code to get you started.
$rsids = $this->get_snps();
while ($row = $rsids->fetch_assoc()) {
    $key = 'rs' . $row['rsid'];
    $rsidHash[$key] = true;
}

$rawDataFd = fopen('genome_file.txt', 'r');
while ($rawData = fgetcsv($rawDataFd, 80, "\t")) {
    if (array_key_exists($rawData[0], $rsidHash)) {
        $genotype = $rawData[3];
        // do something with genotype
    }
}
I wanted to give the LOAD DATA INFILE approach a try to see how well it works, so I came up with what I thought was a nice, elegant approach; here's the code:
$file = 'C:/wamp/www/nutri/wp-content/plugins/genomics/genome/test';
$data_query = "
LOAD DATA LOCAL INFILE '$file'
INTO TABLE wp_genomics_results
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 18 ROWS
(@rsid, @chromosome, @locus, @genotype)
SET file_id = '$file_id',
    snp_id = (SELECT id FROM wp_pods_snp WHERE rsid = SUBSTR(@rsid, 2)),
    genotype = @genotype
";
$ngdb->query($data_query);
I put a foreign key constraint on the snp_id column (that's the ID for my table of rsids) so that it only enters genotypes for rsids that I need. Unfortunately, this foreign key constraint caused some kind of error which locked the tables. Ah well. It might not have been a good approach anyhow, since there are on average 200,000 rows in each of these genome files. I'll go with Ollie Jones' approach, since that seems to be the most effective and viable one I've come across.

mysqli_fetch_assoc() performance PHP5.4 vs PHP7.0

I have a large MySQL query (1.8M rows, 25 columns) and I need to build a two-dimensional array from it (a memory table keyed on the primary key).
The code works as expected, but the creation of $table takes a long time on PHP 7.0.
What is the reason PHP 7.0 performs so much worse? My primary interest is in mysqli.
Thank you for any insights; PHP 7 would save me a lot of memory if I can fix the performance.
mysqli code snippet
$start = microtime(true);
$vysledek = cluster::query("SELECT * FROM `table` WHERE 1");
$query_time = (microtime(true) - $start);

$fetch_time = $assign_time = $total_time = 0; // avoid undefined-variable notices
$start_fetch = microtime(true);
while ($zaznam = mysqli_fetch_assoc($vysledek)) {
    $fetch_time += (microtime(true) - $start_fetch);
    $start_assign = microtime(true);
    $table[$zaznam['prikey']] = $zaznam;
    $assign_time += (microtime(true) - $start_assign);
    $start_fetch = microtime(true);
}
$total_time += (microtime(true) - $start);

echo round($assign_time, 2)." seconds to set the array values\n";
echo round($query_time, 2)." seconds to execute the query\n";
echo round($fetch_time, 2)." seconds to fetch data\n";
echo round($total_time, 2)." seconds to execute whole script\n";
echo "Peak Memory Usage:".round(memory_get_peak_usage(true)/(1024 * 1024), 2)." MB\n";
mysqli results
Deb 7 PHP 5.4 mysqlnd 5.0.10
1.8 seconds to set the array values
8.37 seconds to execute the query
13.49 seconds to fetch data
24.42 seconds to execute whole script
Peak Memory Usage:8426.75 MB
Deb 8 PHP 5.6 mysqlnd 5.0.11-dev
1.7 seconds to set the array values
8.58 seconds to execute the query
12.55 seconds to fetch data
23.6 seconds to execute whole script
Peak Memory Usage: 8426.75 MB
Deb 8 PHP 7.0 mysqlnd 5.0.12-dev
0.73 seconds to set the array values
8.63 seconds to execute the query
126.71 seconds to fetch data
136.46 seconds to execute whole script
Peak Memory Usage:7394.27 MB
Deb 8 PHP 7.0 mysqlnd 5.0.12-dev extended benchmarking
I extended the benchmarking of the fetch section to report every 100k lines, with the following results:
Lines fetched 100000 in 1.87s
Lines fetched 300000 in 5.24s
Lines fetched 500000 in 10.97s
Lines fetched 700000 in 19.17s
Lines fetched 900000 in 29.96s
Lines fetched 1100000 in 43.03s
Lines fetched 1300000 in 58.48s
Lines fetched 1500000 in 76.47s
Lines fetched 1700000 in 96.73s
Lines fetched 1800000 in 107.78s
DEB8 PHP7.1.0-dev libclient 5.5.50
1.56 seconds to set the array values
8.38 seconds to execute the query
456.52 seconds to fetch data
467.68 seconds to execute whole script
Peak Memory Usage:8916 MB
DEB8 PHP7.1.0-dev libclient 5.5.50 extended benchmarking
Lines fetched 100000 in 2.72s
Lines fetched 300000 in 15.7s
Lines fetched 500000 in 38.7s
Lines fetched 700000 in 71.69s
Lines fetched 900000 in 114.8s
Lines fetched 1100000 in 168.18s
Lines fetched 1300000 in 231.69s
Lines fetched 1500000 in 305.36s
Lines fetched 1700000 in 389.05s
Lines fetched 1800000 in 434.71s
DEB8 PHP7.1.0-dev mysqlnd 5.0.12-dev
1.51 seconds to set the array values
9.16 seconds to execute the query
261.72 seconds to fetch data
273.61 seconds to execute whole script
Peak Memory Usage:8984.27 MB
DEB8 PHP7.1.0-dev mysqlnd 5.0.12-dev extended benchmarking
Lines fetched 100000 in 3.3s
Lines fetched 300000 in 13.63s
Lines fetched 500000 in 29.02s
Lines fetched 700000 in 49.21s
Lines fetched 900000 in 74.56s
Lines fetched 1100000 in 104.97s
Lines fetched 1300000 in 140.03s
Lines fetched 1500000 in 180.42s
Lines fetched 1700000 in 225.72s
Lines fetched 1800000 in 250.01s
PDO code snippet
$start = microtime(true);
$sql = "SELECT * FROM `table` WHERE 1";
$vysledek = $dbh->query($sql, PDO::FETCH_ASSOC);
$query_time = (microtime(true) - $start);

$fetch_time = $assign_time = $total_time = 0; // avoid undefined-variable notices
$start_fetch = microtime(true);
foreach ($vysledek as $zaznam) {
    $fetch_time += (microtime(true) - $start_fetch);
    $start_assign = microtime(true);
    $table[$zaznam['prikey']] = $zaznam;
    $assign_time += (microtime(true) - $start_assign);
    $start_fetch = microtime(true);
}
$total_time += (microtime(true) - $start);

echo round($assign_time, 2)." seconds to set the array values\n";
echo round($query_time, 2)." seconds to execute the query\n";
echo round($fetch_time, 2)." seconds to fetch data\n";
echo round($total_time, 2)." seconds to execute whole script\n";
echo "Peak Memory Usage:".round(memory_get_peak_usage(true)/(1024 * 1024), 2)." MB\n";
PDO Results
Deb 7 PHP 5.4 mysqlnd 5.0.10
1.85 seconds to set the array values
12.51 seconds to execute the query
16.75 seconds to fetch data
31.82 seconds to execute whole script
Peak Memory Usage:11417.5 MB
Deb 8 PHP 5.6 mysqlnd 5.0.11-dev
1.75 seconds to set the array values
12.16 seconds to execute the query
15.72 seconds to fetch data
30.39 seconds to execute whole script
Peak Memory Usage:11417.75 MB
Deb 8 PHP 7.0 mysqlnd 5.0.12-dev
0.71 seconds to set the array values
35.93 seconds to execute the query
114.16 seconds to fetch data
151.19 seconds to execute whole script
Peak Memory Usage:6620.29 MB
Baseline comparison code
$start_query = microtime(true);
exec("mysql --user=foo --host=1.2.3.4 --password=bar -e'SELECT * FROM `profile`.`table`' > /tmp/out.csv");
$query_time = (microtime(true) - $start_query);
echo round($query_time, 2)." seconds to execute the query\n";
Execution time is similar on all systems: 19 seconds, with ±1 second variation.
Based on the above observations, I would say that PHP 5.x is reasonable, since it does a bit more work than just dumping to a file.
All 3 servers are on the same host (the source and both test servers).
Tests are consistent when repeated.
A similar variable already exists in memory; I build this one for comparison. (It was removed for testing; it is not related to the problem.)
CPU is at 100% the whole time.
Both servers have 32 GB RAM and swappiness set to 1; the goal is to perform this as an in-memory operation.
The test server is dedicated; there is nothing else running on it.
php.ini changed between major versions, but all options relating to mysqli/PDO seem to be the same.
The Deb 8 machine was downgraded to PHP 5.6 and the issue disappeared; after reinstalling PHP 7 it is back.
Reported a bug at php.net (ID 72736), since I believe it has been proven that the problem is in PHP and not in the system or any other configuration.
Edit 1 : Added PDO Comparison
Edit 2 : Added benchmarking markers, edited PDO results as there was benchmarking error
Edit 3 : Major cleanup of the original question; rebuilt code snippets to better demonstrate the error
Edit 4 : added point about Downgrade and upgrade of PHP
Edit 5 : added extended benchmarking for DEB8 PHP7.0
Edit 6 : included php7 config
Edit 7 : performance measurement for PHP 7.1-dev with both libraries, compiled with configs from bishop; removed my php-config
Edit 8 : added comparison against CLI command, minor clean-ups
For cross-reference: with the release of PHP 7.1 on 1 Dec 2016, this issue should be resolved (in PHP 7.1).
PHP 7.0: even though the ticket says PHP 7.0 has been patched, I have not yet seen in the recent changelog (7.0.13 on 10 Nov 2016, after the date the patch was incorporated) that the fix is part of the current PHP 7.0.x release. Perhaps with the next release.
The bug is tracked upstream (thanks to the OP's report): Bug #72736 - Slow performance when fetching large dataset with mysqli / PDO (bugs.php.net; Aug 2016).
As the problem appears to be in the fetch (not the array creation), and we know the driver in use is mysqlnd (a driver library written independently by the PHP team, not provided by MySQL AB aka Oracle), recompiling PHP with libmysqlclient (the interface provided by MySQL AB aka Oracle) may improve the situation, or at least narrow the problem space.
First, I'd suggest writing a small script that can be run from the CLI and demonstrates the problem. This will help eliminate any other variables (web server modules, opcache, etc.).
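For example, a minimal CLI repro sketch (credentials and table name are placeholders):
// cli_fetch_bench.php -- run with: php cli_fetch_bench.php
$db = mysqli_connect('127.0.0.1', 'user', 'pass', 'dbname'); // placeholder credentials
$res = mysqli_query($db, "SELECT * FROM `table` WHERE 1");
$start = microtime(true);
$rows = 0;
while ($row = mysqli_fetch_assoc($res)) {
    $rows++; // fetch only, no array building, to isolate the driver cost
}
printf("%d rows fetched in %.2fs, peak memory %.2f MB\n",
    $rows, microtime(true) - $start, memory_get_peak_usage(true) / (1024 * 1024));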
Then, I'd suggest rebuilding PHP with libmysqlclient to see if performance improves. Quick guide to rebuilding PHP (for the technically competent):
Download the source for the PHP version you want
Decompress and go into the PHP code directory
Run ./buildconf
Run ./configure --prefix=/usr --with-config-file-path=/etc/php5/apache2 --with-config-file-scan-dir=/etc/php5/apache2/conf.d --build=x86_64-linux-gnu --host=x86_64-linux-gnu --sysconfdir=/etc --localstatedir=/var --mandir=/usr/share/man --enable-debug --disable-static --with-pic --with-layout=GNU --with-pear=/usr/share/php --with-libxml-dir=/usr --with-mysql-sock=/var/run/mysqld/mysqld.sock --enable-dtrace --without-mm --with-mysql=shared,/usr --with-mysqli=shared,/usr/bin/mysql_config --enable-pdo=shared --without-pdo-dblib --with-pdo-mysql=shared,/usr CFLAGS="-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -O2 -Wall -fsigned-char -fno-strict-aliasing -g" LDFLAGS="-Wl,-z,relro" CPPFLAGS="-D_FORTIFY_SOURCE=2" CXXFLAGS="-g -O2 -fstack-protector-strong -Wformat -Werror=format-security"
Run make && make test
Walk away
Run sapi/cli/php -i and confirm the version and presence of libmysqlclient
Run your test again. Any better?

php infinite loop launching different subroutines each x,y,z minutes

I'm looking for a PHP example or extension that lets me launch different PHP subroutines at regular periods (specific to each routine).
For my smart-home automation, I have a list of sensors defined in a MySQL table:
id tinyint(4) Auto increment
nom char(50) NULL
zone_id tinyint(4) NULL
grandeur_id tinyint(4) NULL
adresse_source varchar(255) NULL
polling_time smallint(6) NULL [5]
label_unite varchar(15) NULL
description varchar(255) NULL
Each sensor (its URL is given in the adresse_source field) should be read at an interval ranging from 1 minute to 1 day, and the reading recorded in a data table.
We can have sensor A read every minute, sensor B every 5 minutes, etc.
The sensor read procedure is the same for all sensors.
I was planning to create a different version of this PHP script for each time period, with multiple cron entries, but that is not convenient and is heavy to manage.
Is there a PHP feature for interrupt management, where in an infinite loop a timer event would trigger the read & write of each sensor according to its polling_time?
For a crude solution, given the range of timings (1 minute to 1 day), you could get away with a simple loop.
If your loop runs reliably within a few seconds, you can check the PHP time (time(), which returns seconds as an integer), rounded to the nearest minute by dividing by 60. You can then test the modulus of this value against each relevant interval expressed in minutes. If the result is 0, that event can be triggered.
If the number of sensors isn't too large, it might help to read them in once (before your loop) and store them in an associative array keyed on the interval.
$sensors = array();
// ... while (fetch_row ...) {
    $interval = $row['polling_time'];
    $sensors["$interval"][] = $row;
// }
// ...
$prevtime = intval(round(time() / 60)); // time() rounded to the nearest minute
while (true)
{
    $thistime = intval(round(time() / 60));
    if ($thistime == $prevtime)
    {
        sleep(1);
        continue;
    }
    $prevtime = $thistime;
    foreach ($sensors as $interval => $intsensors)
    {
        if (($thistime % intval($interval)) == 0)
        { // this interval matches the current time - so get the sensors
            foreach ($intsensors as $this_sensor)
            {
                extract($this_sensor);
                // do what you like with this sensor's details
            }
        }
    }
    sleep(10); // or however many seconds you need to make your loop work smoothly
}
// ... etc
Like I said, crude, but it may work OK given your (in computing terms) reasonably large timescales.
Cheers
Interrupt management? That is low-level stuff, and you don't even say what OS it's running on. Interrupts must be handled by the kernel, which can in theory invoke userspace programs, but it's a very roundabout way of doing things in any language. Your mention of cron makes me believe that you don't really mean interrupts at all.
On a POSIX system you could use pcntl_signal(), [u]sleep() and pcntl_alarm() to schedule running of code. Actually, it's possible to do it just with usleep() and an event queue.
If it were me, I'd use a scheduling engine like Nagios (there are complications with high-frequency cron jobs) to invoke the PHP scripts, rather than letting PHP act as the controller.
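To illustrate the pcntl route (POSIX only; a sketch, not a complete scheduler): SIGALRM can drive a periodic handler, which re-arms itself.
pcntl_async_signals(true); // PHP 7.1+; older versions need declare(ticks=1) instead
pcntl_signal(SIGALRM, function () {
    // read and record whichever sensors are due at this point
    pcntl_alarm(60); // re-arm the alarm for the next minute
});
pcntl_alarm(60); // first alarm in 60 seconds
while (true) {
    sleep(3600); // sleep() returns early when the signal arrives
}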
I tried the Event PHP extension, but it executes each event only once and then nothing else.
Maybe this is an idea?
$base = new EventBase();
$delay1 = 2;
$delay2 = 10;

$event1 = Event::timer($base, function ($delay1) use (&$event1) {
    // subroutine event1
    echo "$delay1 seconds elapsed event 1\n";
    $event1->delTimer(); // cancels a pending timer; the event fires only once per addTimer() anyway
}, $delay1);
$event1->addTimer($delay1);

$event2 = Event::timer($base, function ($delay2) use (&$event2) {
    // subroutine event2
    echo "$delay2 seconds elapsed event 2\n";
    $event2->delTimer();
}, $delay2);
$event2->addTimer($delay2);

$base->loop(); // returns once no events remain pending
while (1);     // then this just busy-waits, doing nothing
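For what it's worth, the timers above fire only once because a timer event is one-shot per addTimer(); re-arming it inside its own callback makes it periodic. A sketch, assuming the pecl event extension:
$base = new EventBase();
$delay = 2;
$event = Event::timer($base, function ($delay) use (&$event) {
    echo "$delay seconds elapsed\n";
    $event->addTimer($delay); // re-arm: schedule the next run instead of delTimer()
}, $delay);
$event->addTimer($delay);
$base->loop(); // blocks and keeps dispatching while a timer is pending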

Network Monitoring

Image: http://i40.tinypic.com/2hodx55.png
I have built a network interface monitor using PHP and SNMP, but when I execute it on localhost I see my graph drop back to the origin (0) again and again (please see the image), and the speed on the Y axis is also wrong; at times it goes into the millions.
Please can anyone tell me what the problem is in the code below?
<?php
$int = "wlan0";
session_start();
$rx0 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.10.3');
$tx0 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.16.3');
sleep(5);
$rx1 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.10.3');
$tx1 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.16.3');
$rx0 = substr($rx0, 11); // strip the "Counter32: " prefix
$tx0 = substr($tx0, 11);
$rx1 = substr($rx1, 11);
$tx1 = substr($tx1, 11);
$tBps = $tx1 - $tx0;
$rBps = $rx1 - $rx0;
$round_rx = $rBps;
$round_tx = $tBps;
$time = date("U") . "000"; // milliseconds, for the JS charting library
$_SESSION['rx'][] = "[$time, $round_rx]";
$_SESSION['tx'][] = "[$time, $round_tx]";
$data['label'] = $int;
$data['data'] = $_SESSION['rx'];
if (count($_SESSION['rx']) > 60)
{
    $x = min(array_keys($_SESSION['rx']));
    unset($_SESSION['rx'][$x]);
}
echo '{"label":"' . $int . '","data":[' . implode(",", $_SESSION['rx']) . ']}';
?>
What you are seeing here is a classic case of polling a counter faster than its refresh interval. It is often the case that counters (in this case, interface counters) are updated every few seconds (10-15 seconds is a common value).
If the counter updates every 15 seconds, and you ask for data every 5 seconds, then you will receive the same value once or twice in a row (depending on latency, processing time, etc.). If you receive the same value twice, then you will see a zero value for the delta (which is what your image shows).
There are two ways to get around this:
Ask for data less frequently than the counters are updated (30-second polling usually works fine); see the sketch after this list. Obviously, if you can find out the exact refresh interval, then you can use that.
Modify the configuration of your equipment to refresh its counters faster. Sometimes this is possible, sometimes it is not; it just depends on the manufacturer, the software, and what has been implemented.
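For the first approach, a rough sketch of a safer rate calculation: sample slower than the refresh interval, divide by the real elapsed time, and allow for a 32-bit counter wrap:
$oid = '.1.3.6.1.2.1.2.2.1.10.3'; // same ifInOctets OID as in the question
$t0 = microtime(true);
$c0 = (int) substr(snmpget('localhost', 'public', $oid), 11);
sleep(30); // poll no faster than the counter refresh interval
$t1 = microtime(true);
$c1 = (int) substr(snmpget('localhost', 'public', $oid), 11);
$delta = $c1 - $c0;
if ($delta < 0) {
    $delta += 4294967296; // the 32-bit counter wrapped around
}
$bytesPerSec = $delta / ($t1 - $t0);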
For Net-SNMP "snmpd" daemons, you can walk NET-SNMP-AGENT-MIB::nsCacheTable (1.3.6.1.4.1.8072.1.5.3) for more information about its internal caching of counters.
For example:
snmpwalk -v2c -cpublic localhost 1.3.6.1.4.1.8072.1.5.3 | grep .1.3.6.1.2.1.2.2
NET-SNMP-AGENT-MIB::nsCacheTimeout.1.3.6.1.2.1.2.2 = INTEGER: 3
NET-SNMP-AGENT-MIB::nsCacheStatus.1.3.6.1.2.1.2.2 = INTEGER: cached(4)
Here, you can see that my particular box is caching IF-MIB::ifTable (.1.3.6.1.2.1.2.2), which is the table you're using, every three seconds. In my case, I would not ask for data any more often than every three seconds. NET-SNMP-AGENT-MIB::nsCacheTimeout (.1.3.6.1.4.1.8072.1.5.3.1.2) is marked as read-write, so you might be able to issue an SNMP "set" command to change the caching duration.
