PHP infinite loop launching different subroutines every x, y, z minutes

I'm looking for a PHP example / extension that can launch the execution of different PHP subroutines at a regular period (specific to each routine).
For my smart-home automation, I have a list of sensors defined in a MySQL table:
id tinyint(4) Auto increment
nom char(50) NULL
zone_id tinyint(4) NULL
grandeur_id tinyint(4) NULL
adresse_source varchar(255) NULL
polling_time smallint(6) NULL [5]
label_unite varchar(15) NULL
description varchar(255) NULL
Each sensor (URL given in the field adresse_source) should be read at an interval ranging from 1 minute to 1 day, and the reading recorded in a data table.
We can have sensor A read every minute, sensor B every 5 minutes, etc.
The read procedure is the same for all sensors.
I was planning to create different versions of this PHP script for the different time periods, with multiple cron entries, but this is not convenient and is heavy to manage.
Is there a PHP feature for interrupt management, where in an infinite loop a timer event would trigger the read & write of each sensor depending on its polling_time?

For a crude solution, given the timings range (1 minute to 1 day), you could get away with a simple loop.
If your loop runs reliably within a few seconds, you can check the PHP time (time() returns seconds as an integer), rounded to the nearest minute by dividing by 60. You can then take this value modulo the relevant interval expressed in minutes.
If the result is 0, that event can be triggered.
If the number of sensors isn't too large, it might help to read them in once (before your loop) and store them in an associative array keyed on the interval.
$sensors = array();
// ... while fetching each row from the sensors table:
while ($row = fetch_row(/* ... */)) {
    $interval = $row['polling_time'];
    $sensors[$interval][] = $row;
}
...
$prevtime = intval(time() / 60); // minutes since the epoch
while (true)
{
    $thistime = intval(time() / 60);
    if ($thistime == $prevtime)
    {
        sleep(1);
        continue;
    }
    $prevtime = $thistime;
    foreach ($sensors as $interval => $intsensors)
    {
        if (($thistime % intval($interval)) == 0)
        { // this interval has been matched to the current time - so get the sensors
            foreach ($intsensors as $this_sensor)
            {
                extract($this_sensor);
                // do what you like with this sensor's details.
            }
        }
    }
    sleep(10); // or however many seconds you need to make your loop work smoothly
}
... etc
Like I said - crude but may work OK - given your (in computing terms) reasonably large timescales.
Cheers

Interrupt management? This is low-level stuff, and you don't even say what OS it's running on. Interrupts must be handled by the kernel - which can in theory invoke userspace programs - but it's a very roundabout way of doing things in any language. Your mention of cron makes me believe that you don't really mean interrupts at all.
On a POSIX system you could use pcntl_signal(), [u]sleep(), and pcntl_alarm() to schedule running of code. Actually, it's possible to do it with just usleep() and an event queue.
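To illustrate, here is a minimal sketch of the alarm approach, assuming the pcntl extension is available (CLI on POSIX); poll_due_sensors() is a hypothetical stand-in for your own read-and-record routine:
<?php
declare(ticks=1); // let PHP check for pending signals between statements

$tick = 60; // wake up once a minute

pcntl_signal(SIGALRM, function () use ($tick) {
    poll_due_sensors(); // hypothetical: read every sensor whose interval is due
    pcntl_alarm($tick); // re-arm the alarm for the next tick
});

pcntl_alarm($tick); // schedule the first SIGALRM

while (true) {
    sleep(3600); // sleep() returns early whenever a signal arrives
}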
If it were me, I'd use a scheduling engine like Nagios (there are complications with high frequency cron jobs) to invoke the PHP scripts rather than letting PHP act as the controller.

I tried with the Event PHP extension, but it executes each event only once and then nothing else.
Maybe an idea?
$base = new EventBase();
$delay1 = 2;
$delay2 = 10;

$event1 = Event::timer($base, function ($delay1) use (&$event1) {
    // subroutine event1
    echo "$delay1 seconds elapsed event 1\n";
    $event1->delTimer();
}, $delay1);
$event1->addTimer($delay1);

$event2 = Event::timer($base, function ($delay2) use (&$event2) {
    // subroutine event2
    echo "$delay2 seconds elapsed event 2\n";
    $event2->delTimer();
}, $delay2);
$event2->addTimer($delay2);

$base->loop();
while (1);
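For what it's worth, the one-shot behaviour comes from the callbacks calling delTimer() and never re-arming the event: once both timers have fired, nothing is pending, $base->loop() returns, and the trailing while (1); just spins. A sketch of a repeating timer with the same pecl event extension, re-arming itself from inside its own callback:
$base = new EventBase();

$delay = 2;
$event = Event::timer($base, function ($delay) use (&$event) {
    echo "$delay seconds elapsed\n"; // the subroutine goes here
    $event->addTimer($delay);        // re-arm: schedule the next run
}, $delay);
$event->addTimer($delay);

$base->loop(); // blocks as long as at least one event is pending; no busy-wait needed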

Related

How to get web server average load level in percentages using PHP function sys_getloadavg()?

The PHP function sys_getloadavg() returns an array with three values showing the average number of processes in the system run queue over the last 1, 5, and 15 minutes, respectively.
How can these values be converted to percentages?
Percentages are relative measurement units, which means we must know a range, or the minimum and maximum values, of the measured quantity. sys_getloadavg() evaluates the performance of the whole system, not the load of a single CPU or the usage of memory, file system, or database. It returns floats showing how many processes were in the run queue over the last interval of time.
I did some experiments with my MacBook Pro (8 CPU cores) and PHP 7.0 to figure out the range of values produced by sys_getloadavg(). I got average figures between 1.3 and 3.2. When I ran a video-conversion program in parallel, the maximum jumped up to 18.9. By the way, in none of these cases did I notice substantial losses in web-page loading speed, which means the system as a whole was not overloaded.
Let's take as 100% system load the situation where a web page doesn't load in a reasonable time, say 10 seconds. I don't know what values sys_getloadavg() would return in that case, but I think it would be something big.
My solution is very simple. Let's measure the system's average load level and persistently store the results as minimum and maximum values. When the system runs faster or slower, we update the min and max with the new values. So our program will 'learn' the system and become more and more precise. The value for the last minute is then compared with the stored range and converted to percent as (loadavg - min) / ((max - min) / 100):
$performance = sys_getloadavg();
$rangeFile = 'sys_load_level.txt';

// file() returns false (it does not throw) when the file is missing
$range = @file($rangeFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
if ($range !== false && count($range) >= 2) {
    $all = array_merge($performance, array_map('floatval', $range));
    $min = min($all);
    $max = max($all);
    // widen the stored range if the new measurements fall outside it
    if ($range[0] > $min || $range[1] < $max) {
        file_put_contents($rangeFile, $min . PHP_EOL . $max . PHP_EOL);
    }
} else {
    // first run: seed the range file with the current measurements
    $min = min($performance);
    $max = max($performance);
    file_put_contents($rangeFile, $min . PHP_EOL . $max . PHP_EOL);
}

// guard against a zero-width range on the very first runs
$level = ($max > $min)
    ? intval(($performance[0] - $min) / (($max - $min) / 100.0))
    : 0;
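A quick usage sketch of the resulting $level, e.g. for shedding work when the learned level gets high (the 80% threshold is an arbitrary choice):
if ($level > 80) {
    http_response_code(503); // ask clients to retry later under heavy load
    exit;
}
echo "Current system load: {$level}%";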

How to get out of the loop if the PHP script takes more than 53 seconds to execute

In short: how do I break out of the loop once it has been running for more than 53 seconds? Something like this:
while (true)
{
    // do some work here, which will take around 1-6 seconds
    if (this loop has been running for more than 53 seconds)
        break;
}
If you want to know why I want to do this:
OK, this is what I am doing: I am copying data from 7 JSON pages on the web and inserting it into my MySQL database.
The code is something like this:
$start_time = microtime(TRUE); // helps to count PHP script execution time
// connect to database
// $number = get last inserted number from database
while (there are more numbers to process)
{
    // open the 7 JSON file links like this - "example.com/$number?api=xxxyyzz"
    // use prepared PDO statements to insert the data into MySQL
}
// Code to count PHP script execution time
$end_time = microtime(TRUE);
$time_taken = $end_time - $start_time;
$time_taken = round($time_taken,5);
echo '<p>Page generated in '.$time_taken.' seconds.</p>';
So in my case, it takes around 5.2 seconds to complete one whole loop iteration adding all the data. Some JSON files are empty, though, so an iteration takes only 1.4 seconds for those.
Like that, I want to complete millions of iterations (adding data from millions of JSON files). If my code runs 24/7, it will take me about a month to complete the task.
But after the code runs for 90 seconds, I get a timed-out error.
I am using a CRON job to do the task, and it looks like the server gives the same error to the CRON job.
So I want the CRON job to run every 1 minute, so that I do not get the timed-out error.
But I am afraid of this: what if the script has added data for half the rows when the minute is up, and never adds the other half? Then, on starting the new minute, the code would start from the next $number.
So I want to break; out of the loop after 53 seconds (if the code starts another iteration at 52 seconds, it would break at the end of it, which would be around 58-59 seconds).
I mean, I will put the break; just before the loop's end (before }), so I never exit the loop while data has been inserted for only half of the rows.
I guess that your PHP max_execution_time is equal to 90 seconds. You can change max_execution_time with set_time_limit(), but I don't think it is a good approach for this.
Have a try with pcntl or pthreads; it could save you a lot of time.
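For the literal break-after-53-seconds requirement, though, an elapsed-time check at the bottom of the loop is enough. A sketch, where process_one_number() is a hypothetical stand-in for fetching the JSON pages and inserting the rows:
$start_time = microtime(true);
$deadline   = 53; // seconds; leaves headroom below the 90-second limit

while (true) {
    process_one_number(); // hypothetical: fetch the 7 JSON pages, insert via PDO

    // check the clock only after a full iteration, so a batch of rows
    // is never left half-inserted by the break
    if (microtime(true) - $start_time > $deadline) {
        break;
    }
}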

Network Monitoring

Image : http://i40.tinypic.com/2hodx55.png
I have built a network interface monitor using PHP and SNMP, but when I execute it on localhost I see my graph drop to the origin (0) again and again (please see the image), and the speed on the Y axis is also wrong; at times it goes into the millions.
Please can anyone tell me what the problem is in the code below?
<?php
$int = "wlan0";
session_start();

$rx0 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.10.3');
$tx0 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.16.3');
sleep(5);
$rx1 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.10.3');
$tx1 = snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.16.3');

// strip the "Counter32: " prefix from the SNMP responses
$rx0 = substr($rx0, 11);
$tx0 = substr($tx0, 11);
$rx1 = substr($rx1, 11);
$tx1 = substr($tx1, 11);

$tBps = $tx1 - $tx0;
$rBps = $rx1 - $rx0;
$round_rx = $rBps;
$round_tx = $tBps;

$time = date("U") . "000"; // milliseconds for the JS charting library
$_SESSION['rx'][] = "[$time, $round_rx]";
$_SESSION['tx'][] = "[$time, $round_tx]";

$data['label'] = $int;
$data['data'] = $_SESSION['rx'];

if (count($_SESSION['rx']) > 60)
{
    $x = min(array_keys($_SESSION['rx']));
    unset($_SESSION['rx'][$x]);
}

// implode(glue, pieces) - the reversed argument order in the original is deprecated
echo '{"label":"' . $int . '","data":[' . implode(",", $_SESSION['rx']) . ']}';
?>
What you are seeing here is a classic case of polling a counter faster than its refresh interval. It is often the case that counters (in this case, interface counters) are updated every few seconds (10-15 seconds is a common value).
If the counter updates every 15 seconds, and you ask for data every 5 seconds, then you will receive the same value once or twice in a row (depending on latency, processing time, etc.). If you receive the same value twice, then you will see a zero value for the delta (which is what your image shows).
There are two ways to get around this:
Ask for data less frequently than the counters are updated (30-second polling usually works fine). Obviously, if you can find out the exact refresh interval, then you can use that.
Modify the configuration of your equipment to refresh its counters faster. Sometimes this is possible, sometimes it is not; it just depends on the manufacturer, the software, and what has been implemented.
For Net-SNMP "snmpd" daemons, you can walk NET-SNMP-AGENT-MIB::nsCacheTable (1.3.6.1.4.1.8072.1.5.3) for more information about its internal caching of counters.
For example:
snmpwalk -v2c -cpublic localhost 1.3.6.1.4.1.8072.1.5.3 | grep .1.3.6.1.2.1.2.2
NET-SNMP-AGENT-MIB::nsCacheTimeout.1.3.6.1.2.1.2.2 = INTEGER: 3
NET-SNMP-AGENT-MIB::nsCacheStatus.1.3.6.1.2.1.2.2 = INTEGER: cached(4)
Here, you can see that my particular box is caching IF-MIB::ifTable (.1.3.6.1.2.1.2.2), which is the table that you're using, every three seconds. In my case, I would not ask for data any more often than every three seconds. NET-SNMP-AGENT-MIB::nsCacheTimeout (.1.3.6.1.4.1.8072.1.5.3.1.2) is marked as read-write, so you might be able to issue an SNMP "set" command to change the caching duration.
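One more detail worth fixing in the script above: it labels a 5-second delta as a per-second rate, and a Counter32 eventually wraps at 2^32. A sketch that polls more slowly, divides by the measured elapsed time, and handles a wrap (same OIDs as in the question):
<?php
$pollInterval = 30; // stay slower than the agent's counter refresh

$t0  = microtime(true);
$rx0 = (int) substr(snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.10.3'), 11);
sleep($pollInterval);
$t1  = microtime(true);
$rx1 = (int) substr(snmpget('localhost', 'public', '.1.3.6.1.2.1.2.2.1.10.3'), 11);

$delta = $rx1 - $rx0;
if ($delta < 0) {
    $delta += 4294967296; // the 32-bit counter wrapped around
}
echo round($delta / ($t1 - $t0)) . " bytes/sec\n";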

Inefficient MySQL database/statements slowing PHP/JS system down

I have developed a fairly ramshackle PHP/JavaScript Reward system for my school's VLE.
The main bulk of the work is done on a transactions table which has the following fields:
Transaction_ID, Datetime, Giver_ID, Recipient_ID, Points, Category_ID, Reason
The idea is that if I, as a member of staff, give a student some Reward Points, an entry such as this is inserted into the database:
INSERT INTO `transactions` (`Transaction_ID`, `Datetime`, `Giver_ID`, `Recipient_ID`, `Points`, `Category_ID`, `Reason`) VALUES
(60332, '2012-02-22', 34985, 137426, 5, 5, 'Excellent volcano homework.');
This worked fine - but I didn't really consider just how much the system would be used. I now have over 72,000 transactions in this table.
As such, a few of my pages are starting to slow down. For instance, when staff try to allocate points, my system runs a command to get the member of staff's total point allocation and other snippets of information. This appears to be displaying rather slowly, and looking at the MySQL statement/accompanying PHP code, I think this could be much more efficient.
function getLimitsAndTotals($User_ID) {
    $return["SpentTotal"] = 0;
    $return["SpentWeekly"] = 0;
    $sql = "SELECT *
            FROM `transactions`
            WHERE `Giver_ID` = $User_ID";
    $res = mysql_query($sql);
    if (mysql_num_rows($res) > 0) {
        while ($row = mysql_fetch_assoc($res)) {
            $return["SpentTotal"] += $row["Points"];
            $transaction_date = strtotime($row["Datetime"]);
            if ($transaction_date > strtotime("last sunday")) {
                $return["SpentWeekly"] += $row["Points"];
            }
        }
    }
    return $return;
}
As such, my question is twofold.
Can this specific code be optimised?
Can I employ any database techniques - full text indexing or the like - to further optimise my system?
EDIT: RE Indexing
I don't know anything about indexing, but it looks like my transactions table does actually have an index in place?
Is this the correct type of index?
Here is the code for table-creation:
CREATE TABLE IF NOT EXISTS `transactions` (
  `Transaction_ID` int(9) NOT NULL auto_increment,
  `Datetime` date NOT NULL,
  `Giver_ID` int(9) NOT NULL,
  `Recipient_ID` int(9) NOT NULL,
  `Points` int(4) NOT NULL,
  `Category_ID` int(3) NOT NULL,
  `Reason` text NOT NULL,
  PRIMARY KEY (`Transaction_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=74927 ;
Thanks in advance,
Make sure Giver_ID is indexed. Also try running the strtotime outside of your while loop, as I imagine it's an expensive operation to be running 72,000 times.
if (mysql_num_rows($res) > 0) {
    // assign the unix timestamp of last sunday here.
    $last_sunday = strtotime('last sunday');
    while ($row = mysql_fetch_assoc($res)) {
        $return["SpentTotal"] += $row["Points"];
        $transaction_date = strtotime($row["Datetime"]);
        if ($transaction_date > $last_sunday) {
            $return["SpentWeekly"] += $row["Points"];
        }
    }
}
Also consider selecting UNIX_TIMESTAMP(Datetime) AS datetime_timestamp in your SQL instead of getting the value out as a string and running another expensive strtotime operation on it. You can then simply run:
if (mysql_num_rows($res) > 0) {
    // assign the unix timestamp of last sunday here.
    $last_sunday = strtotime('last sunday');
    while ($row = mysql_fetch_assoc($res)) {
        $return["SpentTotal"] += $row["Points"];
        if ($row['datetime_timestamp'] > $last_sunday) {
            $return["SpentWeekly"] += $row["Points"];
        }
    }
}
(if your column is of type DATE, of course!)
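For completeness, the query feeding that loop would then look something like this (same interpolation style as the question, though a prepared statement would be safer):
$sql = "SELECT `Points`, UNIX_TIMESTAMP(`Datetime`) AS datetime_timestamp
        FROM `transactions`
        WHERE `Giver_ID` = $User_ID";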
72k rows is nothing... However, there could be other things here causing slowdowns, like your MySQL configuration (how much memory you allocated to the service), among others (do a search for optimizing MySQL).
Also look at how many INSERTs you have on a given page; that will typically slow down your page load. If you are doing an insert for every user action, you are making a costly transaction each time.
Typically a SELECT is much less expensive than an INSERT, etc.
From your SQL code I can also assume you didn't optimize your queries, something like:
$sql = "SELECT *
FROM `transactions`
WHERE `Giver_ID` =$User_ID";
Should be:
$sql = "SELECT Points, DateTime FROM ..."
(Not a big issue given your data, but it requests only WHAT YOU NEED, not everything, which would otherwise require more memory while going through the iteration.)
Also not sure how your schema is designed, but are you using indexing on Transaction_ID?
Can this specific code be optimized?
Yes.
But definitely not by "looking at the code".
Can I employ any database techniques - full text indexing or the like - to further optimise my system?
Nope, you can't.
Simply because you have no idea what exactly is slow in your system.
This is mere common sense, widely used in real life by everyone, but for some reason it gets completely forgotten the moment someone starts to program a web site.
Your question is like "my car is getting slow. How do I speed it up?". WHO ON EARTH CAN ANSWER IT, knowing nothing about the reason for the slowness? Is it the tyres? Or the gasoline? Or there is no road around but an open field? Or someone just forgot to release the hand brake?
You have to determine the reason for the slowness first.
Before asking a question you have to measure your query's runtime.
And if it's still fast, you should not blame it for the slowness of your site.
Instead, start profiling.
First of all you have to determine which part of the whole page makes it load slowly. It could be just some JavaScript loading from a third-party server.
So, start with the "Net" tab in Firebug and find the slowest part,
then proceed to it.
If it's your PHP code, use microtime(1) to measure different parts and spot the problem one.
Then come back with the question.
However, if I take your question not as a request for speed optimization but for more sanity in the SQL, there are some improvements that can be made.
Both numbers can be retrieved as single values:
SELECT SUM(Points) AS Total FROM `transactions` WHERE Giver_ID = $User_ID
will give you total points,
as well as
$sunday = date("Y-m-d", strtotime('last sunday'));
SELECT SUM(Points) AS Spent FROM `transactions`
WHERE Giver_ID = $User_ID AND `Datetime` > '$sunday'
will give you the weekly points.
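Both sums can even be fetched in a single round trip with conditional aggregation; a sketch using the same variables (assuming MySQL, where IF() can pick which rows contribute to each sum):
$sunday = date("Y-m-d", strtotime('last sunday'));
$sql = "SELECT SUM(Points) AS Total,
               SUM(IF(`Datetime` > '$sunday', Points, 0)) AS Spent
        FROM `transactions`
        WHERE `Giver_ID` = $User_ID";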

MySQL and PHP: Atomicity and re-entrancy of a PHP code block executing two subsequent queries - how dangerous?

In MySQL I have to check whether a select query has returned any records; if not, I insert a record. I am afraid, though, that the whole if-else operation in the PHP script is NOT as atomic as I would like, i.e. it will break in some scenarios, for example if another instance of the script is called that needs to work with the same record:
if (select returns at least one record)
{
    update record;
}
else
{
    insert record;
}
I did not use transactions here, and autocommit is on. I am using MySQL 5.1 with PHP 5.3. The table is InnoDB. I would like to know if the code above is suboptimal and will indeed break. I mean, suppose the same script is re-entered by two instances and the following query sequence occurs:
instance 1 attempts to select the record, finds none, enters the block for insert query
instance 2 attempts to select the record, finds none, enters the block for insert query
instance 1 attempts to insert the record, succeeds
instance 2 attempts to insert the record, fails, aborts the script automatically
Meaning that instance 2 will abort and return an error, skipping anything that follows the insert query statement. I could make the error non-fatal, but I don't like ignoring errors; I would much rather know if my fears are real here.
Update: what I ended up doing (is this OK for SO?)
The table in question assists in throttling (really allow/deny) the number of messages the application sends to each recipient. The system should not send more than X messages to a recipient Y within a period Z. The table is (conceptually) as follows:
create table throttle
(
    recipient_id integer unsigned unique not null,
    send_count integer unsigned not null default 1,
    period_ts timestamp default current_timestamp,
    primary key (recipient_id)
) engine=InnoDB;
And here is the block of (somewhat simplified/conceptual) PHP code that is supposed to perform an atomic transaction maintaining the right data in the table, allowing or denying the sending of a message depending on the throttle state:
function send_message_throttled($recipient_id) /// The 'Y' variable
{
    query('begin');
    query("select send_count, unix_timestamp(period_ts) from throttle where recipient_id = $recipient_id for update");
    $r = query_result_row();
    if ($r)
    {
        if (time() >= $r[1] + 60 * 60 * 24) /// The numeric offset is the length of the period, the 'Z' variable
        {   /// new period
            query("update throttle set send_count = 1, period_ts = current_timestamp where recipient_id = $recipient_id");
        }
        else
        {
            if ($r[0] < 5) /// Amount of messages allowed per period, the 'X' variable
            {
                query("update throttle set send_count = send_count + 1 where recipient_id = $recipient_id");
            }
            else
            {
                trigger_error('Will not send message, throttled down.', E_USER_WARNING);
                query('rollback');
                return 1;
            }
        }
    }
    else
    {
        query("insert into throttle(recipient_id) values($recipient_id)");
    }
    if (failed(send_message($recipient_id)))
    {
        query('rollback');
        return 2;
    }
    query('commit');
}
Well, disregarding the fact that InnoDB deadlocks can occur, this is pretty good, no? I am not pounding my chest or anything, but this is simply the best mix of performance and stability I could do, short of going with MyISAM and locking the entire table, which I don't want to do because of the more frequent updates/inserts vs. selects.
It seems like you already know the answer to the question, and how to solve your problem. It is a real problem, and you can use one of the following to solve it:
SELECT ... FOR UPDATE
INSERT ... ON DUPLICATE KEY UPDATE (see the sketch after this list)
transactions (don't use MyISAM)
table locks
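For the throttle table above, the INSERT ... ON DUPLICATE KEY UPDATE route collapses the select-then-insert-or-update dance into one atomic statement, since recipient_id is the primary key. A sketch using the question's query() helper (the period-reset logic would still need separate handling):
query("insert into throttle(recipient_id) values($recipient_id)
       on duplicate key update send_count = send_count + 1");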
This can and might happen depending on how often this page is executed.
The safe bet would be to use transactions. The same thing you wrote could still happen, except that you could safely check for the error inside the transaction (in case the insertion involves several queries and only the last insert breaks), allowing you to roll back the one that became invalid.
So (pseudo):
#start transaction
if (select returns at least one record)
{
    update record;
}
else
{
    insert record;
}
if (no constraint errors)
{
    commit; //ends transaction
}
else
{
    rollback; //ends transaction
}
You could lock the table as well, but depending on the work you're doing you'd have to get an exclusive lock on the entire table (you cannot SELECT ... FOR UPDATE non-existing rows, sorry), and that would also block reads from your table until you are finished.
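In concrete PHP terms, the pseudocode above might look like this with PDO, assuming the connection uses PDO::ERRMODE_EXCEPTION (table and column names taken from the question; array() syntax kept for PHP 5.3):
$pdo->beginTransaction();
try {
    $stmt = $pdo->prepare(
        'SELECT send_count FROM throttle WHERE recipient_id = ? FOR UPDATE');
    $stmt->execute(array($recipient_id));

    if ($stmt->fetch()) {
        $pdo->prepare('UPDATE throttle SET send_count = send_count + 1
                       WHERE recipient_id = ?')->execute(array($recipient_id));
    } else {
        $pdo->prepare('INSERT INTO throttle (recipient_id) VALUES (?)')
            ->execute(array($recipient_id));
    }
    $pdo->commit();
} catch (PDOException $e) {
    $pdo->rollBack(); // ends the transaction; nothing is left half-applied
    throw $e;
}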
