Server uptime script - PHP

I'm trying to build something for my website; to be specific, a script for tracking uptime.
I already have the reader, a script which reads the percentages from a table.

This is not a very efficient way to use a relational database. Instead, I would suggest (at least on the SQL side) the following:
CREATE TABLE `servers` (
`srv_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
-- Additional Fields Omitted here.
PRIMARY KEY (`srv_id`)
)
ENGINE = InnoDB;
CREATE TABLE `stats` (
`stat_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
`srv_id` INTEGER UNSIGNED NOT NULL,
`date` TIMESTAMP NOT NULL,
`uptime` INTEGER UNSIGNED NOT NULL,
PRIMARY KEY (`stat_id`)
)
ENGINE = InnoDB;
This way you can record as many measurements as you like, against as many servers as you like, and then use SQL either to delete old content, or to keep it and use WHERE clauses to filter the data shown in the interface that displays these stats.
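For example, the display script can compute the percentage with a simple aggregate, and old rows can be pruned in one statement (a sketch; the server id and the 0-or-1 `uptime` convention are assumptions):

-- Uptime for server 3 over the last 30 days, assuming each probe
-- stored 1 (up) or 0 (down) in `uptime`.
SELECT AVG(`uptime`) * 100 AS uptime_percent
FROM `stats`
WHERE `srv_id` = 3
  AND `date` >= NOW() - INTERVAL 30 DAY;

-- Prune measurements older than a year.
DELETE FROM `stats` WHERE `date` < NOW() - INTERVAL 1 YEAR;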

$day = (int)(strftime("%j") % 5); // day of year modulo 5 selects one of the five day columns
$key = 'day' . $day;
if ($stats_row[$key] == 0) {
    if ($checkls && $checkgs) { // if the server is online, update the percentage
        // every 7.2 minutes add 0.5 percent
        mysql_query("UPDATE s_stats SET {$key}=" . ($stats_row[$key] + 0.5) . " WHERE srv_id=" . $r['id']) or die(mysql_error());
    } else {
        echo "error day $day";
    }
}
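If you adopt the two-table schema above, the probe script shrinks to one INSERT per run; a sketch (keeping the legacy mysql_* API only to match the snippet, and assuming $checkls, $checkgs and $r come from the surrounding code):

// record one sample per run: 1 if both checks passed, else 0
$up = ($checkls && $checkgs) ? 1 : 0;
mysql_query("INSERT INTO stats (srv_id, date, uptime) VALUES (" . (int)$r['id'] . ", NOW(), " . $up . ")") or die(mysql_error());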

In a 1-1 relationship, why is my insert inserting two records in two tables?

As the title says, I'm having trouble when I INSERT a record into a table that has a 1-1 relationship with another.
First things first, the SQL code that generates the tables:
DROP TABLE IF EXISTS Facebook_Info;
DROP TABLE IF EXISTS Conversations;
CREATE TABLE IF NOT EXISTS Conversations(
c_id INT AUTO_INCREMENT NOT NULL,
c_start TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
channel ENUM('desktop', 'facebook'),
u_name VARCHAR(20) DEFAULT NULL,
u_email VARCHAR(50) DEFAULT NULL,
PRIMARY KEY(c_id)
);
CREATE TABLE IF NOT EXISTS Facebook_Info (
c_id INT AUTO_INCREMENT NOT NULL,
f_id INT(12) NOT NULL,
PRIMARY KEY(c_id),
FOREIGN KEY(c_id) REFERENCES Conversations(c_id)
);
I assure you this code works: I tested it. I hope this is the best way to provide a 1-1 relationship between Conversations and Facebook_Info.
In any case, now I can introduce you to my nightmare: I'm trying to insert a new record into Conversations via PHP (procedural style).
public function create_new_id_conv($channel = 1) {
    $w_ch = '';
    if ($channel == 2) {
        $w_ch = 'facebook';
    } else {
        $w_ch = 'desktop';
    }
    $query = "INSERT INTO Conversations (c_id, c_start, channel) VALUES (NULL, CURRENT_TIMESTAMP, '$w_ch')";
    $conn = mysqli_connect("localhost", Wrapper::DB_AGENT, Wrapper::DB_PSW, Wrapper::DB_NAME);
    $res = mysqli_query($conn, $query); // returns true/false for an INSERT, so there is no result set to free
    $id_conv = mysqli_insert_id($conn);
    return $id_conv;
}
The Wrapper::* constants are all set correctly, and an INSERT operation does happen, but not only one! This is the situation after I call this function (screenshots of the Conversations and Facebook_Info contents are omitted here; the striking value was an f_id stored as 2147483647).
What's happening?
I searched and searched...
Then I started to think about what I was getting here: 2147483647. What does this number represent? It is in fact the maximum value of a signed 32-bit INT, which MySQL stores when a larger value is clamped to the column's range.
So what if my script and my queries were correct, and the mistake was in the skeleton of my tables?
I need to store a 14-digit integer, which is too large for the INT type (the 12 in INT(12) is only a display width and does not change the range). Using BIGINT for the f_id field made everything correct and working!
Hope my mistake helps someone!
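For the record, the fix is a one-line column change, something like:

-- A 14-digit id overflows a 32-bit INT (max 2147483647),
-- so widen the column to a 64-bit BIGINT.
ALTER TABLE Facebook_Info MODIFY f_id BIGINT NOT NULL;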

Writing a better loop

I'm creating a script that will search the database and look for customers who are within a Realtor's latitude and longitude boundary range. If the customer's lat and long coordinates are within the range of a Realtor's lat and long boundaries, the script should email only the Realtor in that customer's range. I'm using a CRON job to run the PHP script. I got the script to email each person in range of the Realtors, but when a third Realtor is entered into the database, the email goes to the third Realtor even though the lat and long are out of range.
How do I write a better loop where each row is checked for whether the client is in range of that Realtor, and only that Realtor is emailed? Thanks.
Here is my SQL code.
CREATE TABLE `realtors` (
`rid` int(11) NOT NULL AUTO_INCREMENT,
`rEmail` varchar(255) NOT NULL,
`rZipCode` int(10) NOT NULL,
`rDist` int(11) NOT NULL,
`rlatitude` numeric(30,15) NOT NULL,
`rlongitude` numeric(30,15) NOT NULL,
PRIMARY KEY (`rid`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
CREATE TABLE `customers` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`eMail` varchar(255) NOT NULL,
`zipCode` int(11) NOT NULL,
`clatitude` numeric(30,15) NOT NULL,
`clongitude` numeric(30,15) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
Here is my php code.
<?php
use geocodeloc\GeoLocation as GeoLocation;
require_once 'geocodeloc/GeoLocation.php';
//require_once 'phpmailer/PHPMailerAutoload.php';
$db = getDB();

// database prep for customers
$cust = $db->prepare("SELECT fullName, eMail, clatitude, clongitude FROM customers ORDER BY id DESC");
$cust->bindParam("fullName", $fullName, PDO::PARAM_STR);
$cust->bindParam("zipCode", $zipCode, PDO::PARAM_STR);
$cust->bindParam("eMail", $email, PDO::PARAM_STR);
$cust->bindParam("clatitude", $clatitude, PDO::PARAM_STR);
$cust->bindParam("clongitude", $clongitude, PDO::PARAM_STR);
$cust->execute();
$cust->rowCount();

// database prep for realtors
$realt = $db->prepare("SELECT rEmail, rDist, rlatitude, rlongitude FROM realtors ORDER BY rid DESC");
$realt->bindParam("rZipCode", $rZipCode, PDO::PARAM_STR);
$realt->bindParam("rEmail", $rEmail, PDO::PARAM_STR);
$realt->bindParam("rDist", $rDist, PDO::PARAM_STR);
$realt->bindParam("rlatitude", $rlatitude, PDO::PARAM_STR);
$realt->bindParam("rlongitude", $rlongitude, PDO::PARAM_STR);
$realt->execute();
$realt->rowCount();

$i = -1;
while ($realtor_row = $realt->fetch(PDO::FETCH_ASSOC) AND $customers_row = $cust->fetch(PDO::FETCH_ASSOC)) {
    $i++;
    $realtLatLong = GeoLocation::fromDegrees($realtor_row['rlatitude'], $realtor_row['rlongitude']);
    $coordinates = $realtLatLong->boundingCoordinates($realtor_row['rDist'], 'miles');
    // look to see if the customer's latitude and longitude is within range of the realtor's lat and long boundaries
    if ($customers_row['clatitude'] && $customers_row['clongitude'] <= $coordinates) {
        // email the realtor
        // the message
        $msgBody = "This is a test";
        // use wordwrap() if lines are longer than 70 characters
        $msgBody = wordwrap($msgBody, 70);
        $Mailto = $realtor_row['rEmail'];
        $FromName = $customers_row['fullName'];
        // send email
        mail($Mailto, $FromName, $msgBody);
    } else {
        // send to debug log
    }
}
?>
Looping through the entire result set and doing the calculations is going to kill your database very quickly. Looping through one table and then looping through another to do a distance comparison is going to kill it even faster. Luckily, this is a reinvention of the wheel: MySQL has built-in functionality for this by way of ST_Distance.
SELECT * FROM realtors INNER JOIN customers WHERE ST_Distance(customers.loc, realtors.loc) < 10; /* distance in degrees */
where one degree is approximately 111 kilometres. You would need to change your tables as follows:
CREATE TABLE `realtors` (
`rid` int(11) NOT NULL AUTO_INCREMENT,
`rEmail` varchar(255) NOT NULL,
`rZipCode` int(10) NOT NULL,
`rDist` int(11) NOT NULL,
`loc` point NOT NULL,
PRIMARY KEY (`rid`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
CREATE TABLE `customers` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`eMail` varchar(255) NOT NULL,
`zipCode` int(11) NOT NULL,
`loc` POINT not null,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
Of course, this requires MySQL 5.7.
Using a spatial data type means that you can use an index for spatial lookups. In an RDBMS, if a table contains N rows, having an index means you do not need to check all N rows to find a result. Thus, using spatial data plus an index, you can avoid the N×M time complexity you would have with lat/lng in separate columns.
No matter how fast you make your code, without that the complexity will still be N×M.
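Putting the pieces together, a sketch (assuming the loc columns store POINT(longitude, latitude) and that rDist is a radius in miles; ST_Distance_Sphere needs MySQL 5.7.6+ and returns metres):

-- Spatial indexes require NOT NULL columns; they accelerate
-- bounding-box predicates such as MBRContains.
ALTER TABLE realtors  ADD SPATIAL INDEX (loc);
ALTER TABLE customers ADD SPATIAL INDEX (loc);

-- Pair each customer with every realtor whose radius covers them
-- (1 mile = 1609.34 metres).
SELECT c.eMail, r.rEmail
FROM customers c
JOIN realtors r
  ON ST_Distance_Sphere(c.loc, r.loc) <= r.rDist * 1609.34;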
The first thing you should do is create a relationship between Customer and Realtor, i.e. a table holding Customer.id and Realtor.id pairs. Take the hit the first time you populate this table (no need to change your code). After that, you only need to create a relationship every time a Customer or a Realtor is added.
When it's time to send your email, you just need to look at the relationship table.
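A minimal sketch of such a relationship table (the table name and the ON DELETE behaviour are illustrative):

CREATE TABLE `realtor_customers` (
  `rid` int(11) NOT NULL,  -- references realtors.rid
  `id`  int(11) NOT NULL,  -- references customers.id
  PRIMARY KEY (`rid`, `id`),
  FOREIGN KEY (`rid`) REFERENCES `realtors` (`rid`) ON DELETE CASCADE,
  FOREIGN KEY (`id`)  REFERENCES `customers` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

The CRON job then joins through this table instead of recomputing distances on every run.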

Speed up MySQL Query + PHP

I want to speed up this code. It is the query that takes the time: if I change the number of rows returned from 100 to 10, it takes almost the same amount of time (about 2 seconds). The GET parameters are based on user sort/search input. How do I improve the speed of this? The item table has about 2,374,744 rows, and the bot table about 20 rows.
$bot = " && user_items_new.bot_id != '0'";
if ($_GET['bot'] != 0) {
    $bot = " && user_items_new.bot_id='".$_GET['bot']."'";
}
$name = '';
if (strlen($_GET['name']) > 0) {
    $name = " && user_items_new.name LIKE '%".$_GET['name']."%'";
}
$min = '';
if (strlen($_GET['min']) > 0) {
    $min = " && steam_price >= '".$_GET['min']."'";
}
$max = '';
if (strlen($_GET['max']) > 0) {
    $max = " && steam_price <= '".$_GET['max']."'";
}
$order = '';
if ($_GET['order'] == 'price_desc') {
    $order = "ORDER BY steam_price DESC, user_items_new.name ASC";
} elseif ($_GET['order'] == 'price_asc') {
    $order = "ORDER BY steam_price ASC, user_items_new.name ASC";
} elseif ($_GET['order'] == 'name_desc') {
    $order = "ORDER BY user_items_new.name DESC";
} else {
    $order = "ORDER BY user_items_new.name ASC";
}
$limit = $_GET['start'];
$limit .= ', 100';
$i = 0;
$sql = mysql_query("SELECT user_item_id, user_items_new.bot_id AS item_bot_id, sticker, `key`, `case`, exterior, stattrak, image, user_items_new.name AS item_name, steam_price, color, bots_new.bot_id, bots_new.name AS bot_name, withdraw_enabled FROM user_items_new LEFT JOIN bots_new ON user_items_new.bot_id=bots_new.bot_id WHERE steam_price > '0.1' && deposit_start='0' && deposited='0' && user_id='0' && withdraw_enabled='1' ".$bot." ".$name." ".$min." ".$max." ".$order." LIMIT ".$limit) or die(mysql_error());
while ($item = mysql_fetch_assoc($sql)) {
    //...
}
The item table looks like this (dumped from phpMyAdmin):
CREATE TABLE IF NOT EXISTS `user_items_new` (
`user_item_id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`bot_id` int(11) NOT NULL,
`item_original_id` varchar(22) NOT NULL,
`item_real_id` varchar(22) NOT NULL,
`class_id` varchar(22) NOT NULL,
`weapon_id` int(11) NOT NULL,
`name` text NOT NULL,
`image` text NOT NULL,
`case` int(11) NOT NULL,
`key` int(11) NOT NULL,
`sticker` int(11) NOT NULL,
`capsule` int(11) NOT NULL,
`holo` int(11) NOT NULL,
`name_tag` int(11) NOT NULL,
`access_pass` int(11) NOT NULL,
`stattrak` int(11) NOT NULL,
`color` varchar(32) NOT NULL,
`exterior` text NOT NULL,
`steam_price` double NOT NULL,
`deposited` int(11) NOT NULL,
`deposit_start` int(11) NOT NULL
) ENGINE=InnoDB AUTO_INCREMENT=5219079 DEFAULT CHARSET=utf8;
ALTER TABLE `user_items_new`
ADD PRIMARY KEY (`user_item_id`), ADD KEY `user_id` (`user_id`), ADD KEY `bot_id` (`bot_id`);
ALTER TABLE `user_items_new`
MODIFY `user_item_id` int(11) NOT NULL AUTO_INCREMENT,AUTO_INCREMENT=5219079;
And then the bot table:
CREATE TABLE IF NOT EXISTS `bots_new` (
`bot_id` int(11) NOT NULL,
`name` varchar(64) NOT NULL,
`username` varchar(64) NOT NULL,
`password` varchar(64) NOT NULL,
`deposit_enabled` int(11) NOT NULL,
`withdraw_enabled` int(11) NOT NULL,
`ident` varchar(32) NOT NULL
) ENGINE=MyISAM AUTO_INCREMENT=19 DEFAULT CHARSET=utf8;
ALTER TABLE `bots_new`
ADD PRIMARY KEY (`bot_id`);
Edit (adding prettyprinted SELECT)
SELECT user_item_id, user_items_new.bot_id AS item_bot_id, sticker,
`key`, `case`, exterior, stattrak, image, user_items_new.name AS item_name,
steam_price, color, bots_new.bot_id, bots_new.name AS bot_name,
withdraw_enabled
FROM user_items_new
LEFT JOIN bots_new ON user_items_new.bot_id=bots_new.bot_id
WHERE user_items_new.bot_id != '0' && deposit_start='0' && deposited='0' && user_id='0' && withdraw_enabled='1'
ORDER BY user_items_new.name ASC
LIMIT , 100
How to speed this up...
Firstly, add a composite index on the columns that have predicates with equality comparisons first, e.g.
... ON user_items_new (user_id,deposited,deposit_start)
This will be of benefit if the predicates are filtering out a large number of rows. For example, if less than 10% of the rows satisfy the condition user_id = 0.
As an aside, the predicate withdraw_enabled='1' will negate the "outerness" of the LEFT JOIN. The result from the query will be equivalent if the keyword LEFT is omitted.
Another issue is that the ORDER BY will cause a "Using filesort" operation to sort the rows. The entire set will need to be sorted, before the LIMIT clause is applied. So we don't expect LIMIT 10 to be any faster than LIMIT 1000, apart from the additional time for the client to transfer an additional 990 rows. (The bit about sorting the entire set isn't entirely true; in some cases MySQL can abort the sort operation after identifying the first "limit" number of rows. But MySQL will still need to go through the entire set to get those first rows.)
It's possible that adding the column(s) in the ORDER BY clause to the index will let MySQL avoid the sort operation entirely. They would need to appear immediately after the columns referenced in the equality predicates, and the ORDER BY clause would need to list those same columns in the same order.
Assuming the current query includes:
...
WHERE ...
&& deposit_start='0' && u.deposited='0' && u.user_id='0' ...
...
ORDER BY steam_price ASC, user_items_new.name ASC
This index may be appropriate:
... ON user_items_new (user_id,deposited,deposit_start,steam_price,name)
The output from EXPLAIN will show whether that index is used for the query or not. Beyond the equality comparisons of the first three columns, MySQL can use a range scan operation on the index to satisfy the steam_price > predicate.
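Concretely, that amounts to something like the following (a sketch; the index name is arbitrary, and note that name is a TEXT column, so including it in the index would require a prefix length):

-- Equality columns first, then the range/sort column.
CREATE INDEX ix_items_lookup
  ON user_items_new (user_id, deposited, deposit_start, steam_price);

-- Check the plan: the index should appear under `key`, and ideally
-- "Using filesort" disappears from Extra.
EXPLAIN
SELECT user_item_id, steam_price
FROM user_items_new
WHERE user_id = 0 AND deposited = 0 AND deposit_start = 0
  AND steam_price > 0.1
ORDER BY steam_price;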
There's also the issue of the InnoDB buffer pool; how much memory is allocated to holding index and data pages in memory, to avoid storage i/o.
To avoid lookups to data pages in the underlying table, you can consider creating a covering index for the query. A covering index includes all of the columns referenced from the table, so the query can be satisfied entirely from the index. The EXPLAIN output will show "Using index" in the Extra column if the query is using a covering index. (But there are limits to the number of columns and the total row size in the index. This would most benefit the query when the table rows are large and the columns in the index are a small subset of the total row.)
With a table of that size, one of the simplest tricks you can use to optimize the query is to add indexes on the fields you use in the WHERE clause. This lets the optimizer work from presorted structures for the queries you run most often.
For example, you should see significant gains by doing:
ALTER TABLE user_items_new ADD INDEX (steam_price);
The data and the data types go a long way in determining the actual gains. Adding indexes on all fields will set you back on efficiency (every index must be maintained on writes), so more is not necessarily better.
Your query is slow because the query against the user_items_new table requires inspecting 1.2 million rows. While you have indexes on user_item_id, user_id, and bot_id, those can only filter your results so far.
You will want to add indexes on some of your data columns. Which indexes to add (and whether any of them should be composite) depends on the actual contents of the table and is difficult to recommend without more information.
You will want indexes on columns whose distinct values significantly reduce the data that must be examined; an index on withdraw_enabled, for example, is unlikely to gain much unless very few rows have withdraw_enabled = 1. An index on steam_price will be beneficial if very few of your rows have steam_price >= 0.1.
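A quick way to gauge that selectivity before adding anything (a sketch; MySQL treats the boolean expressions inside SUM() as 0/1):

SELECT
  COUNT(*)                AS total_rows,
  SUM(user_id = 0)        AS user_id_zero_rows,
  SUM(steam_price >= 0.1) AS priced_rows
FROM user_items_new;

The closer a count is to total_rows, the less an index on that column alone will help.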

MySQL INSERT IGNORE Adding 1 to Non-Indexed column

I'm building a small report in a PHP while loop.
The query I'm running inside the while() loop is this:
INSERT IGNORE INTO `tbl_reporting` SET datesubmitted = '2015-05-26', submissiontype = 'email', outcome = 0, totalcount = totalcount+1
I'm expecting the totalcount column to increment every time the query is run.
But the number stays at 1.
The UNIQUE index comprises the first 3 columns.
Here's the Table Schema:
CREATE TABLE `tbl_reporting` (
`datesubmitted` date NOT NULL,
`submissiontype` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL,
`outcome` tinyint(1) unsigned NOT NULL DEFAULT '0',
`totalcount` mediumint(5) unsigned NOT NULL DEFAULT '0',
UNIQUE KEY `datesubmitted` (`datesubmitted`,`submissiontype`,`outcome`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
When I modify the query into a regular UPDATE statement:
UPDATE `tbl_reporting` SET totalcount = totalcount+1 WHERE datesubmitted = '2015-05-26' AND submissiontype = 'email' AND outcome = 1
...it works.
Does INSERT IGNORE not allow adding numbers? Or is my original query malformed?
I'd like to use INSERT IGNORE; otherwise I'll have to query for the original record first, then insert or, if it exists, update.
Think about what you're doing:
INSERT .... totalcount=totalcount+1
To calculate totalcount+1, the DB has to retrieve the current value of totalcount... which doesn't exist yet, because you're CREATING a new record, and there is NO existing row to read the "old" value from. In an INSERT, referencing the column like this just yields its default value (0 here), which is why the stored value is 1; and when the row already exists, INSERT IGNORE skips the statement entirely instead of updating it.
E.g. you're trying to eat your cake before you've even gone to the store to buy the ingredients, let alone mixed and baked them.
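What the question is reaching for is an upsert. MySQL spells that INSERT ... ON DUPLICATE KEY UPDATE: it inserts the row when the UNIQUE key is new and increments the counter otherwise, with no prior SELECT needed. A sketch against the schema above:

INSERT INTO `tbl_reporting`
  (datesubmitted, submissiontype, outcome, totalcount)
VALUES
  ('2015-05-26', 'email', 0, 1)
ON DUPLICATE KEY UPDATE totalcount = totalcount + 1;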

Updating a column with 1 updates it with 2

I'm a bit confused. I have a website with a sort of user profiles. When a visitor hits a user page, I want to update the number of views for that date and user id. But no matter what I do, the number of views is incremented by 2 instead of 1. I've created a query log of all queries executed during a page request; the update query is correct, and only one update query is executed during the page request.
This is my data-structure:
CREATE TABLE `ProfileView` (
`Id` int(8) NOT NULL auto_increment,
`UserId` int(8) NOT NULL,
`Date` date NOT NULL,
`Views` int(8) NOT NULL,
PRIMARY KEY (`Id`),
KEY `UserId` (`UserId`,`Date`)
) ENGINE=MyISAM AUTO_INCREMENT=10 DEFAULT CHARSET=latin1;
No matter what I do, the column 'Views' is always incremented by 2 instead of 1.
The logic being executed (called from a controller, which is called from the view; Decorator is basically a sealed stdClass that enforces strict coding, because misspelled properties result in a PropertyDoesntExistException):
Workflow:
# user-details.php
$oControllerProfileView = new Controller_ProfileView();
$oControllerProfileView->Replace($iUserId);
---
# Controller.ProfileView.php
public function Replace($iUserId) {
    // validation
    Model_ProfileView::Replace($iUserId, date('Y-m-d'));
}
---
# Model.ProfileView.php
static public function Replace($iUserId, $sDate) {
    $oData = MySQL::SelectOne("
        SELECT Views
        FROM ProfileView
        WHERE UserId = " . $iUserId . "
        AND Date = '" . $sDate . "'");
    if (is_a($oData, 'Decorator')) {
        MySQL::Query("
            UPDATE ProfileView
            SET `Views` = " . ($oData->Views + 1) . "
            WHERE UserId = " . $iUserId . "
            AND Date = '" . $sDate . "'");
    } else {
        MySQL::Query("
            INSERT INTO ProfileView
            VALUES (
                NULL,
                " . $iUserId . ",
                '" . $sDate . "',
                1
            )");
    }
}
