MySQL query with time and random selecting - php

I'd like to know how to make a query that does this:
I have a table like this:
CREATE TABLE `sendingServers` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` text NOT NULL,
`address` text NOT NULL,
`token` text NOT NULL,
`lastPoll` datetime NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
And I'd like to get the following:
Select all servers where lastPoll is less than X seconds ago
Then select a random entry from the return value
Is this possible? How do I achieve that?

You can use something like this:
select * from `sendingServers`
where `lastPoll` > DATE_SUB(NOW(), INTERVAL 30 SECOND)
order by rand() limit 1
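If you are calling this from PHP, a minimal sketch with PDO might look like the following (the DSN, credentials and the 30-second window are placeholder assumptions):
<?php
// Hypothetical connection details; replace with your own.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=latin1', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$seconds = 30; // the "X seconds ago" window from the question
$stmt = $pdo->prepare(
    'SELECT * FROM `sendingServers`
     WHERE `lastPoll` > DATE_SUB(NOW(), INTERVAL :secs SECOND)
     ORDER BY RAND() LIMIT 1'
);
$stmt->bindValue(':secs', $seconds, PDO::PARAM_INT);
$stmt->execute();
$server = $stmt->fetch(PDO::FETCH_ASSOC); // false if no server polled recently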

Related

Get big result set from mysql

I have a big table with about 20 million rows, and it grows every day. I have a form that runs a query against this table. Unfortunately the query returns hundreds of thousands of rows.
The query is based on time, and I need all of the records so I can classify them by 'clid' based on some rules and build a result table.
This is my table :
CREATE TABLE IF NOT EXISTS `cdr` (
`gid` bigint(20) NOT NULL AUTO_INCREMENT,
`prefix` varchar(20) NOT NULL DEFAULT '',
`id` bigint(20) NOT NULL,
`start` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`clid` varchar(80) NOT NULL DEFAULT '',
`duration` int(11) NOT NULL DEFAULT '0',
`service` varchar(20) NOT NULL DEFAULT '',
PRIMARY KEY (`gid`),
UNIQUE KEY `id` (`id`,`prefix`),
KEY `start` (`start`),
KEY `clid` (`clid`),
KEY `service` (`service`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
and this is my query :
SELECT * FROM `cdr`
WHERE
service = 'test' AND
`start` >= '2014-02-09 00:00:00' AND
`start` < '2014-02-10 00:00:00' AND
`duration` >= 10
The date period can vary from 1 hour to maybe 60 days or even more, for example:
DATE(start) BETWEEN '2013-02-02 00:00:00' AND '2014-02-03 00:00:00'
The result set has about 150,000 rows for every day. When I try to get results for a bigger period, or even one day, the database crashes.
Does anybody have any idea ?
I don't know how to prevent it from crashing, but one thing that I did with my large tables was to partition them by date.
Here I partition the rows by date, twice a month. As long as your query filters on the partitioning column, it will only search the partitions that can contain matching rows; it will not do a full table scan.
CREATE TABLE `identity` (
`Reference` int(9) unsigned NOT NULL AUTO_INCREMENT,
...
`Reg_Date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`Reference`, `Reg_Date`),
KEY `Reg_Date` (`Reg_Date`)
) ENGINE=InnoDB AUTO_INCREMENT=28424336 DEFAULT CHARSET=latin1
PARTITION BY RANGE COLUMNS (Reg_Date) (
PARTITION p20140201 VALUES LESS THAN ('2014-02-01'),
PARTITION p20140214 VALUES LESS THAN ('2014-02-14'),
PARTITION p20140301 VALUES LESS THAN ('2014-03-01'),
PARTITION p20140315 VALUES LESS THAN ('2014-03-15'),
PARTITION p20140715 VALUES LESS THAN (MAXVALUE)
);
So basically, you just do a dump of the table, create it with partitions and then import the data into it.
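Applied to the cdr table from the question, the partitioned version could look roughly like this (a sketch only; the partition boundaries are arbitrary, and note that MySQL requires the partitioning column to be part of every unique key, so start is added to them):
CREATE TABLE `cdr_partitioned` (
  `gid` bigint(20) NOT NULL AUTO_INCREMENT,
  `prefix` varchar(20) NOT NULL DEFAULT '',
  `id` bigint(20) NOT NULL,
  `start` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `clid` varchar(80) NOT NULL DEFAULT '',
  `duration` int(11) NOT NULL DEFAULT '0',
  `service` varchar(20) NOT NULL DEFAULT '',
  PRIMARY KEY (`gid`,`start`),
  UNIQUE KEY `id` (`id`,`prefix`,`start`),
  KEY `clid` (`clid`),
  KEY `service` (`service`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
PARTITION BY RANGE COLUMNS (`start`) (
  PARTITION p201402 VALUES LESS THAN ('2014-03-01'),
  PARTITION p201403 VALUES LESS THAN ('2014-04-01'),
  PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);
-- Copy the existing rows across, then swap the tables.
INSERT INTO cdr_partitioned SELECT * FROM cdr;
RENAME TABLE cdr TO cdr_old, cdr_partitioned TO cdr;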

Optimize a count SQL query on a big table

I have a table with a bit over 10 thousand records right now, and queries against it are starting to run slowly.
I have the following code:
COUNT
$SqlCount = "SELECT tabnews.New_Id
FROM tabnew WHERE New_Id <> '' AND New_Status = 1";
$QueryCount = mysql_query($SqlCount, $Conn) or die(mysql_error($Conn));
$NumCount = mysql_num_rows($QueryCount);
$recordCount = $NumCount;
PAGINATION
if (!$id) $p = 1;
else $p = $id;
$pageSize = 16;
$itemIni = ($pageSize*$p)-$pageSize;
$totalPage = ceil($recordCount/$pageSize);
SHOW
$Sql52 = "SELECT New_Id, New_Nome, New_Data, New_Imagem FROM tabnews WHERE New_Status = 1 ORDER BY New_Id DESC LIMIT $itemIni, $pageSize ";
$Query52 = mysql_query($Sql52, $Conn);
while($Rs52 = mysql_fetch_array($Query52)){
// ECHO RESULTS
}
MY DATABASE:
CREATE TABLE IF NOT EXISTS `tabnews` (
`New_Id` int(11) NOT NULL AUTO_INCREMENT,
`Franquia_Id` text NOT NULL,
`New_Slide` int(2) NOT NULL,
`Categoria_Id` int(2) NOT NULL,
`New_Nome` varchar(255) NOT NULL,
`New_Data` date NOT NULL,
`New_Imagem` varchar(75) NOT NULL,
`New_Status` int(11) NOT NULL,
PRIMARY KEY (`New_Id`),
KEY `idx_1` (`New_Status`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=10490 ;
Any ideas on how I can make this run faster?
I have a dedicated server running CENTOS.
This:
New_Id <> ''
What does this do? It forces a conversion on every single row just to compare your INT primary key with a string. Why compare it to a string at all? It can never be '' by definition, so omit New_Id <> '' from your WHERE clause.
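With that predicate dropped, and letting MySQL do the counting instead of pulling every id into PHP, the count part could be reduced to something like this (a sketch reusing the question's $Conn and variable names):
// Let MySQL count the rows instead of transferring them all
// and counting with mysql_num_rows().
$SqlCount    = "SELECT COUNT(*) AS total FROM tabnews WHERE New_Status = 1";
$QueryCount  = mysql_query($SqlCount, $Conn) or die(mysql_error($Conn));
$recordCount = (int) mysql_result($QueryCount, 0); // total matching rows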
20 seconds is very weird for such a small table.
I have a very similar table with almost 4 million rows, and both of your SQL statements take less than 0.002 sec.
CREATE TABLE IF NOT EXISTS `tasks` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`status` varchar(10) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'open',
`method` varchar(10) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'GET',
`url` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`params` text COLLATE utf8_unicode_ci,
`response` text COLLATE utf8_unicode_ci,
`executed_by` varchar(50) COLLATE utf8_unicode_ci DEFAULT '',
`execute_at` datetime DEFAULT NULL,
`created` datetime NOT NULL,
`modified` datetime NOT NULL,
PRIMARY KEY (`id`),
KEY `status` (`status`),
KEY `modified` (`modified`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=3839270 ;
-
SELECT COUNT(id) FROM tasks WHERE status='done';
---> Query took 0.0008 sec
-
SELECT id, status, method, url FROM tasks WHERE status='done' ORDER BY id DESC LIMIT 200, 100;
---> Query took 0.0011 sec
Observations:
You should use SELECT COUNT(New_Id)...
New_Id <> '' doesn't make sense: New_Id can't be empty or NULL.
Set the length of New_Status to something that matches the values you store there.
Try turning off logging: SET GLOBAL general_log = 'OFF';
Update your server packages (especially MySQL).
Is it a dedicated server only for the database?
Is the server running other things? (Run 'top' and 'uptime' to check its status.)

mysql - php - Need to get only the latest record from a self referencing data

Hi,
I need some help with SQL. Attached is the image of my table.
If you look at the rootmessageid column, there are 4 records with the value 99. These 4 records make up one complete conversation.
Similarly, the 2 records with 119 make up another conversation.
116, 117 and 118 are single-message conversations.
Now I need to get all the records where msgfrom = 7 or msgto = 7 (this was the easy part).
Now the complicated bit: I want only the latest record (based on datetimecreated) from each conversation.
Following is the script to create this table.
CREATE TABLE IF NOT EXISTS `selectioncommunication` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`comactionid` int(11) NOT NULL,
`usercomment` varchar(2048) DEFAULT NULL,
`msgfrom` int(11) NOT NULL,
`msgto` int(11) NOT NULL,
`projectid` int(11) NOT NULL,
`parentmessageid` int(11) NOT NULL,
`datetimecreated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`rootmessageid` int(11) NOT NULL,
`isread` tinyint(1) NOT NULL DEFAULT '0',
`isclosed` tinyint(1) DEFAULT '0',
`relative_date_time` datetime DEFAULT NULL,
`consultant_response` tinyint(4) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=121;
You want the groupwise maximum:
SELECT s.*
FROM selectioncommunication s NATURAL JOIN (
SELECT parentmessageid, MAX(datetimecreated) datetimecreated
FROM selectioncommunication
WHERE msgfrom = 7 OR msgto = 7
GROUP BY parentmessageid
) t
WHERE s.msgfrom = 7 OR s.msgto = 7
Use ORDER BY datetimecreated ASC/DESC.
This will sort your results in order; then add LIMIT 1 to the end of your query to get only the first record in the list.
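In other words, something like the following, though note that it returns only a single row overall rather than the latest row per conversation:
SELECT *
FROM selectioncommunication
WHERE msgfrom = 7 OR msgto = 7
ORDER BY datetimecreated DESC
LIMIT 1;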
Here is your SQL Fiddle without a join:
SELECT *
FROM selectioncommunication k
WHERE datetimecreated = (SELECT
MAX(datetimecreated)
FROM selectioncommunication s
WHERE s.rootmessageid = k.rootmessageid
GROUP BY s.rootmessageid
ORDER BY s.id)

sql query is slow

I have a MySQL database (which I manage with phpMyAdmin) containing 1,000,000 records I need to search. Every week 500,000 records are added.
So, this is what I need:
location_id value date time name lat lng
3 234 2011-11-18 19:50:00 Amerongen beneden 5.40453 51.97486
4 594 2011-11-18 19:50:00 Amerongen boven 5.41194 51.97507
I do this with this query:
SELECT location_id, value, date, time, locations.name, locations.lat, locations.lng FROM
(
SELECT location_id, value, date, time from `measurements`
LEFT JOIN units ON (units.id = measurements.unit_id)
WHERE units.name='Waterhoogte'
ORDER BY measurements.date DESC, measurements.time DESC
) as last_record
LEFT JOIN locations on (locations.id = location_id)
GROUP BY location_id
which takes 30 seconds. How can I improve this? This is my structure:
CREATE TABLE IF NOT EXISTS `locations` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(255) NOT NULL,
`code` varchar(255) NOT NULL,
`lat` varchar(10) NOT NULL,
`lng` varchar(10) NOT NULL,
`owner_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=244 ;
-- --------------------------------------------------------
--
-- Table structure for table `measurements`
--
CREATE TABLE IF NOT EXISTS `measurements` (
`id` int(11) NOT NULL auto_increment,
`date` date NOT NULL,
`time` time NOT NULL,
`value` varchar(255) NOT NULL,
`location_id` int(11) NOT NULL,
`unit_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=676801 ;
-- --------------------------------------------------------
--
-- Table structure for table `owner`
--
CREATE TABLE IF NOT EXISTS `owner` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;
-- --------------------------------------------------------
--
-- Table structure for table `units`
--
CREATE TABLE IF NOT EXISTS `units` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(255) NOT NULL,
`description` text NOT NULL,
`unit_short` varchar(255) NOT NULL,
`owner_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=44 ;
What is the limit of what phpMyAdmin can handle?
Creating an index on units.name specifically is a good start.
You should also really rethink the amount of data you are pulling back.
Is someone really going to sift through that many records? Change your query to limit the number of records, and think of a UI that involves a paging mechanism.
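For instance, a single page of results could be fetched like this (the page size and offset here are purely illustrative):
SELECT location_id, value, date, time
FROM measurements
ORDER BY date DESC, time DESC
LIMIT 100 OFFSET 200;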
You need to put an index (or unique index) on units.name.
Add the following indexes (a DDL sketch follows the list):
A composite index (a covering index) on units.name and units.id.
A composite index on measurements.date and measurements.time.
An index on locations.id.
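A hedged sketch of that DDL (the index names are made up):
CREATE INDEX idx_units_name_id ON units (name, id);
CREATE INDEX idx_measurements_date_time ON measurements (date, time);
-- locations.id is already covered by its PRIMARY KEY; an index on the
-- join column measurements.location_id is what helps the final join.
CREATE INDEX idx_measurements_location ON measurements (location_id);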
You should try creating an index on units.name as a first step. But understand that there is a tradeoff with an index - read operations will be faster, but it can slow down write operations. If you're concerned about that, or if you're affected by slow writes, then you may want to try creating the index on a smaller number of characters in units.name.
For instance, to declare an index on the first 12 characters of units.name, you'd declare the following:
CREATE INDEX first_twelve ON units (name(12));
Again, this may not be necessary if you don't notice any ill effects from just throwing an index on, but it's something to keep in mind.
SELECT measurements.location_id, measurements.value, measurements.date, measurements.time, locations.name, locations.lat, locations.lng
FROM measurements
LEFT JOIN units ON units.id = measurements.unit_id
LEFT JOIN locations ON locations.id = measurements.location_id
WHERE units.id = 4
GROUP BY measurements.location_id
ORDER BY measurements.date DESC, measurements.time DESC

PHP MYSQL Insert/Update

I have a simple table as below.
CREATE TABLE `stats` (
`id` int(11) NOT NULL auto_increment,
`zones` varchar(100) default NULL,
`date` date default NULL,
`hits` int(100) default NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=7 DEFAULT CHARSET=latin1;
So just storing simple hits counter per zone per day.
But I just want to increment the hits value for the same day.
I have tried MySQL's ON DUPLICATE KEY UPDATE, but this won't work as-is because I may have many zones on different dates, so I can't make the zone or date columns unique on their own.
So the only way I can think of is to first run a query to see if a row for the date exists, and then do a simple if() to choose between insert and update.
Is there a better way of doing such a task, as there may be many thousands of hits per day?
Hope this makes sense :-).
And thanks if you can advise.
Declare the tuple (zone, date) as unique in your CREATE statement. This will make INSERT ... ON DUPLICATE KEY UPDATE work as expected:
CREATE TABLE `stats` (
`id` int(11) NOT NULL auto_increment,
`zone` varchar(100) default NULL,
`date` date default NULL,
`hits` int(100) default NULL,
PRIMARY KEY (`id`),
UNIQUE (`zone`, `date`)
) ENGINE=MyISAM AUTO_INCREMENT=7 DEFAULT CHARSET=latin1;
INSERT INTO stats (zone, date, hits) values ('zone1', 'date1', 1) ON DUPLICATE KEY UPDATE hits = hits + 1;
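From PHP, that statement could be executed roughly like this (a sketch only; $Conn and $zone are assumed to already exist, and the old mysql_* API is used to match the question):
// Escape the zone name and build today's date, then let MySQL either
// insert the first hit of the day or bump the existing counter.
$zone  = mysql_real_escape_string($zone, $Conn);
$today = date('Y-m-d');
mysql_query(
    "INSERT INTO stats (zone, date, hits) VALUES ('$zone', '$today', 1)
     ON DUPLICATE KEY UPDATE hits = hits + 1",
    $Conn
) or die(mysql_error($Conn));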
$result = mysql_query("SELECT id FROM stats WHERE zone='$zone' AND date='$today' LIMIT 1");
if(mysql_num_rows($result)) {
$id = mysql_result($result,0);
mysql_query("UPDATE stats SET hits=hits+1 WHERE id=$id");
} else {
mysql_query("INSERT INTO stats (zone, date, hits) VALUES ('$zone', '$today', 1)");
}
Something like that, if I've interpreted you correctly... that's completely untested. You can figure out what the variables are.
