SQL query is slow - PHP

I have a MySQL database (managed through phpMyAdmin) with 1,000,000 records I need to search in. Every week another 500,000 records are added.
This is the result I need:
location_id  value  date        time      name               lat      lng
3            234    2011-11-18  19:50:00  Amerongen beneden  5.40453  51.97486
4            594    2011-11-18  19:50:00  Amerongen boven    5.41194  51.97507
I do this with this query:
SELECT location_id, value, date, time, locations.name, locations.lat, locations.lng
FROM (
    SELECT location_id, value, date, time
    FROM `measurements`
    LEFT JOIN units ON (units.id = measurements.unit_id)
    WHERE units.name = 'Waterhoogte'
    ORDER BY measurements.date DESC, measurements.time DESC
) AS last_record
LEFT JOIN locations ON (locations.id = location_id)
GROUP BY location_id
which takes 30 seconds. How can I improve this? This is my structure:
CREATE TABLE IF NOT EXISTS `locations` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(255) NOT NULL,
`code` varchar(255) NOT NULL,
`lat` varchar(10) NOT NULL,
`lng` varchar(10) NOT NULL,
`owner_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=244 ;
-- --------------------------------------------------------
--
-- Table structure for table `measurements`
--
CREATE TABLE IF NOT EXISTS `measurements` (
`id` int(11) NOT NULL auto_increment,
`date` date NOT NULL,
`time` time NOT NULL,
`value` varchar(255) NOT NULL,
`location_id` int(11) NOT NULL,
`unit_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=676801 ;
-- --------------------------------------------------------
--
-- Table structure for table `owner`
--
CREATE TABLE IF NOT EXISTS `owner` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;
-- --------------------------------------------------------
--
-- Table structure for table `units`
--
CREATE TABLE IF NOT EXISTS `units` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(255) NOT NULL,
`description` text NOT NULL,
`unit_short` varchar(255) NOT NULL,
`owner_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=44 ;
What is the limit of what phpMyAdmin can handle?

Creating an index on units.name specifically is a good start.
You should also really rethink the amount of data you are pulling back.
Is someone really going to sift through that many records? Change your query to limit the number of records, and think about a UI that provides a paging mechanism.
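For instance, a minimal paging sketch that simply appends LIMIT/OFFSET to the original query, assuming a page size of 50 chosen by the UI; it only illustrates the paging mechanism, everything else is unchanged:
-- Page 3 with 50 rows per page: OFFSET = (3 - 1) * 50 = 100
SELECT location_id, value, date, time, locations.name, locations.lat, locations.lng
FROM (
    SELECT location_id, value, date, time
    FROM `measurements`
    LEFT JOIN units ON (units.id = measurements.unit_id)
    WHERE units.name = 'Waterhoogte'
    ORDER BY measurements.date DESC, measurements.time DESC
) AS last_record
LEFT JOIN locations ON (locations.id = location_id)
GROUP BY location_id
LIMIT 50 OFFSET 100;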

You need to put an index or unique index on units.name.

Add the following indexes (a sketch follows the list):
A composite index (a covering index) on units.name and units.id.
A composite index on measurements.date and measurements.time.
An index on locations.id.
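A minimal sketch of those index definitions, assuming the table and column names from the schema above (the index names are arbitrary):
CREATE INDEX idx_units_name_id ON units (name, id);
CREATE INDEX idx_measurements_date_time ON measurements (date, time);
-- locations.id is already indexed as the table's primary key, so no extra index is needed there.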

You should try creating an index on units.name as a first step. But understand that there is a tradeoff with an index - read operations will be faster, but it can slow down write operations. If you're concerned about that, or if you're affected by slow writes, then you may want to try creating the index on a smaller number of characters in units.name.
For instance, to declare an index on the first 12 characters of units.name, you'd declare the following:
CREATE INDEX first_twelve ON units (name(12));
Again, this may not be necessary if you don't notice any ill effects from just throwing an index on, but it's something to keep in mind.

SELECT measurements.location_id, measurements.value, measurements.date, measurements.time, locations.name, locations.lat, locations.lng
FROM measurements
LEFT JOIN units ON units.id = measurements.unit_id
LEFT JOIN locations ON locations.id = measurements.location_id
WHERE units.id = 4
GROUP BY measurements.location_id
ORDER BY measurements.date DESC, measurements.time DESC

Related

MySQL query with large data performance

I'm a bit new to MySQL and I would like to know if I'm on the right track with these tables and this query:
tb_anuncio
CREATE TABLE `tb_anuncio` (
`anuncio_id` int(11) NOT NULL auto_increment,
`anuncio_titulo` varchar(120) NOT NULL,
`anuncio_valor` decimal(10,2) NOT NULL,
`anuncio_valorTipo` int(11) default NULL,
`anuncio_telefone` varchar(20) NOT NULL,
`anuncio_descricao` text,
`anuncio_criado` timestamp NOT NULL default CURRENT_TIMESTAMP,
`bairro_id` int(11) NOT NULL,
`anuncio_status` int(11) default '0',
PRIMARY KEY (`anuncio_id`),
KEY `ta001_ix` (`bairro_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `tb_anuncio`
ADD CONSTRAINT `ta001_ix` FOREIGN KEY (`bairro_id`) REFERENCES `tb_bairro` (`bairro_id`) ON DELETE CASCADE ON UPDATE CASCADE;
tb_estado
CREATE TABLE `tb_estado` (
`estado_id` int(11) NOT NULL auto_increment,
`estado_nome` varchar(2) NOT NULL,
`estado_criado` timestamp NOT NULL default CURRENT_TIMESTAMP,
`estado_url` varchar(2) NOT NULL,
PRIMARY KEY (`estado_id`),
UNIQUE KEY `estado_url` (`estado_url`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
tb_cidade
CREATE TABLE `tb_cidade` (
`cidade_id` int(11) NOT NULL auto_increment,
`cidade_nome` varchar(100) NOT NULL,
`cidade_criado` timestamp NOT NULL default CURRENT_TIMESTAMP,
`estado_id` int(11) NOT NULL,
`cidade_url` varchar(150) NOT NULL,
PRIMARY KEY (`cidade_id`),
UNIQUE KEY `cidade_url` (`cidade_url`),
KEY `tc001_ix` (`estado_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `tb_cidade`
ADD CONSTRAINT `tc001_ix` FOREIGN KEY (`estado_id`) REFERENCES `tb_estado` (`estado_id`) ON DELETE CASCADE ON UPDATE CASCADE;
tb_bairro
CREATE TABLE `tb_bairro` (
`bairro_id` int(11) NOT NULL auto_increment,
`bairro_nome` varchar(100) NOT NULL,
`bairro_criado` timestamp NOT NULL default CURRENT_TIMESTAMP,
`cidade_id` int(11) NOT NULL,
`bairro_url` varchar(150) NOT NULL,
PRIMARY KEY (`bairro_id`),
UNIQUE KEY `bairro_url` (`bairro_url`),
KEY `tb001_ix` (`cidade_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `tb_bairro`
ADD CONSTRAINT `tb001_ix` FOREIGN KEY (`cidade_id`) REFERENCES `tb_cidade` (`cidade_id`) ON DELETE CASCADE ON UPDATE CASCADE;
Well, I'm doing a query to show ads for a city/state, and my query looks like:
Query
select a.anuncio_id,a.anuncio_titulo,a.anuncio_valor,a.anuncio_valorTipo,a.anuncio_descricao
from tb_anuncio a inner join(
tb_bairro b inner join(
tb_cidade c inner join
tb_estado d on d.estado_id=c.estado_id) on c.cidade_id=b.cidade_id) on b.bairro_id=a.bairro_id
where a.anuncio_status=1 and d.estado_id=:estado_id and c.cidade_id=:cidade_id and b.bairro_id=:bairro_id
group by a.anuncio_id
order by a.anuncio_id desc
limit :limit
I would like to know if I'm on the right track and whether it will work well when these tables reach about 5k-10k records.
I'm using PHP PDO with MySQL.
Thanks.
Although it doesn't affect performance, the typical way to write a query would not have parentheses in the FROM clause. Also, I doubt the group by is necessary:
select a.*
from tb_anuncio a inner join
tb_bairro b
on b.bairro_id = a.bairro_id inner join
tb_cidade c
on c.cidade_id = b.cidade_id inner join
tb_estado e
on e.estado_id = c.estado_id
where a.anuncio_status = 1 and e.estado_id = :estado_id and
c.cidade_id = :cidade_id and b.bairro_id = :bairro_id
order by a.anuncio_id desc
limit :limit;
You can simplify this, because you do not need all the joins -- the join keys are already in the referencing tables:
select a.*
from tb_anuncio a inner join
tb_bairro b
on b.bairro_id = a.bairro_id inner join
tb_cidade c
on c.cidade_id = b.cidade_id
where a.anuncio_status = 1 and c.estado_id = :estado_id and
c.cidade_id = :cidade_id and b.bairro_id = :bairro_id
order by a.anuncio_id desc
limit :limit;
I don't know Portuguese, but it seems like one estado contains many cidades, each of which contains many bairros. If this is correct, then the schema is wrong. Fixing the schema will lead to improved performance.
There should be one bairro in the query, not three such items in the WHERE.
Furthermore, it is usually more practical for tb_bairro to include information about the cidade and estado, not tb_anuncio.
Once you have done those things, the GROUP BY can probably be eliminated, thereby improving performance further.
And add
INDEX(anuncio_status, bairro_id, anuncio_id)
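A minimal sketch of how that might look, assuming the tables from the question and an arbitrary index name; since the bairro already determines the cidade and estado, the joins and the GROUP BY drop out:
-- Index to support the WHERE + ORDER BY below
ALTER TABLE tb_anuncio
  ADD INDEX idx_status_bairro_id (anuncio_status, bairro_id, anuncio_id);

-- The bairro implies the cidade and estado, so no joins or GROUP BY are needed
SELECT a.anuncio_id, a.anuncio_titulo, a.anuncio_valor, a.anuncio_valorTipo, a.anuncio_descricao
FROM tb_anuncio a
WHERE a.anuncio_status = 1
  AND a.bairro_id = :bairro_id
ORDER BY a.anuncio_id DESC
LIMIT :limit;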

Implicit MySQL Join on Update Statement - 0 rows affected

I'm trying to get this MySQL code to work, but it's saying 0 rows affected.
UPDATE assessments, assessment_types
SET assessments.assessment_type_id = assessment_types.id
WHERE (assessment_types.description = "Skills Assessment" AND assessments.id = 2);
Basically I have assessment_types with id and description columns, and I just store the id in assessments.assessment_type_id.
I need to update that id.
I searched and couldn't find quite what I need for this.
Thanks!
Table Data:
assessment_types
id description
1 Knowledge Assessment
2 Skill Assessment
3 Personal Information
4 Natural Skills
Table Structure:
--
-- Table structure for table `assessments`
--
CREATE TABLE IF NOT EXISTS `assessments` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) COLLATE utf8_bin NOT NULL,
`acronym` varchar(255) COLLATE utf8_bin NOT NULL,
`assessment_type_id` int(11) NOT NULL,
`language_id` int(11) NOT NULL,
`date_created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`date_updated` date NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`),
KEY `assessment_type_id` (`assessment_type_id`),
KEY `language_id` (`language_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin AUTO_INCREMENT=2385 ;
--
-- Table structure for table `assessment_types`
--
CREATE TABLE IF NOT EXISTS `assessment_types` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`description` varchar(255) CHARACTER SET latin1 NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin AUTO_INCREMENT=7 ;
You can try doing an explicit join of the two tables in your UPDATE statement:
UPDATE assessments a
INNER JOIN assessment_types at
ON a.assessment_type_id = at.id
SET a.assessment_type_id = at.id
WHERE (at.description = "Skills Assessment" AND a.id = 2);
Also note that the sample data above lists "Skill Assessment", not "Skills Assessment", so the WHERE clause will match zero rows until that string is corrected.

Optimization Needed For Dual Left Join Query

I've always struggled with MySQL joins. I've started incorporating more of them, but I'm still struggling to understand them despite reading dozens of tutorials and the MySQL manual.
My situation is I have 3 tables:
/* BASICALLY A TABLE THAT HOLDS FAN RECORDS */
CREATE TABLE `fans` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`first_name` varchar(255) DEFAULT NULL,
`middle_name` varchar(255) DEFAULT NULL,
`last_name` varchar(255) DEFAULT NULL,
`email` varchar(255) DEFAULT NULL,
`join_date` datetime DEFAULT NULL,
`twitter` varchar(255) DEFAULT NULL,
`twitterCrawled` datetime DEFAULT NULL,
`twitterImage` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `email` (`email`)
) ENGINE=MyISAM AUTO_INCREMENT=20413 DEFAULT CHARSET=latin1;
/* A TABLE OF OUR TWITTER FOLLOWERS */
CREATE TABLE `twitterFollowers` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`screenName` varchar(25) DEFAULT NULL,
`twitterId` varchar(25) DEFAULT NULL,
`customerId` int(11) DEFAULT NULL,
`uniqueStr` varchar(50) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `unique` (`uniqueStr`)
) ENGINE=InnoDB AUTO_INCREMENT=13426 DEFAULT CHARSET=utf8;
/* TABLE THAT SUGGESTS A LIKELY MATCH OF A TWITTER FOLLOWER BASED ON THE EMAIL / SCREEN NAME COMPARISON OF THE FAN vs OUR FOLLOWERS
IF SOMEONE (ie. a moderator) CONFIRMS OR DENIES THAT IT'S A GOOD MATCH THEY PUT A DATESTAMP IN `dismissed` */
CREATE TABLE `contentSuggestion` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`userId` int(11) DEFAULT NULL,
`fanId` int(11) DEFAULT NULL,
`twitterAccountId` int(11) DEFAULT NULL,
`contentType` varchar(50) DEFAULT NULL,
`contentString` varchar(255) DEFAULT NULL,
`added` datetime DEFAULT NULL,
`dismissed` datetime DEFAULT NULL,
`uniqueStr` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `unstr` (`uniqueStr`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
What I'm trying to get is:
SELECT [fan columns]
WHERE fan screen name IS IN twitterfollowers
AND WHERE fan screen name IS NOT IN contentSuggestion (with a datestamp in dismissed)
My attempts so far:
~33 seconds
SELECT fans.id, tf.screenName as col1, tf.twitterId as col2 FROM fans
LEFT JOIN twitterFollowers tf ON tf.screenName = fans.emailUsername
LEFT JOIN contentSuggestion cs ON cs.contentString = tf.screenName WHERE dismissed IS NULL
GROUP BY(fans.id) HAVING col1 != ''
~14 seconds
SELECT id, emailUsername FROM fans WHERE emailUsername IN(SELECT DISTINCT(screenName) FROM twitterFollowers) AND emailUsername NOT IN(SELECT DISTINCT(contentString) FROM contentSuggestion WHERE dismissed IS NULL) GROUP BY (fans.id);
9.53 seconds
SELECT fans.id, tf.screenName as col1, tf.twitterId as col2 FROM fans
LEFT JOIN twitterFollowers tf ON tf.screenName = fans.emailUsername WHERE tf.uniqueStr NOT IN(SELECT uniqueStr FROM contentSuggestion WHERE dismissed IS NULL)
I hope there is a better way. I've been struggling to really use JOINs beyond a single LEFT JOIN, which has already helped me speed up other queries by a significant amount.
Thanks for any help you can give me.
I would go with a variation of the second method. Instead of IN, use EXISTS. Then add the correct indexes and remove the aggregation:
SELECT f.id, f.emailUsername
FROM fans f
WHERE EXISTS (SELECT 1
FROM twitterFollowers tf
WHERE f.emailUsername = tf.screenName
) AND
NOT EXISTS (SELECT 1
FROM contentSuggestion cs
WHERE f.emailUsername = cs.contentString AND
cs.dismissed IS NULL
) ;
Then be sure you have the following indexes: twitterFollowers(screenName) and contentSuggestion(contentString, dismissed).
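A minimal sketch of those index definitions, assuming the table and column names from the question (the index names are arbitrary):
-- Speeds up the EXISTS probe on twitterFollowers
CREATE INDEX idx_tf_screenName ON twitterFollowers (screenName);
-- Covers the NOT EXISTS probe, including the dismissed IS NULL check
CREATE INDEX idx_cs_content_dismissed ON contentSuggestion (contentString, dismissed);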
Some notes:
When using IN, don't use SELECT DISTINCT. I'm not 100% sure that MySQL is always smart enough to ignore the DISTINCT in the subquery (it is redundant).
Historically, EXISTS was faster than IN in MySQL. The optimizer has improved in recent versions.
For performance, you need the correct indexes.
Assuming that fan.id is unique (a very reasonable assumption), you don't need the final group by.

Optimizing queries on larger MySQL database

I'm coding a website which will store some offers (ex. job offers). In the end, it could contain more than 1M offers. Now I have problems with some inefficient SQL queries.
Scenario:
Each offer can be assigned into category (ex. IT jobs)
Each category has custom fields (ex. IT jobs can have custom field of type "price" which will represent text box accepting number (price) - in our example, let's say we have price input of expected salary)
Each offer stores meta data with values of these category custom fields
DB fields which will be used for filtering have indexes
Table category (I'm using nested sets to store categories hierarchy):
CREATE TABLE `category` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`parent_id` int(11) DEFAULT NULL,
`lft` int(11) DEFAULT NULL,
`rgt` int(11) DEFAULT NULL,
`depth` int(11) DEFAULT NULL,
`order` int(11) NOT NULL,
`name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`),
KEY `category_parent_id_index` (`parent_id`),
KEY `category_lft_index` (`lft`),
KEY `category_rgt_index` (`rgt`)
) ENGINE=InnoDB AUTO_INCREMENT=44 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Table category_field:
CREATE TABLE `category_field` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`category_id` int(10) unsigned NOT NULL,
`name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`optional` tinyint(1) NOT NULL DEFAULT '0',
`type` enum('price','number','date','color') COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`),
KEY `category_field_category_id_index` (`category_id`),
CONSTRAINT `category_field_category_id_foreign` FOREIGN KEY (`category_id`) REFERENCES `category` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Table offer:
CREATE TABLE `offer` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`text` text COLLATE utf8_unicode_ci NOT NULL,
`category_id` int(10) unsigned NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`),
KEY `offer_category_id_index` (`category_id`),
CONSTRAINT `offer_category_id_foreign` FOREIGN KEY (`category_id`) REFERENCES `category` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Table offer_meta:
CREATE TABLE `offer_meta` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`offer_id` int(10) unsigned NOT NULL,
`category_field_id` int(10) unsigned NOT NULL,
`price` double NOT NULL,
`number` int(11) NOT NULL,
`date` date NOT NULL,
`color` varchar(7) COLLATE utf8_unicode_ci NOT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`),
KEY `offer_meta_offer_id_index` (`offer_id`),
KEY `offer_meta_category_field_id_index` (`category_field_id`),
KEY `offer_meta_price_index` (`price`),
KEY `offer_meta_number_index` (`number`),
KEY `offer_meta_date_index` (`date`),
KEY `offer_meta_color_index` (`color`),
CONSTRAINT `offer_meta_category_field_id_foreign` FOREIGN KEY (`category_field_id`) REFERENCES `category_field` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
CONSTRAINT `offer_meta_offer_id_foreign` FOREIGN KEY (`offer_id`) REFERENCES `offer` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=107769 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
When I set up some filters on my page (for example, for our salary custom field), I have to start with a query which returns the MIN and MAX prices in the available offer_meta records (I want to show a range slider to the user on the front-end, so I need MIN/MAX values for this range):
select MIN(`price`) AS min, MAX(`price`) AS max from `offer_meta` where `category_field_id` = ? limit 1
I found out that these queries are the most inefficient of all the queries I'm making (the above query takes over 500 ms when the offer_meta table has a few thousand records).
Other inefficient queries (offer_meta has 107k records):
Obtaining MIN and MAX values for slider to filter numbers
select MIN(`number`) AS min, MAX(`number`) AS max from `offer_meta` where `category_field_id` = ? limit 1
Obtaining MIN and MAX prices for slider to filter by prices
select MIN(`price`) AS min, MAX(`price`) AS max from `offer_meta` where `category_field_id` = ? limit 1
Obtaining MIN and MAX date for date range restrictions
select MIN(`date`) AS min, MAX(`date`) AS max from `offer_meta` where `category_field_id` = ? limit 1
Obtaining colors with counts to show list of colors with numbers
select `color`, count(*) as `count` from `offer_meta` where `category_field_id` = ? group by `color`
Example of full query to get offers count with multiple filter criteria (0.5 sec)
select count(*) as count from `offer` where id in (select
distinct offer_id
from offer_meta om
where offer_id in (select
distinct offer_id
from offer_meta om
where offer_id in (select
distinct offer_id
from offer_meta om
where offer_id in (select
distinct om.offer_id
from offer_meta om
join category_field cf on om.category_field_id = cf.id
where
cf.category_id in (2,3,4,41,43,5,6,7,8,37) and
om.category_field_id = 1 and
om.number >= 1 and
om.number <= 50) and
om.category_field_id = 2 and
om.price >= 2 and
om.price <= 4545) and
om.category_field_id = 3 and
om.date >= '0000-00-00' and
om.date <= '2015-04-09') and
category_field_id = 4 and
om.color in ('#0000ff'))
The same query without the aggregation function (COUNT) is a few times faster (just to get IDs).
Question:
Is it possible to tweak those queries, or do you have any suggestions on how to implement my logic (offers with categories and custom fields dynamically added to each category in the admin) with a different table schema? I tried a few more schemas, but with no success.
Question 2:
Do you think this is a problem with my MySQL server, and that if I buy a VPS it will be okay?
To help you understand even better:
I was strongly inspired by WordPress schema for custom fields, so the logic is similar.
Last notes:
Also, I'm working with the Laravel framework and I'm using the Eloquent ORM.
Sorry for my English, I hope I made my problem clear :-)
Thank you in advance,
Patrik
It is not a MySQL problem. In your scenario there is a huge data collection, and relational databases are naturally not efficient for some kinds of queries (I faced a similar situation with Oracle).
A common practice for winning in this kind of situation is to use a graph database, although it seems that would be hard to introduce in the situation you are facing at the moment.
I have heard that Lucene has some kind of support for indexing large databases for search purposes, but I don't know exactly how to do it.
http://en.wikipedia.org/wiki/Lucene

MySQL + CodeIgniter + Click Tracking Unique & Total Clicks

I'm trying to track outbound clicks on advertisements, but I'm having trouble constructing the query to compile all the statistics for the user to view and track.
I have two tables: one to hold all of the advertisements, the other to track clicks and basic details about the user (IP address, timestamp, user agent).
I need to pull all of the map_advertisements information along with unique clicks based on IP address and total clicks based on map_advertisements.id, to be shown in a table with one row per advertisement and two extra columns, totalClicks and totalUniqueClicks.
Aside from running three separate queries for each advertisement, is there a better way to go about this?
I am using MySQL 5, PHP 5.3 and CodeIgniter 2.1.
#example of an advertisements id
$aid = 13;
SELECT
*
count(acl.aid)
count(acl.DISTINCT(ip_address))
FROM
map_advertisements a
LEFT JOIN map_advertisements_click_log acl ON a.id = acl.aid
WHERE
a.id = $aid;
map_advertisements
-- ----------------------------
-- Table structure for `map_advertisements`
-- ----------------------------
DROP TABLE IF EXISTS `map_advertisements`;
CREATE TABLE `map_advertisements` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`youtube_id` varchar(255) NOT NULL,
`status` int(11) NOT NULL DEFAULT '1',
`timestamp` int(11) NOT NULL,
`type` enum('video','picture') NOT NULL DEFAULT 'video',
`filename` varchar(255) NOT NULL,
`url` varchar(255) NOT NULL,
`description` varchar(64) NOT NULL,
`title` varchar(64) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=utf8 ROW_FORMAT=COMPACT;
map_advertisements_click_log
-- ----------------------------
-- Table structure for `map_advertisements_click_log`
-- ----------------------------
DROP TABLE IF EXISTS `map_advertisements_click_log`;
CREATE TABLE `map_advertisements_click_log` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`aid` int(11) NOT NULL,
`ip_address` varchar(15) NOT NULL DEFAULT '',
`browser` varchar(255) NOT NULL,
`timestamp` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=26 DEFAULT CHARSET=latin1;
A problem seems to be that in your query there is no column named totalClicks in your table, and the DISTINCT keyword is also used incorrectly. Try this:
SELECT *, count(acl.id) as totalClicks, count(DISTINCT acl.ip_address) as uniqueClicks
FROM map_advertisements a
LEFT JOIN map_advertisements_click_log acl ON a.id = acl.aid
WHERE a.id = $aid;
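If you want one row per advertisement instead of a single $aid, a minimal sketch of the same idea (grouping by the advertisement id rather than filtering on it) might look like:
-- One row per advertisement; counts are 0 for ads with no clicks thanks to the LEFT JOIN
SELECT a.*,
       COUNT(acl.id) AS totalClicks,
       COUNT(DISTINCT acl.ip_address) AS totalUniqueClicks
FROM map_advertisements a
LEFT JOIN map_advertisements_click_log acl ON a.id = acl.aid
GROUP BY a.id;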
