How to speed up the SELECT query of the following table? - php

I have a mysql table whose create code is as follows :
CREATE TABLE image_ref (
region VARCHAR(50) NULL DEFAULT NULL,
district VARCHAR(50) NULL DEFAULT NULL,
district_name VARCHAR(100) NULL DEFAULT NULL,
lot_no VARCHAR(10) NULL DEFAULT NULL,
sp_no VARCHAR(10) NULL DEFAULT NULL,
name VARCHAR(200) NULL DEFAULT NULL,
form_no VARCHAR(50) NOT NULL DEFAULT '',
imagename VARCHAR(50) NULL DEFAULT NULL,
updated_by VARCHAR(50) NULL DEFAULT NULL,
update_log DATETIME NULL DEFAULT NULL,
ip VARCHAR(50) NULL DEFAULT NULL,
imgfetchstat VARCHAR(1) NULL DEFAULT NULL,
PRIMARY KEY (form_no)
)
COLLATE='latin1_swedish_ci'
ENGINE=MyISAM;
This table contains approximately 700,000 rows. I have an application developed using PHP. Somewhere I need to run the following query:
SELECT
min(imagename) imagename
FROM
image_ref
WHERE
district_name = '$sess_district'
AND
lot_no = '$sess_lotno'
AND
imgfetchstat = '0';
which is taking 1.560 sec on average. The form_no field only has unique values. After some job is done with the fetched result set, imgfetchstat needs to be updated to 1. Now my question is: should I use InnoDB or MyISAM? Also, the application is accessed by around 50 users on a LAN. Is there any way to make the above query run a little bit faster? The imagename fetched is used to load an image of resolution 500 x 498 into the browser, and the image is taking too long to load. Thanks in advance.

You can add indexes to your table (be aware that this will make the storage larger), but given your query, you should be able to use the following:
ALTER TABLE `table` ADD INDEX `product_id` (`product_id`)
For more information see http://dev.mysql.com/doc/refman/5.0/en/create-index.html
You can add an index on a single column (which makes things nice for the DB, even if it only uses a few of them per query), but if you have a specific query that needs to REALLY run fast, you can add a multi-column index which is specific to your query:
ALTER TABLE image_ref ADD INDEX `someName`
(`district_name`, `lot_no`, `imgfetchstat`)
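Since the query only selects min(imagename), you could go one step further and make the index covering by appending imagename to it, so MySQL can answer the whole query from the index without reading table rows at all (a sketch; the index name is arbitrary):

```sql
-- Covering index: the WHERE columns first, then the selected column.
-- EXPLAIN should then show "Using index" for the query.
ALTER TABLE image_ref
  ADD INDEX idx_fetch_cover (district_name, lot_no, imgfetchstat, imagename);
```

Note that each UPDATE of imgfetchstat will then also have to maintain this index, which is usually an acceptable trade-off for a read-heavy pattern like this one.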

Related

Laravel eloquent where clause issue

I have the following structure of the table:
`id` int(10) UNSIGNED NOT NULL,
`user_id` int(10) UNSIGNED NOT NULL,
`order` int(11) NOT NULL,
`category` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL
I have five confirmed records in the table, which I am querying like this:
$recommended = App\Recommend::where('category', '=', 'editorpicks');
But the result comes back empty. Let me paste the column name and value against it straight from the DB.
column name : category
value : editorpicks.
Why is it not working?
I have tried it in tinker also.
App\Recommend::where('category', 'editorpicks')->get();
Note, you don't need to use "=" in where; if no conditional is provided, the where clause defaults to equals. get() grabs the collection. You could also do first() to grab the first single record, last(), find($id), etc.
It's also good practice to namespace the model as well. So add use App\Recommend to the top of the controller (I'll assume this already makes sense) and then just use $recommended = Recommend::where(.... Keep things clean.

Insert/Update Issue (Same record repeated, no more can be entered)

I'm a bit baffled here, and my code is a bit too lengthy to post it all, but I'll provide a logical list of the operations in hand and can provide the code where needed if it helps. The problem is weird, and I don't think it's to do with the code. It's a standard upload form for an article website.
On first load, assign random article_ID
Check if article_ID already exists. If it does, repeat step 1
On save (submit), insert article_ID (the only required value [for testing purposes], the rest can be NULL)
Any other fields entered are checked for content, and if anything is entered, update where article_ID = $article_ID.
It's quite a simple system: make a template of an article with an article_ID where all the other fields can be NULL. The user adds the content piece by piece, saving along the way, until all the fields are entered and the article can be published.
However, the first time I got it working, an article_ID was assigned and the template inserted. Now I can't insert any other records, and more oddly still if I delete that record and then create a new form instance with a new article_ID and INSERT, it just keeps adding the same record with the old article_ID, even though the form has no session variables with that old article_ID still stored.
Has anyone had something similar?
Database Structure
`article_ID` int(10) NOT NULL,
`author_ID` int(5) NOT NULL,
`article_title` varchar(50) NOT NULL,
`article_subtitleshort` varchar(120) default NULL,
`article_subtitlelong` varchar(180) default NULL,
`article_category` varchar(20) NOT NULL,
`article_featureimage` varchar(15) default NULL,
`article_icon` varchar(15) default NULL,
`article_publishdate` varchar(12) default NULL,
`article_lastsavedate` varchar(12) NOT NULL,
`article_status` varchar(11) NOT NULL default 'unpublished',
`article_firstimage` varchar(15) default NULL,
`article_intro` varchar(600) default NULL,
`article_firsttext` blob,
`article_secondimage` varchar(15) default NULL,
`article_secondtext` blob,
`article_thirdimage` varchar(15) default NULL,
`article_youtube` varchar(50) default NULL,
`article_gallery` varchar(10) default NULL,
PRIMARY KEY (`article_ID`)
Relevant Code http://snippi.com/s/57j8i7e
I suspect the INSERT is failing because you are inserting INTs as VARCHARs. Change it as follows: remove the single quotes, which were incorrectly passing the value through as a VARCHAR.
Also verify that your SELECT statement will return a record, since you use the same logic when you pass $article_ID, i.e. remove the single quotes there too. It is also not necessary to check for the random value.
Please make use of AUTO_INCREMENT for article_ID instead of randomly generating the value.
mysql_query("INSERT INTO articles (author_ID) VALUES ($author_ID)");
article_ID will be populated with a unique value
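If you switch to AUTO_INCREMENT as suggested, the schema change could look like the sketch below (assuming the table is named articles, as in the INSERT above):

```sql
-- Make article_ID auto-generated; new rows no longer need an explicit ID,
-- and duplicate-key collisions from the random generator go away.
ALTER TABLE articles
  MODIFY article_ID INT(10) NOT NULL AUTO_INCREMENT;
```

After the INSERT, the generated ID can then be read back in PHP with mysql_insert_id() and used for the follow-up UPDATEs.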

Comparison time for 2 large MySQL database table

I have imported 2 .csv files that I wanted to compare into MySQL tables. Now I want to compare both of them using a join.
However, whenever I include both tables in my queries, I get no response from phpMyAdmin (sometimes it shows 'max execution time exceeded').
The record size in both DB tables is 73k max. I don't think that's huge on data. Even a simple query like
SELECT *
FROM abc456, xyz456
seems to hang. I did an EXPLAIN and got the output below. I don't know what to take from this.
id  select_type  table   type  possible_keys  key   key_len  ref   rows   Extra
1   SIMPLE       abc456  ALL   NULL           NULL  NULL     NULL  73017
1   SIMPLE       xyz456  ALL   NULL           NULL  NULL     NULL  73403  Using join buffer
Can someone please help?
UPDATE: added the structure of the table with composite keys. There are around 100000+ records that would be inserted in this table.
CREATE TABLE IF NOT EXISTS `abc456` (
`Col1` varchar(4) DEFAULT NULL,
`Col2` varchar(12) DEFAULT NULL,
`Col3` varchar(9) DEFAULT NULL,
`Col4` varchar(3) DEFAULT NULL,
`Col5` varchar(3) DEFAULT NULL,
`Col6` varchar(40) DEFAULT NULL,
`Col7` varchar(200) DEFAULT NULL,
`Col8` varchar(40) DEFAULT NULL,
`Col9` varchar(40) DEFAULT NULL,
`Col10` varchar(40) DEFAULT NULL,
`Col11` varchar(40) DEFAULT NULL,
`Col12` varchar(40) DEFAULT NULL,
`Col13` varchar(40) DEFAULT NULL,
`Col14` varchar(20) DEFAULT NULL,
KEY `Col1` (`Col1`,`Col2`,`Col3`,`Col4`,`Col5`,`Col6`,`Col7`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
It looks like you are doing a pure Cartesian join in your query.
Shouldn't you be joining the tables on certain fields? If you do that and the query still takes a long time to execute, you should put appropriate indexes to speed up the query.
The reason that it is taking so long is that it is trying to join every single row of the first table to every single row of the second table.
You need a join condition, some way of identifying which rows should be matched up:
SELECT * FROM abc456, xyz456 WHERE abc456.id = xyz456.id
Add indexes on joining columns. That should help with performance.
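Putting both suggestions together, if the two tables share a common key column, an explicit join plus supporting indexes could look like the sketch below (Col2 is an assumption here; substitute whichever columns the two CSVs are meant to match on):

```sql
-- Index the join column on both sides so the join no longer
-- requires a full scan of one table per row of the other.
ALTER TABLE abc456 ADD INDEX idx_join (Col2);
ALTER TABLE xyz456 ADD INDEX idx_join (Col2);

-- Explicit JOIN syntax with the join condition spelled out,
-- instead of the comma-separated Cartesian product.
SELECT a.*, x.*
FROM abc456 a
JOIN xyz456 x ON a.Col2 = x.Col2;
```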
Use MySQL Workbench or the MySQL client (console) for long queries. phpMyAdmin is not designed to display queries that return 100k rows :)
If you REALLY have to use phpMyAdmin and you need to run long queries, you can use a Firefox extension that prevents the phpMyAdmin timeout: phpMyAdmin Timeout Preventer (direct link!)
There is a direct link because I couldn't find an English description.

Database Design/Structure -- Collecting Data over time

Currently I am in the process of structuring a database for a site I am creating. However, I have come across a problem. I want to log the number of times a user has logged in each day, and then be able to keep track of that info over large periods of time, such as 8 months, a year, 2 years, etc.
The only way I can think of right now is to have a column for each day of the year, automatically creating a column each day. This idea just seems plain stupid to me. I'm sure there has to be a better way to do this; I just can't think of one.
Any suggestions?
Thanks,
Rob
Create a separate table where you store the user_id and the datetime the user logs in.
Every time the user logs in, you insert a new record into this table.
Example:
CREATE TABLE user_activity (
userid varchar(50),
log_in_datetime datetime
);
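With that table in place, per-day login counts over any period fall out of a simple GROUP BY (a sketch using the table above; the date range is an example):

```sql
-- Logins per user per day; widen or narrow the WHERE range as needed
-- to cover 8 months, a year, 2 years, etc.
SELECT userid,
       DATE(log_in_datetime) AS login_day,
       COUNT(*) AS logins
FROM user_activity
WHERE log_in_datetime >= '2013-01-01'
GROUP BY userid, DATE(log_in_datetime);
```

An index on (userid, log_in_datetime) would keep this query fast as the table grows.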
Here is a login table I use for one of my sites. The datetime can be logged either as a datetime or as a timestamp. If you use datetime, make sure to consider the timezone of your MySQL server.
There is plenty of stuff to track, and you can just query it later. Each of these column names should be self-explanatory with a Google search.
CREATE TABLE `t_login` (
`id_login` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`id_user` INT(10) UNSIGNED NOT NULL DEFAULT '0',
`id_visit` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT 'fk to t_visit',
`id_org` INT(10) UNSIGNED NOT NULL DEFAULT '0',
`when_attempt` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`uname_attempt` VARCHAR(100) NOT NULL DEFAULT '' COMMENT 'attempted username' COLLATE 'latin1_swedish_ci',
`valid_uname` TINYINT(1) UNSIGNED NOT NULL DEFAULT '0' COMMENT 'valid username',
`valid_uname_pword` TINYINT(1) UNSIGNED NOT NULL DEFAULT '0' COMMENT 'valid username and valid password together',
`pw_hash_attempt` BINARY(32) NOT NULL DEFAULT '\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0',
`remote_ip` CHAR(20) NOT NULL DEFAULT '' COLLATE 'latin1_swedish_ci',
`user_agent` VARCHAR(2000) NOT NULL DEFAULT '' COLLATE 'latin1_swedish_ci',
PRIMARY KEY (`id_login`),
INDEX `when_attempt` (`when_attempt`),
INDEX `rempte_ip` (`remote_ip`),
INDEX `valid_user` (`valid_uname`),
INDEX `valid_password` (`valid_uname_pword`),
INDEX `username` (`uname_attempt`),
INDEX `id_ten` (`id_org`),
INDEX `id_user` (`id_user`),
INDEX `id_visit` (`id_visit`)
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
AUTO_INCREMENT=429;

PHP model (MySQL) design problem

I'm looking for the most efficient solution to the problem I'm running into. I'm designing a shift calendar for our employees. This is the table I'm working with so far:
CREATE TABLE IF NOT EXISTS `Shift` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`accountId` smallint(6) unsigned NOT NULL,
`grpId` smallint(6) unsigned NOT NULL,
`locationId` smallint(6) unsigned NOT NULL,
`unitId` smallint(6) unsigned NOT NULL,
`shiftTypeId` smallint(6) unsigned NOT NULL,
`startDate` date NOT NULL,
`endDate` date NOT NULL,
`needFlt` bit(1) NOT NULL DEFAULT b'1',
`needBillet` bit(1) NOT NULL DEFAULT b'1',
`fltArr` varchar(10) NOT NULL,
`fltDep` varchar(10) NOT NULL,
`fltArrMade` bit(1) NOT NULL DEFAULT b'0',
`fltDepMade` bit(1) NOT NULL DEFAULT b'0',
`billetArrMade` bit(1) NOT NULL DEFAULT b'0',
`billetDepMade` bit(1) NOT NULL DEFAULT b'0',
`FacilityId` smallint(6) unsigned NOT NULL,
`FacilityWingId` mediumint(9) unsigned NOT NULL,
`FacilityRoomId` int(11) unsigned NOT NULL,
`comment` varchar(255) NOT NULL,
`creation` datetime NOT NULL,
`lastUpdate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`lastUpdateBy` mediumint(9) unsigned NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
Now here's the hitch - I'd like to be able to display on the calendar (in a different color) whether or not a timesheet has been received for a certain day.
My first thought was to create a separate table and list separate entries by day for each employee, T/F. But the amount of data returned from a separate query, for each employee, for the whole month would surely be huge and inefficient.
Second thought was to somehow put the information in this Shift table, with delimiters - then exploding it with PHP. Silly idea... but I guess that's why im here. Any thoughts?
Thanks for your help!
As hinted previously and I think you realized yourself, serializing the data into a single column or using some other form of delimited string is a path to computational inefficiencies in the packing and unpacking and serious maintenance grief for the future.
Heaps better is to get the data structure right, i.e. a properly normalized table. After all, MySQL is rather good at dealing with this sort of structure.
You don't need to pull back every line for every staff member. If you're pulling them out together, you could "group" your result set by employee and date, and even make that a potentially useful result by (say) pulling a summary of hours. A zero or null result would show no timesheet, and the total hours may be helpful in some other way.
If you were pulling them out one employee and one date at a time, then your application structure probably needs looking at, but you could use the SQL LIMIT keyword to pull at most one record and then test whether any came back.
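A normalized version of that idea could look like the sketch below (table and column names are assumptions for illustration, not part of the original schema):

```sql
-- One row per received timesheet, instead of a column per day.
CREATE TABLE Timesheet (
  accountId SMALLINT(6) UNSIGNED NOT NULL,
  workDate  DATE NOT NULL,
  hours     DECIMAL(4,2) NOT NULL DEFAULT 0,
  PRIMARY KEY (accountId, workDate)
);

-- One query covers the whole month for all employees: a row comes back
-- for each employee/day that has a timesheet; days with no matching row
-- are the ones to colour differently on the calendar.
SELECT accountId, workDate, SUM(hours) AS total_hours
FROM Timesheet
WHERE workDate BETWEEN '2013-06-01' AND '2013-06-30'
GROUP BY accountId, workDate;
```

The composite primary key also prevents the same employee from submitting two timesheets for the same day.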