In my application, whenever a user uploads a wallpaper, I need to crop that wallpaper into
three different sizes and store all those paths (3 paths for the cropped images and 1 for the original uploaded wallpaper) in my database.
I also need to store the tinyurl of the original wallpaper (the one uploaded by the user).
While solving the problem described above, I came up with the following table structure.
CREATE TABLE `wallpapermaster` (
`wallpaperid` int(10) unsigned NOT NULL AUTO_INCREMENT,
`userid` bigint(20) NOT NULL,
`wallpaperloc` varchar(100) NOT NULL,
`wallpapertitle` varchar(50) NOT NULL,
`wallpaperstatus` tinyint(4) DEFAULT '0' COMMENT '0-Waiting,1-approved,2-disapproved',
`tinyurl` varchar(40) NOT NULL,
PRIMARY KEY (`wallpaperid`)
) ENGINE=MyISAM;
wallpaperloc is a comma-separated field containing the original wallpaper location plus the locations of all cropped instances.
I know a comma-separated field is considered bad design in the relational database world, so could you suggest some other neat and efficient approaches?
Use a 1:n relationship between the wallpapermaster and a location table.
Something like this:
CREATE TABLE wallpapermaster (
wallpaperid int unsigned NOT NULL AUTO_INCREMENT,
userid bigint NOT NULL,
wallpaperloc varchar(100) NOT NULL,
wallpapertitle varchar(50) NOT NULL,
wallpaperstatus tinyint DEFAULT '0' COMMENT '0-Waiting,1-approved,2-disapproved',
primary key (wallpaperid)
) ENGINE=InnoDB;
CREATE TABLE wallpaperlocation (
wallpaperid int unsigned NOT NULL,
location varchar(100) NOT NULL,
tinyurl varchar(40),
constraint fk_loc_wp
foreign key (wallpaperid)
references wallpapermaster (wallpaperid),
primary key (wallpaperid, location)
) ENGINE=InnoDB;
The primary key in wallpaperlocation ensures that the same location cannot be inserted twice.
Note that int(10) does not define any datatype constraint. It is merely a display-width hint for client applications, indicating how many digits the number has.
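For completeness, a sketch of how the two tables are used together; the file paths and the tinyurl value are invented for illustration:
INSERT INTO wallpapermaster (userid, wallpaperloc, wallpapertitle)
VALUES (42, '/uploads/original/sunset.jpg', 'Sunset');
-- one row per stored file, reusing the id generated above
-- (LAST_INSERT_ID() is not changed by this insert, since
--  wallpaperlocation has no auto-increment column)
INSERT INTO wallpaperlocation (wallpaperid, location, tinyurl)
VALUES (LAST_INSERT_ID(), '/uploads/original/sunset.jpg', 'http://tinyurl.com/abc123'),
       (LAST_INSERT_ID(), '/uploads/1024x768/sunset.jpg', NULL),
       (LAST_INSERT_ID(), '/uploads/800x600/sunset.jpg', NULL),
       (LAST_INSERT_ID(), '/uploads/320x240/sunset.jpg', NULL);
-- fetch a wallpaper together with all of its files
SELECT m.wallpapertitle, l.location, l.tinyurl
FROM wallpapermaster m
JOIN wallpaperlocation l ON l.wallpaperid = m.wallpaperid
WHERE m.wallpaperid = 1;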
Usually you use a fixed location (maybe from a config file), a fixed extension (usually jpg) and a special filename format like [name]-1024x768.jpg. This way you only need to store the name.
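For example, the base directory and sizes below are assumptions, and wallpapertitle stands in for the stored name; the point is that every path can be derived:
-- derive thumbnail paths from a single stored name
SELECT CONCAT('/wallpapers/', wallpapertitle, '-1024x768.jpg') AS large_path,
       CONCAT('/wallpapers/', wallpapertitle, '-320x240.jpg') AS small_path
FROM wallpapermaster;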
In my opinion, using ; or , in a simple application is quite a good solution, even in relational databases.
You should probably think about how many split images there will be. If there are fewer than 5 per wallpaper, I would not take on the overhead of a complex solution.
It's easy to maintain in the database and in the application: you just use string splitting/joining methods.
There's no need to add extra tables that you would have to join just to retrieve the values.
Using a simple varchar rather than XML is better because you don't have to rely on the application's database access engine. When you use an ORM or JDBC, you have extra work to do to handle more complex datatypes.
In more complex systems I would use an XML column.
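If you go that route, MySQL's SUBSTRING_INDEX makes it easy to pull a single path back out of the comma-separated field; the index 2 below is just an example:
-- extract the 2nd comma-separated path from wallpaperloc
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(wallpaperloc, ',', 2), ',', -1) AS second_path
FROM wallpapermaster;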
Since the thumbnails are generated automatically from the single uploaded file, you don't need to store paths to the cropped/resized files at all.
Instead you can just use normalized filenames for the thumbnails and then find them in the filesystem, as KingCrunch suggested: photo1.jpg, photo1-medium.jpg etc.
Anyway, my 2 cents: to keep harvesters from traversing your image library (and the created thumbnails), it's a good idea to encrypt the name of each thumbnail, even just programmatically with MD5 plus some secret key, so that only your program, which knows the key, can build the proper path to a thumbnail based on the original name/path. To other clients the naming sequence will look random.
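A minimal sketch of that idea; the secret key, filename and size suffix are made up:
-- keyed thumbnail name: without the secret, the name cannot be guessed
SELECT CONCAT(MD5(CONCAT('my-secret-key', '/photos/photo1.jpg', '-medium')), '.jpg') AS thumb_name;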
CREATE TABLE `wallpapermaster` (
`wallpaperid` int(10) unsigned NOT NULL AUTO_INCREMENT,
`userid` bigint(20) NOT NULL,
`wallpapertitle` varchar(50) NOT NULL,
`wallpaperstatus` tinyint(4) DEFAULT '0' COMMENT '0-Waiting,1-approved,2-disapproved',
`tinyurl` varchar(40) NOT NULL,
PRIMARY KEY (`wallpaperid`)
) ENGINE=MyISAM;
Then create a new table that is related to the wallpapermaster table:
CREATE TABLE `wallpapermaster_mapper` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`wallpapermaster_id` int(10) unsigned NOT NULL, -- foreign key referencing wallpapermaster.wallpaperid
`wallpaper_path1` varchar(100) NOT NULL,
`wallpaper_path2` varchar(100) NOT NULL,
`wallpaper_path3` varchar(100) NOT NULL,
PRIMARY KEY (`id`)
);
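Retrieving a wallpaper with its cropped paths is then a simple join against the tables above:
SELECT m.wallpapertitle, p.wallpaper_path1, p.wallpaper_path2, p.wallpaper_path3
FROM wallpapermaster m
JOIN wallpapermaster_mapper p ON p.wallpapermaster_id = m.wallpaperid;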
I am curious to know what the best naming convention is, in terms of performance, for MySQL table and column names. I am designing a new database for my project.
What I have done so far is use descriptive table/column names, which sometimes seem long, but I think they make it easy to understand the use/function of a table.
For example see below DDL:
CREATE TABLE `product_configuration` (
`product_configuration_id` int(11) NOT NULL AUTO_INCREMENT,
`product_id` int(20) NOT NULL,
`product_size_id` int(20) NOT NULL,
`product_color_id` int(20) NOT NULL,
`price` float NOT NULL,
`image` varchar(255) DEFAULT NULL,
`locked` tinyint(1) DEFAULT '0' COMMENT '1=locked, 0 =unlocked. if locked then this row can''t be deleted/updated',
`active` tinyint(1) DEFAULT '1' COMMENT '1=active, 0=inactive and wont display on frontend',
PRIMARY KEY (`product_configuration_id`)
) ENGINE=InnoDB AUTO_INCREMENT=2342 DEFAULT CHARSET=latin1
And another DDL, whose primary key is used as a foreign key in the DDL above:
CREATE TABLE `product` (
`product_id` int(11) NOT NULL AUTO_INCREMENT,
`product_name` varchar(255) NOT NULL,
`product_description` varchar(255) NOT NULL,
`product_image` varchar(255) NOT NULL,
`price` float NOT NULL,
`active` tinyint(1) NOT NULL COMMENT '1=active, 0=inactive',
`date_added` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`product_type_id` int(11) DEFAULT NULL,
`date_modified` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`product_id`)
) ENGINE=InnoDB AUTO_INCREMENT=21 DEFAULT CHARSET=latin1
Basically, I use singular table names with the table name as a prefix in most of that table's column names, and I keep the same name and datatype for primary and foreign keys so that I can easily tell which foreign key relates to which primary key/table.
But I wonder: does using long table/column names have a performance impact as the database grows? For instance, instead of just using "id" as the primary key, I am using the long "product_configuration_id".
Also, if I name tables/columns in mixed upper and lower case, like "ProductConfiguration" for a table name and "ProductConfigurationId" for a column name, will that have any performance impact or any Linux/Windows environment compatibility issues?
Long table and column names do not have (any significant) performance impact. All table and column references are turned into internal locators during the compilation phase of the query, so pretty much the only impact is having to transmit and parse a longer query string. The parsing part of query compilation is usually negligible from a performance perspective.
The following is opinion-based. As a general rule, I follow these conventions for naming:
Table names are in the plural, because they contain multiple entities.
Each table (almost always) has an auto-incremented numeric primary key, which is the singular form of the table followed by Id.
This column is the first column defined, so I can use order by 1 desc to get the most recent rows added to the table.
The table name is not (generally) part of the column name. I always (try to) use table aliases, so including the table name would be redundant.
Foreign key references use the same column name as the primary key they refer to, when possible, so I can use USING for joins, as the sketch below shows.
I admit that these are "opinion-based", so the real answer to your question is in the first paragraph.
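As an illustration of those conventions (the tables and columns here are hypothetical):
-- products.productId is the primary key; orderItems carries a
-- foreign key with the same name, so USING works
SELECT p.productId, p.name, oi.quantity
FROM products p
JOIN orderItems oi USING (productId);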
I want to create a table like below:
id| timestamp | neighbour1_id | neighbour1_email | neighbour2_id | neighbour2_email
and so on, up to a maximum of 20 neighbours.
I have two questions:
Should I create the columns statically, or is there a way to create columns dynamically using PHP based on the count of the JSON array?
In either case, how would I refer to the columns dynamically and assign values to them based on the jsonArray?
My jsonArray would look something like:
{"id": 123, "email_id": "abc", "neighbours": [{"neighbour1_id": 234, "neighbour1_email": "bcd"}, {"neighbour2_id": 345, "neighbour2_email": "dsf"}, {}, {}, ...]}
Please advise. Thanks.
It looks like you need to rethink your database structure a bit. To me, it seems you need a single users (or whatever they represent) table:
CREATE TABLE `users` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`email` varchar(255) NOT NULL,
`created_at` timestamp NOT NULL,
PRIMARY KEY (`id`)
);
And another table that defines relations between those users:
CREATE TABLE `neighbors` (
`parent` int(11) unsigned NOT NULL,
`child` int(11) unsigned NOT NULL,
PRIMARY KEY (`parent`,`child`)
);
Now you can add as many neighbors to each user as you want. Fetching them is as easy as:
SELECT * FROM `users`
LEFT JOIN `neighbors` ON `users`.`id` = `neighbors`.`child`
WHERE `neighbors`.`parent` = ?
Where that question mark would become the id of the user from which you are fetching the neighbors, preferably by using a prepared statement.
If it is all JSON you will be working with, and querying isn't much of an issue, you could consider a NoSQL database or document store (like Redis or MongoDB), but that is an entirely different story.
Just repeating a bunch of columns x times is definitely not the way to go. Vertical size (number of rows) is no big issue for tables in relational databases; they are designed for that. Horizontal size (number of columns), however, is something to be careful with, as it may make your db unnecessarily large and decrease performance.
Just consider what you would do if you wanted to find a user that has a neighbour with email address [x]. You would have to repeat your WHERE clause 20 times, once for each possible email column. And that is just one example...
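With the normalized tables above, that lookup is a single query (a sketch; the placeholder again belongs in a prepared statement):
-- find every user that has a neighbour with a given email address
SELECT DISTINCT u.*
FROM users u
JOIN neighbors n ON n.parent = u.id
JOIN users nb ON nb.id = n.child
WHERE nb.email = ?;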
Well, pevara was faster at posting theirs, but the answer I was working on is almost the same...
CREATE TABLE `neighbours` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`neighbour_email` char(64) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
CREATE TABLE `neighbour_email_collections` (
`id` int(10) unsigned NOT NULL,
`email_id` char(64) NOT NULL,
`neighbour_id` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`,`neighbour_id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
insert into neighbours values (234, "bcd");
insert into neighbours values (345, "dsf");
insert into neighbour_email_collections values(123, "abc", 234);
insert into neighbour_email_collections values(123, "abc", 345);
select *
from neighbours
left join neighbour_email_collections
on neighbour_email_collections.neighbour_id=neighbours.id
where neighbour_email_collections.id=123;
I have a small (100-ish rows, 5 columns) table which is displayed in full for a control panel feature. When I serve the page through IntelliJ during development, it responds to the initial request but never finishes executing, and thus never serves any content. If I deploy the PHP files to my local web server, it serves the same content with no hesitation at all. Sometimes, when I load parts of the control panel that use no database access, it loads just fine (albeit slowly). I've upped the max memory allowed for requests in my cli/php.ini, and also increased the memory available to IntelliJ. My idea64.vmoptions is as follows:
-Xms128m
-Xmx3G
-XX:MaxPermSize=750m
-XX:ReservedCodeCacheSize=200m
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-Djsse.enableSNIExtension=false
-XX:+UseCodeCacheFlushing
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=50
-Dawt.useSystemAAFontSettings=lcd
If I dump the table, the page loads again, so I assume the problem is related to how much memory IntelliJ allows PHP to use, but I'm quite stumped as to what to look for. The only special thing about the table, as far as I know, is that it uses a very large primary key column. The table structure is as follows:
CREATE TABLE IF NOT EXISTS `links` (
`url` VARCHAR(767) NOT NULL,
`link_group` INT(10) UNSIGNED NOT NULL,
`isActive` TINYINT(1) NOT NULL DEFAULT '1',
`hammer` TINYINT(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`url`),
KEY `group` (`link_group`)
)
ENGINE =InnoDB
DEFAULT CHARSET =utf8mb4,
ROW_FORMAT = COMPRESSED;
The row format is compressed to allow for said large primary keys. How should I proceed to find the cause, if not solve it?
I tried following Peter's suggestions, to no avail. I'm beginning to think this may just be IntelliJ not properly being able to serve PHP in my case. New table structure is as follows:
CREATE TABLE IF NOT EXISTS `links` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`url` varchar(767) NOT NULL,
`link_group` int(10) unsigned NOT NULL,
`isActive` tinyint(1) NOT NULL DEFAULT '1',
`hammer` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `url` (`url`),
KEY `group` (`link_group`),
FULLTEXT KEY `url_2` (`url`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 ROW_FORMAT=COMPRESSED AUTO_INCREMENT=1 ;
Just to be clear, the MySQL performance doesn't seem bad. SELECT * FROM links executes in 0.0005 seconds.
You might want to recreate that table; your table definition might be causing the unpredictable behaviour.
Try using the TEXT datatype for the url field. Also, using that as the PRIMARY KEY is asking for trouble. Use an id field as the primary key and then add a unique index on the url field (if so desired).
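A sketch of that suggestion; note that a unique index on a TEXT column requires a prefix length, and 191 characters keeps the index within the 767-byte limit for utf8mb4:
CREATE TABLE IF NOT EXISTS `links` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`url` text NOT NULL,
`link_group` int(10) unsigned NOT NULL,
`isActive` tinyint(1) NOT NULL DEFAULT '1',
`hammer` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `url_prefix` (`url`(191)),
KEY `group` (`link_group`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;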
I have ~38 columns for a table.
ID, name, and the other 36 are bit-sized settings for the user.
The 36 other columns are grouped into 6 "settings", e.g. Setting1_on, Setting1_colored, etc.
Is this the best way to do this?
Thanks.
If it must be in one table and they're all toggle-type settings like yes/no, true/false, etc., use TINYINT to save space.
I'd recommend creating a separate table, 'settings', with 36 records, one for each option. Then create a linking table between it and the user table, with a value column to record each user's setting. This creates a many-to-many link for the user settings. It also makes it easy to add a new setting: just add a new row to the 'settings' table. Here is an example schema. I use varchar for the setting value to allow for later settings which might not be bits, but feel free to use TINYINT if size is an issue. This solution will not use as much space as one table with the danger of a large, sparsely populated set of columns.
CREATE TABLE `user` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(64) DEFAULT NULL,
`address` varchar(64) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `setting` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(64) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `setting_user` (
`user_id` int(11) unsigned NOT NULL,
`setting_id` int(11) unsigned NOT NULL,
`value` varchar(32) DEFAULT NULL,
PRIMARY KEY (`user_id`,`setting_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
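Typical usage, as a sketch (the ids and values are made up):
-- define a setting once, then record a per-user value for it
INSERT INTO `setting` (`name`) VALUES ('Setting1_on');
INSERT INTO `setting_user` (`user_id`, `setting_id`, `value`) VALUES (1, 1, '1');
-- read back all settings for user 1
SELECT s.name, su.value
FROM setting_user su
JOIN setting s ON s.id = su.setting_id
WHERE su.user_id = 1;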
It all depends on how you want to access them. If you want to (or must) select just one of them, then go with @Ray's solution. If they can be functionally grouped (really, not some pretend grouping for all those that start with F), i.e. you'll always need a number of them together for a function and reading and writing them as individual flags doesn't make sense, then storing them as ints and using logical operators on them might be a goer.
That said, unless you are doing a lot of reads and writes to the db during a session, bundling them up into ints gives you very little performance-wise. It would save some space in the DB if all the options had to exist; if doesn't-exist = false, it could be a toss-up.
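A sketch of the int-bundling approach; the flags column and the bit positions are hypothetical:
-- six flags packed into one INT column `flags` on the user table
UPDATE user SET flags = flags | 4 WHERE id = 42;  -- switch bit 3 on
UPDATE user SET flags = flags & ~4 WHERE id = 42; -- switch bit 3 off
SELECT * FROM user WHERE flags & 4 = 4;           -- all users with bit 3 set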
So all things being unequal, I'd go with Mr Ray.
MySQL has a SET type that could be useful here. Everything would fit into a single SET, but six SETs might make more sense.
http://dev.mysql.com/doc/refman/5.5/en/set.html
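For instance, a sketch with one SET per settings group (the member names are hypothetical):
-- each SET member acts as an independent flag
CREATE TABLE user_prefs (
user_id INT UNSIGNED NOT NULL PRIMARY KEY,
setting1 SET('on','colored','large','inverted','muted','pinned') NOT NULL DEFAULT ''
) ENGINE=InnoDB;
INSERT INTO user_prefs VALUES (1, 'on,colored');
SELECT user_id FROM user_prefs WHERE FIND_IN_SET('colored', setting1) > 0;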
I have the following table, townresources, in which I store every resource value for every town ID. I am a bit concerned about the performance impact with a large number of users. I am thinking of moving the resource balance to the towns table, and of storing the general value of a resource in a .php file.
Here is the townresources table:
CREATE TABLE IF NOT EXISTS `townresources` (
`townResourcesId` int(10) NOT NULL AUTO_INCREMENT,
`userId` int(10) NOT NULL,
`resourceId` int(10) NOT NULL,
`townId` int(10) NOT NULL,
`balance` decimal(8,2) NOT NULL,
`resourceRate` decimal(6,2) NOT NULL,
`lastUpdate` datetime NOT NULL,
PRIMARY KEY (`resourceId`,`townId`,`townResourcesId`,`userId`),
KEY `townResources_userId_users_userId` (`userId`),
KEY `townResources_townId_towns_townId` (`townId`),
KEY `townResourcesId` (`townResourcesId`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='Stores Town Resources' AUTO_INCREMENT=9 ;
What is the best option in my case?
Your best option is to test first. How many users & towns do you want to support? Triple that, create the test data, and see whether the performance is within bounds.
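A quick way to fabricate that test data in SQL (a sketch; the digits helper table and the row counts are assumptions, so scale the cross join to your target numbers):
-- helper table of digits 0-9
CREATE TABLE digits (n INT PRIMARY KEY);
INSERT INTO digits VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
-- 10 users x 10 resources x 10 towns = 1000 rows of random test data
INSERT INTO townresources (userId, resourceId, townId, balance, resourceRate, lastUpdate)
SELECT u.n + 1, r.n + 1, t.n + 1, RAND() * 1000, RAND() * 10, NOW()
FROM digits u, digits r, digits t;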
If you run into trouble with performance you should look into caching the data with redis or memcache.