So here is my problem. I know there is a fair bit of literature on this site about this type of issue, but I am confused about how several of these issues intertwine in my case. I have an array of row data that needs to be updated or inserted based on a remote id value within that array, in this case value_c. The array corresponds to a row of table foo: if a record with a matching value_c exists in the database, update that record; otherwise, insert the new payload. The structure of the array matches the row schema of table foo in our db. This is the schema (obfuscated for safety):
CREATE TABLE IF NOT EXISTS `foos` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`value_a` varchar(13) DEFAULT NULL,
`value_b` int(11) DEFAULT NULL,
`value_c` int(11) DEFAULT NULL,
...
`value_x` enum('enum_a','enum_b','enum_c','enum_d') DEFAULT NULL,
`value_y` text,
`value_z` enum('daily','monthly','weekly') DEFAULT NULL,
`value_aa` tinyint(4) NOT NULL,
`value_bb` varchar(1000) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=829762 ;
There is a lot of data, and the plan is as follows: stringify this data and send it to a stored procedure that updates or inserts as needed. Something like the following (note that this happens in a model within CodeIgniter):
public function update_or_insert($event_array_payload)
{
    // stringify the data
    $mod_payload = implode('<delimiter>', $event_array_payload);
    // deal with NULLs in the array
    $mod_payload = $this->deal_with_nulls($mod_payload);
    $this->stored_procedure_lib->update_or_insert_payload($mod_payload);
}
// then, elsewhere in the stored procedure library
public function update_or_insert_payload($foo)
{
    // use a query binding: inside single quotes, '$foo' would be passed
    // literally, and binding also lets the driver escape the value
    $this->_CI->db->query('CALL update_or_insert_foo(?)', array($foo));
}
My issue is as follows. A single string value is passed into the stored procedure, and it then needs to be parsed apart and placed into either a single UPDATE or a single INSERT statement. I could create a variable for each column of the foo table and a loop to populate each variable, then update/insert that way, but the foo table is very likely to be extended, and I do not want to create bugs further down the line. Is there a way to dynamically place the parsed-apart contents of a string representation of an array into a single UPDATE or INSERT statement? I'm not sure that is even possible, but I feel like a workaround I do not know about might exist. Thank you for the help.
It is not a definitive answer, but it is an option to try.
If you want to avoid sending many parameters to the procedure, you can create a table called foos_tmp with the same structure as foos plus one additional field, id_foos_tmp (a primary key with auto-increment), and insert the array into foos_tmp. Then you pass the procedure only the generated id_foos_tmp, and internally the procedure does a SELECT against foos_tmp to retrieve the data you previously had in the array.
I hope it helps somewhat.
Greetings.
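A minimal sketch of that staging-table idea, assuming a UNIQUE key exists on foos.value_c and using only a few of the question's columns (the staging-table layout and procedure body are illustrative):

```sql
-- Hypothetical staging table mirroring a few columns of `foos`.
CREATE TABLE IF NOT EXISTS `foos_tmp` (
  `id_foos_tmp` INT NOT NULL AUTO_INCREMENT,
  `value_a` VARCHAR(13) DEFAULT NULL,
  `value_b` INT DEFAULT NULL,
  `value_c` INT DEFAULT NULL,
  PRIMARY KEY (`id_foos_tmp`)
) ENGINE=InnoDB;

DELIMITER $$
CREATE PROCEDURE `update_or_insert_foo`(IN p_tmp_id INT)
BEGIN
  -- Upsert the staged row into `foos`; ON DUPLICATE KEY UPDATE only
  -- fires if `foos` has a UNIQUE index on `value_c`.
  INSERT INTO `foos` (`value_a`, `value_b`, `value_c`)
    SELECT `value_a`, `value_b`, `value_c`
      FROM `foos_tmp`
     WHERE `id_foos_tmp` = p_tmp_id
  ON DUPLICATE KEY UPDATE
    `value_a` = VALUES(`value_a`),
    `value_b` = VALUES(`value_b`);
  -- Clean up the staging row.
  DELETE FROM `foos_tmp` WHERE `id_foos_tmp` = p_tmp_id;
END $$
DELIMITER ;
```

A nice side effect is that extending foos later only means extending foos_tmp and the column lists here, rather than re-plumbing a delimited-string parser.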
Related
So I'm not sure exactly how to title this question. I am creating a database and need to store some default SEO info as follows.
default page title
default keywords
default page description
header code
footer code
There will never be more than 1 entry per field. So the question is do I create a table in the database with columns for each of these data types with the understanding that there will only ever be 1 row of data?
OR do I create a table that has a name column for each of the fields and then a column for the data (text)? With this option I can see that I won't be able to set the data type for each field; instead each would have to be tinytext or varchar.
Here are the 2 database table structures I'm contemplating.
CREATE TABLE `cms_seo` (
`id` int(2) NOT NULL,
`name` VARCHAR(100) NOT NULL,
`data` tinytext NOT NULL,
PRIMARY KEY (`id`)
);
INSERT INTO `cms_seo`
(`id`, `name`, `data`)
VALUES
(1, 'Website Keywords', ''),
(2, 'Default Page Title', ''),
(3, 'Default Page Description', ''),
(4, 'Header Code', ''),
(5, 'Footer Code', '');
OR
CREATE TABLE `cms_seo`(
`id` INT(1) NOT NULL AUTO_INCREMENT,
`default_page_title` VARCHAR(500) NOT NULL,
`default_keywords` VARCHAR(1000) NOT NULL,
`default_page_description` TINYTEXT NOT NULL,
`header_code` TINYTEXT NOT NULL,
`footer_code` TINYTEXT NOT NULL,
PRIMARY KEY (`id`)
);
INSERT INTO `cms_seo`
(`id`,
`default_page_title`,
`default_keywords`,
`default_page_description`,
`header_code`,
`footer_code`)
VALUES
(NULL, '', '', '', '', '');
Would there be any alternative to storing this data? Such as in a text file? The data will need to be editable through the cms.
It's a common pattern to store the type of data you describe in a "key/value" format like your design #1. Some advantages include:
If your application needs a default for some new property, you can add it simply by INSERTing a new row.
There's a practical limit to the width of a row in MySQL. But you can add as many rows as you want.
If the list of defaults gets long, you can query for just one row. If you have an index, looking up a single row is more efficient.
Advantages of design #2:
You can use MySQL data types to constrain the length or format of each column individually. Design #1 requires you to use a data type that can store any possible value.
You store the property names as metadata, instead of as strings. Using metadata for property names is more "correct" relational database design.
I have posted many times in the past discouraging people from using the "key/value" design for data. But it's a legitimate use of that design when you have just one set of values, like the defaults in your case.
Another option, as you have mentioned, would be to store the data in file, instead of a database. See http://php.net/manual/en/function.parse-ini-file.php
Another option is to store the default values in a PHP file: just declare a hash array of them. One advantage of this technique is that the PHP file is compiled to bytecode by PHP and can then be cached.
But since you say you have to be able to edit values through your application, you might find it easier to store it in a database.
This answer is really something in between a glorified comment and a full answer. I prefer option #2, because at some point in the future you might need more than just one such record. In addition, with the second option you can make use of MySQL's relational capabilities, such as joining on the individual columns.
There's nothing wrong with a table with only one row. (Relationally, its only candidate key is {}, but SQL doesn't let you express that directly.)
Relationally, i.e. if you want to ask arbitrary questions about individual keywords or collections of keywords, you should store, query & manipulate this "row" as two tables:
CREATE TABLE `cms_seo`(
`id` INT(1) NOT NULL AUTO_INCREMENT,
`default_page_title` VARCHAR(500) NOT NULL,
`default_page_description` TINYTEXT NOT NULL,
`header_code` TINYTEXT NOT NULL,
`footer_code` TINYTEXT NOT NULL,
PRIMARY KEY (`id`)
);
CREATE TABLE `cms_seo_keyword`(
`seo_id` INT(1) NOT NULL,
`default_keywords` VARCHAR(1000) NOT NULL,
PRIMARY KEY (`seo_id`, `default_keywords`),
FOREIGN KEY (`seo_id`) REFERENCES `cms_seo` (`id`)
);
You can declare a view for cms_seo in terms of these two tables. Ideally you would program as much as possible against that view.
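Such a view might be sketched like this, assuming the two tables above and using MySQL's GROUP_CONCAT to fold the keyword rows back into one column (the view name is illustrative):

```sql
CREATE VIEW `cms_seo_flat` AS
SELECT s.`id`,
       s.`default_page_title`,
       s.`default_page_description`,
       s.`header_code`,
       s.`footer_code`,
       -- fold the keyword rows back into a single comma-separated column
       (SELECT GROUP_CONCAT(k.`default_keywords` SEPARATOR ', ')
          FROM `cms_seo_keyword` k
         WHERE k.`seo_id` = s.`id`) AS `default_keywords`
FROM `cms_seo` s;
```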
PS Design 1 is an EAV design. Research EAV's problems. Essentially, it means you are using a DBMS to implement and use a (bug-filled, feature-poor) program whose desired functionality is... a DBMS. You should only use such a design if you can demonstrate that a straightforward relational design using DML & DDL gives insufficient performance while an EAV design performs adequately. (And that includes the present value/expense of EAV's disadvantages.)
Problem:
I have the following table in MySQL.
For this example let's say that there is (and always will be) only one person in the world called "Tom" "Bell". So (name, surname) is the PRIMARY KEY in my table. Every person has a salary, an unsigned integer.
CREATE TABLE IF NOT EXISTS `user` (
`name` varchar(64) NOT NULL DEFAULT 'Default_name',
`surname` varchar(64) NOT NULL DEFAULT 'Default_surname',
`salary` int(10) unsigned NOT NULL DEFAULT '0',
UNIQUE KEY `id` (`name`,`surname`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Whenever I insert a row using a PHP script I want my function to return the primary key of the inserted row (an array key=>value).
From PHP context I do not know what the primary key of table 'user' consists of and I do not always need to set all primary key values (example 2, very stupid, but possible).
I can add another argument to my insert function (for example I could pass the table name, in this case "user").
If this matters, I am using PDO (php data objects) to connect with my MySQL database.
Example 1:
$db->insert("INSERT INTO `user` (`name`,`surname`,`salary`) VALUES ('Tom','Bell','40');");
should return an array:
$arr = ['name' => 'Tom', 'surname' => 'Bell'];
Example 2:
$db->insert("INSERT INTO `user` (`name`,`salary`) VALUES ('Nelly','40');");
should return an array:
$arr = ['name' => 'Nelly', 'surname' => 'Default_surname'];
Disclaimer & other information:
I know this is not a well-designed table, I could use an auto_increment id column to make it much easier and probably more efficient as well. This is just an example to show the problem without having to explain my project structure.
Without loss of generality: using functions like getLastInsertId() or @@identity will return 0; I guess the reason is that the table does not have an auto_increment column.
What have I tried? Nothing (other than the things stated in point 2, which I was certain wouldn't work, and searching for a solution).
There aren't "nice" ways around this problem. One of the reasons for having an auto_increment is to avoid having problems like you described.
Now, to keep my answer from being one of those that take into account only half the picture: I do realize that sometimes you inherit a project, or you simply screw up during the initial stages and have to fix things quickly.
To reflect on your example - your PK is a natural PK, not a surrogate one like auto_increment is. Due to that fact it's implied that you always know the PK.
In your example #1 - you inserted Tom Bell - that means you knew the PK was Tom Bell since you instructed MySQL to insert it. Therefore, since you knew what the PK was even before insert, you know how to return it.
In your example #2 you specified only a part of the PK. However, your table definition says that name and surname default to 'Default_name' and 'Default_surname'. That means that if you omit either part of the PK, you know it will assume the default value. That also means you already know before insertion what the PK is.
Since you have to use a natural PK instead of a surrogate, the responsibility of "knowing" it shifts to you instead of RDBMS. There is no other way of performing this action. The problem becomes even more complex if you allow for a default value to become null. That would let you insert more than 1 Tom with null as surname, and the index constraint wouldn't apply (null is not equal to null, therefore (tom, null) is not equal to (tom, null) and insert can proceed).
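That NULL loophole is easy to demonstrate with a sketch (a hypothetical variant of the question's table where surname is nullable):

```sql
CREATE TABLE `user_nullable` (
  `name`    VARCHAR(64) NOT NULL,
  `surname` VARCHAR(64) DEFAULT NULL,
  UNIQUE KEY `id` (`name`, `surname`)
);

-- Both inserts succeed: (Tom, NULL) never compares equal to (Tom, NULL),
-- so the unique constraint cannot reject the "duplicate".
INSERT INTO `user_nullable` (`name`) VALUES ('Tom');
INSERT INTO `user_nullable` (`name`) VALUES ('Tom');
```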
Long story short is that you need a surrogate PK or the auto_increment. It does everything you require based on the description. If you can't use it then you have a huge problem at your hands that might not be solvable.
I have a mysql database and some php that allows you to create an entry in the database, update an entry, and view the entries as a web page or xml. What I want to do is add a function to move an entry in the database up or down by one row, or, send to the top of the database or bottom.
I've seen some online comments about doing this type of thing that suggested doing a dynamic sort when displaying the page, but I'm looking for a persistent re-sort. I've seen one suggested approach of having a separate "sort" field in the database that is agnostic of the actual primary key, but I'm not sure why that would be better than actually re-ordering the database.
Here is a dump of the table structure:
SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
--
-- Database: `hlnManager`
--
-- --------------------------------------------------------
--
-- Table structure for table `hln_stations`
--
CREATE TABLE IF NOT EXISTS `hln_stations` (
`id` int(6) NOT NULL auto_increment,
`station_title` varchar(60) NOT NULL default '',
`station_display_name` varchar(60) NOT NULL default '',
`station_subtitle` varchar(60) NOT NULL default '',
`station_detailed_description` text NOT NULL,
`stream_url_or_playlist_url` text NOT NULL,
`link_type` varchar(25) NOT NULL default '',
`small_thumbnail_graphic_url` text NOT NULL,
`large_thumbnail_graphic_url` text NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=21 ;
Not sure what you mean by "Reordering" the database... SQL Databases typically do not make any guarantees on what order (if any) they will return records in short of an ORDER BY clause.
You need a "SortOrder" type column. I suggest you make it an int with a unique key.
You need a way to update this "SortOrder" column via the UI
Easy to program, easy to use: Implement a simple drag+drop interface in HTML using jQuery or whatever javascript library works for you. In the on-complete method (or in response to a save button), trigger an ajax call which will simply send an array of ids in the correct order. On the database side, loop over it and update the SortOrder accordingly, starting at 1, then 2, etc...
Harder to program, hard to use: Implement a classical move-up and move-down buttons. When clicked, send the id and action (eg, up, down) to the server. There are several strategies to handle this update, but I will outline a couple:
Assuming the user clicked "move up", you can swap IDs with the previous record.
Find the previous record: SELECT id FROM hln_stations WHERE SortOrder < (SELECT SortOrder FROM hln_stations WHERE id = ...) ORDER BY SortOrder DESC LIMIT 1
Run two update statements, swapping the SortOrder. Reverse for moving down. Add special code to detect top or bottom.
etc...
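The two-statement swap for "move up" might look like this sketch (assuming the SortOrder column suggested above, hypothetical ids, and a unique key on SortOrder, which forces parking one row first):

```sql
-- Find the record just above the one being moved (id = 42 here).
SELECT `id`, `SortOrder`
  FROM `hln_stations`
 WHERE `SortOrder` < (SELECT `SortOrder` FROM `hln_stations` WHERE `id` = 42)
 ORDER BY `SortOrder` DESC
 LIMIT 1;

-- Suppose id 42 has SortOrder 5 and the row above, id 17, has SortOrder 4.
-- Park one row on a sentinel value so the unique key never collides:
UPDATE `hln_stations` SET `SortOrder` = 0 WHERE `id` = 42;  -- temporary sentinel
UPDATE `hln_stations` SET `SortOrder` = 5 WHERE `id` = 17;
UPDATE `hln_stations` SET `SortOrder` = 4 WHERE `id` = 42;
```

Wrapping the three updates in a transaction (InnoDB) would keep the ordering consistent if two users reorder at once.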
There are other ways, but for a web interface, I suggest you do Drag+Drop, as the users will love it.
Databases are not "stored" in any order. They are stored in whatever way is convenient for the storage subsystem. If you delete a record, a new record may use the space of the old record "inserting" itself into the database. While it may seem like the database always returns records in a particular order, you can't rely on it.
The ONLY way to assure a sort order is to have a field to sort on.
I don't know where you can find an example, but you can look at the following code; it is very basic.
Let's say id is your primary key and there is a column sort_order. You want to store the primary keys in the following order: 5,4,3,6,8,7,9,10,2,1.
then you store them in an array:
$my_sorted = array(5,4,3,6,8,7,9,10,2,1);
then you update your table:
UPDATE `mytable` SET `sort_order` = (index in $my_sorted) WHERE `id` = (array value at that index);
Instead of doing many queries you can do it in one query like:
$query = "UPDATE `mytable` SET `sort_order` = CASE `id` ";
foreach ($my_sorted as $key => $val) {
    $query .= " WHEN '$val' THEN $key ";
}
// restrict to the listed ids: without this WHERE, the unmatched CASE would
// set sort_order to NULL on every other row
$query .= "END WHERE `id` IN (" . implode(',', $my_sorted) . ")";
Then you run $query in MySQL.
After updating the table you can SELECT from mytable with ORDER BY sort_order ASC or DESC.
Hope this helps.
"Re-ordering" the database would require two records to swap primary keys, or more likely all the data except the primary keys would need to be swapped. That would most likely be undesirable, since the primary key should be the one way you can consistently refer to a particular record.
The separate order field would be the way to go. Just make sure that you put an index on the order field so that things stay speedy.
There is no way to find out in which order a database stores its data. When we query the database, we specify the field that we want our data to be sorted by.
In your case, I would add a new column, sequence int(10), and write a PHP function to change/update the sequence number. The SELECT query then orders by the sequence number.
I am in the process of migrating a large amount of data from several databases into one. As an intermediary step I am copying the data to a file for each data type and source db and then copying it into a large table in my new database.
The structure is simple in the new table, called migrate_data. It consists of an id (primary key), a type_id (incremented within the data type set), data (a field containing a serialized PHP object holding the data I am migrating), source_db (refers to the source database, obviously), data_type (identifies what type of data we are looking at).
I have created keys and key combinations for everything but the data field. Currently I have the data field set as a longtext column. User inserts are taking about 4.8 seconds each on average. I was able to trim that down to 4.3 seconds using DELAY_KEY_WRITE=1 on the table.
What I want to know about is whether or not there is a way to improve the performance even more. Possibly by changing to a different data column type. That is why I ask about the longtext vs text vs blob. Are any of those more efficient for this sort of insert?
Before you answer, let me give you a little more information. I send all of the data to an insert function that takes the object, runs it through serialize, then runs the data insert. It is also being done using Drupal 6 (and its db_query function).
Any efficiency improvements would be awesome.
Current table structure:
CREATE TABLE IF NOT EXISTS `migrate_data` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`type_id` int(10) unsigned NOT NULL DEFAULT '0',
`data` longtext NOT NULL,
`source_db` varchar(128) NOT NULL DEFAULT '',
`data_type` varchar(128) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `migrated_data_source` (`source_db`),
KEY `migrated_data_type_id` (`type_id`),
KEY `migrated_data_data_type` (`data_type`),
KEY `migrated_data_id__source` (`id`,`source_db`),
KEY `migrated_data_type_id__source` (`type_id`,`source_db`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 DELAY_KEY_WRITE=1;
The various TEXT/BLOB types are identical in storage requirements in MySQL, and perform exactly the same way, except that TEXT fields are subject to character-set conversion and BLOB fields are not. In other words, BLOBs are for storing binary data that MUST come out exactly the same as it went in; TEXT fields are for storing text data that may/can/will be converted from one charset to another.
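Applied to the question's table, that suggests one hedged tweak (a sketch, not a measured optimization): serialized PHP strings must round-trip byte-for-byte, so a binary column sidesteps charset conversion entirely:

```sql
-- LONGBLOB stores bytes verbatim; a LONGTEXT column could be subject to
-- client/server character-set conversion on the way in or out.
ALTER TABLE `migrate_data` MODIFY `data` LONGBLOB NOT NULL;
```

Whether this changes insert timing noticeably would need benchmarking; the dominant cost is more likely the per-row query overhead and the five secondary indexes.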
Can anyone recommend the best practice for storing general site preferences? For example, the default page title if the script doesn't set one, or the number of featured items to display in a content box, or a list of thumbnail sizes that the system should make when a picture is uploaded. Centralizing these values has the obvious benefit of allowing one to easily alter preferences that might be used on many pages.
My default approach was to place these preferences as attribute/value pairs in a *gulp* EAV table.
This table is unlikely ever to become of a significant size, so I'm not too worried about performance. The rest of my schema is relational. It does make for some damn ugly queries though:
$sql = "SELECT name, value FROM preferences"
. " WHERE name = 'picture_sizes'"
. " OR name = 'num_picture_fields'"
. " OR name = 'server_path_to_http'"
. " OR name = 'picture_directory'";
$query = mysql_query($sql);
if(!$query) {
echo "Oops! ".mysql_error();
}
while($results = mysql_fetch_assoc($query)) {
$pref[$results['name']] = $results['value'];
}
Can anyone suggest a better approach?
In my application, I use this structure:
CREATE TABLE `general_settings` (
`setting_key` varchar(255) NOT NULL,
`setting_group` varchar(255) NOT NULL DEFAULT 'general',
`setting_label` varchar(255) DEFAULT NULL,
`setting_type` enum('text','integer','float','textarea','select','radio','checkbox') NOT NULL DEFAULT 'text',
`setting_value` text NOT NULL,
`setting_options` varchar(255) DEFAULT NULL,
`setting_weight` int(11) DEFAULT '0',
PRIMARY KEY (`setting_key`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Example data:
mysql> select * from general_settings;
+-----------------------------+---------------+------------------------------+--------------+-------------------------------+---------------------------------------+----------------+
| setting_key | setting_group | setting_label | setting_type | setting_value | setting_options | setting_weight |
+-----------------------------+---------------+------------------------------+--------------+-------------------------------+---------------------------------------+----------------+
| website_name                | website       | Website Name                 | text         | s:6:"DeenTV";                 | NULL                                  |              1 |
+-----------------------------+---------------+------------------------------+--------------+-------------------------------+---------------------------------------+----------------+
I store a serialized value in the setting_value column. I got this trick from the way WordPress saves settings in the database.
The setting_options column is used for a select, radio, or checkbox setting_type. It contains a serialized array value. In the admin, this value is displayed as options, so the admin can choose one of them.
Since I use CodeIgniter, I have a model to get a single value from the particular setting_key, so it's quite easy to use.
That looks fine the way you're doing it.
If you're worried that your queries are looking ugly, you could try cleaning up your SQL a bit.
Here's a cleaner version of the query you gave in your question:
SELECT name, value FROM preferences
WHERE name IN ('picture_sizes','num_picture_fields','server_path_to_http','picture_directory')
Or perhaps create a stored function to return a preference value, like this:
DELIMITER $$
CREATE FUNCTION `getPreference` (p_name VARCHAR(50)) RETURNS VARCHAR(200)
BEGIN
RETURN (SELECT `value` FROM preferences WHERE `name` = p_name);
END $$
DELIMITER ;
You could get your preferences using a query like this:
SELECT getPreference('server_path_to_http')
You sacrifice a bit of speed by not having your preferences hard-coded (obviously). But if you plan to enable a "site administrator" to change the default preferences - you should keep them in the database.
I think that's a perfectly acceptable structure, especially for small amounts of configuration like you have.
You could also store these settings in an .ini file and call parse_ini_file. If you need a bit more flexibility than INI allows (eg: nested arrays, etc), then you could just put them all into a .php file and include that.
If you still want to go with the configuration in the database, then (given that there's only a handful of rows) perhaps just read all the records in one go and cache it.
$config = array();
$result = mysql_query("SELECT * FROM config");
while ($row = mysql_fetch_assoc($result)) {
$config[$row['name']] = $row['value'];
}
I would think that going with an included file will save you some hassle further on, especially if you ever want to include an array as one of your variables. If you plan on changing configuration variables on the fly then perhaps it's better to db it, but if it's going to remain relatively static I would recommend a 'config.php' file.
A lot of applications, including e.g. Wordpress, make use of serialization and unserialization. It allows you to create a very simple table structure maybe with even just one record (e.g. with a site_id for your project(s)).
All your (many, many) variables in an array are serialized to a string and stored. Then fetched and unserialized back to your array structure.
Pro:
You don't have to plan perfect config structures beforehand, doing lots of ALTER TABLE stuff.
Con:
You can't search through your serialized array structure by means of SQL.
Commands:
string serialize ( mixed $value )
mixed unserialize ( string $str )
This also works with your objects. Unserializing an object can make use of the __wakeup() method.
Just create a configure class and store each value you want in a variable of the class.
Include this class in every file that needs it.
You can then access the class in all those files; by declaring it global in each function, you can access the configure class anywhere.
Hope this helps.
My approach to this problem is to create a table with a separate column for each config variable, just as you would with any other dataset, and to set the primary key in such a way that the table is incapable of containing more than a single row. I do this by setting up the primary key as an enum with only one allowed value, like so:
CREATE TABLE IF NOT EXISTS `global_config` (
`row_limiter` enum('onlyOneRowAllowed') NOT NULL DEFAULT 'onlyOneRowAllowed',#only one possible value
`someconfigvar` int(10) UNSIGNED NOT NULL DEFAULT 0,
`someotherconfigvar` varchar(32) DEFAULT 'whatever',
PRIMARY KEY(`row_limiter`)#primary key on a field which only allows one possible value
) ENGINE = InnoDB;
INSERT IGNORE INTO `global_config` () VALUES ();#to ensure our one row exists
Once you have done this setup, any of the values can then be modified with a simple UPDATE statement, looked up with a simple SELECT statement, joined onto other tables to be used in more complex queries, etc.
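With that setup in place, reads and writes are plain single-row statements (column names are from the sketch above):

```sql
-- Update a setting; a WHERE clause is unnecessary since only one row can exist.
UPDATE `global_config` SET `someconfigvar` = 42;

-- Read settings back, e.g. at application bootstrap.
SELECT `someconfigvar`, `someotherconfigvar` FROM `global_config`;
```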
Another benefit of this approach is that it allows for proper data types, foreign keys, and all the other things that come along with proper database design to ensure database integrity. (Just be sure to make your foreign keys ON DELETE SET NULL or ON DELETE RESTRICT rather than ON DELETE CASCADE.) For example, let's say that one of your config variables is the user ID of the site's primary administrator; you could expand the example with the following:
CREATE TABLE IF NOT EXISTS `global_config` (
`row_limiter` enum('onlyOneRowAllowed') NOT NULL DEFAULT 'onlyOneRowAllowed',
`someconfigvar` int(10) UNSIGNED NOT NULL DEFAULT 0,
`someotherconfigvar` varchar(32) DEFAULT 'whatever',
`primary_admin_id` bigint(20) UNSIGNED NOT NULL,
PRIMARY KEY(`row_limiter`),
FOREIGN KEY(`primary_admin_id`) REFERENCES `users`(`user_id`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE = InnoDB;
INSERT IGNORE INTO `global_config` (`primary_admin_id`) VALUES (1);#assuming your DB is set up that the initial user created is also the admin
This assures that you always have a valid configuration in place, even when a configuration variable needs to reference some other entity in the database.
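As a usage sketch for the foreign-key variant, the config row joins straight onto the users table (assumed, as in the example, to have a user_id column):

```sql
-- Fetch config values together with the primary administrator's user row.
SELECT g.`someconfigvar`, g.`someotherconfigvar`, u.*
  FROM `global_config` g
  JOIN `users` u ON u.`user_id` = g.`primary_admin_id`;
```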