Dynamically update SQL columns based on number of entries - PHP

I want to create a table like below:
id | timestamp | neighbour1_id | neighbour1_email | neighbour2_id | neighbour2_email
and so on, up to a maximum of 20 neighbours.
I have two questions:
Should I create the columns statically, or is there a way to create columns dynamically using PHP based on the length of the JSON array?
In either case, how would I refer to the columns dynamically and assign values to them from the JSON array?
My JSON array would look something like:
{"id": 123, "email_id": "abc", "neighbours": [{"neighbour1_id": 234, "neighbour1_email": "bcd"}, {"neighbour2_id": 345, "neighbour2_email": "dsf"}, {}, {}, ...]}
Please advise. Thanks.

It looks like you need to rethink your database structure a bit. To me it seems you need a single users (or whatever they are) table:
CREATE TABLE `users` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`email` varchar(255) NOT NULL,
`created_at` timestamp NOT NULL,
PRIMARY KEY (`id`)
);
And another table that defines relations between those users:
CREATE TABLE `neighbors` (
`parent` int(11) unsigned NOT NULL,
`child` int(11) unsigned NOT NULL,
PRIMARY KEY (`parent`,`child`)
);
Now you can add as many neighbors to each user as you want. Fetching them is as easy as:
SELECT `users`.* FROM `users`
INNER JOIN `neighbors` ON `users`.`id` = `neighbors`.`child`
WHERE `neighbors`.`parent` = ?
Here the question mark is replaced by the id of the user whose neighbors you are fetching, preferably by using a prepared statement.
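For example, a minimal PHP sketch using PDO (the connection details here are placeholders, not from the question):
// Placeholder connection details for illustration only.
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'user', 'pass');

// Fetch all neighbors of user 123 with a prepared statement.
$stmt = $pdo->prepare(
    'SELECT u.* FROM users u
     INNER JOIN neighbors n ON u.id = n.child
     WHERE n.parent = ?'
);
$stmt->execute([123]);
$neighbors = $stmt->fetchAll(PDO::FETCH_ASSOC);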
If you will be working exclusively with JSON and querying isn't much of an issue, you could consider a NoSQL database or document store (like Redis or MongoDB), but that is an entirely different story.
Just repeating a bunch of columns x times is definitely not the way to go. Vertical size (number of rows) is no big issue for tables in relational databases; they are designed for that. Horizontal size (number of columns), however, is something to be careful with, as it may make your db unnecessarily large and decrease performance.
Just consider what you would have to do to find a user who has a neighbour with email address [x]: you would have to repeat your WHERE condition 20 times, once for each possible email column. And that is just one example...

Well, the answer I was working on while pevara posted theirs is almost the same...
CREATE TABLE `neighbours` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`neighbour_email` char(64) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
CREATE TABLE `neighbour_email_collections` (
`id` int(10) unsigned NOT NULL,
`email_id` char(64) NOT NULL,
`neighbour_id` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`,`neighbour_id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
insert into neighbours values (234, "bcd");
insert into neighbours values (345, "dsf");
insert into neighbour_email_collections values(123, "abc", 234);
insert into neighbour_email_collections values(123, "abc", 345);
select *
from neighbours
left join neighbour_email_collections
on neighbour_email_collections.neighbour_id=neighbours.id
where neighbour_email_collections.id=123;
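To tie either schema back to the JSON payload from the question, here is a rough PHP sketch against pevara's users/neighbors tables (assuming an existing PDO connection $pdo; $json holds the raw payload):
$data = json_decode($json, true);

// INSERT IGNORE so re-running the import does not fail on existing ids.
$insertUser = $pdo->prepare('INSERT IGNORE INTO users (id, email) VALUES (?, ?)');
$insertLink = $pdo->prepare('INSERT IGNORE INTO neighbors (parent, child) VALUES (?, ?)');

$insertUser->execute([$data['id'], $data['email_id']]);

// Works for however many neighbours the array happens to contain.
foreach ($data['neighbours'] as $i => $n) {
    if (empty($n)) {
        continue; // skip the empty {} placeholders
    }
    $key = 'neighbour' . ($i + 1); // keys are numbered per the question's JSON
    $insertUser->execute([$n[$key . '_id'], $n[$key . '_email']]);
    $insertLink->execute([$data['id'], $n[$key . '_id']]);
}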

Related

MySQL full-text search on 2 columns in 2 tables

I have 2 tables. One is questions and the other is answers with the following format.
question(id,text,user)
answer(id,text,question_id,user)
Both tables have the same number of rows, obviously.
When a user searches for a phrase or a word, I want to search both the question text and the answer text for it, and return the matches ordered by most common.
I tried using MySQL's full-text search, but I couldn't make it work across 2 different tables and 2 columns.
I also don't want to merge the questions and answers into another table, if possible.
Questions table:
CREATE TABLE `questions` (
`id` int(11) NOT NULL,
`message_id` int(11) DEFAULT NULL,
`text` text NOT NULL,
`answer` int(11) DEFAULT NULL,
`status` varchar(255) NOT NULL,
`user` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `questions`
ADD PRIMARY KEY (`id`);
ALTER TABLE `questions` ADD FULLTEXT KEY `text` (`text`);
ALTER TABLE `questions`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
Answers table:
CREATE TABLE `answers` (
`id` int(11) NOT NULL,
`message_id` int(11) DEFAULT NULL,
`text` text NOT NULL,
`question` int(11) NOT NULL,
`status` varchar(255) NOT NULL,
`user` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `answers`
ADD PRIMARY KEY (`id`);
ALTER TABLE `answers` ADD FULLTEXT KEY `text` (`text`);
ALTER TABLE `answers`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
If you want to query on multiple keywords, my advice is to split them in your application code (e.g. use PHP's explode() to separate the keywords).
Also in app code, try to generate your WHERE clause dynamically (since you cannot know the number of keywords in advance):
$sql_where = "";
for ($i = 0; $i < count($search); $i++) {
    if ($i == 0) {
        $sql_where .= "WHERE text LIKE '%" . $search[$i] . "%'";
    } else {
        $sql_where .= " OR text LIKE '%" . $search[$i] . "%'";
    }
}
// Note: escape the keywords or use placeholders in real code to avoid SQL injection.
You will then need to query both your tables using:
$query = "SELECT id, text, 'QUE' AS type FROM questions $sql_where UNION SELECT id, text, 'ANS' AS type FROM answers $sql_where";
Note that a type column was added to each part in order to identify the source of each result row. This will help if you need to display where a result came from.
For the rest, I'll just explain the general idea. Use the search array built earlier to compare against your result set: for each keyword, look for it in each returned row. On the side, keep a count array that stores the number of hits per row along with the row's index (which will be used later). Whenever you find a keyword in a row, increment that row's entry in the count array by 1.
Once you're done, all you have to do is reorder the count array in descending order of hits. The row indexes stored earlier move with their counts, which is what lets you use them in the next stage.
For output, loop over the count array and load each result entry from the result set using the row index stored in the count array.
Take the above as a general idea rather than drop-in code.
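As a concrete PHP sketch of that counting-and-reordering idea (assuming $rows holds the rows returned by the UNION query and $search the keyword array):
$counts = [];
foreach ($rows as $i => $row) {
    $hits = 0;
    foreach ($search as $word) {
        if (stripos($row['text'], $word) !== false) {
            $hits++;
        }
    }
    $counts[$i] = $hits; // row index => number of keyword hits
}
arsort($counts); // descending by hits, row indexes preserved

foreach (array_keys($counts) as $i) {
    echo $rows[$i]['type'] . ': ' . $rows[$i]['text'] . "\n";
}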
"(select id,text, 'que' as type from question WHERE text LIKE '%keyword%')
UNION
(select id,text,'ans' as type from answer WHERE text LIKE '%keyword%')";
If you are unsure which table a row was selected from, you can check the type column.
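Since both schemas above already define FULLTEXT keys on text, an alternative sketch that uses MySQL's native full-text relevance instead of LIKE (PDO connection $pdo assumed; $keywords is a space-separated search string):
$sql = "(SELECT id, text, 'que' AS type, MATCH(text) AGAINST(?) AS score
         FROM questions WHERE MATCH(text) AGAINST(?))
        UNION ALL
        (SELECT id, text, 'ans' AS type, MATCH(text) AGAINST(?) AS score
         FROM answers WHERE MATCH(text) AGAINST(?))
        ORDER BY score DESC";
$stmt = $pdo->prepare($sql);
// The same keyword string is bound to all four placeholders.
$stmt->execute([$keywords, $keywords, $keywords, $keywords]);
$results = $stmt->fetchAll(PDO::FETCH_ASSOC);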

MySQL INSERT SELECT and get primary key values

I want to know if it's possible to INSERT records from a SELECT statement on a source table into a destination table, get the inserted IDs, and UPDATE a field on all the corresponding records in the source table.
Take for example, the destination table 'payments':
CREATE TABLE `payments` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`txid` TEXT NULL,
`amount` DECIMAL(16,8) NOT NULL DEFAULT '0.00000000',
`worker` INT(10) UNSIGNED NOT NULL,
PRIMARY KEY (`id`)
)
The source table 'log':
CREATE TABLE `log` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`solution` VARCHAR(80) NOT NULL,
`worker` INT(11) NOT NULL,
`amount` DECIMAL(16,8) NOT NULL DEFAULT '0.00000000',
`pstatus` VARCHAR(50) NOT NULL DEFAULT 'pending',
`payment_id` INT(10) UNSIGNED NULL DEFAULT NULL,
PRIMARY KEY (`id`)
)
The "log" table contains multiple "micro-payments" for a completed task. The purpose of the "payments" table is to consolidate the micro-payments into one larger payment:
INSERT INTO payments (amount, worker)
SELECT SUM(l.amount) AS total, l.worker FROM log l
WHERE l.pstatus = "ready"
AND l.payment_id IS NULL
AND l.amount > 0
GROUP BY l.worker
I'm not sure if it's clear from the code above, but I would like the payment_id field to be given the value of the insert id, so that it's possible to trace each micro-payment back to the larger consolidated payment.
I could do it all client side (PHP), but I was wondering if there was some magical SQL query that would do it for me? Or maybe I am going about it all wrong.
You can use mysql_insert_id() to get the id of the inserted record.
See mysql_insert_id().
But that function is deprecated.
If you're using PDO, use PDO::lastInsertId.
If you're using MySQLi, use mysqli::$insert_id.
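For example, a quick PDO sketch (assuming a connection $pdo):
$pdo->exec("INSERT INTO payments (amount, worker) VALUES (1.5, 42)");
$paymentId = $pdo->lastInsertId();
// Caveat: after a multi-row INSERT ... SELECT, this returns the id of the
// FIRST row inserted by the statement, not one id per inserted row.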
Well, the linking column between the tables is worker. After you have inserted your values, just do
UPDATE log l
INNER JOIN payments p ON l.worker = p.worker
SET l.payment_id = p.id;
and that's it. Or did I get the question wrong? Note that the worker columns differ in their signed/unsigned attribute; you might want to change that.
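Putting the two statements together, a minimal PDO sketch that runs the INSERT ... SELECT and the back-linking UPDATE in one transaction (connection $pdo assumed):
$pdo->beginTransaction();
try {
    // Consolidate pending micro-payments into one payment per worker.
    $pdo->exec(
        "INSERT INTO payments (amount, worker)
         SELECT SUM(l.amount), l.worker FROM log l
         WHERE l.pstatus = 'ready' AND l.payment_id IS NULL AND l.amount > 0
         GROUP BY l.worker"
    );
    // Link each pending log row back to its consolidated payment,
    // following the join-on-worker approach from the answer above.
    $pdo->exec(
        "UPDATE log l
         INNER JOIN payments p ON l.worker = p.worker
         SET l.payment_id = p.id
         WHERE l.payment_id IS NULL AND l.pstatus = 'ready'"
    );
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}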
I think you could also use an ORM in PHP for this:
Look into Doctrine.
Doctrine 1.2 implements Active Record. Doctrine 2+ is a DataMapper ORM.
Also, check out Xyster. It's based on the Data Mapper pattern.
Also, take a look at DataMapper vs. Active Record.

Best way to archive about 100 values from one table to another in MySQL?

I have a table containing the balances of about 100 accounts (the exact number varies), with one record per account. The balances are continually updated, but I would like to archive the current balance of each account once a day.
I'm looking for the most efficient way to do this.
Table schemas:
-- --------------------------------------------------------
--
-- Table structure for table `acc_bals`
--
-- Tracks the balances of all COAs and bank accounts
CREATE TABLE IF NOT EXISTS `acc_bals` (
`id` INT(11) NOT NULL auto_increment,
`acc_type` TINYINT(4) NOT NULL comment '1 - coa; 2 - bank accounts',
`acc_id` SMALLINT(5) NOT NULL,
`acct_balance` VARBINARY(40) NOT NULL,
PRIMARY KEY (id)
) engine=InnoDB DEFAULT charset=utf8 auto_increment=1;
--
-- Table structure for table `balance_archive`
--
CREATE TABLE IF NOT EXISTS `balance_archive` (
`id` mediumint(6) unsigned NOT NULL AUTO_INCREMENT,
`date` date NOT NULL COMMENT 'Beginning of the day for which this value was archived for..',
`coa_id` smallint(4) unsigned NOT NULL COMMENT 'Foreign ID of COA.',
`bal` varbinary(27) NOT NULL COMMENT 'Archived COA balance at beginning of specified date.',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
The reason for the varbinary columns is that the balances are encrypted.
I was originally thinking to query acc_bals, put all the account ids and decrypted values into an array, and then run a second query copying each item of the array into the archive table.
It then occurred to me that I probably don't need to decrypt the values at all, which would save a lot of processing, and furthermore, that it might be possible to do this in a single query.
If my approach seems right, perhaps someone can suggest what that query might look like?
I'm using MySQL PDO.
A simple INSERT ... SELECT inside a recurring event, scheduled once a day, will do.
Something like :
insert into balance_archive (`date`, coa_id, bal)
select curdate(), ab.acc_id, ab.acct_balance
from acc_bals ab;
That way you don't need to use PHP at all; you can do it with MySQL only.
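If you want MySQL itself to run this once a day, a sketch that creates the recurring event through PDO (assuming the event scheduler is enabled, i.e. event_scheduler=ON, and an event name picked for illustration):
// One-time setup; afterwards MySQL runs the archive job daily without PHP.
$pdo->exec(
    "CREATE EVENT IF NOT EXISTS archive_balances
     ON SCHEDULE EVERY 1 DAY
     STARTS CURRENT_DATE + INTERVAL 1 DAY
     DO
       INSERT INTO balance_archive (`date`, coa_id, bal)
       SELECT CURDATE(), ab.acc_id, ab.acct_balance
       FROM acc_bals ab"
);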

MySQL: storing lots of bit-sized settings

I have ~38 columns in a table.
ID, name, and the other 36 are bit-sized settings for the user.
The 36 other columns are grouped into 6 "settings", e.g. Setting1_on, Setting1_colored, etc.
Is this the best way to do this?
Thanks.
If it must be in one table and they're all toggle-type settings (yes/no, true/false, etc.), use TINYINT to save space.
I'd recommend creating a separate 'settings' table with 36 records, one for each option, plus a linking table between it and the user table with a value column to record each user's setting. This creates a many-to-many link between users and settings, and it makes adding a new setting easy: just add a new row to the 'settings' table. Here is an example schema. I use varchar for the setting value to allow for later settings that might not be bits, but feel free to use TINYINT if size is an issue. This solution will not use as much space as the single wide table, which carries the danger of a large, sparsely populated set of columns.
CREATE TABLE `user` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(64) DEFAULT NULL,
`address` varchar(64) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `setting` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(64) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `setting_user` (
`user_id` int(11) NOT NULL DEFAULT '0',
`setting_id` int(11) unsigned NOT NULL,
`value` varchar(32) DEFAULT NULL,
PRIMARY KEY (`user_id`,`setting_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
It all depends on how you want to access them. If you want to (or must) select just one of them, then go with @Ray's solution. If they can be functionally grouped (really grouped, not some pretend grouping of all those that start with F), i.e. you'll always need a number of them together for a function and reading or writing them as individual flags doesn't make sense, then storing them as ints and using logic operators on them might be a goer; see the sketch below.
That said, unless you are doing a lot of reads and writes to the db during a session, bundling them up into ints gains you very little performance-wise. It would save some space in the DB if all the options had to exist; if doesn't-exist = false, it could be a toss-up.
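A small PHP sketch of that int-packing idea (the flag names are made up for illustration):
// Hypothetical flags for one settings group, one bit each.
const SETTING1_ON      = 1 << 0; // 1
const SETTING1_COLORED = 1 << 1; // 2
const SETTING1_BOLD    = 1 << 2; // 4

$settings = SETTING1_ON | SETTING1_COLORED; // pack into one int column

$isColored = (bool) ($settings & SETTING1_COLORED); // test a flag
$settings |= SETTING1_BOLD;                         // set a flag
$settings &= ~SETTING1_ON;                          // clear a flag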
So all things being unequal, I'd go with Mr Ray.
MySQL has a SET type that could be useful here. Everything would fit into a single SET, but six SETs, one per settings group, might make more sense.
http://dev.mysql.com/doc/refman/5.5/en/set.html
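For instance, a sketch of one such SET column (setting names hypothetical, PDO connection $pdo assumed):
$pdo->exec(
    "ALTER TABLE `user`
     ADD `setting1` SET('on','colored','bold','visible','locked','shared')
     NOT NULL DEFAULT ''"
);
// Store any combination of the flags as a comma-separated list:
$stmt = $pdo->prepare("UPDATE `user` SET `setting1` = ? WHERE `id` = ?");
$stmt->execute(['on,colored', 42]);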

MySQL + PHP: select multiple rows on a join, then update those rows/insert new ones

I want to do the following:
Select multiple rows on an INNER JOIN between two tables.
Using the primary keys of the returned rows, either:
Update those rows, or
Insert rows into a different table with the returned primary key as a foreign key.
In PHP, echo the results of step #1 out, ideally with results of #2 included (to be consumed by a client).
I've written the join, but not much else. I tried using a user-defined variable to store the primary keys from step #1 for use in step #2, but as I understand it user-defined variables hold a single value, while my SELECT can return multiple rows. Is there a way to do this in a single MySQL transaction? If not, is there a way to do it with some modicum of efficiency?
Update: Here are the schemas of the tables I'm concerned with (names changed, 'natch):
CREATE TABLE IF NOT EXISTS `widgets` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`author` varchar(75) COLLATE utf8_unicode_ci NOT NULL,
`text` varchar(500) COLLATE utf8_unicode_ci NOT NULL,
`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated` timestamp
NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
);
CREATE TABLE IF NOT EXISTS `downloads` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`widget_id` int(11) unsigned NOT NULL,
`lat` float NOT NULL,
`lon` float NOT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
);
I'm currently doing a join to get all widgets paired with their downloads. Assuming $author and $batchSize are PHP variables:
SELECT w.id, w.author, w.text, w.created, d.lat, d.lon, d.date
FROM widgets AS w
INNER JOIN downloads AS d
ON w.id = d.widget_id
WHERE w.author NOT LIKE '$author'
ORDER BY w.updated ASC
LIMIT $batchSize;
Ideally my query would get a bunch of widgets and either update their updated field OR insert a new download referencing each widget (I'd love to see answers for both approaches; I haven't decided on one yet), and then allow the joined widgets and downloads to be echoed. Bonus points if the newly inserted downloads or updated widgets are included in the echo.
Since you asked whether you can do this in a single MySQL transaction, I'll mention cursors. Cursors allow you to do a select and loop through each row, doing the insert or anything else you want, all within the db. So you could create a stored procedure that does all the logic behind the scenes and call it from PHP.
Based on your update, I wanted to mention that the stored procedure can return the new recordset or an id, whatever you want. For more info on creating stored procedures that return a recordset with PHP, check out this post: http://www.joeyrivera.com/2009/using-mysql-stored-procedure-inout-and-recordset-w-php/
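Alternatively, if you'd rather keep the logic in PHP, here is a rough sketch of the insert-a-download variant inside one PDO transaction (connection $pdo assumed; $author and $batchSize as in the question; $clientLat and $clientLon are hypothetical client coordinates):
$pdo->beginTransaction();
try {
    // Step 1: select the widgets paired with their downloads.
    $stmt = $pdo->prepare(
        "SELECT w.id, w.author, w.text, w.created, d.lat, d.lon, d.date
         FROM widgets AS w
         INNER JOIN downloads AS d ON w.id = d.widget_id
         WHERE w.author NOT LIKE ?
         ORDER BY w.updated ASC
         LIMIT ?"
    );
    $stmt->bindValue(1, $author);
    $stmt->bindValue(2, (int) $batchSize, PDO::PARAM_INT);
    $stmt->execute();
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    // Step 2: insert one new download per distinct widget id
    // (a widget appears once per existing download above, hence the dedupe).
    $insert = $pdo->prepare("INSERT INTO downloads (widget_id, lat, lon) VALUES (?, ?, ?)");
    foreach (array_unique(array_column($rows, 'id')) as $widgetId) {
        $insert->execute([$widgetId, $clientLat, $clientLon]);
    }

    $pdo->commit();
    echo json_encode($rows); // step 3: echo the joined rows to the client
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}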
