php mysql update all fields

Sorry for asking a trivial question. I want to translate some of the fields of my database table, which has one million rows. What I want to do is read field 1, run it through the translate function, and write the result to field 3; likewise, the translation of field 2 needs to be written to field 4.
Initial table:
field id | field 1 | field 2   | field 3 | field 4
1        | apple   | pear      | empty   | empty
2        | banana  | pineapple | empty   | empty
End result table (translate(apple) = yabloko):
field id | field 1 | field 2   | field 3 | field 4
1        | apple   | pear      | yabloko | grusha
2        | banana  | pineapple | banan   | ananas
I already have the translate function; the question is how to perform this on all one million rows. How do I construct the loop through them correctly? (Some IDs are certainly missing, as some of the data was removed.)
Thank you so much in advance!

Rather than "construct a loop" and process row by row, the normative pattern would be to perform the operation in a single statement.
I'd populate a translation table:
CREATE TABLE my_translation
( old_word VARCHAR(100) NOT NULL PRIMARY KEY
, new_word VARCHAR(100)
) Engine=InnoDB;
INSERT INTO my_translation (old_word, new_word) VALUES
 ('apple'    ,'yabloko')
,('pear'     ,'grusha')
,('banana'   ,'banan')
,('pineapple','ananas');
Then do an update. The tricky part is leaving field_3 and field_4 unmodified if there's no match.
UPDATE my_table t
LEFT
JOIN my_translation c3
ON c3.old_word = t.field_1
LEFT
JOIN my_translation c4
ON c4.old_word = t.field_2
SET t.field_3 = IF(c3.old_word IS NULL,t.field_3,c3.new_word)
, t.field_4 = IF(c4.old_word IS NULL,t.field_4,c4.new_word)
NOTE: If this is a one-time operation, I might consider doing this as an INSERT into a new table, and then swapping the table names and changing foreign key references, to put the new table in place of the old table.
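If translate() is a PHP function rather than something callable from SQL, the translation table can be filled once per distinct word before running the UPDATE above. A minimal sketch, assuming a PDO connection in $pdo and that translate() returns the translated word (both assumptions, not spelled out in the original answer):
// Collect every distinct word once; duplicates are skipped because
// old_word is the primary key of the my_translation table above.
$pdo->exec("
    INSERT IGNORE INTO my_translation (old_word)
    SELECT field_1 FROM my_table
    UNION
    SELECT field_2 FROM my_table
");

// Call translate() once per distinct word instead of once per row.
$select = $pdo->query("SELECT old_word FROM my_translation WHERE new_word IS NULL");
$update = $pdo->prepare("UPDATE my_translation SET new_word = ? WHERE old_word = ?");

foreach ($select->fetchAll(PDO::FETCH_COLUMN) as $word) {
    $update->execute([translate($word), $word]);
}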

Related

Replace value with Foreign Key's result

I have a table with all my invoice items as packages:
Table: invoice_items
invoice_item_id | package_id | addon_1 | addon_2 | addon_3 | ...
----------------|------------|---------|---------|---------|----
1               | 6          | 2       | 5       | 3       |
Then my other table:
Table: addons
addon_id | addon_name   | addon_desc
---------|--------------|--------------------------
1        | Dance Lights | Brighten up the party...
2        | Fog Machine  | Add some fog for an e...
Instead of taking up space storing the addon name in my invoice_items table, I'd like to just include the addon_id in the addon_1, addon_2, etc columns.
How do I then get the name of the addon when doing a query for invoice_item rows?
Right now I just have it programmed into the page that if addon_id == 1, echo "Dance Lights", etc but I'd like to do it in the query. Here is my current query:
$invoice_items_SQL = "
SELECT invoice_items.*, packages.*
FROM `invoice_items`
INNER JOIN packages ON invoice_items.invoice_item_id = packages.package_id
WHERE `event_id` = \"$event_id\"
";
So I'm able to do this with packages, but only because there's just one package_id per row, but there are up to 9 addons :(
The most direct way of doing it is to join onto the table multiple times. That's a bit naff though because you'll write almost the same thing 9 times.
Another, better way would be to restructure your tables - you need another table with 2 data columns: invoice_id and addon_id. You then need either an auto-inc primary column, or use both of those existing columns as a dual primary key. So this is a many-to-many junction table.
From there you can query without nine repetitive joins, but you will get a row for every addon an invoice item has (so if it has three addons it will appear three times in the results). From there you can use GROUP_CONCAT to concatenate the names of the addons into a single field, so that you only get one row per invoice item, as sketched below.
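A sketch of that junction-table approach (the names are illustrative; the answer calls the columns invoice_id and addon_id, while invoice_item_id is used here to match the question's table):
-- Junction table: one row per (invoice item, addon) pair
CREATE TABLE invoice_item_addons (
  invoice_item_id INT NOT NULL,
  addon_id        INT NOT NULL,
  PRIMARY KEY (invoice_item_id, addon_id)
);

-- One row per invoice item, with its addon names collapsed into one field
SELECT ii.invoice_item_id,
       GROUP_CONCAT(a.addon_name SEPARATOR ', ') AS addon_names
FROM invoice_items ii
LEFT JOIN invoice_item_addons ia ON ia.invoice_item_id = ii.invoice_item_id
LEFT JOIN addons a ON a.addon_id = ia.addon_id
GROUP BY ii.invoice_item_id;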

append more than one value in single row using php

I have a table called user with 3 columns: id, name, and phone no.
I want to insert data like the example below.
+----+---------+-----------+
| id | name    | phone no  |
+----+---------+-----------+
| 1  | mahadev | +91 XXXXX |
| 2  | swamy   | +91 YYYYY |
|    |         | +91 ZZZZZ |
| 3  | charlie | +91 AAAAA |
+----+---------+-----------+
The question is: how can I add more than one value (one by one) to the same row, as shown for id = 2 above?
Could anyone please help me with this?
Thanks in advance.
You cannot do what you intended, the way you intended, and for a reason.
One possible solution (bad), would be to make id non-unique and then insert two times id 2, name swamy, phone for two different phones.
Proper solution is to have two tables. One is your current user, which would have only id and name.
Second table is phone_numbers which would have user_id and phone_no. Primary key on that table would be composite of user_id and phone_no so it would prevent duplicates. Then in that table you can insert as many numbers as you need.
In your example you would have two rows with user_id=2, one for each phone number.
Then it is only a matter of a JOIN to bring the two tables together and display your results, as sketched below.
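A sketch of that two-table layout (the column types are illustrative assumptions):
CREATE TABLE user (
  id   INT NOT NULL PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);

-- One row per phone number; the composite primary key prevents duplicates
CREATE TABLE phone_numbers (
  user_id  INT NOT NULL,
  phone_no VARCHAR(20) NOT NULL,
  PRIMARY KEY (user_id, phone_no),
  FOREIGN KEY (user_id) REFERENCES user(id)
);

INSERT INTO phone_numbers (user_id, phone_no) VALUES
(2, '+91 YYYYY'),
(2, '+91 ZZZZZ');

-- One result row per phone number, alongside the user's name
SELECT u.id, u.name, p.phone_no
FROM user u
LEFT JOIN phone_numbers p ON p.user_id = u.id;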
SQL's architecture doesn't allow such things. You need to use more than one row, or more than one table with foreign keys. Or you can serialize the phone numbers before you put them into MySQL.
One possible solution could be creating an array of that data and then storing it with the serialize() function.
Small example:
$phones_array = array('phone_a' => '+91 YYYYY', 'phone_b' => '+91 ZZZZZ');
$serialized = serialize($phones_array);
Now your data is serialized into a string; var_dump($serialized) should give:
string 'a:2:{s:7:"phone_a";s:9:"+91 YYYYY";s:7:"phone_b";s:9:"+91 ZZZZZ";}' (length=66)
You can now insert this value into your table.
You can retrieve the original array with:
$phones_array = unserialize($serialized);

Change existing row ID based on AUTO_INCREMENT (unique key)

I have a table that records tickets that are separated by a column that denotes the "database". I have a unique key on the database and cid columns so that it increments each database uniquely (cid has the AUTO_INCREMENT attribute to accomplish this). I increment id manually since I cannot make two AUTO_INCREMENT columns (and I'd rather the AUTO_INCREMENT take care of the more complicated task of the uniqueness).
This makes my data look like this basically:
-----------------------------
| id | cid | database |
-----------------------------
| 1 | 1 | 1 |
| 2 | 1 | 2 |
| 3 | 2 | 2 |
-----------------------------
This works perfectly well.
I am trying to make a feature that will allow a ticket to be "moved" to another database; frequently a user may enter the ticket in the wrong database. Instead of having to close the ticket and completely create a new one (copy/pasting all the data over), I'd like to make it easier for the user of course.
I want to be able to change the database and cid fields uniquely without having to tamper with the id field. I want to do an UPDATE (or the like) since there are foreign key constraints on other tables the link to the id field; this is why I don't simply do a REPLACE or DELETE then INSERT, as I don't want it to delete all of the other table data and then have to recreate it (log entries, transactions, appointments, etc.).
How can I get the next unique AUTO_INCREMENT value (based on the new database value), then use that to update the desired row?
For example, in the above dataset, I want to change the first record to go to "database #2". Whatever query I make needs to make the data change to this:
-----------------------------
| id | cid | database |
-----------------------------
| 1 | 3 | 2 |
| 2 | 1 | 2 |
| 3 | 2 | 2 |
-----------------------------
I'm not sure if the AUTO_INCREMENT needs to be incremented, as my understanding is that the unique key makes it just calculate the next appropriate value on the fly.
I actually ended up making it work once I re-read an excerpt on using AUTO_INCREMENT on multiple columns.
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a
secondary column in a multiple-column index. In this case, the
generated value for the AUTO_INCREMENT column is calculated as
MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is
useful when you want to put data into ordered groups.
This was the clue I needed. I simply mimicked the query MySQL runs internally according to that quote, and joined it into my UPDATE query as follows. Assume $new_database is the database to move to, and $id is the current ticket id.
UPDATE `tickets` AS t1,
(
    SELECT MAX(cid) + 1 AS new_cid
    FROM `tickets`
    WHERE `database` = {$new_database}
) AS t2
SET t1.cid = t2.new_cid,
    t1.`database` = {$new_database}
WHERE t1.id = {$id}
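Interpolating $new_database and $id directly into the SQL string invites injection; a sketch of the same statement with PDO placeholders (the PDO connection $pdo is an assumption, not part of the original answer):
// Two distinct placeholders are used because, with emulation off,
// PDO does not allow the same named placeholder to be bound twice.
$sql = "
    UPDATE `tickets` AS t1,
    (
        SELECT MAX(cid) + 1 AS new_cid
        FROM `tickets`
        WHERE `database` = :new_db
    ) AS t2
    SET t1.cid = t2.new_cid,
        t1.`database` = :new_db2
    WHERE t1.id = :id
";
$stmt = $pdo->prepare($sql);
$stmt->execute([
    ':new_db'  => $new_database,
    ':new_db2' => $new_database,
    ':id'      => $id,
]);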

Storing variable number of values of something in a database

I'm developing a QA web app which will have a number of points to be evaluated, each assigned to one of the following categories:
Call management
Technical skills
Ticket management
As these categories aren't likely to change, it's not worth making them dynamic; the trouble is that the points themselves are likely to change.
At first I had a 'quality' table with a column for each point, but then the requirements changed and I'm kind of stuck.
I have to store "evaluations" that hold all the points with their values, but those points may change in the future.
I thought that in the quality table I could store some kind of string like this:
1=1|2=1|3=2
where each pair is a point ID and the punctuation given for that point.
Can someone point me to a better method to do that?
As mentioned many times here on SO, NEVER PUT MORE THAN ONE VALUE INTO A DB FIELD IF YOU WANT TO ACCESS THEM SEPARATELY.
So I suggest having 2 additional tables:
CREATE TABLE categories (id int AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50) NOT NULL);
INSERT INTO categories VALUES (1,"Call management"),(2,"Technical skills"),(3,"Ticket management");
and
CREATE TABLE qualities (id int AUTO_INCREMENT PRIMARY KEY, category int NOT NULL, punctuation int NOT NULL);
Then store and query your data accordingly.
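For example, storing one evaluation's scores and reading them back with the category names could look like this (a sketch based on the two tables above; how rows of qualities are tied to a particular evaluation is still up to you):
-- Scores for one evaluation: Call management = 1, Technical skills = 1, Ticket management = 2
INSERT INTO qualities (category, punctuation) VALUES (1, 1), (2, 1), (3, 2);

-- Read the scores back with their category names
SELECT c.name, q.punctuation
FROM qualities q
JOIN categories c ON c.id = q.category;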
This table is not normalized. It violates 1st Normal Form (1NF):
Evaluation
----------------------------------------
EvaluationId | List Of point=punctuation
1 | 1=1|2=1|3=2
2 | 1=5|2=6|3=7
You can read more about Database Normalization basics.
The table could be normalized as:
Evaluation
-------------
EvaluationId
1
2
Quality
---------------------------------------
EvaluationId | Point | Punctuation
1 | 1 | 1
1 | 2 | 1
1 | 3 | 2
2 | 1 | 5
2 | 2 | 6
2 | 3 | 7
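In MySQL that normalized structure might be declared as below (a sketch following the tables above), and the original pipe-delimited view can still be produced on demand with GROUP_CONCAT:
CREATE TABLE Evaluation (
  EvaluationId INT AUTO_INCREMENT PRIMARY KEY
);

CREATE TABLE Quality (
  EvaluationId INT NOT NULL,
  Point        INT NOT NULL,
  Punctuation  INT NOT NULL,
  PRIMARY KEY (EvaluationId, Point),
  FOREIGN KEY (EvaluationId) REFERENCES Evaluation(EvaluationId)
);

-- Rebuild the 1=1|2=1|3=2 style string per evaluation when needed
SELECT EvaluationId,
       GROUP_CONCAT(CONCAT(Point, '=', Punctuation) ORDER BY Point SEPARATOR '|') AS points
FROM Quality
GROUP BY EvaluationId;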

MySQL design with dynamic number of fields

My experience with MySQL is very basic. The simple stuff is easy enough, but I ran into something that is going to require a little more knowledge. I have a need for a table that stores a small list of words. The number of words stored could be anywhere between 1 to 15. Later, I plan on searching through the table by these words. I have thought about a few different methods:
A.) I could create the table with 15 columns, and just fill the columns with NULL values whenever there are fewer than 15 words. I don't really like this. It seems really inefficient.
B.) Another option is to use just a single field, and store the data as a comma separated list. Whenever I come back to search, I would just run a regular expression on the field. Again, this seems really inefficient.
I would hope there is a good alternative to those two options. Any advice would be very appreciated.
-Thanks
C) Use a normal form; use multiple rows with appropriate keys. An example:
mysql> SELECT * FROM blah;
+----+-----+-----------+
| K | grp | name |
+----+-----+-----------+
| 1 | 1 | foo |
| 2 | 1 | bar |
| 3 | 2 | hydrogen |
| 4 | 4 | dasher |
| 5 | 2 | helium |
| 6 | 2 | lithium |
| 7 | 4 | dancer |
| 8 | 3 | winken |
| 9 | 4 | prancer |
| 10 | 2 | beryllium |
| 11 | 1 | baz |
| 12 | 3 | blinken |
| 13 | 4 | vixen |
| 14 | 1 | quux |
| 15 | 4 | comet |
| 16 | 2 | boron |
| 17 | 4 | cupid |
| 18 | 4 | donner |
| 19 | 4 | blitzen |
| 20 | 3 | nod |
| 21 | 4 | rudolph |
+----+-----+-----------+
21 rows in set (0.00 sec)
This is the table I posted in this other question about group_concat. You'll note that there is a unique key K for every row. There is another key grp which represents each category. The remaining field represents a category member, and there can be variable numbers of these per category.
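To get each group back as a single row, the GROUP_CONCAT approach referenced above looks like this (a sketch against the blah table shown):
SELECT grp, GROUP_CONCAT(name ORDER BY K SEPARATOR ', ') AS members
FROM blah
GROUP BY grp;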
What other data is associated with these words?
One typical way to handle this kind of problem is best described by example. Let's assume your table captures certain words found in certain documents. One typical way is to assign each document an identifier. Let's pretend, for the moment, that each document is a web URL, so you'd have a table something like this:
CREATE TABLE WebPage (
ID INTEGER NOT NULL,
URL VARCHAR(...) NOT NULL
)
Your Words table might look something like this:
CREATE TABLE Words (
Word VARCHAR(...) NOT NULL,
DocumentID INTEGER NOT NULL
)
Then, for each word, you create a new row in the table. To find all words in a particular document, select by the document's ID:
SELECT Words.Word FROM Words, WebPage
WHERE Words.DocumentID = WebPage.ID
AND WebPage.URL = 'http://whatever/web/page/'
To find all documents with a particular word, select by word:
SELECT WebPage.URL FROM WebPage, Words
WHERE Words.Word = 'hello' AND Words.DocumentID = WebPage.ID
Or some such.
Hurpe, is the scenario you are describing that you will have a database table with a column that can contain up to 15 keywords? And that later you will use these keywords to search the table, which will presumably have other columns as well?
Then isn't the answer to have a separate table for the keywords? You will also need to have a many-to-many relationship between the keywords and the main table.
So using cars as an example, the WORD table that will store the 15 or so keywords would have the following structure:
ID int
Word varchar(100)
The CAR table would have a structure something like:
ID int
Name varchar(100)
Then finally you need a CAR_WORD table to hold the many-to-many relationships:
ID int
CAR_ID int
WORD_ID int
And sample data to go with this for the WORD table:
ID Word
001 Family
002 Sportscar
003 Sedan
004 Hatchback
005 Station-wagon
006 Two-door
007 Four-door
008 Diesel
009 Petrol
together with sample data for the CAR table
ID Name
001 Audi TT
002 Audi A3
003 Audi A4
then the intersection CAR_WORD table sample data could be:
ID CAR_ID WORD_ID
001 001 002
002 001 006
003 001 009
which give the Audi TT the correct characteristics.
and finally the SQL to search would be something like:
SELECT c.name
FROM CAR c
INNER JOIN CAR_WORD x
ON c.id = x.car_id
INNER JOIN WORD w
ON x.word_id = w.id
WHERE w.word IN('Petrol', 'Two-door')
Phew! I didn't intend to write quite so much; it looks complicated, but it is where I always seem to end up, however hard I try to simplify things.
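Note that as written the IN (...) returns cars matching either keyword. If every listed keyword must match, one common variant (an addition, not part of the original answer) groups and counts:
SELECT c.name
FROM CAR c
INNER JOIN CAR_WORD x ON c.id = x.car_id
INNER JOIN WORD w ON x.word_id = w.id
WHERE w.word IN ('Petrol', 'Two-door')
GROUP BY c.name
HAVING COUNT(DISTINCT w.word) = 2;  -- must match both keywords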
I would create a table with an ID and one field, then store your results as multiple records. This offers many benefits. For example, you can then programmatically enforce your 15-word limit instead of doing it in your design, so if you ever change your mind it should be rather easy. Your queries to search the data will also be much faster to run; regular expressions take a lot of time to run (comparatively). Plus, using a varchar for the field will allow you to compress your table much better. And indexing the table should be much easier (more efficient) with this design.
Do the extra work and store the 15 words as 15 rows in the table, i.e. normalize the data. It may require you to re-think your strategy a bit, but trust me when the client comes along and says "Can you change that 15 limit to 20...", you'll be glad you did.
Depending on exactly what you want to accomplish:
Use a full-text index on your string table (a minimal sketch appears after the code below)
Three tables: one for the original string, one for unique words (after word-rooting?), and a join table. This would also let you do more complicated searches, like "return all strings containing at least three of the following five words" or "return all strings where 'fox' occurs after 'dog'".
CREATE TABLE string (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
string TEXT NOT NULL
)
CREATE TABLE word (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
word VARCHAR(14) NOT NULL UNIQUE,
UNIQUE INDEX (word ASC)
)
CREATE TABLE word_string (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
string_id INT NOT NULL,
word_id INT NOT NULL,
word_order INT NOT NULL,
FOREIGN KEY (string_id) REFERENCES string(id),
FOREIGN KEY (word_id) REFERENCES word(id),
INDEX (word_id ASC)
)
// Sample data
INSERT INTO string (string) VALUES
('This is a test string'),
('The quick red fox jumped over the lazy brown dog')
INSERT INTO word (word) VALUES
('this'),
('test'),
('string'),
('quick'),
('red'),
('fox'),
('jump'),
('over'),
('lazy'),
('brown'),
('dog')
INSERT INTO word_string ( string_id, word_id, word_order ) VALUES
( 1, 1, 0 ),
( 1, 2, 3 ),
( 1, 3, 4 ),
( 2, 4, 1 ),
( 2, 5, 2 ),
( 2, 6, 3 ),
( 2, 7, 4 ),
( 2, 8, 5 ),
( 2, 9, 7 ),
( 2, 10, 8 ),
( 2, 11, 9 )
// Sample query - find all strings containing 'fox' and 'quick'
SELECT DISTINCT
string.id, string.string
FROM
string
INNER JOIN word_string AS ws_fox ON string.id = ws_fox.string_id
INNER JOIN word AS fox ON fox.word = 'fox' AND ws_fox.word_id = fox.id
INNER JOIN word_string AS ws_quick ON string.id = ws_quick.string_id
INNER JOIN word AS quick ON quick.word = 'quick' AND ws_quick.word_id = quick.id
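For the first option, a minimal full-text sketch (note that FULLTEXT indexes require MyISAM or, from MySQL 5.6 on, InnoDB):
ALTER TABLE string ADD FULLTEXT INDEX ft_string (string);

-- Boolean mode: both 'fox' and 'quick' must appear in the string
SELECT id, string
FROM string
WHERE MATCH(string) AGAINST('+fox +quick' IN BOOLEAN MODE);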
You are correct that A is no good. B is also no good, as it fails to adhere to First Normal Form (each field must be atomic). There's nothing in your example that suggests you would gain by avoiding 1NF.
You want a table for your list of words with each word in its own row.
