Laravel: Split model across two tables

I have a model titles.
My titles DB table is getting rather large, with 100,000+ rows of data.
What is the best way to create a new titles2 table and join it with the titles table so that they can both be the same titles model?
Also, if I finish adding rows to titles at id 100,000, should I start incrementing titles2 at id 100,001 so that the ids will always be unique?

You can add another column, e.g. source, to differentiate records from the two tables. This way you can make sure you never run into the problem of mixing the two datasets.
However, for convenience, you still want the id itself to be unique across both datasets, which means you cannot use an auto-increment id field; you will have to handle id generation yourself.
One of the issues I ran into was using the following line to update or insert the record,
DB::table('users')->insert($record);
instead of
User::create($record); // assume you are using auto-increment ids
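As a rough illustration of the manual-id approach (the Title model, the column names, and the values below are assumptions, not taken from the question), you could disable auto-incrementing on the model and generate the next id across both tables yourself:
use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Facades\DB;

class Title extends Model
{
    protected $table = 'titles2';                     // or 'titles'
    public $incrementing = false;                     // ids are generated by the application
    protected $fillable = ['id', 'source', 'name'];
}

// Next id that is unique across both tables (a sketch only; not concurrency-safe).
$nextId = max(
    (int) DB::table('titles')->max('id'),
    (int) DB::table('titles2')->max('id')
) + 1;

Title::create(['id' => $nextId, 'source' => 'titles2', 'name' => 'Example title']);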

Related

Is there a way to add multiple values with same ID in two different tables?

Using PHP and MySQL, I need to add multiple values to 2 MySQL tables: the first table would hold the more important information and the second one the less important information about each item.
To be more clear, ONE element would have its information split across two tables.
(I need this for several reasons, but two of them are: keeping the first table as light as possible, and having the second table store data that will be erased after a short time, while the first table keeps all the data it has stored.)
In the best scenario, I'd like to add a row in each table for one item/element, with the same id in each table. Something like this:
Table 1: id | data_1_a | data_1_b | ...
Table 2: id | data_2_a | data_2_b | ...
So if I add an element which gets the ID "12345" in table 1, the data is added in table 2 with the same ID "12345".
To achieve this, I can think of two solutions:
Create the ID myself for each element (instead of having an auto_increment on table 1). The con is that I would probably have to check that the ID doesn't already exist in the tables every time I generate an ID...
Add the element to table 1, get its ID with $db->lastInsertId(), and use it to add the element's data to table 2. The con is that I have to add elements one by one to get all the IDs, while most of the time I want to add a lot of elements (like one, two or three hundred!) at once.
Maybe there's a better way to achieve this?
lastInsertId() reports the first value generated by the last INSERT statement executed. It's reliable to assume that when you insert many rows, they are given consecutive id values following that first value. For example, the MySQL JDBC driver relies on this assumption, so it can report the set of id values generated.
This assumption breaks only if you deliberately set innodb_autoinc_lock_mode=2 (interleaved). See https://dev.mysql.com/doc/refman/8.0/en/innodb-auto-increment-handling.html for details about that.
But if it were my task, I would still choose to use a single table. When you find you don't need some of the columns anymore, use UPDATE to set them to NULL. This will eliminate the problems you're facing with assuring the same id is used across two tables.
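If you do go the two-table route, here is a rough PDO sketch of your second option (the table and column names are placeholders): one multi-row insert into table 1, then lastInsertId() plus the consecutive-id behaviour described above to fill table 2 without inserting elements one by one.
// Sketch only; assumes $db is a PDO connection and innodb_autoinc_lock_mode is not 2.
$db->beginTransaction();

// One multi-row insert; the generated ids are consecutive, starting at lastInsertId().
$db->exec("INSERT INTO table1 (data_1_a) VALUES ('a'), ('b'), ('c')");
$firstId = (int) $db->lastInsertId();

// Reuse those ids for the matching rows in table2.
$stmt = $db->prepare("INSERT INTO table2 (id, data_2_a) VALUES (?, ?)");
foreach (['x', 'y', 'z'] as $offset => $value) {
    $stmt->execute([$firstId + $offset, $value]);
}

$db->commit();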

Merging Two Databases - How to Skip Same ID or Generate New ID

I have two MySQL databases. I would like to copy data from one database to another. Both have the same structure and entries, except that one database has the same IDs for different items within the same tables. I don't want to replace the data from the old database in the new one. If an ID is already there, I would like the new database to skip it; if it's a duplicate, I would like a new ID to be generated.
I'd like to use phpMyAdmin for this but have no idea if this is even possible.
phpMyAdmin will be sufficient for this.
First you need to ensure there are no duplicate ids or primary keys.
Assuming the two tables testtable1 and testtable2 have the columns testtable_id, name:
0.) Make a backup of both tables.
1.) First, run this query on the second table:
UPDATE testtable2 SET testtable2.testtable_id = testtable2.testtable_id + (SELECT MAX( testtable1.testtable_id ) FROM testtable1);
2.) Then, again in testtable2, use the Copy table to (database.table) tool under the Operations menu: set the DB name and the table name testtable1 (the DB name should already be set), select the Data only radio button, and click Go.
3.) Now you have all the data from both tables in testtable1.
Edit: At first I thought this was a matter of two tables in the same database, but you can use step 2 for the rest of the tables too. Just set the correct DB and table name in step 2. Also, before that, adjust the query so the resulting IDs are higher than the MAX ID of the table you want to extend. You can hard-code the parenthesised part with the exact MAX ID of the corresponding table in the first database.
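If you would rather script that last step than run it through phpMyAdmin, a rough PDO sketch (the database names db1 and db2, the table names, and the $pdo connection are all assumptions) could look like this:
// Sketch only: shift the ids in the second database by the MAX id found in the first one.
$offset = (int) $pdo->query("SELECT MAX(testtable_id) FROM db1.testtable1")->fetchColumn();
$pdo->exec("UPDATE db2.testtable2 SET testtable_id = testtable_id + {$offset}");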

Inserting a blank row into a table to generate a foreign key or better way?

I'm trying to store the terms of a search. I've created a table "Searches" which stores each of the search terms in its own field (bedrooms, baths, etc.). So each row will contain one search.
On the advanced search form, users can select multiple search terms for a single field using an option select. I thought it would be wise to store each of these terms in a unique row of a related table for easy statistics reporting. I thought this way I could quickly report how many times a term is searched for. I also need to have the ability to save and regenerate the search query.
However, if none of the terms searched for are in the main table, I still need to generate a unique id to link to the related table. So I would need to insert a blank row just to generate the foreign key, which I'm reluctant to do.
Is there a better way? I could store the multiple search terms comma-separated in the primary table, but it seems like it would be more difficult to pull them back out and count them for statistics, etc.
Why do you need to insert a blank row? You don't need to persist any of the records until the time comes to persist all of the records, right?
So as I understand it, your table layout is something like:
Table1
--------
ID
etc.
Table2
--------
ID
Table1ID
etc.
If that's the case, then the order of operations for inserting the data would look like this:
Begin Transaction
Insert into Table1
Get the last inserted ID
Insert into Table2
Commit Transaction
Assuming I understand your UX correctly, this would all happen when the user submits the form.
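A minimal PDO sketch of that order of operations (the table and column names are placeholders, not from the question):
// Sketch only; $db is a PDO connection and $selectedTerms holds the terms from the form.
$db->beginTransaction();

$db->prepare("INSERT INTO Table1 (created_at) VALUES (NOW())")->execute();
$searchId = (int) $db->lastInsertId();           // the id generated for the Table1 row

$stmt = $db->prepare("INSERT INTO Table2 (Table1ID, term) VALUES (?, ?)");
foreach ($selectedTerms as $term) {
    $stmt->execute([$searchId, $term]);
}

$db->commit();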
If I understand you correctly, it seems like you should have two tables:
search_term
-----------------
term_id
term
and
search
-----------------
search_id
term_id
Then you can query search for all the terms of a given search and issue the SELECT statement.
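For example, fetching the terms of one saved search might look like this with PDO (a sketch; the $pdo connection and $searchId are assumed):
// Sketch: all terms belonging to one saved search.
$stmt = $pdo->prepare(
    "SELECT t.term
     FROM search s
     JOIN search_term t ON t.term_id = s.term_id
     WHERE s.search_id = ?"
);
$stmt->execute([$searchId]);
$terms = $stmt->fetchAll(PDO::FETCH_COLUMN);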

Which is the best method for efficiency/less workload on the server

I am curious to know what would be the preferred method for a table which would store a list of IDs associated with a UserID.
Should I have one field storing a serialized array of all the IDs (one row per UserID),
or one row for every ID for every player? Is the answer clear-cut, or are there factors to take into account when making such a decision?
Hope you can help explain the best choice.
If you're going to get the whole list of IDs every time or most of the time, or if the lists of IDs are short, then serializing them is fine (though I wouldn't recommend it anyway).
However, you'll be missing out on indexing if you ever need to search for a user using an ID in the list as a constraint.
The best way is to have a separate table storing the many-to-many relations (which is what I'm assuming you want). If you have a one-to-many relationship, then you should reverse the way you reference the relation and add a user_id column to the table containing the IDs.
I would keep one row for every ID for every player. That, to me, is a more manageable and cleaner approach. How many records are you looking at here, though?
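For illustration, the relation-table layout might look like this (the table and column names are assumptions), with one row per (user_id, item_id) pair so lookups in either direction can use an index:
// Sketch only; run here via PDO, but the CREATE TABLE can be executed anywhere.
$pdo->exec(
    "CREATE TABLE user_items (
        user_id INT NOT NULL,
        item_id INT NOT NULL,
        PRIMARY KEY (user_id, item_id),
        KEY idx_item (item_id)
    )"
);

// Indexed lookup by item_id, which a serialized array column cannot offer.
$stmt = $pdo->prepare("SELECT user_id FROM user_items WHERE item_id = ?");
$stmt->execute([42]);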

Query to fetch 1 row in table, and many rows from another table, referenced in that row

So I have 2 tables in my database: 'workouts' and 'exercises'. Workouts contains a column called exercises, which is a comma-separated list of exercise IDs from the 'exercises' table, e.g. '1,2,3'.
My question is, can I write a single query to allow me to select a row from the workouts table, say one with an id of 1, and have MySQL fetch each of the exercises from the list in that row, returning them within the 'workout' row?
At the moment I'm using PHP to select the workout row, and then making individual requests for each of the exercises, resulting in serious inefficiency.
I took a look at Joining rows as array from another table for each row and also did some research into the group_concat() function, but I'm not sure that's what I'm after.
IMO, the best approach is to redesign your schema to have a cross-reference table called exercises_workouts (or something similar). Remove the CSV field.
Here's a page that goes into more detail on implementing a many-to-many relationship:
http://www.tonymarston.net/php-mysql/many-to-many.html
Note: The linked page uses the mysql_* functions, but the general explanation of the approach stands. You'll want to look into PDO for database access.
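As a rough PDO sketch of what the single query looks like once the cross-reference table exists (the exercises_workouts name comes from the answer above; the column names are assumptions):
// Sketch: one query returns every exercise referenced by workout 1.
$stmt = $pdo->prepare(
    "SELECT e.*
     FROM workouts w
     JOIN exercises_workouts ew ON ew.workout_id = w.id
     JOIN exercises e ON e.id = ew.exercise_id
     WHERE w.id = ?"
);
$stmt->execute([1]);
$exercises = $stmt->fetchAll(PDO::FETCH_ASSOC);  // one row per exercise in the workout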
