I have 5 tables with data.
I can drag and drop rows and reassign their positions. I am grabbing the ID of the DB record they belong to.
I want to be able to move that record from table one to table two in the database. I was told not to use AJAX because it updates records from the front end and could be dangerous. I don't know what route to take or even how to manage it.
I am using Laravel 5.3
I have already indexed a database (SQL) with a single table, which is in sync with its Elasticsearch index. Now I want to index a database with multiple normalized tables. How should I index those tables? Should I write multiple JOIN queries in my Logstash file while indexing the database tables, or should I index each table one by one and perform multiple index searches? For the second approach, I don't know how to form the Elasticsearch queries that correspond to the relevant SQL queries. I am new to Elasticsearch, so any guidance on the problem would be appreciated. I am also attaching the schema of the database here. One more thing: I am using the PHP client for searching and displaying data.
First of all, everything will depend on how you want to build your indexes in Elasticsearch, that is, whether you want an index for each table or one index for several tables.
My advice is:
Create a trigger in the database to audit every change (insert, update, delete) and store it in a change-log table along with the action and a state (see the sketch after this list).
Create a view for each type of change or table; how you build these views will depend on how you want to index everything.
Use the Logstash JDBC input to query the views where the state is still 'pending' (raw).
Use filters to normalize your data and adjust it to your elasticsearch structure.
Use the JDBC output to update the database, setting the change's state to 'processed' so it no longer appears in the query.
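For point 1, a minimal sketch of such an audit trigger (the employee table and the employee_changes change-log table are illustrative names only, not something from your schema):

CREATE TRIGGER employee_audit
AFTER UPDATE ON employee
FOR EACH ROW
-- record which row changed, what happened, and a state the JDBC input can filter on
INSERT INTO employee_changes (employee_id, action, state, changed_at)
VALUES (NEW.id, 'update', 'pending', NOW());

You would create similar AFTER INSERT and AFTER DELETE triggers, and the JDBC input would then pick up only the rows whose state is still 'pending'.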
In addition to these points, my recommendation is to keep those tables in a single index, for example employee, where you can create a nested object for each related entity in the database, such as department, holding the code and description of each one. Let me know if anything is still unclear 😀
I have a MySQL database that is becoming really large. I can feel the site becoming slower because of this.
Now, on a lot of pages I only need a certain part of the data. For example, I store information about users every 5 minutes for history purposes. But on one page I only need the information that is the newest (not the whole history of data). I achieve this by a simple MAX(date) in my query.
Now I'm wondering if it wouldn't be better to make a separate table that just stores the latest data, so that the query doesn't have to search for a specific user's latest row among millions of rows but instead reads from a table with only the latest data for every user.
The con here would be that I have to run 2 queries to insert the latest history in my database every 5 minutes, i.e. insert the new data in the history table and update the data in the latest history table.
The pro would be that MySQL has a lot less data to go through.
What are common ways to handle this kind of issue?
There are a number of ways to handle slow queries in large tables. The three most basic ways are:
1: Use indexes, and use them correctly. It is important to avoid table scans on large tables; this is almost always your most significant performance hit with single queries.
For example, if you're querying something like: select max(active_date) from activity where user_id=?, then create an index on the activity table for the user_id column. You can have multiple columns in an index, and multiple indexes on a table.
CREATE INDEX idx_user ON activity (user_id)
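If the lookup is always the latest active_date for a given user, a composite index like the following (idx_user_date is just an illustrative name) lets MySQL resolve the MAX() from the index alone, without touching the table rows:

-- covers both the WHERE user_id=? filter and the MAX(active_date) aggregation
CREATE INDEX idx_user_date ON activity (user_id, active_date);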
2: Use summary/"cache" tables. This is what you have suggested. In your case, you could apply an insert trigger to your activity table, which will update your summary table whenever a new row gets inserted. This means you won't need your code to execute two queries. For example:
CREATE TRIGGER update_summary
AFTER INSERT ON activity
FOR EACH ROW
UPDATE activity_summary SET last_active_date = new.active_date WHERE user_id = new.user_id;
You can change that to check whether a row already exists for the user and do an insert if it is their first activity. Or you can insert a row into the summary table when a user registers... or whatever works for you.
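As a rough sketch, assuming user_id is the primary (or a unique) key of activity_summary, the trigger body above could instead use an upsert so the first-activity case is handled automatically:

-- insert the first activity, or update the existing summary row
INSERT INTO activity_summary (user_id, last_active_date)
VALUES (new.user_id, new.active_date)
ON DUPLICATE KEY UPDATE last_active_date = new.active_date;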
3: Review the query! Use MySQL's EXPLAIN command to get a query plan and see what the optimizer does with your query. Use it to ensure the optimizer is avoiding table scans on large tables (and either create or force an index if necessary).
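For example, with the query from point 1 (the user id 42 is just a placeholder):

EXPLAIN SELECT MAX(active_date) FROM activity WHERE user_id = 42;

Check the key and rows columns of the output to confirm the index is being used and only a handful of rows are examined.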
I have a table called users and a table called pages. Users of the system can subscribe to a page and receive updates about the page. My problem is that users and pages will be updated dynamically (i.e. no manual intervention in the tables) and I don't want to keep adding another column every time someone subscribes to a page.
How can I achieve updating both the users table and the pages table dynamically to reflect that they have subscribed to that page?
My idea would be to add a comma-separated list of usernames to the pages table and update it as users subscribe/unsubscribe.
Just making it an official answer:
While the initial hunch may be to use comma-separated values to represent the link between those 2 tables (or any other way of saving the data in one column, such as a JSON string), it is actually bad practice because it does not conform to First Normal Form (and definitely not 2nd and 3rd).
First Normal Form - Wikipedia
First Normal Form says you should never store more than 1 value in 1 table cell.
The problem, in short, starts when you need to use that data: it takes at least 2 actions - 1 is reading the data from the database and the 2nd is parsing it in your scripting language. Imagine what happens when you then need to use that data to read some other data from the database - you are making more SQL queries than you need and taking at least twice the time (+resources). It becomes even more complicated when you need JOIN queries or have other one-to-many data relationships.
The solution then is simple - you need to create a 3rd table that serves as an intermediate table.
You can call it users_pages or user2pages; it represents the many-to-many relationship between users and pages (each user can subscribe to many pages, and each page can have many subscribers).
The structure of the table is as simple as:
users_pages
-----------
-- id // a unique id for the relationship, can be auto generated
-- user_id // the user id
-- page_id // the page id
-----------
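As a rough sketch in MySQL, assuming both users and pages have an integer primary key named id:

CREATE TABLE users_pages (
    id INT AUTO_INCREMENT PRIMARY KEY,            -- unique id for the relationship
    user_id INT NOT NULL,                         -- the user id
    page_id INT NOT NULL,                         -- the page id
    UNIQUE KEY uniq_user_page (user_id, page_id), -- one subscription per user/page pair
    FOREIGN KEY (user_id) REFERENCES users (id),
    FOREIGN KEY (page_id) REFERENCES pages (id)
);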
This allows you to build a more robust application, as well as run advanced queries and calculations, without needing to parse the data in your script (e.g. count the number of pages each user is subscribed to, or the number of users subscribed to 1 page).
Unsubscribing also becomes much easier this way, since you don't need to read the users or pages table at all. You simply delete the relation from the users_pages table.
Without it, you would need to (a) first read the users table, (b) get the comma-separated pages data, (c) parse the data and remove the specific page from it, and (d) save the new data back to the database. That's 4 actions and 2 SQL queries...
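For example, with the junction table the common operations become single queries (the ids 42 and 7 are placeholders):

-- number of pages a user is subscribed to
SELECT COUNT(*) FROM users_pages WHERE user_id = 42;
-- number of users subscribed to a page
SELECT COUNT(*) FROM users_pages WHERE page_id = 7;
-- unsubscribe: just delete the relation
DELETE FROM users_pages WHERE user_id = 42 AND page_id = 7;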
I hope this helps!
Hello, this is my first time posting, but hopefully I won't mess up too much.
Basically I'm trying to copy two tables into a new table. The data in tables 2 and 3 are temp data that I update from two CSV files; it's just basic data that shares the same ID, so that's the primary key, and I want these to be combined into a new table. This is supposed to be done just once a day, handling about 2000 lines. Below follows a better description of what I'm looking for.
3 tables, Core, temp_data1, temp_data2
temp_data1 has id, name, product
temp_data2 has id, description
id is unique since it's the product_nr of the product
First, copy the data from temp_data1 to Core. Insert a new row if the product does not exist; if it does exist, update the row with the new information.
Next, update Core with the description where id=id, and do not insert if the id does not exist (it should not exist).
I'm looking for something that can be done with one push of a button: first I upload the CSV files into the two different tables (two different files), then I push a button to merge the two tables into the Core one. I know you can do this directly with the two CSV files and skip the two temp tables, but I feel like that is so over my head it's not even funny.
I can handle the PHP programming; it's all the MySQL stuff that's messing with my head.
Hopefully you guys can help me, and in return I will help out any other place I can.
Thanks in advance.
I'm not sure I understand it correctly, but this can be done using only an SQL script, with INSERT INTO ... SELECT ... ON DUPLICATE KEY UPDATE ... - see http://dev.mysql.com/doc/refman/5.6/en/insert-select.html
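A rough sketch of what that could look like, assuming Core has the columns id, name, product and description with id as its primary key:

-- step 1: copy/refresh name and product from temp_data1
INSERT INTO Core (id, name, product)
SELECT id, name, product FROM temp_data1
ON DUPLICATE KEY UPDATE name = VALUES(name), product = VALUES(product);

-- step 2: only update the description for ids that already exist in Core
UPDATE Core
JOIN temp_data2 ON Core.id = temp_data2.id
SET Core.description = temp_data2.description;

You can run both statements from PHP, one after the other, when the button is pressed.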
Hello
I'll explain to you with an example.
If an admin updates the admin table, then some data should automatically be added to the stock table. And if a client submits a form to the sales table, then some parts of that data should also update automatically in the stock table.
I mean I want some contents of the admin and sales tables in the stock table, and I want to do the process with PHP.
You want to use either triggers, as Cybernate mentioned, or stored procedures to combine the two operations.
Triggers - http://dev.mysql.com/doc/refman/5.0/en/triggers.html
Stored Procedures - http://dev.mysql.com/tech-resources/articles/mysql-storedproc.html
If you want to do it in PHP only, simply add the update to the other table within whatever functions you are using.
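As a minimal sketch of the trigger approach (the sales and stock column names here are assumptions, since your schema wasn't posted):

CREATE TRIGGER after_sale_insert
AFTER INSERT ON sales
FOR EACH ROW
-- reduce the stock of the sold product by the quantity that was just sold
UPDATE stock SET quantity = quantity - NEW.qty WHERE product_id = NEW.product_id;

You would create a similar trigger on the admin table, or run the same two queries back to back in your PHP code.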