I have already indexed a SQL database with a single table, which is in sync with its Elasticsearch index. Now I want to index a database with multiple normalized tables. How should I index those tables? Should I write multiple JOIN queries in my Logstash file while indexing the database tables, or should I index each table separately and perform searches across multiple indices? For the second approach, I do not know how to form the Elasticsearch queries that correspond to the relevant SQL queries. I am new to Elasticsearch, so any guidance would be appreciated. I am also attaching the schema of the database. One more thing: I am using the PHP client for searching and displaying data.
First of all, everything will depend on how you want to build your indices in Elasticsearch, that is, whether you want an index for each table or a single index covering several tables.
My advice is:
Create triggers in the database to audit every change (insert, update, delete) and store it in a changes table along with the action and a status (see the sketch after this list).
Create a view for each type of change or table; how you build these views will depend on how you want to index everything.
Use the Logstash JDBC input to query the views, selecting only the rows whose status is still pending (unprocessed).
Use filters to normalize your data and adjust it to your Elasticsearch structure.
Use a JDBC output to update the database, marking each change as processed so it no longer appears in the query.
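Here is a minimal sketch of what the changes table and one of those triggers could look like, assuming a hypothetical employee table; all names (es_changes, employee, etc.) are illustrative, not taken from your schema:

-- Hypothetical audit table: one row per change that still has to be indexed.
CREATE TABLE es_changes (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    table_name VARCHAR(64) NOT NULL,
    row_id     INT         NOT NULL,
    action     VARCHAR(10) NOT NULL,                   -- 'insert', 'update' or 'delete'
    status     VARCHAR(10) NOT NULL DEFAULT 'pending', -- set to 'processed' by the JDBC output
    changed_at TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- One trigger per table and per action; this one records inserts on employee.
DELIMITER //
CREATE TRIGGER employee_after_insert
AFTER INSERT ON employee
FOR EACH ROW
BEGIN
    INSERT INTO es_changes (table_name, row_id, action)
    VALUES ('employee', NEW.id, 'insert');
END//
DELIMITER ;

The views you feed to the JDBC input would then join es_changes with the real tables and filter on status = 'pending'.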
In addition to these points, my recommendation is that you keep those tables in a single index, for example employee, where you can create different nested objects for each entity in the database, such as department, etc., adding the code and description of each one. Let me know if anything is unclear 😀
I have a MySQL database that is becoming really large. I can feel the site becoming slower because of this.
Now, on a lot of pages I only need a certain part of the data. For example, I store information about users every 5 minutes for history purposes. But on one page I only need the information that is the newest (not the whole history of data). I achieve this by a simple MAX(date) in my query.
Now I'm wondering if it wouldn't be better to make a separate table that just stores the latest data, so that the query doesn't have to search for the latest data for a specific user among millions of rows, but instead just reads a table with only the latest data for every user.
The con here would be that I have to run 2 queries to insert the latest history in my database every 5 minutes, i.e. insert the new data in the history table and update the data in the latest history table.
The pro would be that MySQL has a lot less data to go through.
What are common ways to handle this kind of issue?
There are a number of ways to handle slow queries in large tables. The three most basic ways are:
1: Use indexes, and use them correctly. It is important to avoid table scans on large tables; this is almost always your most significant performance hit with single queries.
For example, if you're querying something like: select max(active_date) from activity where user_id=?, then create an index on the activity table for the user_id column. You can have multiple columns in an index, and multiple indexes on a table.
CREATE INDEX idx_user ON activity (user_id)
2: Use summary/"cache" tables. This is what you have suggested. In your case, you could apply an insert trigger to your activity table, which will update your summary table whenever a new row gets inserted. This will mean that your code won't need to execute two queries. For example:
CREATE TRIGGER update_summary
AFTER INSERT ON activity
FOR EACH ROW
UPDATE activity_summary SET last_active_date=new.active_date WHERE user_id=new.user_id
You can change that to check if a row exists for the user already and do an insert if it is their first activity. Or you can insert a row into the summary table when a user registers...Or whatever.
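For example, assuming activity_summary has a primary or unique key on user_id, the trigger could use an upsert instead, so it covers both the first and every later activity (just a sketch using the column names from above):

CREATE TRIGGER update_summary
AFTER INSERT ON activity
FOR EACH ROW
INSERT INTO activity_summary (user_id, last_active_date)
VALUES (new.user_id, new.active_date)
ON DUPLICATE KEY UPDATE last_active_date = new.active_date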
3: Review the query! Use MySQL's EXPLAIN command to grab a query plan and see what the optimizer does with your query. Use it to ensure that the optimizer is avoiding table scans on large tables (and either create or force an index if necessary).
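For the example above, that would look something like this; if the type column of the output shows ALL, or the key column does not show idx_user, the query is scanning the whole table:

EXPLAIN SELECT MAX(active_date) FROM activity WHERE user_id = 123;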
I have a table with a lot of records (could be more than 500 000 or 1 000 000).
I want to update some common columns with the same field name in all tables throughout the database.
I know the traditional way of writing separate queries for individual tables, but not a single query that updates all records in all tables.
What is the most efficient way to do this in SQL, without using some dialect-specific features, so it works everywhere (Oracle, MSSQL, MySQL, Postgres etc.)?
ADDITIONAL INFO: There are no calculated fields. There are indexes. I have used generated SQL statements that update the tables row by row.
(This sounds like the classic case for normalizing that 'column'.)
Anyway... No. There is no single query to locate that column across all tables, then perform an UPDATE on each of the tables.
In MySQL, you can use the table information_schema.COLUMNS to locate all the tables containing a particular named column. With such a SELECT, you can generate (using CONCAT(), etc) the desired UPDATE statements. But then, you need to manually run them (via copy and paste).
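A sketch of that generator query; common_col and the literal new value are placeholders you would substitute:

SELECT CONCAT('UPDATE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
              '` SET `common_col` = ''new value'';') AS update_stmt
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'your_database_name'
  AND COLUMN_NAME  = 'common_col';

Each row of the result is one UPDATE statement, which you then run by hand after checking that every listed table really should be touched.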
Granted, you could probably write a Stored Procedure to wrap that into a single call, but that is too risky. What if some other table has the same column name, but should not be updated?
Example of building ALTERs to change tables' Engines: http://mysql.rjweb.org/doc.php/myisam2innodb#generating_alters
Example of using an SP to "pivot" rows to columns, complete with executing the generated code: http://mysql.rjweb.org/doc.php/pivot
As for common code across multiple vendors -- forget it! Virtually every syntax needs some amount of tweaking.
I'm developing software for conducting online surveys. When a lot of users are filling in a survey simultaneously, I'm experiencing trouble handling the high database write load. My current table (MySQL, InnoDB) for storing survey data has the following columns: dataID, userID, item_1 .. item_n. The item_* columns have different data types corresponding to the type of data acquired by the specific items. Most item columns are TINYINT(1), but there are also some TEXT item columns. Large surveys can have more than a hundred items, leading to a table with more than a hundred columns. A user answers around 20 items in one HTTP POST, and the corresponding row has to be updated accordingly. The user may skip a lot of items, leading to a lot of NULL values in the row.
I'm considering the following solution to my write-load problem. Instead of having a single table with many columns, I set up several tables corresponding to the data types used, e.g. data_tinyint_1, data_smallint_6, data_text. Each of these tables would have only the following columns: userID, itemID, value (the value column has the data type corresponding to its table). For one HTTP POST with e.g. 20 items, I might then have to create 19 rows in data_tinyint_1 and one row in data_text (instead of updating one large row with many columns). However, for every item I need to determine its data type (via two table joins) so I know in which table to create the new row. My Zend Framework based application code will get more complicated with this approach.
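To make the idea concrete, the per-type tables would look roughly like this (the exact types are just examples):

CREATE TABLE data_tinyint_1 (
    userID INT        NOT NULL,
    itemID INT        NOT NULL,
    value  TINYINT(1) NULL,
    PRIMARY KEY (userID, itemID)
);

CREATE TABLE data_text (
    userID INT  NOT NULL,
    itemID INT  NOT NULL,
    value  TEXT NULL,
    PRIMARY KEY (userID, itemID)
);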
My questions:
Will my solution be better for heavy write load?
Do you have a better solution?
Since you're getting to the point of abstracting this schema to mimic actual data types, it might stand to reason that you should simply create new table sets per survey instead. The benefit will be that locking will lessen, and you could isolate heavy loads onto separate machines if the load becomes unbearable.
The single-survey structure can then more accurately reflect your real-world conditions and data-input handlers. It ought to make your abstraction headaches go away.
There's nothing wrong with creating tables on the fly. In some configurations, soft sharding is preferable.
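A sketch of what one such per-survey table might look like; the naming scheme and columns are only an example:

-- One responses table per survey, created when the survey is published.
CREATE TABLE survey_123_responses (
    dataID INT AUTO_INCREMENT PRIMARY KEY,
    userID INT NOT NULL,
    item_1 TINYINT(1) NULL,
    item_2 TINYINT(1) NULL,
    item_3 TEXT NULL
    -- ... one column per item, typed to match that survey's items
);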
The obvious solution here would be to use a document database for fast writes, and then bulk-insert the answers into MySQL asynchronously using cron or something similar. You can create a view in the document database for quick statistics, but allow filtering and other complicated operations only in MySQL, if you're not a fan of document DBMSs.
Quick question.
In my user database I have 5 separate tables all containing different information. 4 tables are connected by foreign key to the primary key of the first table.
I want to trigger row inserts on the other 4 tables when I do an insert on the first (primary) one. I thought ON UPDATE CASCADE would do this for me, but after trying it I realised it did not... I know, the clue is in the name: ON UPDATE!
I also tried multiple triggers on the same table, but found this was not possible either.
What I am planning on doing is putting a trigger on the first table to insert into the second, then a trigger on the second to insert into the third, and so on.
Would just like to know if this is a wise thing to do or not or if I am missing a better and simpler way of doing this.
Any help/advice much appreciated.
Based on the given information, it "feels" as if there might be a flaw in the database design if each of the child tables requires a row for every single row in the parent table. There is a reason that "ON INSERT CASCADE" does not exist; it is typically not considered meaningful.
The first thought that comes to mind is that the child tables should actually be part of the parent table; it sounds as if there is a one-to-one relationship. It still may make sense to have separate tables from an organizational standpoint (and size of records), but it is something to think about.
If there is not a one-to-one relationship, then the ability to add meaningful data beyond default values to the child tables would imply there might be a bit more normalization of data required. If the only values to be added are NULLs, then one could maybe argue that there is no real point in having the record because a LEFT JOIN could produce the same results without that record.
Having said all that, if it is required, I would think that it would be better to have a single trigger on the parent table add all the records to the child tables rather than chain them in several triggers. That way the logic would be contained in a single location.
Not understanding your structure (the information you need in each of these tables is pertinent to answering correctly), I can only guess that a trigger might not be what you want for this. If your tables have other fields beyond what is in table 1 and they do not have default values, how will you get the values for those other fields in the trigger? Personally, I would use a stored proc to insert into table1, get the id value back from the insert, and then insert into the other tables with the additional information needed, putting it all in a transaction so that if one insert fails, all are rolled back.
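A rough sketch of that stored procedure, assuming a parent table table1 and two child tables table2 and table3; every table, column, and parameter name here is made up for illustration:

DELIMITER //
CREATE PROCEDURE insert_with_children (IN p_name VARCHAR(100),
                                       IN p_detail VARCHAR(100))
BEGIN
    -- If any insert fails, undo all of them.
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        RESIGNAL;
    END;

    START TRANSACTION;

    INSERT INTO table1 (name) VALUES (p_name);
    SET @new_id = LAST_INSERT_ID();          -- id of the parent row just inserted

    INSERT INTO table2 (table1_id, detail) VALUES (@new_id, p_detail);
    INSERT INTO table3 (table1_id)         VALUES (@new_id);

    COMMIT;
END//
DELIMITER ;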
I'm working on a basic php/mysql CMS and have a few questions regarding performance.
When viewing a blog page (or other sortable data) from the front-end, I want to allow a simple 'sort' variable to be added to the querystring, allowing posts to be sorted by any column. Obviously I can't accept just anything from the querystring, and I need to make sure the column actually exists on the table.
At the moment I'm using
SHOW TABLES;
to get a list of all of the tables in the database, then looping the array of table names and performing
SHOW COLUMNS FROM tablename;
on each.
My worry is that my CMS might take a performance hit here. I thought about using a static array of the table names but need to keep this flexible as I'm implementing a plugin system.
Does anybody have any suggestions on how I can keep this more concise?
Thank you
If you're using MySQL 5+, you'll find the information_schema database useful for your task. It lets you access information about tables, columns, and references with simple SQL queries. For example, you can find out whether a specific column exists in a table:
SELECT COUNT(*) FROM information_schema.COLUMNS
WHERE
  TABLE_SCHEMA = 'your_database_name' AND
  TABLE_NAME   = 'your_table' AND
  COLUMN_NAME  = 'your_column';
And here is how to list the tables in which a specific column exists:
SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.COLUMNS WHERE COLUMN_NAME = 'your_column';
Since you're currently hitting the db twice before you do your actual query, you might want to consider just wrapping the actual query in a try{} block. Then if the query works you've only done one operation instead of 3. And if the query fails, you've still only wasted one query instead of potentially two.
The important caveat (as usual!) is that any user input be cleaned before doing this.
You could query the table up front and store the columns in a cache layer (i.e. memcache or APC). You could then set the expire time on the file to infinite and only delete and re-create the cache file when a plugin has been newly added, updated, etc.
I guess the best bet is to put all the information you're getting from SHOW TABLES etc. into a file and just include it, instead of running those queries every time. Or implement some sort of caching if the project is still in development and you think the fields will change.