Currently I have a customer form which inserts data into multiple tables.
I am integrating with the QuickBooks PHP API (Consolibyte).
After the data is inserted into a table, the following queuing method runs:
$Queue = new QuickBooks_WebConnector_Queue($dsn);
$Queue->enqueue(QUICKBOOKS_ADD_CUSTOMER, $id);
where $id is the last_insert_id() from that table.
Since entries are made in multiple tables, how do I update customer records in QuickBooks by fetching data from multiple tables, given that each table returns its own last insert ID?
Since entries are made in multiple tables
Presumably, you have a common customer_id field or something that is shared across all tables.
Use that customer_id value.
how do I update customer records in QuickBooks by fetching data from multiple tables
SQL supports something called JOINs, which are specifically meant to let you "smush" two or more related tables together and get all the data related to your specific customer from all of the related tables.
Use a JOIN. Get all the data from the tables.
Build your qbXML request from that.
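A minimal PDO sketch of that approach, assuming the shared key is customer_id and that the related tables are called customers, customer_addresses and customer_phones (those names are placeholders for your own schema):

// Fetch everything QuickBooks needs for one customer in a single query,
// joining the related tables on the shared customer_id.
// Table and column names here are assumptions -- adjust to your schema.
function getCustomerData(PDO $pdo, $customerId)
{
    $sql = "SELECT c.customer_id, c.name, c.email,
                   a.street, a.city, a.postal_code,
                   p.phone_number
            FROM customers c
            LEFT JOIN customer_addresses a ON a.customer_id = c.customer_id
            LEFT JOIN customer_phones    p ON p.customer_id = c.customer_id
            WHERE c.customer_id = :id";

    $stmt = $pdo->prepare($sql);
    $stmt->execute(array(':id' => $customerId));

    return $stmt->fetch(PDO::FETCH_ASSOC);
}

Inside your CustomerAdd request handler you would call something like this with the queued ID and build the qbXML from the returned row.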
Related
I am fairly new to web development and currently working on a website, built with an MVC framework, that captures maintenance work. I have managed to make the forms; they correctly display any errors in filling the form and, if there aren't any errors, successfully insert the data into the database.
What I would like to achieve is a main table with the general details of the maintenance (date, time, technician, department, location, recommendations) and another table that records which tasks were done during the maintenance, such as sweeping, mopping, wiping the windows, cutting grass, etc. I have a single form that collects all the details required by both tables, and both tables will have auto-increment primary keys. I would then like to insert the data into the relevant tables, and while inserting into the tasks table I want each row to carry a foreign key to the corresponding record in the main table. How can I achieve this without manual input by the user if the primary key of the main table is an auto increment?
This isn't a big problem. It can't be done as a single query, but using transactions you can achieve an all-or-nothing result.
In pseudocode:
Validate data
Start a transaction
Insert data into main record
Get the last inserted ID
Insert one or more records into the child table, using the ID retrieved above
Commit the transaction (or roll back if some error occurred)
The exact mechanics vary between MySQLi and PDO, but the principle is the same.
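For example, a minimal PDO sketch of those steps (the maintenance / maintenance_task table and column names are made up for illustration):

// $pdo is an existing PDO connection with ERRMODE_EXCEPTION enabled.
// Table and column names are illustrative only.
try {
    $pdo->beginTransaction();

    // 1. Insert the main record.
    $stmt = $pdo->prepare(
        "INSERT INTO maintenance (`date`, `time`, technician, department, location, recommendations)
         VALUES (?, ?, ?, ?, ?, ?)"
    );
    $stmt->execute(array($date, $time, $technician, $department, $location, $recommendations));

    // 2. Get the auto-increment ID that was just generated on this connection.
    $maintenanceId = $pdo->lastInsertId();

    // 3. Insert the child rows, pointing back at the main record.
    $taskStmt = $pdo->prepare("INSERT INTO maintenance_task (maintenance_id, task) VALUES (?, ?)");
    foreach ($tasks as $task) {
        $taskStmt->execute(array($maintenanceId, $task));
    }

    // 4. Make it permanent.
    $pdo->commit();
} catch (Exception $e) {
    // Any failure undoes both inserts.
    $pdo->rollBack();
    throw $e;
}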
I have a MySQL database that is becoming really large. I can feel the site becoming slower because of this.
Now, on a lot of pages I only need a certain part of the data. For example, I store information about users every 5 minutes for history purposes. But on one page I only need the information that is the newest (not the whole history of data). I achieve this by a simple MAX(date) in my query.
Now I'm wondering if it wouldn't be better to make a separate table that just stores the latest data, so that the query doesn't have to search for the latest data for a specific user among millions of rows, but instead has a table with only the latest data for every user.
The con here would be that I have to run 2 queries to insert the latest history in my database every 5 minutes, i.e. insert the new data in the history table and update the data in the latest history table.
The pro would be that MySQL has a lot less data to go through.
What are common ways to handle this kind of issue?
There are a number of ways to handle slow queries in large tables. The three most basic ways are:
1: Use indexes, and use them correctly. It is important to avoid table scans on large tables; this is almost always your most significant performance hit with single queries.
For example, if you're querying something like: select max(active_date) from activity where user_id=?, then create an index on the activity table for the user_id column. You can have multiple columns in an index, and multiple indexes on a table.
CREATE INDEX idx_user ON activity (user_id)
2: Use summary/"cache" tables. This is what you have suggested. In your case, you could apply an insert trigger to your activity table, which will update your summary table whenever a new row gets inserted. This means that your code won't need to execute two queries. For example:
CREATE TRIGGER update_summary
AFTER INSERT ON activity
FOR EACH ROW
UPDATE activity_summary SET last_active_date=new.active_date WHERE user_id=new.user_id
You can change that to check whether a row already exists for the user and do an insert if it is their first activity. Or you can insert a row into the summary table when a user registers... or whatever.
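For example, assuming activity_summary has user_id as its primary (or a unique) key, the trigger body could be written as an upsert so it also covers a user's very first activity - a sketch, not tested against your schema:

CREATE TRIGGER update_summary
AFTER INSERT ON activity
FOR EACH ROW
  INSERT INTO activity_summary (user_id, last_active_date)
  VALUES (new.user_id, new.active_date)
  ON DUPLICATE KEY UPDATE last_active_date = new.active_date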
3: Review the query! Use MySQL's EXPLAIN command to grab a query plan and see what the optimizer does with your query. Use it to ensure that the optimizer is avoiding table scans on large tables (and either create or force an index if necessary).
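For instance, with the query from point 1 (assumed column names as above):

EXPLAIN SELECT MAX(active_date) FROM activity WHERE user_id = 123

In the output, check the key column to confirm that your index is actually being used, and the type and rows columns to see how much of the table is scanned; type = ALL indicates a full table scan.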
I'm making a table (with MySQL) to store some data, but I'm not sure how to do it properly because of the amount of data. For example, say it's an address book database.
So there is a table for users and a table for contacts. Each user can own hundreds of contacts, and there could be thousands of users. Should I add a new row for every single contact (it will make a lot of rows!), or can I just concatenate all of them into one row with the user ID?
This is just an example, but in my case, once contacts are INSERTED they will never be UPDATED; no modifications, they can only be DELETED.
To go by the normal forms, you should have three tables
1) Users -> {User_id} (primary key)
2) Contacts -> {Contact_id} (primary key)
3) Users_Contacts -> {User_id, Contact_id} (Compound key)
The junction table Users_Contacts will have one record per user-contact association - meaning for each unique combination of User_id + Contact_id, there will be one record.
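A rough sketch of that schema in MySQL (the extra columns and types are assumptions; adjust them to your needs):

CREATE TABLE users (
    user_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL
);

CREATE TABLE contacts (
    contact_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL,
    phone      VARCHAR(30)
);

CREATE TABLE users_contacts (
    user_id    INT UNSIGNED NOT NULL,
    contact_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (user_id, contact_id),
    FOREIGN KEY (user_id)    REFERENCES users (user_id),
    FOREIGN KEY (contact_id) REFERENCES contacts (contact_id)
);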
However, in practice it is not always necessary to stick to the rule book. Depending on the use case, it is often advisable to have a denormalized table. The call is yours.
There is also the option of mixing a NoSQL-style approach with MySQL: for example, the contacts could be serialized into JSON and stored in a single column. MySQL 5.7 supports a native JSON data type.
For example: if you add 3 contacts for a single user and, as you mentioned, you will be deleting contacts, then it is better to insert all three contacts, each in a new row with its user ID. That way, if you want to delete any one of the three, it will be easy.
If you concatenate all the contacts for a user and store them in one row, you could run into many issues. What if, in the future, the requirement changes and you need to build a layout that lists all the contacts for a user with the ability to edit or delete individual contacts? So you should have one contact in each row.
You can optimize your query by indexing the columns.
Say user #1234 has 1000 contacts in the contact table, where the primary key is idcontact (indexed by default) and there is another field called "iduser" which is also indexed; then a SELECT filtered on iduser against the contact table will be fast.
This is the standard approach with a MySQL database. Many apps maintain millions of rows this way, so a contact table with a new row for each contact should be fine.
I wouldn't worry about lots of rows. You have to keep in mind the granularity of control the user would expect (deleting / adding a contact, rearranging the list based on different factors, etc.). It's always better to break things out into their own rows if they are going to be treated independently from similar items (contacts, users, addresses, etc.). Additionally, if you were to concatenate your data, re-ordering for display or removing data becomes extremely resource intensive, whereas MySQL is designed to do exactly that "on the cheap".
MySQL can easily handle millions of rows of data. If you are worried about speed, just make sure your indexes are in place before your data collection gets too big (I would venture a guess and say you'll need to index the user ID the contact belongs to and the first/last names). Indexes are a double-edged sword, however, as they take up disk space but allow fast querying of large data sets. So you don't want to go overboard and index everything, only what you'll be sorting/searching by.
(Why on earth will contacts never be updated?...)
How do I maintain a one-to-many relationship in a database? What is the appropriate process?
For example, I am inserting library information from a form. The library name, library description, and library address fields are text boxes. There is a group of check boxes representing which books are available in that library. Assume I have three tables: 'library', 'books', 'library_book_relation'.
In this scenario, what is the exact process? Do I have to insert data into two tables (library, library_book_relation) with 2 queries, i.e. 1. insert into library... and 2. insert into library_book_relation..., or is there another method to do the job?
What query will I have to run when I want to retrieve library information from the database? Which method does the software world follow?
You need to insert your data into the library table.
After inserting the new row, get the last ID that was inserted into your library table.
Insert your library books (the relation rows), using that last ID as the foreign key to the library table.
Don't forget to wrap all aforementioned steps inside a transaction.
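A minimal PDO sketch, assuming the checked books arrive as an array of book IDs in $_POST['books'] (that field name and the column names are assumptions):

// $pdo is an existing PDO connection; column names are illustrative.
// Wrap these statements in $pdo->beginTransaction() / $pdo->commit() as noted above.
$stmt = $pdo->prepare("INSERT INTO library (name, description, address) VALUES (?, ?, ?)");
$stmt->execute(array($name, $description, $address));

$libraryId = $pdo->lastInsertId();   // ID of the library row just inserted on this connection

$rel = $pdo->prepare("INSERT INTO library_book_relation (library_id, book_id) VALUES (?, ?)");
foreach ($_POST['books'] as $bookId) {   // one relation row per checked book
    $rel->execute(array($libraryId, (int) $bookId));
}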
You will have to insert data into both tables, one after the other.
First, insert the library record.
Second, insert the book and library IDs into the mapping table.
For retrieval, you can use joins to fetch libraries and their corresponding books.
Ex. SELECT * FROM library INNER JOIN library_books_relation ON library.lib_id = library_books_relation.lib_id WHERE library.lib_id = something
Or you can retrieve all the records by removing the WHERE clause.
I'm new to PHP and I was wondering how I can overcome this seemingly simple problem:
I have a database with several tables. One of them is called "order_header". order_header has a field called "orderID", which is the primary key and is auto-incremented. orderID is used in other tables in the database (food_table, drinks_table, merchant_info, customer_info, etc.) and is unique to a particular order.
Now I insert data into the order_header using the usual INSERT statement and the order_header generates a new orderID. But now I need to retrieve the orderID I just created and use it to insert data into other tables of that database.
The question is: how can I both insert the data and retrieve the resulting orderID in one atomic operation? I can't simply query MySQL for the latest orderID, because another thread might have inserted a row in the meantime.
In Java I guess one could use locks and the synchronized keyword, but how would one do this in PHP?
Use mysql_insert_id() straight after the query. It doesn't run another query to find the last ID; the value is tracked per connection, so inserts made by other clients in the meantime cannot affect it.
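A short sketch of the same pattern with mysqli (the old mysql_* extension has been removed from current PHP); the order_header/orderID names come from the question, while the other column names are assumptions:

// $mysqli is an existing mysqli connection; $customerId and $foodItem are illustrative values.
$mysqli->begin_transaction();

$stmt = $mysqli->prepare("INSERT INTO order_header (customer_id, order_date) VALUES (?, NOW())");
$stmt->bind_param('i', $customerId);
$stmt->execute();

// insert_id is tracked per connection, so inserts from other
// clients/threads in the meantime cannot change this value.
$orderId = $mysqli->insert_id;

$stmt = $mysqli->prepare("INSERT INTO food_table (orderID, food_item) VALUES (?, ?)");
$stmt->bind_param('is', $orderId, $foodItem);
$stmt->execute();

$mysqli->commit();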