How to customize exporting a query result (PHP, MySQL) to an Excel file?

I have some questions about customizing an Excel export with PHP. The idea is that I want to export a query over my raw data (a MySQL table) to an Excel file, but with some customization of the result.
For example, I want the result to be a summary of the table, like the table below:
The 3rd through 7th columns are named after the last 5 days of my report date.
My idea is:
1. Create a temporary table in the format of the result table I want to generate.
2. Insert my raw data into that table.
3. Drop the table afterwards.
Is that effective? Or is there a better approach?

You can always use a view, which is essentially a stored SELECT statement over your data and which reflects whatever is currently in the underlying tables. Then you can just do a SELECT * FROM view_name and export that into your Excel file.
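As a minimal sketch of that idea (the credentials, view, table, and column names here are all made up for illustration), the view can already produce the summarized shape, and PHP can dump it to a CSV file that Excel opens directly:

<?php
// Sketch only: report_summary, raw_report, report_date and amount are assumed names.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');

// Create (or replace) a view that already has the summarized shape to export.
$pdo->exec("
    CREATE OR REPLACE VIEW report_summary AS
    SELECT report_date, SUM(amount) AS total
    FROM raw_report
    WHERE report_date >= CURDATE() - INTERVAL 5 DAY
    GROUP BY report_date
");

// Export the view to a CSV file that Excel can open.
$out = fopen('summary.csv', 'w');
fputcsv($out, ['report_date', 'total']); // header row
foreach ($pdo->query('SELECT * FROM report_summary') as $row) {
    fputcsv($out, [$row['report_date'], $row['total']]);
}
fclose($out);

If you need a real .xlsx file rather than CSV, a library such as PhpSpreadsheet can be fed the same rows.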

Depending on the size of the data, performance may not even be something you need to think about.
Edit the data before exporting:
You can use a temp table. Depending on the data, this is very fast if you can select and insert the rows based on indexes. Then you run SELECT * FROM tmp_table; and you have all your data.
Edit the data after querying:
You can simply join over the different tables, fetch the result, loop (read: foreach) over the result array, change the data, and export it afterwards.
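A short sketch of the "edit the data after" variant (the join, table, and column names are assumptions, not from the question):

<?php
// Sketch only: orders/customers and their columns are illustrative names.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');

$rows = $pdo->query("
    SELECT o.order_date, c.name, o.amount
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
")->fetchAll(PDO::FETCH_ASSOC);

// Reshape each row in PHP before it goes into the export file.
foreach ($rows as &$row) {
    $row['order_date'] = date('d/m/Y', strtotime($row['order_date'])); // custom date format
    $row['amount']     = number_format((float) $row['amount'], 2);     // custom number format
}
unset($row);

$out = fopen('export.csv', 'w');
if ($rows) {
    fputcsv($out, array_keys($rows[0])); // header row from the column names
}
foreach ($rows as $row) {
    fputcsv($out, $row);
}
fclose($out);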

Related

How to export mysql database to excel with extra column

I need to export my database to Excel, but I have two tables (users, users document) that I need to combine when exporting, and I also need to add an extra column dynamically. Is there any way to do this?
I have watched many videos, but they all use a single table and can't add a column.
You can use SELECT ... INTO OUTFILE, which writes the query output into a file; in this case you want to write CSV. Write your JOIN query over the tables, then send the result into the file. You must define your columns in the query; if you want a dynamic column, you will have to write a function (or build the SQL string dynamically) to accommodate that.
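A minimal sketch of that approach (the table names users and users_document, the join key, and the computed extra column are assumptions; the MySQL account also needs the FILE privilege and a writable secure_file_priv location):

<?php
// Sketch only: table names, join key and the extra column are assumed.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');

$sql = <<<'SQL'
SELECT u.id,
       u.name,
       d.document_name,
       CONCAT(u.name, ' - ', d.document_name) AS extra_column  -- the "dynamic" column
INTO OUTFILE '/var/lib/mysql-files/users_export.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM users u
JOIN users_document d ON d.user_id = u.id
SQL;

$pdo->exec($sql);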

better way to mass import unique contacts into sql (php, mysql)

I need to import a very large contact list (name & email in CSV format, PHP -> MySQL). I want to skip emails that already exist. My current method is very slow on a production DB with a lot of data.
Assume 100 contacts (it could be 10,000).
Original steps
get the input data
check each contact in the table for an existing email (100 SELECTs)
mass insert into the table with INSERT INTO ... VALUES (), (), () (1 INSERT)
This is slow.
I want to improve the process and time.
I have thought of 2 ways.
Method 1
create a max_addressbook_temp table (same structure as max_addressbook) as temporary space
clear/delete all records for the user in max_addressbook_temp
insert all records into max_addressbook_temp
create a list of the duplicated records (for the front end)
insert the unique records from max_addressbook_temp into max_addressbook
advantages
can get a list of duplicated records to display in the front end
very fast: to import 100 records it always needs only 2 SQL calls, 1 INSERT INTO ... VALUES and 1 INSERT INTO ... SELECT
disadvantages
needs a separate table
Method 2
create a unique index on (book_user_name_id, book_email)
for each record, use INSERT IGNORE INTO ... (this ignores duplicate (book_user_name_id, book_email) pairs; see the sketch below)
advantages
less code
disadvantages
can't display the contacts that were not imported
slower: to import 100 records it needs 100 INSERT calls
Any feedback? What is the most common and efficient way to import a lot of addresses into the DB?
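For reference, Method 2 in a minimal sketch (the full column list, the credentials, and the sample data are assumptions; note that INSERT IGNORE also accepts a multi-row VALUES list, so it does not strictly require 100 separate statements):

<?php
// Sketch of Method 2; book_user_name_id and book_email come from the question,
// the remaining names and the sample data are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');

$userId   = 42;
$contacts = [
    ['name' => 'Alice', 'email' => 'alice@example.com'],
    ['name' => 'Bob',   'email' => 'bob@example.com'],
];

// One-time setup: the unique key that makes duplicates impossible.
$pdo->exec('ALTER TABLE max_addressbook
            ADD UNIQUE KEY uniq_user_email (book_user_name_id, book_email)');

// Import: INSERT IGNORE silently skips any row that would violate the unique key.
$stmt = $pdo->prepare('INSERT IGNORE INTO max_addressbook (book_user_name_id, book_name, book_email)
                       VALUES (?, ?, ?)');
foreach ($contacts as $c) {
    $stmt->execute([$userId, $c['name'], $c['email']]);
}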
=====
Here is more detail for Method 1. Do you think it is a good idea?
There are 4 steps:
clear the temp data for the user
insert the imported data, without checking for duplicates
select the duplicated data for display or counting
insert the data that are not duplicates
-- clear the temp data for the user
DELETE FROM max_addressbook_temp WHERE book_user_id = ?;

-- insert the import data, not checking for duplicates
INSERT INTO max_addressbook_temp VALUES (), (), () ....;

-- select the duplicated data for display or count
SELECT t1.*
FROM max_addressbook_temp t1
JOIN max_addressbook t2
  ON t1.book_user_id = t2.book_user_id
 AND t1.book_email = t2.book_email;

-- insert the data that are not duplicates
INSERT INTO max_addressbook
SELECT t1.*
FROM max_addressbook_temp t1
WHERE NOT EXISTS (
    SELECT 1 FROM max_addressbook t2
    WHERE t2.book_user_id = t1.book_user_id
      AND t2.book_email = t1.book_email
);
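One piece the pseudo-SQL above leaves open is how to build the single multi-row INSERT from PHP. A hedged sketch (column names follow the snippets above; book_name, the credentials, and the sample data are assumptions):

<?php
// Sketch of step 2: one multi-row INSERT into the temp table, built with placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');

$userId   = 42;
$contacts = [
    ['name' => 'Alice', 'email' => 'alice@example.com'],
    ['name' => 'Bob',   'email' => 'bob@example.com'],
];

$placeholders = [];
$params       = [];
foreach ($contacts as $c) {
    $placeholders[] = '(?, ?, ?)';
    array_push($params, $userId, $c['name'], $c['email']);
}

$sql = 'INSERT INTO max_addressbook_temp (book_user_id, book_name, book_email) VALUES '
     . implode(', ', $placeholders);
$pdo->prepare($sql)->execute($params);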
Q: Why not use MySQL's bulk insert?
EXAMPLE:
LOAD DATA INFILE 'C:/MyTextFile'
INTO TABLE myDatabase.MyTable
FIELDS TERMINATED BY ','
ADDENDUM:
It sounds like you're actually asking two separate questions:
Q1: How do I read a .csv file into a mySQL database?
A: I'd urge you to consider LOAD DATA INFILE
Q2: How do I "diff" the data in the .csv vs. the data already in MySQL (either the intersection of the rows in both, or the rows in one but not the other)?
A: There is no "efficient" method. Any way you do it, you're probably going to be doing a full-table scan.
I would suggest the following:
Load your .csv data into a temp table
Do an INTERSECT of the two tables:
SELECT tableA.id
FROM tableA
WHERE tableA.id IN (SELECT id FROM tableB);
Save the results of your "intersect" query
Load the .csv data into your actual table (a rough sketch of the whole flow follows)
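Putting those steps together from PHP, a rough sketch (the contacts table, its columns, the file path, and the credentials are assumptions; LOAD DATA LOCAL also has to be enabled on both the client and the server):

<?php
// Sketch only: contacts / contacts_tmp, their columns and the CSV path are assumed.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass',
               [PDO::MYSQL_ATTR_LOCAL_INFILE => true]); // needed for LOAD DATA LOCAL

// 1. Bulk-load the .csv into a temp table in one operation.
$pdo->exec('CREATE TEMPORARY TABLE contacts_tmp LIKE contacts');
$pdo->exec("LOAD DATA LOCAL INFILE '/tmp/contacts.csv'
            INTO TABLE contacts_tmp
            FIELDS TERMINATED BY ','
            LINES TERMINATED BY '\\n'
            (name, email)");

// 2. 'Intersect': rows from the file that already exist in the live table.
$dupes = $pdo->query('SELECT t.* FROM contacts_tmp t
                      WHERE t.email IN (SELECT email FROM contacts)')->fetchAll();

// 3. Insert only the rows that are not already there.
$pdo->exec('INSERT INTO contacts (name, email)
            SELECT t.name, t.email FROM contacts_tmp t
            WHERE t.email NOT IN (SELECT email FROM contacts)');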

Import data from csv field into a database field using a lookup table

I wish to import data from a CSV spreadsheet into an empty database field named "parishname" that normally uses a lookup table to add data via the query:
SELECT "Parish"."parid","Parish"."parishname" FROM "Parish" ORDER BY 2.
Can someone give me the code required to amend the query so that the data in the CSV field can bypass the lookup? I have little or no MySQL knowledge and am using AppGini software to build the database.
SELECT Parish.parid, Parish.parishname FROM Parish ORDER BY 2;
If that does not work, please give a fuller and more precise description of your question.

Big Data : Handling SQL Insert/Update or Merge best line by line or by CSV?

So basically I have a bunch of 1 GB data files (compressed); they are just text files containing JSON data with timestamps and other stuff.
I will be using PHP code to insert this data into a MySQL database.
I will not be able to hold these text files in memory, therefore I have to process each data file line by line. To do this I am using stream_get_line().
Some of the data contained will be updates, some will be inserts.
Question
Would it be faster to use INSERT / SELECT / UPDATE statements, or to create a CSV file and import it that way?
In other words: create a file that is one bulk operation and then execute it from SQL?
I basically need to insert data when the primary key doesn't exist, and update fields when the primary key does exist. But I will be doing this in LARGE quantities.
Performance is always an issue.
Update
The table has 22,000 columns, and only, say, 10-20 of them do not contain 0.
I would load all of the data into a temporary table and let MySQL do the heavy lifting.
Create the temporary table with CREATE TABLE temp_table AS SELECT * FROM live_table WHERE 1=0;
Read the file and produce a data file that is compatible with LOAD DATA INFILE.
Load the data into the temporary table and add an index for your primary key.
Next, isolate your updates with an inner join between the live table and the temporary table, walk through the matches, and apply your updates.
Remove all of the update rows from the temporary table (again using an inner join between it and the live table).
Process all of the inserts with a simple INSERT INTO live_table SELECT * FROM temp_table.
Drop the temporary table, go home, and have a frosty beverage.
This may be oversimplified for your use case, but with a little tweaking it should work a treat (a rough sketch follows).
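As a rough sketch of that flow from PHP (live_table, its primary key id, the example column val, the file path, and the credentials are all assumed names):

<?php
// Sketch of the temp-table flow above; all names are illustrative.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass',
               [PDO::MYSQL_ATTR_LOCAL_INFILE => true]); // needed for LOAD DATA LOCAL

// 1. Empty clone of the live table, bulk-load the prepared file, index the key.
$pdo->exec('CREATE TABLE temp_table AS SELECT * FROM live_table WHERE 1=0');
$pdo->exec("LOAD DATA LOCAL INFILE '/tmp/batch.csv' INTO TABLE temp_table
            FIELDS TERMINATED BY ','");
$pdo->exec('ALTER TABLE temp_table ADD PRIMARY KEY (id)');

// 2. Apply updates for rows that already exist in the live table.
$pdo->exec('UPDATE live_table l
            JOIN temp_table t ON t.id = l.id
            SET l.val = t.val');

// 3. Drop the already-applied rows from the temp table ...
$pdo->exec('DELETE t FROM temp_table t JOIN live_table l ON l.id = t.id');

// 4. ... so that what remains is pure inserts.
$pdo->exec('INSERT INTO live_table SELECT * FROM temp_table');
$pdo->exec('DROP TABLE temp_table');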

Comparing data to table in the database

I receive raw data in CSVs and upload it to a table in a MySQL database (on which my website runs). I want to compare a newer CSV to the data I uploaded from an older CSV, and I want to see the differences between the two (basically I want to diff the raw data against the table).
I have PHP, MySQL, and my desktop apps (e.g. Excel) at my disposal. What's the best way to go about this? Possible ways I can think of:
Inserting the newer data into a Table_Copy, then somehow diffing the two tables in MySQL.
Somehow querying the database against the raw data without having to upload it.
Downloading the data from the database into raw CSV format, and then comparing the two raw CSVs using a desktop program.
Why don't you use a WHERE clause to pull only the data that is new? For instance:
SELECT * FROM table WHERE dateadded > '2011-01-01 18:18:00'
This depends on your table having a dateadded column and populating it with the date and time the data is added.
diff <(mysqldump test old_csv --skip-extended-insert) <(mysqldump test new_csv --skip-extended-insert) --side-by-side --suppress-common-lines --width=690 | more
You can use the following approaches:
1) Database table comparison: create a copy of the table and then compare the data.
You can use proprietary tools to do this easily (e.g. EMS Data Comparer).
You can also write some simple queries to achieve it (e.g. SELECT id FROM table_copy WHERE id NOT IN (SELECT id FROM table)).
2) Use a file comparer like WinMerge.
Take a dump of both tables using the exact same method, and then compare the dumps.
I use both approaches depending on my data size. For smaller data the 2nd approach is good (a sketch of the 1st approach is below).
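A short sketch of the 1st approach end to end (products, products_copy, sku, and price are invented names; the CSV path and credentials are assumptions too):

<?php
// Sketch only: load the new CSV into a copy table, then diff it against the live table.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass',
               [PDO::MYSQL_ATTR_LOCAL_INFILE => true]); // needed for LOAD DATA LOCAL

$pdo->exec('CREATE TABLE products_copy LIKE products');
$pdo->exec("LOAD DATA LOCAL INFILE '/tmp/new.csv' INTO TABLE products_copy
            FIELDS TERMINATED BY ',' (sku, price)");

// Rows that are only in the new CSV.
$added   = $pdo->query('SELECT n.* FROM products_copy n
                        LEFT JOIN products o ON o.sku = n.sku
                        WHERE o.sku IS NULL')->fetchAll();

// Rows that are only in the existing table.
$removed = $pdo->query('SELECT o.* FROM products o
                        LEFT JOIN products_copy n ON n.sku = o.sku
                        WHERE n.sku IS NULL')->fetchAll();

// Rows present in both but with a changed value.
$changed = $pdo->query('SELECT o.sku, o.price AS old_price, n.price AS new_price
                        FROM products o
                        JOIN products_copy n ON n.sku = o.sku
                        WHERE o.price <> n.price')->fetchAll();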
