I have a question to which I have been unable to find the answer.
I can create an extra column in a PHP recordset by using an existing column and duplicating it:
SELECT
id_tst,
name_tst,
age_tst,
price_tst,
price_tst AS newprice_tst
FROM test_tst
From what I can work out, AS will only duplicate an existing column or rename a column in the recordset.
I want to add two extra columns to a table, but with no values.
I know a lot of people will say what's the point of that; it's pointless to have two columns with no data.
The reason is that I am building a price-updating module for a CMS, where the user can download a CSV file containing prices, modify the prices in a spreadsheet, then re-upload the CSV to update the prices.
The two extra columns would hold the new prices while keeping the old ones, so a rollback from the CSV file could be performed if necessary.
I could just get the client to add the two new columns to the spreadsheet, but I would prefer to have the exported CSV with the columns already in place.
Is it possible to create blank columns when creating a recordset?
You can create empty "dummy" columns by aliasing a blank string:
SELECT '' AS emptyColumn, column1, column2 FROM table
This will produce another column in your query with all blank values.
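Applied to the query in the question, the export for the CSV could look like this (the two new column names are just examples):

SELECT
id_tst,
name_tst,
age_tst,
price_tst,
'' AS newprice_tst,
'' AS rollbackprice_tst
FROM test_tst

When this result set is written out as a CSV, the two blank columns are already in place for the client to fill in.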
I have built a database with 6 tables, roughly 175 fields. About 130 of these fields are to be populated from data on a CSV.
Currently, a handheld device exports this CSV and it is read into a spreadsheet, but it's moving to a database. So, on the front end when someone uploads a CSV, it will populate the database.
Question:
I'm trying to figure out the best way to break that CSV up line by line and put certain info into certain tables. Is that possible? If so, how?
I was hoping I could create a header for each CSV field and map it to the database fields (since the CSV will always be in the same order).
I don't think of it as an RBAR (row-by-agonizing-row) problem. If you load the file as-is into a single staging table, you can then run something like the following for each destination table:
INSERT INTO destTable (col1, col2)
SELECT col1, col2
FROM StageTable
WHERE col3 = 'criteria'
That way, you keep everything set-based. Of course it depends on the number of records involved, but row-by-row processing and T-SQL are generally not a good fit; SSIS does a much better job of that than T-SQL.
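For the staging load itself, a rough T-SQL sketch (the file path, table name, and options here are assumptions, not your actual setup):

-- Load the raw CSV into the staging table as-is; FIRSTROW = 2 skips the header line
BULK INSERT StageTable
FROM 'C:\data\import.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

From there, the set-based INSERT ... SELECT statements above carve the staged rows into the destination tables.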
Another approach: tag the columns using an associative array. Example CSV:
id,name,color
1,jo,red
2,ma,blue
3,j,yellow
Read the first line into one array, then just compare the values by index in a loop.
I've been having an issue for days now and have hit a brick wall. Firstly, as the title suggests I have been working to import CSV files to a SQL database.
To be more specific, this is done through PHP scripts on the server and through MySQL into the DB.
I currently have around 30 CSV files (this number is projected to increase) which are updated daily, then a cron script is triggered once per day to update the new data. It loads the file through LOAD DATA INFILE. All of this works perfectly.
The problem is:
Each CSV file contains a different column count, ranging between 50 and 56 columns. The data I am storing in this collective database only requires the first 8 columns. I already know how to skip individual columns using @dummy thanks to the following Q&A: How to skip columns in CSV file when importing into MySQL table using LOAD DATA INFILE?
However, as the dummy count will not always be the same due to the differing column counts, I was wondering if there is a way to take the data from columns 1-8 and ignore everything after, regardless of the column count?
A rather rough workaround would be to first read the opening line in PHP and count the columns by the commas. Knowing the count, subtract 8 and generate the SQL command with that many columns to ignore, as sketched below.
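For example, if the first line turns out to have 10 columns, the generated statement could look like this (table and column names are placeholders):

LOAD DATA INFILE 'file.csv' INTO TABLE t1
FIELDS TERMINATED BY ','
(c1, c2, c3, c4, c5, c6, c7, c8, @dummy, @dummy);

Each @dummy in the column list swallows one unwanted CSV field, so a 56-column file would need 48 of them appended.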
Just include the eight columns to populate and it will use the first eight fields from each CSV row; the extra fields are ignored:
LOAD DATA INFILE 'file.txt' INTO TABLE t1
FIELDS TERMINATED BY ','
(c1, c2, c3, c4, c5, c6, c7, c8)
I have a database table with 6 columns and 365 rows of data. I need to swap the 3rd column (named 'Date_line') with new data while leaving the other 5 columns in place, without exporting the whole table, but I can't get phpMyAdmin to work with me.
Normally I'd just truncate the table and upload a revised CSV file for the whole table, but here's the catch: I have to update 232 data tables with this same exact column of data (the column data is common to all 232 tables). To do all 232 individually would mean exporting each table, opening it in Excel, swapping the old column for the new one, converting to CSV, then re-uploading. It would be a lot easier if I could just import a single-column CSV to overwrite the old one. But I don't know how.
I'd like to do this using the phpMyAdmin interface... I'm not very experienced with writing scripts. Is there a way?
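One way to do this entirely from phpMyAdmin's SQL tab is to stage the new column in a helper table and then overwrite the old column with a multi-table UPDATE. All names below are hypothetical, and I'm assuming each data table has a key column the 365 rows can be matched on:

-- 1) Create a helper table and import the new values into it
--    (id + Date_line pairs, e.g. via phpMyAdmin's Import tab)
CREATE TABLE date_line_new (id INT PRIMARY KEY, Date_line DATE);

-- 2) Overwrite the old column in one data table
UPDATE data_table_001 AS t
JOIN date_line_new AS n ON n.id = t.id
SET t.Date_line = n.Date_line;

Step 2 is identical for all 232 tables; only the table name changes, so the statements can be generated with a quick search-and-replace.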
Hello, this is my first time posting, but hopefully I won't mess up too much.
Basically, I'm trying to copy two tables into a new table. The data in tables 2 and 3 are temp data that I update with two CSV files. It's just basic data that shares the same ID, so that's the primary key, and I want these to be combined into a new table. This is supposed to be done just once a day, handling about 2000 lines. Below follows a better description of what I'm looking for.
3 tables, Core, temp_data1, temp_data2
temp_data1 has id, name, product
temp_data2 has id, description
id is unique since it's the product_nr of the product
First, copy the data from temp_data1 to Core: insert a new row if the product does not exist; if it does exist, update the row with the new information.
Next, update Core with the description where the ids match, and do not insert if the id does not exist (it should not exist).
I'm looking for something that can be done with one push of a button: first I upload the CSV files into the two temp tables (two different files), then I push a button to merge the two tables into the Core one. I know you could do this directly from the two CSV files and skip the two temp tables, but I feel like that is so far over my head it's not even funny.
I can handle programming PHP; it's all the MySQL stuff that's messing with my head.
Hopefully you can help me, and in return I will help out anywhere I can.
Thanks in advance.
I'm not sure I understand it correctly, but this can be done using only an SQL script, with INSERT INTO ... SELECT ... ON DUPLICATE KEY UPDATE; see http://dev.mysql.com/doc/refman/5.6/en/insert-select.html
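A minimal sketch for the tables described in the question (assuming Core has the columns id, name, product, and description, with id as the primary key):

-- Step 1: upsert name/product from temp_data1
INSERT INTO Core (id, name, product)
SELECT id, name, product
FROM temp_data1
ON DUPLICATE KEY UPDATE name = VALUES(name), product = VALUES(product);

-- Step 2: fill in descriptions from temp_data2 (updates only, never inserts)
UPDATE Core
JOIN temp_data2 ON temp_data2.id = Core.id
SET Core.description = temp_data2.description;

Both statements can then be fired from a single PHP handler behind your button.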
I am setting up an uploader (using PHP) for my client where they can select a CSV (in a pre-determined format) on their machine to upload. The CSV will likely have 4000-5000 rows. PHP will process the file by reading each line of the CSV and inserting it directly into the DB table. That part is easy.
However, ideally before appending this data to the database table, I'd like to review 3 of the columns (A, B, and C) and check whether I already have a matching combo of those 3 fields in the table; if so, I would rather UPDATE that row than append. If I do NOT have a matching combo of those 3 columns, I want to go ahead and INSERT the row, appending the data to the table.
My first thought is that I could make columns A, B, and C a unique index in my table and then just INSERT every row, somehow detect a 'failed' INSERT (due to the restriction of my unique index) and then make the update. It seems this method could be more efficient than having to make a separate SELECT query for each row just to see if I already have a matching combo in my table.
A third approach may be to simply append EVERYTHING, using no MySQL unique index and then only grab the latest unique combo when the client later queries that table. However I am trying to avoid having a ton of useless data in that table.
Thoughts on best practices or clever approaches?
If you make the 3 columns a unique key, you can do an INSERT with ON DUPLICATE KEY UPDATE.
INSERT INTO `table` (a,b,c,d,e,f) VALUES (1,2,3,5,6,7)
ON DUPLICATE KEY UPDATE d=5, e=6, f=7;
You can read more about this handy technique in the MySQL manual.
If you add a unique index on the (A, B, C) columns, then you can use REPLACE to do this in one statement:

REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted...
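For comparison, the REPLACE form of the earlier example (same hypothetical table and columns) would be:

REPLACE INTO `table` (a,b,c,d,e,f) VALUES (1,2,3,5,6,7);

The trade-off: because REPLACE deletes the old row and inserts a fresh one, any columns you do not list fall back to their defaults and an AUTO_INCREMENT id would change, whereas ON DUPLICATE KEY UPDATE modifies the existing row in place.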