hi
So this is the setup: I need to update some prices from a CSV file called pricelist.csv. The database table is called products and has a column called product_id, which contains the product IDs that can also be found in the first column of the CSV file. The prices I need are located in the 7th column of the CSV file, and I need to write these to the price column of my database.
I have tried my best to come up with the code, but it just seems too much for my skill level. Here is what I made:
<?php
include("admin/include/db.php");

$res = mysql_query("select * from products");
$row = 1;
$mycsvfile = array(); //define the main array.

if (($handle = fopen("pricelist.csv", "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, ";")) !== FALSE) {
        $num = count($data);
        $row++;
        $mycsvfile[] = $data;
    }
    fclose($handle);
}

$row['product_id'] = $mycsvfile[$which_row][1] //trying to find the row in the csv
$mycsvfile[$which_row][7] = $price; //should get price, but previous line does not work

while ($row = mysql_fetch_array($res)) {
    mysql_query("update products set price='".$price."', isavailable='1' where id='".$row['id']."'");
}
?>
Any sort of help is welcome! Thanks.
I think you are looking for this (but I didn't test it):
<?php
include("admin/include/db.php");

if (($handle = fopen("pricelist.csv", "r")) !== FALSE)
{
    while (($r = fgetcsv($handle, 1000, ";")) !== FALSE)
    {
        // column 0 = product id, column 7 (index 6) = price
        mysql_query("UPDATE products SET price='".$r[6]."', isavailable='1' WHERE id='".$r[0]."'");
    }
    fclose($handle);
}
Disclaimer: Yes, I know I didn't sanitize the data, but I don't feel like working with the outdated mysql_* functions.
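For completeness, a minimal, untested sketch of the same update using PDO with prepared statements (this assumes db.php, or some other include, makes a PDO connection available as $pdo; that variable name is only an assumption here):

<?php
include("admin/include/db.php"); // assumed to provide a PDO connection in $pdo

if (($handle = fopen("pricelist.csv", "r")) !== FALSE) {
    // prepare once, execute per row; bound parameters take care of escaping
    $stmt = $pdo->prepare("UPDATE products SET price = ?, isavailable = '1' WHERE id = ?");
    while (($r = fgetcsv($handle, 1000, ";")) !== FALSE) {
        $stmt->execute(array($r[6], $r[0])); // index 6 = price, index 0 = product id
    }
    fclose($handle);
}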
You can use file() to read the CSV file into an array. Then use str_getcsv() to read each item from the file array and turn it into an array representing a row from the CSV file. Then you pull the data from that array into an update query and run it.
Like this:
$id = 0;    // product id is in the first column
$price = 6; // price is in the 7th column
$csv_file = file('pricelist.csv');
foreach ($csv_file as $row)
{
    $data = str_getcsv($row, ';'); // your file is semicolon-delimited
    $query = "UPDATE products SET price = '{$data[$price]}', isavailable = '1' WHERE `id` = '{$data[$id]}'";
    mysql_query($query); //run the query
}
What I am trying to do is upload a CSV file with PHP. The first line holds the column names and below that is the data (of course). Each column name can change depending on what the end user uploads, so the main columns we need can change position (A1 or B1, etc.). So let's say the column I need is B1 and I need to get all the data in column B. Not sure how to go about it. So far this is what I have. Any ideas?
ini_set("allow_url_fopen", 1);
$handle = fopen($_FILES['fileToUpload']['tmp_name'], 'r') or die('cannot open the file');
while (!feof($handle)) {
    $data[] = fgetcsv($handle);
}
var_dump($data);
fclose($handle);
UPDATE:
I am importing this file from .CSV to PHP
I need to search for the column headers that start with “SKU” and “COST”.
From there, once those are found, I want the whole columns… B, E. But those column letters can change, depending on how it is exported by the end user. I do not need the rows, just the columns.
Once the file is uploaded to the server, use something like the following code to parse it and actually use it as an array:
Code:
$filename = "upload/sample.csv";
if (($handle = fopen($filename, 'r')) !== FALSE) {
    while (($row = fgetcsv($handle, 1000, ",")) !== FALSE) {
        print_r($row);
    }
    fclose($handle);
}
That's one way of doing it; you could also read more about it here.
If you want the value of a specific column for each row then you need to loop through the results and pick it out. It looks like you are getting an array of arrays so...(EDITED to get the column based on the header name):
$header = $data[0];
unset($data[0]); // delete the header row so those values don't show in results

$sku_index = '';
$cost_index = '';

// get the index of the desired columns by name
for ($i = 0; $i < count($header); $i++) {
    if ($header[$i] == 'SKU') {
        $sku_index = $i;
    }
    if ($header[$i] == 'COST') {
        $cost_index = $i;
    }
}

// loop through each row and grab the values for the desired columns
foreach ($data as $row) {
    echo $row[$sku_index];
    echo $row[$cost_index];
}
Should get what you want.
I am trying to read certain data in my CSV file and transfer it to an array. What I want is to get all the data of a certain column, but I want to start on a certain row (let's say, for example, row 5). Is there a possible way to do it? What I have now only gets all the data in a specific column; I want to start at row 5 but can't think of a way to do it. Hope you guys can help me out. Thanks!
<?php
//this is column C
$col = 2;
// open file
$file = fopen("example.csv", "r");
while (!feof($file))
{
    echo fgetcsv($file)[$col];
}
// close connection
fclose($file);
?>
Yes, you can define a flag to count the rows. Have a look at the solution below. It will start printing from the 5th row, and you can also access a column by its index, e.g. for the second column you can use $row[1].
$start_row = 5; //define start row
$i = 1;         //define row count flag
$file = fopen("myfile.csv", "r");
while (($row = fgetcsv($file)) !== FALSE) {
    if ($i >= $start_row) {
        print_r($row);
        //do your stuff
    }
    $i++;
}
// close file
fclose($file);
You have no guarantee that your file exists, or that you can read it, or ....
Similar to fgets() except that fgetcsv() parses the line it reads for fields in CSV format and returns an array containing the fields read. PHP Manual
<?php
//this is column C
$col = 2;
// open file
$file = fopen("example.csv", "r");
if (!$file) {
    // log your error ....
}
else {
    while (($row = fgetcsv($file)) !== FALSE) {
        if (!isset($row[$col])) {
            // field doesn't exist ...
        } else {
            print_r($row[$col]);
        }
    }
    // close file
    fclose($file);
}
?>
Depending on the quality and volume of your incoming data, you may wish to use iterated conditions to build your output array or you may prefer to dump all of the csv data into a master array and then filter it to the desired structure.
To clarify the numeracy in my snippets, the 5th row of data will be located at index [4]. The same indexing is used for column targeting -- the 4th column is at index [3].
A functional approach (assumes no newlines in values and is not set up with any extra csv parsing flags):
$starting_index = 4;
$target_column = 3;
var_export(
array_column(
array_slice(
array_map(
'str_getcsv',
file('example.csv')
),
$starting_index
),
$target_column
)
);
A language construct approach with leading row exclusions based on a decrementing counter.
$disregard_rows = 4;
$target_column = 3;
$column_data = []; // initialise so var_export() works even if no rows qualify
$file = fopen("example.csv", "r");
while (($row = fgetcsv($file)) !== false) {
    if ($disregard_rows) {
        --$disregard_rows;
    } else {
        $column_data[] = $row[$target_column];
    }
}
fclose($file);
var_export($column_data);
I have a CSV file with 26,000 rows which I'm looping through, updating records (sometimes multiple) in a table with 250,000+ records. At the moment, it's taking ages! I was wondering if there is an alternative way to do this quicker (in code or MySQL, etc.).
$row = 1;
if (($handle = fopen("zip-codes-database-DELUXE-BUSINESS2.csv", "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
        if ($row > 1) {
            # GET THE AREACODE FROM data 20
            # name is: 21
            $insert = "UPDATE ".TBLPREFIX."cities SET data = '".escape(serialize($data))."' WHERE area_code = ".$data[20]." AND title = '".trim(strtoupper($data[21]))."'";
            mysql_query($insert) or die(mysql_error());
        }
        $row++;
    }
    fclose($handle);
}
Based on nothing, I might try:
get the CSV into a table via the command line or LOAD DATA INFILE
update the records into a temp table using an INSERT ... SELECT where you join the old and new
move the temp table back onto the original (delete/rename)
Seems like it would be faster... if a bit kludgy. Roughly, something like the sketch below.
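For illustration only: a rough, untested sketch of that idea, keeping the mysql_* style of the question. The staging table cities_staging and its columns are hypothetical, this collapses the temp-table swap into a single joined UPDATE, and it ignores the serialize() step from the original code, so treat it as the shape of the approach rather than a drop-in replacement. LOAD DATA LOCAL INFILE also has to be enabled on the server.

<?php
// 1) bulk-load the whole CSV into a staging table in one statement
mysql_query("
    LOAD DATA LOCAL INFILE 'zip-codes-database-DELUXE-BUSINESS2.csv'
    INTO TABLE cities_staging
    FIELDS TERMINATED BY ','
    IGNORE 1 LINES
") or die(mysql_error());

// 2) one joined UPDATE instead of 26,000 separate round trips
mysql_query("
    UPDATE ".TBLPREFIX."cities c
    JOIN cities_staging s ON s.area_code = c.area_code AND s.title = c.title
    SET c.data = s.data
") or die(mysql_error());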
I have a CSV file in a format resembling the following. There are no column headers in the actual file; they are shown here for clarity.
id|user|date|description
0123456789|115|2011-10-12:14:29|bar rafael
0123456789|110|2012-01-10:01:34|bar rafael
0123456902|120|2011-01-10:14:55|foo fighter
0123456902|152|2012-01-05:07:17|foo fighter
0123456902|131|2011-11-21:19:48|foo fighter
For each ID, I need to keep the most recent record only, and write
the results back to the file.
The result should be:
0123456789|110|2012-01-10:01:34|bar rafael
0123456902|152|2012-01-05:07:17|foo fighter
I have looked at the array functions and don't see anything that
will do this without some kind of nested loop.
Is there a better way?
const F_ID = 0;
const F_USER = 1;
const F_DATE = 2;
const F_DESCRIPTION = 3;

$array = array();
if (($handle = fopen('test.csv', 'r')) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, '|')) !== FALSE) {
        if (count($data) != 4)
            continue; //skip lines that have a different number of cells
        if (!array_key_exists($data[F_ID], $array)
            || strtotime($data[F_DATE]) > strtotime($array[$data[F_ID]][F_DATE]))
            $array[$data[F_ID]] = $data;
    }
    fclose($handle);
}
You'll have, in $array, what you want. You can write it using fputcsv.
Note: I didn't test this code; it's meant to provide a basic idea of how this would work.
The idea is to store the rows you want into $array, using the first value (ID) as the key. This way, on each line you read, you can check if you already have a record with that ID, and only replace it if the date is more recent.
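To handle the write-back part of the question, a minimal, untested sketch with fputcsv (it rewrites test.csv in place, using the same | delimiter):

if (($out = fopen('test.csv', 'w')) !== FALSE) {
    foreach ($array as $data) {
        fputcsv($out, $data, '|'); // one line per surviving record
    }
    fclose($out);
}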
Each time you encounter a new id, put it in your $out array. If the id already exists, overwrite it if the value is newer. Something like:
$in_array = file('myfile.txt', FILE_IGNORE_NEW_LINES);
$out_array = array();
$fields = array('id', 'user', 'date', 'description');

foreach ($in_array as $line) {
    $row = array_combine($fields, explode('|', $line));
    //New id? Just add it.
    if (!isset($out_array[ $row['id'] ])) {
        $out_array[ $row['id'] ] = $row;
    }
    //Existing id? Overwrite if newer.
    else if (strcmp($row['date'], $out_array[ $row['id'] ]['date']) > 0) {
        $out_array[ $row['id'] ] = $row;
    }
    //Otherwise ignore
}
//$out_array now has the newest row for each id, keyed by id.
I need a PHP script which will update an existing MySQL table with data from a CSV. The table field hasweb needs to be updated by matching on the field consultant_id.
So the MySQL query should be
UPDATE user_data
SET hasweb="something"
WHERE consultant_id = "something";
Please help me to write a PHP script which can execute this query as many times as needed, depending upon CSV data.
I have written little PHP scripts to accomplish this many times, and there are many ways to go about it:
The best, in my experience, is to use the CSV functions provided by PHP (take a look at fgetcsv()), because manually opening the file, reading it line by line, and parsing it yourself can cause complications.
Now you just loop through all the rows in the CSV, prepare the query dynamically, and execute it. For example (assuming that column 0 has the IDs and column 1 has "hasweb"):
<?php
if (($handle = fopen("input.csv", "r")) !== FALSE)
{
    while (($data = fgetcsv($handle, 1000, ",")) !== FALSE)
    {
        mysql_query("UPDATE user_data SET hasweb='{$data[1]}' WHERE consultant_id='{$data[0]}'");
    }
    fclose($handle);
}
?>
Hope that helps. If you're still stuck, please ask me :)
You can use the PHP function fgetcsv().
You can get all the data from a CSV file into PHP by using this function.
For example,
$row = 1;
if (($handle = fopen("your_file.csv", "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
        $num = count($data);
        echo "<p> $num fields in line $row: <br /></p>\n";
        $row++;
        // build your UPDATE from the values in $data and run it once per row
        $SQL = "UPDATE table_name SET col1=val1 WHERE con1";
        mysql_query($SQL);
    }
    fclose($handle);
}
see this function: http://www.php.net/manual/zh/function.fgetcsv.php