Get everything from a database - MySQL - PHP

I was wondering if there is a way to get everything (all records) from a database? The user would then have the option to save that file as an Excel spreadsheet.
I was looking at DTS (Data Transformation Services) - is this the same thing?
Is there a specific query that can be sent through PHP to the database, and would that be too much load on it?
I did some volume analysis and figured that the database will never grow beyond 40 MB.
So, ideally, what I want to achieve is this:
Query "get everything from database"
My PHP "receives query result"
My PHP "transforms it into an Excel file"
Prompt user to save Excel file
Is this possible?
Thanks

You can only SELECT everything from a table; iterate over a list of all tables (use SHOW TABLES or query information_schema.tables) and run SELECT * FROM ... for each.
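A minimal sketch of that iteration (assuming an open mysql_* connection, matching the era of this thread):

$res = mysql_query('SHOW TABLES');
while ($row = mysql_fetch_row($res)) {
    // $row[0] is the table name; dump every row of that table
    $result = mysql_query('SELECT * FROM `' . $row[0] . '`');
    while ($data = mysql_fetch_assoc($result)) {
        // ... hand each row to your Excel/CSV writer here ...
    }
}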

Databases are not spreadsheets. You can get a CSV representation of a single table containing no binary data (which Excel will open) using a SELECT ... INTO OUTFILE query.
http://dev.mysql.com/doc/refman/5.1/en/select.html
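For example (a sketch: the output path is on the database server, not the client, and the MySQL user needs the FILE privilege):

SELECT * INTO OUTFILE '/tmp/table1.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM table1;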

<?php
$output = array();
$tables = array('table1', 'table2', 'table3');
foreach ($tables as $table) {
    $result = mysql_query("SELECT * FROM " . $table);
    // Fetch every row of this table into a plain array
    $rows = array();
    while ($row = mysql_fetch_assoc($result)) {
        $rows[] = $row;
    }
    $output[$table] = $rows;
}
// You now have the full contents of each table, keyed by
// table name, to do with as you will.
?>

Related

Omit a column while writing mysql data into a csv file using fputcsv

My code for fetching MySQL rows and writing them to a CSV file using fputcsv is working fine, but I want to omit the first column while writing to the CSV file. To clarify: all the fetched values are points of a graph, and I will be directly importing the generated CSV file into graph-generating code. The id (which is not a graph point) will be included in the CSV file if I use a query like:
$sql = "SELECT * FROM mybase WHERE id='$id'";
$qry = $dbo->prepare($sql);
$qry->execute();
$row = $qry->fetch(PDO::FETCH_ASSOC);
fputcsv($data, $row);
Does anyone know the best method to eliminate the ID before writing to the CSV file? I saw an obvious and simple solution here; unfortunately, I have too many columns, and it is difficult to specify each one in the SQL query. Thanks.
Just unset() it, like:
unset($row['ID']);
fputcsv($data, $row);
Or else, you can fetch all the columns except ID in the query itself, like:
$sql = "SELECT other_than_ID_column FROM mybase WHERE id='$id'";

PHP/MySQL file_get_contents and update the column

I have a MySQL table. One of the columns contains URLs which point to different XML files on a remote server.
My goal is to read each URL's content and write the XML into another column of the same record (row).
In my PHP code I am able to get the URL correctly from the MySQL database, and I am able to get the XML content from the remote server into a variable correctly.
The issue is writing the content back to the table row by row: some XML columns get updated correctly and some end up empty.
I am pretty sure the variable has the correct content each time, because I can print each individual content on screen.
Why do some contents update the column while some don't? All the XML strings have the same format. If I copy the content and update the MySQL table manually, it writes into the table successfully.
At first I thought it was a timing issue, so I added plenty of sleep time to my PHP code; it doesn't help. Then I suspected my column's data type, so I changed the XML column from VARCHAR to TEXT and even LONGTEXT; that doesn't help either. Does anyone have a clue?
Part of my PHP code is below...
$result = mysqli_query($con, "SELECT url_txt FROM mytable");
// work through the result line by line:
while ($row = mysqli_fetch_array($result)) {
    echo $url_content = file_get_contents($row['url_txt']);
    //debug line below *******************************/
    echo $URL = $row['url_txt'];
    //debug line above********************************/
    mysqli_query($con, "UPDATE mytable SET xml_info='$url_content' WHERE url_txt = '$URL'");
}
Maybe try converting your XML to a native array and working with it that way:
$array = json_decode(json_encode(simplexml_load_string($url_content)),TRUE);
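A likely reason some updates come up empty (an assumption on my part; the thread does not confirm it) is that the raw XML contains quotes or other characters that break the hand-built UPDATE string. A prepared statement sidesteps that:

// Sketch: bind the XML as a parameter so quotes in it cannot break the SQL.
$stmt = mysqli_prepare($con, "UPDATE mytable SET xml_info = ? WHERE url_txt = ?");
mysqli_stmt_bind_param($stmt, 'ss', $url_content, $URL);
mysqli_stmt_execute($stmt);
mysqli_stmt_close($stmt);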

Which method is better to load MySQL database?

So I have a script that reads a text file, organizes it into an array, then uses this code to loop through the data and insert it into the proper columns/rows in a MySQL server:
$size = sizeof($str) / 14;
$x = 0;
$a=0; $b=1; $c=2; $d=3; $e=4; $f=5; $g=6; $h=7; $i=8; $j=9; $k=10; $l=11; $m=12; $n=13;
mysql_query('TRUNCATE TABLE scores');
do {
    $query = "INSERT INTO scores (serverid, resetid, rank, number, countryname, land, networth, tag, gov, gdi, protection, vacation, alive, deleted)
              VALUES ('$str[$a]','$str[$b]','$str[$c]','$str[$d]','$str[$e]','$str[$f]','$str[$g]','$str[$h]',
                      '$str[$i]','$str[$j]','$str[$k]','$str[$l]','$str[$m]','$str[$n]')";
    mysql_query($query, $conn);
    $a+=14; $b+=14; $c+=14; $d+=14; $e+=14; $f+=14; $g+=14; $h+=14; $i+=14; $j+=14; $k+=14; $l+=14; $m+=14; $n+=14;
    $x++;
} while ($x != $size);
mysql_close($conn);
This code figures out how large the file is and loops through all 14 fields of each record until it reaches the last row of the text file. Each time it is run, it clears the table and loads the new data (as intended).
My question is: is this a good way of doing it? Or is there a faster, cleaner way to do the same thing?
Could I use LOAD DATA LOCAL INFILE '$myFile' INTO TABLE ranksfeed_temp FIELDS TERMINATED BY ',' to do the same job more efficiently? What are your thoughts? I'm trying to make my code more efficient and fast.
LOAD DATA would be faster and more efficient for importing a character-separated file like CSV. LOAD DATA is optimized for importing large files into your MySQL table, whereas you are running one query per row from your text file, which is incredibly slow in execution.
Please pay attention to the fact that the LOCAL option is only for files placed on the client side of your MySQL server-client connection; try to load the file directly from the machine that runs MySQL.
Disabling the keys on your table before inserting can give you extra speed while importing. Benchmark the import with keys disabled and enabled to compare the results.
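A sketch of that approach, assuming the text file is comma-separated with one record per line (adjust the delimiters to match your actual file format):

LOAD DATA LOCAL INFILE '/path/to/scores.txt'
INTO TABLE scores
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(serverid, resetid, rank, number, countryname, land, networth, tag, gov, gdi, protection, vacation, alive, deleted);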

How to speed up processing a huge text file?

I have an 800 MB text file with 18,990,870 lines in it (each line is a record) from which I need to pick out certain records and, if there is a match, write them into a database.
It is taking an age to work through them, so I wondered if there is a way to do it any quicker?
My PHP is reading a line at a time as follows:
$fp2 = fopen('download/pricing20100714/application_price', 'r');
if (!$fp2) { echo 'ERROR: Unable to open file.'; exit; }
while (!feof($fp2)) {
    $line = stream_get_line($fp2, 128, $eoldelimiter); //use 2048 if very long lines
    if ($line[0] === '#') continue; //Skip lines that start with #
    $field = explode($delimiter, $line);
    list($export_date, $application_id, $retail_price, $currency_code, $storefront_id) = explode($delimiter, $line);
    if ($currency_code == 'USD' and $storefront_id == '143441') {
        // does application_id exist?
        $application_id = mysql_real_escape_string($application_id);
        $query = "SELECT * FROM jos_mt_links WHERE link_id='$application_id';";
        $res = mysql_query($query);
        if (mysql_num_rows($res) > 0) {
            echo $application_id . " application id has price of " . $retail_price . " with currency of " . $currency_code . "\n";
        } // end if exists in SQL
    } else {
        // no, application_id doesn't exist
    } // end check for currency and storefront
} // end while statement
fclose($fp2);
At a guess, the performance issue is that you issue a separate query for every application_id that matches USD and your storefront.
If space and I/O aren't an issue, you might just blindly write all 19M records into a new staging DB table, add indices, and then do the matching with a filter.
Don't try to reinvent the wheel; it's been done. Use a database to search through the file's content. You can load the file into a staging table in your database and query the data using indexes for fast access where they add value. Most if not all databases have import/loading tools to get a file into the database relatively quickly.
19M rows in a database will be slow if the database is not designed properly. You can still use text files if they are partitioned properly; recreating multiple smaller files, split on certain parameters and stored in a properly sorted way, might work.
In any case, PHP is not the best language for file I/O and processing; it is much slower than Java for this task, while plain old C would be one of the fastest for the job. PHP should be restricted to generating dynamic web output, while core processing should be done in Java or C. Ideally, a Java or C service would generate the output, and PHP would use that feed to generate the HTML.
You are parsing the input line twice by doing two explodes in a row. I would start by removing the first line:
$field = explode ($delimiter, $line);
list($export_date, ...., $storefront_id ) = explode($delimiter, $line);
Also, if you are only using the query to test for a match based on your condition, don't use SELECT *; use something like this:
"SELECT 1 FROM jos_mt_links WHERE link_id='$application_id';"
You could also, as Brandon Horsley suggested, buffer a set of application_id values in an array and modify your select statement to use the IN clause thereby reducing the number of queries you are performing.
Have you tried profiling the code to see where it's spending most of its time? That should always be your first step when trying to diagnose performance problems.
Preprocess with sed and/or awk?
Databases are built and designed to cope with large amounts of data; PHP isn't. You need to re-evaluate how you are storing the data.
I would dump all the records into a database, then delete the records you don't need. Once you have done that, you can copy those records wherever you want.
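For example, after loading everything into a staging table, the filter could be a single DELETE (a sketch; the staging table name is made up, and the column names follow the file layout in the question):

DELETE FROM staging_prices
WHERE currency_code <> 'USD'
   OR storefront_id <> '143441';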
As others have mentioned, the expense is likely in your database query. It might be faster to load a batch of records from the file (instead of one at a time) and perform one query to check multiple records.
For example, load 1000 records that match the USD currency and storefront at a time into an array and execute a query like:
'select link_id from jos_mt_links where link_id in (' . implode(',', $application_id_array) . ')'
This will return a list of those records that are in the database. Alternatively, you could change the SQL to NOT IN to get a list of those records that are not in the database.
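A rough sketch of that batching idea (the buffer size and the flush step are illustrative, not from the thread):

$batch = array();
// inside the file-reading loop, after the USD/storefront check:
$batch[] = "'" . mysql_real_escape_string($application_id) . "'";
if (count($batch) >= 1000) {
    $sql = 'SELECT link_id FROM jos_mt_links WHERE link_id IN (' . implode(',', $batch) . ')';
    $res = mysql_query($sql);
    while ($match = mysql_fetch_assoc($res)) {
        echo $match['link_id'] . " exists\n";
    }
    $batch = array(); // reset the buffer for the next 1000 ids
}
// after the loop ends, run the same query once more for any leftover ids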

How to take a current snapshot of a MySQL table and store it into a CSV file (after creating it)?

I have a large database table, approximately 5 GB, and I want to get a current snapshot of it using "SELECT * FROM MyTableName". I am using PDO in PHP to interact with the database. Preparing a query and then executing it like this:
// Execute the prepared query
$result->execute();
$resultCollection = $result->fetchAll(PDO::FETCH_ASSOC);
is not an efficient way, as lots of memory is used to hold the associative-array data, which is approximately 5 GB.
My final goal is to collect the data returned by the SELECT query into a CSV file and put the CSV file at an FTP location from where the client can get it.
The other option I thought of was:
SELECT * INTO OUTFILE "c:/mydata.csv"
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY "\n"
FROM my_table;
But I am not sure if this would work, as a cron job initiates the complete process and there is no CSV file to begin with. So basically, for this approach, the PHP script will have to:
Create a CSV file.
Run a SELECT query on the database.
Store the query result in the CSV file.
What would be the most efficient way to do this kind of task?
Any suggestions!
You can use the PHP function fputcsv (see the PHP Manual) to write single lines of CSV into a file. To avoid the memory problem, instead of fetching the whole result set at once, just execute the query and then iterate over the result:
$fp = fopen('file.csv', 'w');
$result->execute();
while ($row = $result->fetch(PDO::FETCH_ASSOC)) {
    // and here you can simply export every row to a file:
    fputcsv($fp, $row);
}
fclose($fp);
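One caveat worth adding: with the MySQL driver, PDO buffers the whole result set on the client by default, so even fetching row by row can exhaust memory on a 5 GB table. A sketch with buffering disabled ($pdo is assumed to be your PDO connection; PDO::MYSQL_ATTR_USE_BUFFERED_QUERY is specific to the MySQL driver):

// Stream rows from the server instead of buffering them all client-side.
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
$result = $pdo->prepare('SELECT * FROM MyTableName');
$result->execute();
$fp = fopen('file.csv', 'w');
$first = true;
while ($row = $result->fetch(PDO::FETCH_ASSOC)) {
    if ($first) {
        fputcsv($fp, array_keys($row)); // write the column names as a header row
        $first = false;
    }
    fputcsv($fp, $row);
}
fclose($fp);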
