I have a query that selects everything from a database table and writes it to a text file. If the state is small (say, a maximum of 200k rows), the code works and writes the result to the text file. The problem arises with a state that returns 2M rows when queried; there is also the fact that the table has 64 columns.
Here's a part of the code:
// Create and open the file
$file = "file2.txt";
$fOpen = fopen($file, "a"); // Open file, write and append
$qry = "SELECT * FROM tbl_two WHERE STE='48'";
$res = mysqli_query($con, $qry);
if(!$res) {
echo "No data record" . "<br/>";
exit;
}
$num_res = mysqli_num_rows($res);
for ($i = 0; $i < $num_res; $i++) {
$row = mysqli_fetch_assoc($res);
$STATE = (trim($row['STATE']) === "") ? " " : $row['STATE'];
$CTY = (trim($row['CTY']) === "") ? " " : $row['CTY'];
$ST = (trim($row['ST']) === "") ? " " : $row['ST'];
$BLK = (trim($row['BLK']) === "") ? " " : $row['BLK'];
....
....
//64th column
$data = "$STATE$CTY$ST$BLK(to the 64th variable)\r\n";
fwrite($fOpen, $data);
}
fclose($fOpen);
I tried putting a limit on the query:
$qry = "SELECT * FROM tbl_two WHERE STE='48' LIMIT 200000";
The problem is that it only writes up to the 200,000th line and never writes the remaining 1.8M lines.
If I don't put a limit on the query, it fails with an Out of memory ... error. TIA for any kind suggestions.
First, you need to use an unbuffered query to fetch the data. Read this:
Queries are using the buffered mode by default. This means that query results are immediately transferred from the MySQL Server to PHP and then are kept in the memory of the PHP process.
Unbuffered MySQL queries execute the query and then return a resource while the data is still waiting on the MySQL server for being fetched. This uses less memory on the PHP-side, but can increase the load on the server. Unless the full result set was fetched from the server no further queries can be sent over the same connection. Unbuffered queries can also be referred to as "use result".
NOTE: buffered queries should be used in cases where you expect only a limited result set or need to know the amount of returned rows before reading all rows. Unbuffered mode should be used when you expect larger results.
Also, avoid building large arrays: write each value out directly inside the while loop.
$pdo = new PDO("mysql:host=localhost;dbname=world", 'my_user', 'my_pass');
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
$uresult = $pdo->query("SELECT * FROM tbl_two WHERE STE='48' LIMIT 200000");
if ($uresult) {
$lineno = 0;
while ($row = $uresult->fetch(PDO::FETCH_ASSOC)) {
echo $row['Name'] . PHP_EOL;
// write value in text file
$lineno++;
}
}
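Since the original code uses mysqli rather than PDO, here is a minimal sketch of the same unbuffered idea with mysqli's MYSQLI_USE_RESULT, assuming the $con connection and the column handling from the question (only one column shown, not the full 64):
<?php
// Minimal sketch only: assumes the $con connection and the tbl_two columns
// from the question. MYSQLI_USE_RESULT streams rows from the server instead
// of buffering the whole result set in PHP memory.
$file = fopen("file2.txt", "a");
$res = mysqli_query($con, "SELECT * FROM tbl_two WHERE STE='48'", MYSQLI_USE_RESULT);
if ($res) {
    while ($row = mysqli_fetch_assoc($res)) {
        $STATE = (trim($row['STATE']) === "") ? " " : $row['STATE'];
        // ... repeat for the remaining columns ...
        fwrite($file, "$STATE\r\n");
    }
    mysqli_free_result($res);
}
fclose($file);
?>
Note that with an unbuffered result the row count is not known up front, so the loop is driven by mysqli_fetch_assoc() returning rows rather than by mysqli_num_rows().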
Related
My scenario is like this: I have a huge dataset fetched from a MySQL table.
$data = $somearray; //say the number of records in this array is 200000
I am looping over this data, doing some processing, and writing it to a CSV file.
$my_file = 'somefile.csv';
$handle = fopen($my_file, 'w') or die('Cannot open file: ' . $my_file);
for($i=0;$i<count($data);$i++){
//do something with the data
self::someOtherFunctionalities($data[$i]); //just some function
fwrite($handle, $data[$i]['index']); //here iam writing this data to a file
}
fclose($handle);
My problem is that the loop runs into memory exhaustion: it shows "Fatal error: Allowed memory size of ...". Is there any way to process this loop without exhausting memory?
Due to a server limitation I am unable to increase the PHP memory limit like this:
ini_set("memory_limit","2048M");
I am not concerned about the time it takes, even if it takes hours, so I did set_time_limit(0).
Your job is linear and you don't need to load all the data at once. Use an unbuffered query, and write to php://stdout (rather than a temp file) if you are sending this output to an HTTP client.
<?php
$mysqli = new mysqli("localhost", "my_user", "my_password", "world");
$uresult = $mysqli->query("SELECT Name FROM City", MYSQLI_USE_RESULT);
$my_file = 'somefile.csv'; // php://stdout
$handle = fopen($my_file, 'w') or die('Cannot open file: ' . $my_file);
if ($uresult) {
while ($row = $uresult->fetch_assoc()) {
// $row=$data[i]
self::someOtherFunctionalities($row); //just some function
fwrite($handle, $row['index']); //here iam writing this data to a file
}
}
$uresult->close();
?>
Can you use "LIMIT" in your MySQL query?
The LIMIT clause can be used to constrain the number of rows returned by the SELECT statement. LIMIT takes one or two numeric arguments, which must both be nonnegative integer constants (except when using prepared statements).
With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):
SELECT * FROM tbl LIMIT 5,10; # Retrieve rows 6-15
http://dev.mysql.com/doc/refman/5.0/en/select.html
If you aren't worried about time, take 1000 rows at a time and just append them to the end of the file, e.g. write to a temp file that you move and/or rename when the job is done.
First, SELECT COUNT(*) FROM the table.
Then:
for ($i = 0; $i < $numberOfRows; $i += 1000) {
    $result = SELECT * FROM table LIMIT $i, 1000;
    append $result to the file
}
Move and rename the file when done.
This is very much pseudocode, but the process should work; a rough PHP sketch follows below.
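Here is one way that sketch could look in PHP, assuming the mysqli connection $con and the table/filter from the question at the top; all names are illustrative:
<?php
// Rough sketch of the chunked export, assuming the mysqli connection $con
// and the table/filter from the question at the top. Names are illustrative.
$tmp = fopen("export.tmp", "w");

$countRes = mysqli_query($con, "SELECT COUNT(*) AS c FROM tbl_two WHERE STE='48'");
$countRow = mysqli_fetch_assoc($countRes);
$total = (int) $countRow['c'];

$chunk = 1000;
for ($offset = 0; $offset < $total; $offset += $chunk) {
    $res = mysqli_query($con, "SELECT * FROM tbl_two WHERE STE='48' LIMIT $offset, $chunk");
    while ($row = mysqli_fetch_assoc($res)) {
        fwrite($tmp, implode("|", $row) . "\r\n");
    }
    mysqli_free_result($res);
}
fclose($tmp);

// Rename the temp file only once the whole export has succeeded.
rename("export.tmp", "export.txt");
?>
Deep OFFSET values get progressively slower because MySQL still scans past the skipped rows, so for very large tables paging by an indexed key (WHERE id > last seen id) is usually quicker.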
I want to get the output of a Postgres query written to a file. I am using PHP to connect to the remote database and execute the query. Here is the sample code.
$connection_id=pg_connect("host=localhost dbname=test user=test password=test");
$psql="select example from sample limit 180";
$result=pg_query($connection_id,$psql);
The query executes, but I am unable to write the result to a file. How do I do that?
Help is really appreciated.
You cannot write the query result into a file directly. The result returned by pg_query is not a string containing data that could be printed or written into a file. It is either an error status (false) or a kind of "reference" to the result data kept for this query.
If $result is not false and PostgreSQL found any rows for your query, then you can fetch those rows. But that is an extra step; it is not included in pg_query. To check how many result rows were found, you can use the function pg_num_rows.
Then you can iterate through the result set using pg_fetch_assoc. That is just one suitable function; there are a few more, e.g. pg_fetch_row.
Here's some small example code (quick & dirty without much error handling):
<?php
// Set the output of this script to plain text
header("Content-Type: text/plain");
$conn = pg_connect("..."); // insert your data here
if (!$conn) die ("Couldn't connect.");
$result = pg_query($conn, "SELECT example FROM ..."); // TODO
// Check for error and check the number of rows found:
if ((!$result) || (pg_num_rows($result) < 1)) {
pg_close();
echo "Couldn't find any data for your query. Maybe the query is wrong (error) or there are no matching lines.";
exit;
}
// Line counter for screen output
$i = 1;
// Open file. (Important: Script must have write permission for the directory!)
$fileHandle = fopen("./myresults.txt", "w");
// Do this as long as you can find more result rows:
while ($row = pg_fetch_assoc($result)) {
// Write value to the output that is sent to the web browser client:
echo "Line " . $i . ": \"" . strip_tags($row['example']) . "\"\r\n";
// Write the same value as a new line into the opened file:
fwrite($fileHandle, $row['example'] . "\r\n");
// Increase line number counter:
$i++;
}
// Close the file:
fclose ($fileHandle);
// Free the result / used memory:
pg_free_result($result);
pg_close();
?>
I have a PHP function that reads a table over ODBC (from an IBM AS/400) and writes it to a text file on a daily basis. It works fine until the output grows past roughly 1 GB; then it just stops at some row and does not write everything.
function write_data_to_txt($table_new, $query)
{
global $path_data;
global $odbc_db, $date2;
if(!($odbc_rs = odbc_exec($odbc_db,$query))) die("Error executing query $query");
$num_cols = odbc_num_fields($odbc_rs);
$path_folder = $path_data.$table_new."/";
if (!file_exists($path_folder)) mkdir ($path_folder,0777);
$filename1 = $path_folder. $table_new. "_" . $date2 . ".txt";
$comma = "|";
$newline = chr(13).chr(10);
$handle = fopen($filename1, "w+");
if (is_writable($filename1)) {
$ctr=0;
while(odbc_fetch_row($odbc_rs))
{
//function for writing all field
// for($i=1; $i<=$num_cols; $i++)
// {
// $data = odbc_result($odbc_rs, $i);
// if (!fwrite($handle, $data) || !fwrite($handle, $comma)) {
// print "Cannot write to file ($filename1)";
// exit;
// }
//}
//end of function writing all field
$data = odbc_result($odbc_rs, 1);
fwrite($handle,$ctr.$comma.$data.$newline);
$ctr++;
}
echo "Write Success. Row = $ctr <br><br>";
}
else
{
echo "Write Failed<br><br>";
}
fclose($handle);
}
There are no errors, just the success message, but it should be 3,690,498 rows (and still increasing); I only get roughly 3,670,009 rows.
My query is an ordinary select like:
select field1 , field2, field3 , field4, fieldetc from table1
What I tried and what I assume:
I thought it was an fwrite limitation, so I tried not writing all the fields (just $ctr and the first column), but it still stops at the same row, so I assume it is not about exceeding an fwrite limit.
I tried reducing the fields I select and then the export completes, so I assumed there is some limitation in ODBC.
I tried the same ODBC data source with SQL Server, selected all the fields, and got the complete row count. So I assume it is not an ODBC limitation.
I even tried on a 64-bit machine, but that was worse: it returned only roughly 3,145,812 rows. So I assume it is not about 32/64-bit infrastructure.
I tried increasing memory_limit in php.ini to 1024M, but that did not help either.
Does anyone know if I need to set something in my PHP ODBC connection?
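One way to narrow down where the rows disappear is to check every fwrite() return value and ask ODBC for an error once the fetch loop stops. A minimal diagnostic sketch, reusing the $odbc_db, $odbc_rs and $handle variables from the function above:
<?php
// Diagnostic sketch only: reuses $odbc_db, $odbc_rs and $handle from the
// function above to see whether fwrite() or the ODBC fetch stops first.
$ctr = 0;
while (odbc_fetch_row($odbc_rs)) {
    $data = odbc_result($odbc_rs, 1);
    if (fwrite($handle, $ctr . "|" . $data . "\r\n") === false) {
        echo "fwrite failed at row $ctr<br>";
        break;
    }
    $ctr++;
}
// If the driver aborted the fetch early, odbc_error()/odbc_errormsg() should say why.
if (odbc_error($odbc_db)) {
    echo "ODBC error: " . odbc_errormsg($odbc_db) . "<br>";
}
echo "Fetched $ctr rows<br>";
?>
If the fetch loop simply ends with no fwrite failure and no ODBC error, the truncation is happening on the driver or server side rather than in the file writing.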
When I run my script I receive the following error before processing all rows of data.
Maximum execution time of 30 seconds exceeded
After researching the problem, I found I should be able to extend max_execution_time, which should resolve it.
But being in my PHP programming infancy, I would like to know if there is a more optimal way of writing my script below, so I do not have to rely on "get out of jail" cards.
The script is:
1. Taking a CSV file
2. Cherry-picking some columns
3. Trying to insert 10k rows of CSV data into a MySQL table
In my head I think I should be able to insert in chunks, but that is so far beyond my skillset I do not even know how to write one line :\
Many thanks in advance
<?php
function processCSV()
{
global $uploadFile;
include 'dbConnection.inc.php';
dbConnection("xx","xx","xx");
$rowCounter = 0;
$loadLocationCsvUrl = fopen($uploadFile,"r");
if ($loadLocationCsvUrl !== false)
{
while ($locationFile = fgetcsv($loadLocationCsvUrl, 0, ','))
{
$officeId = $locationFile[2];
$country = $locationFile[9];
$country = trim($country);
$country = htmlspecialchars($country);
$open = $locationFile[4];
$open = trim($open);
$open = htmlspecialchars($open);
$insString = "insert into countrytable set officeId='$officeId', countryname='$country', status='$open'";
switch($country)
{
case $country <> 'Country':
if (!mysql_query($insString))
{
echo "<p>error " . mysql_error() . "</p>";
}
break;
}
$rowCounter++;
}
echo "$rowCounter inserted.";
}
fclose($loadLocationCsvUrl);
}
processCSV();
?>
First, in 2011 you do not use mysql_query; you use mysqli or PDO with prepared statements. Then you do not need to figure out how to escape strings for SQL. You used htmlspecialchars, which is totally wrong for this purpose. Next, you could use a transaction to speed up many inserts. MySQL also supports multi-row inserts.
But the best bet would be to use the CSV storage engine: http://dev.mysql.com/doc/refman/5.0/en/csv-storage-engine.html. You can load everything into SQL instantly and then manipulate it there as you wish. The article also shows the LOAD DATA INFILE command.
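A minimal sketch of the prepared-statement-plus-transaction approach mentioned above, assuming a PDO connection $pdo, the open CSV handle $loadLocationCsvUrl, and the column indexes from the question:
<?php
// Sketch only: assumes a PDO connection $pdo and the open CSV handle
// $loadLocationCsvUrl from the question. Column indexes match the script above.
$pdo->beginTransaction();
$stmt = $pdo->prepare(
    "INSERT INTO countrytable (officeId, countryname, status) VALUES (?, ?, ?)"
);
while ($locationFile = fgetcsv($loadLocationCsvUrl, 0, ',')) {
    if (trim($locationFile[9]) === 'Country') {
        continue; // skip the header row, as the switch in the question does
    }
    $stmt->execute(array($locationFile[2], trim($locationFile[9]), trim($locationFile[4])));
}
$pdo->commit();
?>
Reusing one prepared statement avoids re-parsing the SQL for every row, and committing once at the end avoids a disk flush per insert.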
Well, you could create a single query like this.
$query = "INSERT INTO countrytable (officeId, countryname, status) VALUES ";
$entries = array();
while ($locationFile = fgetcsv($loadLocationCsvUrl, 0, ',')) {
// your code
$entries[] = "('$officeId', '$country', '$open')";
}
$query .= implode(', ', $entries);
mysql_query($query);
But this depends on how long your query will be and what the server limit is set to.
But as you can read in the other answers, there are better ways for your requirements. I thought I should share the approach you were already thinking about.
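If a single statement would grow past the server's max_allowed_packet, a variant of the same idea (sketched here with the variables from the snippet above) is to flush the VALUES list every few hundred rows:
<?php
// Sketch: flush the multi-row INSERT every 500 rows instead of building one
// giant query. Uses the same $entries idea as the snippet above; escaping is
// omitted for brevity (see the prepared-statement sketch earlier).
$base = "INSERT INTO countrytable (officeId, countryname, status) VALUES ";
$entries = array();
while ($locationFile = fgetcsv($loadLocationCsvUrl, 0, ',')) {
    $officeId = $locationFile[2];
    $country = trim($locationFile[9]);
    $open = trim($locationFile[4]);
    $entries[] = "('$officeId', '$country', '$open')";
    if (count($entries) >= 500) {
        mysql_query($base . implode(', ', $entries));
        $entries = array();
    }
}
if (count($entries) > 0) {
    mysql_query($base . implode(', ', $entries));
}
?>
(mysql_query is kept here only to match the snippet above; with mysqli or PDO the structure is the same.)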
You can try calling the following function before inserting. This will set the time limit to unlimited instead of the 30 sec default time.
set_time_limit( 0 );
While building a web page in my PHP web application, my connection works fine, but when I try to get the row count of the SELECT statement used in my query, it gives me -1, although my result set has about 10 rows.
I would like to get the actual number of rows in the result set.
I searched the PHP manual and documentation but could not find a direct way, like a count function or something similar.
I wonder if I have to run a COUNT(*) SQL statement in a separate query on my connection to get the row count?
Does anyone know an easy and direct way to get that?
The odbc_num_rows function always gives -1, so I cannot get the actual number of rows.
My programming language is PHP, my database engine is Sybase, and I connect to the database via ODBC.
Here is the code I used:
<?PHP
//PHP Code to connect to a certain database using ODBC and getting information from it
//Determining The Database Connection Parameters
$database = 'DatabaseName';
$username = 'UserName';
$password = 'Password';
//Opening the Connection
$conn = odbc_connect($database,$username,$password);
//Checking The Connection
if (!$conn)
{
exit("Connection Failed: " . $conn);
}
//Preparing The Query
$sql = "SELECT * FROM Table1 WHERE Field1='$v_Field1'";
//Executing The Query
$rs = odbc_exec($conn,$sql);
//Checking The Result Set
if (!$rs)
{
exit("Error in SQL");
}
echo "<p align='Center'><h1>The Results</h1></p>";
while ( odbc_fetch_row($rs) )
{
$field1 = odbc_result($rs,1);
$field2 = odbc_result($rs,2);
$field3 = odbc_result($rs,3);
echo "field1 : " . $field1 ;
echo "field2 : " . $field2 ;
echo "field3 : " . $field3 ;
}
$RowNumber = odbc_num_rows($rs);
echo "The Number of Selected Rows = " . $RowsNumber ;
//Closing The Connection
odbc_close($conn);
?>
Thanks for your Help :)
odbc_num_rows seems to be reliable for INSERT, UPDATE, and DELETE queries only.
The manual says:
Using odbc_num_rows() to determine the number of rows available after a SELECT will return -1 with many drivers.
One way around this behaviour is to do a COUNT(*) in SQL instead. See here for an example.
From php.net:
The easy way to count the rows in an odbc resultset where the driver returns -1 is to let SQL do the work:
<?php
$conn = odbc_connect("dsn", "", "");
$rs = odbc_exec($conn, "SELECT Count(*) AS counter FROM tablename WHERE fieldname='" . $value . "'");
$arr = odbc_fetch_array($rs);
echo $arr['counter'];
?>
On what basis do you expect odbc_num_rows to return anything other than -1?
We have the fact from the manuals that ODBC does not support @@ROWCOUNT for SELECTs via odbc_num_rows. So there is no basis for expecting that it "should" return anything other than what is documented: -1 for a SELECT.
Even if you used Sybase directly (instead of via ODBC), you would have the same "problem".
odbc_num_rows returns @@ROWCOUNT, which is the number of rows inserted/updated/deleted by the immediately preceding command. -1 is the correct, documented value if the immediately preceding command is not an insert/update/delete.
It has nothing to do with rows in a table.
Use another batch and one of the documented methods to obtain the number of rows in a table, loading the value into a variable:
SELECT @Count = COUNT(*) -- slow, requires I/O
or
SELECT @Count = ROW_COUNT(db_id, object_id) -- very fast, no I/O
Then interrogate the result array to obtain the variable, rather than odbc_num_rows, which will continue returning -1.