I want to write the output of a Postgres query to a file. I am using PHP to connect to the remote database and execute the query. Here is the sample code:
$connection_id=pg_connect("host=localhost dbname=test user=test password=test");
$psql="select example from sample limit 180";
$result=pg_query($connection_id,$psql);
The query executes, but I am unable to write the result to a file. How do I do that?
Help is really appreciated.
You cannot write the query result into a file directly. The value returned by pg_query is not a string that could be printed or written to a file. It is either an error status (false) or a result resource, a kind of "reference" to the result data kept for this query.
If $result is not false and PostgreSQL found any rows for your query, you can fetch those rows. That is an extra step; it is not done by pg_query itself. To check how many result rows were found, use the function pg_num_rows.
Then you can iterate through the result set using pg_fetch_assoc. That is only one suitable function; there are a few more, e.g. pg_fetch_row.
Here's some small example code (quick & dirty without much error handling):
<?php
// Set the output of this script to plain text
header("Content-Type: text/plain");
$conn = pg_connect("..."); // insert your data here
if (!$conn) die ("Couldn't connect.");
$result = pg_query($conn, "SELECT example FROM ..."); // TODO
// Check for error and check the number of rows found:
if ((!$result) || (pg_num_rows($result) < 1)) {
pg_close($conn);
echo "Couldn't find any data for your query. Maybe the query is wrong (error) or there are no matching lines.";
exit;
}
// Line counter for screen output
$i = 1;
// Open file. (Important: Script must have write permission for the directory!)
$fileHandle = fopen("./myresults.txt", "w");
// Do this as long as you can find more result rows:
while ($row = pg_fetch_assoc($result)) {
// Write value to the output that is sent to the web browser client:
echo "Line " . $i . ": \"" . strip_tags($row['example']) . "\"\r\n";
// Write the same value as a new line into the opened file:
fwrite($fileHandle, $row['example'] . "\r\n");
// Increase line number counter:
$i++;
}
// Close the file:
fclose ($fileHandle);
// Free the result / used memory:
pg_free_result($result);
pg_close($conn);
?>
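As a side note, if the output file should be a proper CSV rather than raw lines, the same kind of fetch loop can use fputcsv, which takes care of delimiters and quoting. This is only a rough sketch; it assumes a fresh $result from pg_query (the loop above has already read the previous one to the end) and a placeholder file name:
<?php
// Assumes $result is a freshly executed pg_query() result as in the example above.
$fileHandle = fopen("./myresults.csv", "w");
while ($row = pg_fetch_assoc($result)) {
    // fputcsv() writes one row per call and handles quoting/escaping.
    fputcsv($fileHandle, $row);
}
fclose($fileHandle);
pg_free_result($result);
?>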
Related
I have a query that selects everything from a database table and writes it to a text file. If the state is small (say, at most 200k rows), the code works and writes the data to the text file. The problem arises when a state has 2M rows when queried; there is also the fact that the table has 64 columns.
Here's a part of the code:
// Create and open the file (write, append mode)
$file = "file2.txt";
$fOpen = fopen($file, "a");
$qry = "SELECT * FROM tbl_two WHERE STE='48'";
$res = mysqli_query($con, $qry);
if(!$res) {
echo "No data record" . "<br/>";
exit;
}
$num_res = mysqli_num_rows($res);
for ($i = 0; $i < $num_res; $i++) {
$row = mysqli_fetch_assoc ($res);
$STATE = (trim($row['STATE']) === "" ? " " : $row['STATE']);
$CTY = (trim($row['CTY']) === "" ? " " : $row['CTY']);
$ST = (trim($row['ST']) === "" ? " " : $row['ST']);
$BLK = (trim($row['BLK']) === "" ? " " : $row['BLK']);
....
....
//64th column
$data = "$STATE$CTY$ST$BLK(to the 64th variable)\r\n";
fwrite($fOpen, $data);
}
fclose($fOpen);
I tried putting a limit to the query:
$qry = "SELECT * FROM tbl_two WHERE STE='48' LIMIT 200000";
The problem is, it only writes up to the 200,000th line and doesn't write the remaining 1.8M lines.
If I don't put a limit on the query, it fails with the error Out of memory .... . TIA for any kind suggestions.
First, you need to use an unbuffered query to fetch the data. From the PHP documentation on buffered and unbuffered queries:
Queries are using the buffered mode by default. This means that query results are immediately transferred from the MySQL Server to PHP and then are kept in the memory of the PHP process.
Unbuffered MySQL queries execute the query and then return a resource while the data is still waiting on the MySQL server for being fetched. This uses less memory on the PHP-side, but can increase the load on the server. Unless the full result set was fetched from the server no further queries can be sent over the same connection. Unbuffered queries can also be referred to as "use result".
NOTE: buffered queries should be used in cases where you expect only a limited result set or need to know the amount of returned rows before reading all rows. Unbuffered mode should be used when you expect larger results.
Also, don't collect everything into an array first; use the values directly inside your while loop only.
$pdo = new PDO("mysql:host=localhost;dbname=world", 'my_user', 'my_pass');
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
$uresult = $pdo->query("SELECT * FROM tbl_two WHERE STE='48' LIMIT 200000");
if ($uresult) {
$lineno = 0;
while ($row = $uresult->fetch(PDO::FETCH_ASSOC)) {
echo $row['STATE'] . PHP_EOL; // use whichever of your columns you need
// write the value into the text file here
$lineno++;
}
}
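Putting the pieces together, here is a rough sketch of the unbuffered PDO loop streaming the rows straight into the file. It drops the LIMIT, since unbuffered fetching is exactly what makes it unnecessary; the connection details, column names and file name are placeholders based on the question and will need adjusting:
<?php
// Placeholder connection details; replace with your own.
$pdo = new PDO("mysql:host=localhost;dbname=mydb", 'my_user', 'my_pass');
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false); // unbuffered mode
$fileHandle = fopen("file2.txt", "w");
$uresult = $pdo->query("SELECT * FROM tbl_two WHERE STE='48'");
if ($uresult) {
    while ($row = $uresult->fetch(PDO::FETCH_ASSOC)) {
        // Build the line from the columns you need (64 of them in the question).
        $data = $row['STATE'] . $row['CTY'] . $row['ST'] . $row['BLK'] . "\r\n";
        fwrite($fileHandle, $data);
    }
}
fclose($fileHandle);
?>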
My scenario is like this: I have a huge dataset fetched from a MySQL table
$data = $somearray; //say the number of records in this array is 200000
I am looping over this data, processing some functionality and writing the data to a CSV file
$my_file = 'somefile.csv';
$handle = fopen($my_file, 'w') or die('Cannot open file: ' . $my_file);
for($i=0;$i<count($data);$i++){
//do something with the data
self::someOtherFunctionalities($data[$i]); //just some function
fwrite($handle, $data[$i]['index']); //here iam writing this data to a file
}
fclose($handle);
My problem is that the loop runs out of memory; it shows "Fatal error: Allowed memory size of ..". Is there any way to process this loop without memory exhaustion?
Due to a server limitation I am unable to increase the PHP memory limit like this:
ini_set("memory_limit","2048M");
I'm not concerned about the time it takes, even if it takes hours, so I did set_time_limit(0).
Your job is linear and you don't need to load all the data at once. Use an unbuffered query, and write to php://stdout (rather than a temp file) if you are sending this output to an HTTP client.
<?php
$mysqli = new mysqli("localhost", "my_user", "my_password", "world");
$uresult = $mysqli->query("SELECT Name FROM City", MYSQLI_USE_RESULT);
$my_file = 'somefile.csv'; // php://stdout
$handle = fopen($my_file, 'w') or die('Cannot open file: ' . $my_file);
if ($uresult) {
while ($row = $uresult->fetch_assoc()) {
// $row corresponds to $data[$i] in your code
self::someOtherFunctionalities($row); //just some function
fwrite($handle, $row['Name']); // here I am writing the data to the file; use your own column ('index' in your code)
}
}
$uresult->close();
?>
Can you use "LIMIT" in your MySQL query?
The LIMIT clause can be used to constrain the number of rows returned by the SELECT statement. LIMIT takes one or two numeric arguments, which must both be nonnegative integer constants (except when using prepared statements).
With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):
SELECT * FROM tbl LIMIT 5,10; # Retrieve rows 6-15
http://dev.mysql.com/doc/refman/5.0/en/select.html
If you aren't worried about time, take 1000 rows at a time and just append them to the end of the file, e.g. write to a temp file that you move and/or rename when the job is done.
First: SELECT COUNT(*) FROM table
then for ($i = 0; $i < number_of_rows; $i = $i + 1000) {
    result = SELECT * FROM table LIMIT $i, 1000  # retrieve the next 1000 rows
    append result to the file
}
move and rename the file
This is very much pseudocode, but the process should work; see the PHP sketch below.
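A rough PHP sketch of that process, assuming the mysqli connection $con and the table from the question; column and file names are placeholders. For stable paging you would normally also add an ORDER BY on a unique column:
<?php
// Count the rows first so we know how many chunks to fetch.
$countRes = mysqli_query($con, "SELECT COUNT(*) FROM tbl_two WHERE STE='48'");
$total = (int) mysqli_fetch_row($countRes)[0];

$tmpFile = "file2.txt.tmp";
$handle = fopen($tmpFile, "w");

for ($offset = 0; $offset < $total; $offset += 1000) {
    // Fetch the next chunk of at most 1000 rows.
    $res = mysqli_query($con, "SELECT * FROM tbl_two WHERE STE='48' LIMIT $offset, 1000");
    while ($row = mysqli_fetch_assoc($res)) {
        fwrite($handle, $row['STATE'] . $row['CTY'] . $row['ST'] . $row['BLK'] . "\r\n");
    }
    mysqli_free_result($res); // release this chunk before fetching the next one
}

fclose($handle);
// Swap the finished file into place only once the whole export succeeded.
rename($tmpFile, "file2.txt");
?>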
I have a PHP function that reads a table over ODBC (from an IBM AS400) and writes it to a text file on a daily basis. It works fine until the file grows past roughly 1 GB; then it just stops at some row and doesn't write everything.
function write_data_to_txt($table_new, $query)
{
global $path_data;
global $odbc_db, $date2;
if(!($odbc_rs = odbc_exec($odbc_db,$query))) die("Error executing query $query");
$num_cols = odbc_num_fields($odbc_rs);
$path_folder = $path_data.$table_new."/";
if (!file_exists($path_folder)) mkdir ($path_folder,0777);
$filename1 = $path_folder. $table_new. "_" . $date2 . ".txt";
$comma = "|";
$newline = chr(13).chr(10);
$handle = fopen($filename1, "w+");
if (is_writable($filename1)) {
$ctr=0;
while(odbc_fetch_row($odbc_rs))
{
//function for writing all field
// for($i=1; $i<=$num_cols; $i++)
// {
// $data = odbc_result($odbc_rs, $i);
// if (!fwrite($handle, $data) || !fwrite($handle, $comma)) {
// print "Cannot write to file ($filename1)";
// exit;
// }
//}
//end of function writing all field
$data = odbc_result($odbc_rs, 1);
fwrite($handle,$ctr.$comma.$data.$newline);
$ctr++;
}
echo "Write Success. Row = $ctr <br><br>";
}
else
{
echo "Write Failed<br><br>";
}
fclose($handle);
}
There are no errors, just the success message, but there should be 3,690,498 rows (and still increasing); I only got roughly 3,670,009 rows.
My query is an ordinary select like:
select field1 , field2, field3 , field4, fieldetc from table1
What I tried and what I assume:
I thought it was an fwrite limitation, so I tried not writing all fields (just $ctr and the first field), but it still stopped at the same row, so I assume it is not about fwrite exceeding a limit.
I tried reducing the fields I select and then it completes successfully, so I assumed there was some limitation in ODBC.
I tried the same ODBC data source with SQL Server, selecting all fields, and it gave me the complete rows. So I assume it is not an ODBC limitation.
I even tried on a 64-bit machine, but that was even worse; it returned only roughly 3,145,812 rows. So I assume it is not about the 32/64-bit infrastructure.
I tried increasing memory_limit in php.ini to 1024 MB, but that didn't work either.
Does anyone know if I need to set something in PHP or in the ODBC connection?
I have this issue where cron runs a php script every 5 minutes to update a list.
However, the list fails to update 5% of the time, and the list ends up blank. I don't believe it's related to cron, because I think I failed to manually generate the list twice out of like 100 tries.
What I believe it is related to is that when the site has 50+ people on it, the list fails to generate, perhaps because the server is busy. I added a check to make sure it isn't MySQL returning no rows (which seems impossible), but it still happens, which leads me to believe fwrite is failing.
<?php
$fileHandle = fopen("latest.html", 'w');
$links = array();
$query1 = $db_conn -> query("SELECT * FROM `views` ORDER BY `date` DESC LIMIT 0,20");
while ($result1 = $db_conn -> fetch_row($query1))
{
$result2 = $db_conn -> fetch_query("SELECT * FROM `title` WHERE `id` = '" . $result1['id'] . "'");
array_push($links, "<a href='/title/" . $result2['title'] . "'>" . $result2['title'] . "</a>");
}
if (count($links) > 0)
fwrite($fileHandle, implode(" • ", $links));
else
echo "Didn't work!";
fclose($fileHandle);
?>
Could there be a slight chance the file is in use so it ends up not working and writing a blank list?
$fileHandle = "latest.html", 'w');
I'm going to assume you mean
$fileHandle = fopen("latest.html", 'w');
The 'w' mode here opens the file, places the file pointer at the start and truncates the file to zero length.
If you check count($links) before doing this, you won't truncate the file when there is nothing to be written to it.
<?php
$links = "QUERY HERE AND HANDLE THE RESULTS (REMOVED)";
if (count($links) > 0)
{
$fileHandle = fopen("latest.html", 'w');
fwrite($fileHandle, implode(" • ", $links));
fclose($fileHandle);
}
else
{
echo "Didn't work!";
}
?>
Could there be a slight chance the file is in use so it ends up not
working and writing a blank list?
Well, yes. We don't know what other code you run that manipulates latest.html, so we can't really profile it.
Here are some suggestions:
Fix the syntax error in your file handler creation
You can acquire a fopen('w') handle to a file that another process has open with fopen('r'), so be sure to use PHP's flock while writing to the file to ensure other processes don't corrupt your list (see the sketch after this list)
Check to see what your logs have to say
Write to a string, then fwrite the entire string, so you spend less time in your inner loop with your file handle open (especially in this case, where it doesn't seem that the string would be that long -- a list of links)
Try outputting your links (timestamped) to a separate file besides latest.html; in the 5% of cases when it fails, look back at the timestamped links and see how they compare. You can also include your query in that file so you can isolate whether the issue is something to do with the DB or with writing to latest.html -- this will be especially useful in the case where your query (which isn't shown) possibly returns no results.
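Here is a minimal sketch of the flock idea above, assuming the same latest.html and an already built, non-empty $links array:
<?php
// Assumes $links is a non-empty array of anchor tags built from the query.
$fileHandle = fopen("latest.html", 'c'); // 'c' opens for writing without truncating
if ($fileHandle) {
    if (flock($fileHandle, LOCK_EX)) {   // exclusive lock keeps other writers out
        ftruncate($fileHandle, 0);       // only empty the file once we hold the lock
        fwrite($fileHandle, implode(" • ", $links));
        fflush($fileHandle);             // push the data out before releasing the lock
        flock($fileHandle, LOCK_UN);
    }
    fclose($fileHandle);
}
?>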
I think you are leaving yourself open to the possibility that the query is returning no data. The "removed" logic from your example may help shed light on what's going on. A good way of figuring this out is to write something to a log file, and check that log file after a few dozen iterations of your script. In the interest of having something in your latest.html file, I'd use file_put_contents over your current code.
<?php
$links = array();
$query = "SELECT links FROM tableA";
$result = mysql_query($query);
while ($row = mysql_fetch_row($result)) {
$links[] = $row[0];
}
if (count($links) > 0) {
file_put_contents('latest.html', implode(" * ", $links));
file_put_contents('linkupdate.log', "got links: " . count($links) . "\n", FILE_APPEND);
} else {
file_put_contents('linkupdate.log', "No links? [(" . mysql_errno() . ") " . mysql_error() . "]\n", FILE_APPEND);
}
?>
If we find no links, we won't overwrite the previous data file. If we encounter a MySQL error that might be causing the problem, it'll show up in the log output.
A read on the file shouldn't block a write, but switching to file_put_contents will help reduce the time the file is open and empty (there is some latency while you're performing the query and fetching the results).
Feel free to anonymize your query and post that as well - you definitely could have a problem with the result set since your code otherwise seems like it ought to work.
I'm converting a site from MySQL to Postgres and have a really weird bug. This code worked as-is before I switched the RDBMS. In the following loop:
foreach ($records as $record) {
print "<li> <a href = 'article.php?doc={$record['docid']}'> {$record['title']} </a> by ";
// Get list of authors and priorities
$authors = queryDB($link, "SELECT userid FROM $authTable WHERE docid='{$record['docid']}' AND role='author' ORDER BY priority");
// Print small version of author list
printAuthors($authors, false);
// Print (prettily) the status
print ' (' . nameStatus($record['status']) . ") </li>\n";
}
the FIRST query is fine. Subsequent calls don't work (pg_query returns false in the helper function, so it dies). The code for queryDB is the following:
function queryDB($link, $query) {
$result = pg_query($link, $query) or die("Could not query db! Statement $query failed: " . pg_last_error($link));
// Push each result row into an array
$retarray = array();
while( $line = pg_fetch_assoc($result)) {
$retarray[] = $line;
}
pg_free_result($result);
return $retarray;
}
The really strange part: when I copy the query and run it with psql (as the same user that PHP's connecting with) everything runs fine. OR if I copy the meat of queryDB into my loop in place of the function call, I get the correct result. So how is this wrapper causing bugs?
Thanks!
I discovered that there was no error output because my php.ini was misconfigured; after turning errors back on I started getting output like "18 is not a valid PostgreSQL link resource". Changing my connect code to use pg_pconnect() (the persistent version) fixed this. (Found this idea here.)
Thanks to everyone who took a look and tried to help!
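For reference, the only change needed was in the connect call; here is a minimal sketch with placeholder credentials:
<?php
// pg_pconnect() reuses a persistent connection instead of opening a new one each time,
// so the link resource stays valid across the helper-function calls.
$link = pg_pconnect("host=localhost dbname=mydb user=myuser password=mypass");
if (!$link) die("Couldn't connect.");
?>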