I'm working on a project for a client - a wordpress plugin that creates and maintains a database of organization members. I'll note that this plugin creates a new table within the wordpress database (instead of dealing with the data as custom_post_type meta data). I've made a lot of modifications to much of the plugin, but I'm having an issue with a feature (that I've left unchanged).
One half of this feature does a csv import and insert, and that works great. The other half of this sequence is a feature to download the contents of this table as a csv. This part works fine on my local system, but fails when running from the server. I've pored over each portion of this script and everything seems to make sense. I'm, frankly, at a loss as to why it's failing.
The php file that contains the logic is simply linked to. The file:
<?php
// initiate wordpress
include('../../../wp-blog-header.php');
// phpinfo();

function fputcsv4($fh, $arr) {
    $csv = "";
    while (list($key, $val) = each($arr)) {
        $val = str_replace('"', '""', $val);
        $csv .= '"'.$val.'",';
    }
    $csv = substr($csv, 0, -1);
    $csv .= "\n";
    if (!@fwrite($fh, $csv))
        return FALSE;
}
//get member info and column data
$table_name = $wpdb->prefix . "member_db";
$year = date('Y');
$members = $wpdb->get_results("SELECT * FROM ".$table_name, ARRAY_A);
$columns = $wpdb->get_results("SHOW COLUMNS FROM ".$table_name, ARRAY_A);
// echo 'SQL: '.$sql.', RESULT: '.$result.'<br>';

//output headers
header("Content-type: application/octet-stream");
header("Content-Disposition: attachment; filename=\"members.csv\"");

//open output stream
$output = fopen("php://output", 'w');

//output column headings
$data[0] = "ID";
$i = 1;
foreach ($columns as $column) {
    //DIAG: echo '<pre>'; print_r($column); echo '</pre>';
    $field_name = '';
    $words = explode("_", $column['Field']);
    foreach ($words as $word) $field_name .= $word.' ';
    if ( $column['Field'] != 'id' && $column['Field'] != 'date_updated' ) {
        $data[$i] = ucwords($field_name);
        $i++;
    }
}
$data[$i] = "Date Updated";
fputcsv4($output, $data);

//output data
foreach ($members as $member) {
    // echo '<pre>'; print_r($member); echo '</pre>';
    $data[0] = $member['id'];
    $i = 1;
    foreach ($columns as $column) {
        //DIAG: echo '<pre>'; print_r($column); echo '</pre>';
        if ( $column['Field'] != 'id' && $column['Field'] != 'date_updated' ) {
            $data[$i] = $member[$column['Field']];
            $i++;
        }
    }
    $data[$i] = $member['date_updated'];
    //echo '<pre>'; print_r($data); echo '</pre>';
    fputcsv4($output, $data);
}
fclose($output);
?>
So the routine is straightforward: a query is run, $output is opened with fopen, each row is formatted as comma-delimited text and written with fwrite, and finally the stream is fclosed, at which point the file gets pushed to the client's local system.
The error that I'm getting (from the server) is
Error 6 (net::ERR_FILE_NOT_FOUND): The file or directory could not be found.
But it clearly is getting found; it's just failing. If I enable phpinfo() (PHP Version 5.2.17) at the top of the file, I definitely get a response, notably a 'Cannot modify header information' warning (I'm pretty sure because phpinfo() has already sent output, so the headers can no longer be set). All the expected data does get printed to the bottom of the page (after all the phpinfo diagnostics), however, so that much at least is working correctly.
I am guessing there is something preventing the fopen, fwrite, or fclose functions from working properly (a server setting?), but I don't have enough experience with this to identify exactly what the problem is.
I'll note again that this works exactly as expected in my test environment (localhost/XAMPP, netbeans).
Any thoughts would be most appreciated.
update
Ok - spent some more time with this today. I've tried each of the suggested fixes, including @Rudu's writeCSVLine fix and @Fernando Costa's file_put_contents() recommendation. The fact is, they all work locally. Either just echoing or the fopen/fwrite/fclose routine, doesn't matter, works great.
What does seem to be a problem is the inclusion of the wp-blog-header.php at the start of the file and then the additional header() calls. (The path is definitely correct on the server, btw.)
If I comment out the include, I get a csv file downloaded with some errors planted in it (because $wpdb doesn't exist). And if I comment out the headers, I get all my data printed to the page.
So... any ideas what could be going on here?
There's some obvious conflict between the WordPress environment and the proper creation of a file.
Learning a lot, but no closer to an answer... Thinking I may need to just avoid the wordpress stuff and do a manual sql query.
Ok, so I'm wondering why you've taken this approach. There's nothing wrong with php://output, but all it does is let you write to the output buffer the same way as print and echo... if you're having trouble with it, just use print or echo :) Any optimization you could have gained from using fwrite on the stream is lost anyway by string-building the $csv variable and then writing that in one go to the output stream (not that the optimization is particularly necessary). With all that in mind, my solution (in keeping with your original design) would be this:
function escapeCSVcell($val) {
    return str_replace('"', '""', $val);
    // What about new lines in values? Perhaps not relevant to your
    // data but they'll mess up your output ;)
}

function writeCSVLine($arr) {
    $first = true;
    foreach ($arr as $v) {
        if (!$first) {echo ",";}
        $first = false;
        echo "\"".escapeCSVcell($v)."\"";
    }
    echo "\n"; // May want to use \r\n depending on consuming script
}
Now use writeCSVLine in place of fputcsv4.
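To be explicit about how this slots into the original script, here is a rough wiring sketch; it assumes $data and $members are built exactly as in the question, and only swaps the output call (the Content-Type change is just a suggestion):

// Rough wiring sketch: $data is built exactly as in the question's loops,
// first as the heading row and then once per member row.
header("Content-Type: text/csv");
header("Content-Disposition: attachment; filename=\"members.csv\"");

writeCSVLine($data);            // the column headings row
foreach ($members as $member) {
    // ... rebuild $data for this member as in the original loop ...
    writeCSVLine($data);        // one CSV line per member, echoed directly
}
exit;                           // nothing else should be sent after the CSV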
Ran into this same issue. Stumbled upon this thread which does the same thing but hooks into the 'plugins_loaded' action and exports the CSV then. https://wordpress.stackexchange.com/questions/3480/how-can-i-force-a-file-download-in-the-wordpress-backend
Exporting the CSV early eliminates the risk of the headers already being modified before you get to them.
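A minimal sketch of that approach, assuming the export code is moved into the plugin itself; the csv_export query parameter and the function name below are placeholders, not something the plugin already defines:

// Run the export on 'plugins_loaded', before WordPress sends any output.
add_action('plugins_loaded', 'member_db_csv_export');

function member_db_csv_export() {
    // Only act on a dedicated request such as /?csv_export=members
    if (!isset($_GET['csv_export']) || $_GET['csv_export'] !== 'members') {
        return;
    }
    global $wpdb;
    $table_name = $wpdb->prefix . "member_db";
    $members = $wpdb->get_results("SELECT * FROM " . $table_name, ARRAY_A);

    header('Content-Type: text/csv');
    header('Content-Disposition: attachment; filename="members.csv"');

    $output = fopen('php://output', 'w');
    foreach ($members as $member) {
        fputcsv($output, $member);   // native fputcsv handles the quoting
    }
    fclose($output);
    exit; // stop WordPress from rendering anything after the CSV
}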
I've been using all sorts of hacks to generate file indexes out of SMB shares, and it's all fine for basic filepath-plus-metadata indexing.
The next step I want to implement is an algorithm combining some unix-like utilities and PHP to index specific context from within the files.
The first step in this context generation is something like this:
while read p; do egrep -rH '^;|\(|^\(|\)$' "$p"; done <textual.txt > text_context_search.txt
This is regexing specific to my purpose of indexing the contents of programs: it extracts lines that are whole comments, or that contain comments, from CNC program files.
The resulting output is something like
file_path:regex_hit
Now, obviously most programs have more than one comment, so there's a lot of redundancy, not just in repetition: an exhaustive context index comes to about a gigabyte in size.
I am now working towards a script that would compact that redundancy, so that this pattern:
file_path_1:regex_hit_1
file_path_1:regex_hit_2
file_path_1:regex_hit_3
...
would become:
file_path_1:regex_hit_1,regex_hit_2,regex_hit_3
If I can do this efficiently, all is well.
The question here is whether I'm doing this the proper way. Maybe I should be using different tools to generate such a context index in the first place?
EDIT
After further copying and pasting from Stack Overflow and thinking about it, I glued together a solution (not my own code) that nearly entirely solves the issue I mentioned above.
<?php
// https://stackoverflow.com/questions/26238299/merging-csv-lines-where-column-value-is-the-same
$rows = array_map('str_getcsv', file('text_context_search2.1.txt'));
//echo '<pre>';
//print_r($rows);
//echo '</pre>';

// Array for output
$concatenated = array();
// Key to organize over
$sortKey = '0';
// Key to concatenate
$concatenateKey = '1';
// Separator string
$separator = ' ';

foreach ($rows as $row) {
    // Guard against invalid rows
    if (!isset($row[$sortKey]) || !isset($row[$concatenateKey])) {
        continue;
    }
    // Current identifier
    $identifier = $row[$sortKey];
    if (!isset($concatenated[$identifier])) {
        // If no matching row has been found yet, create a new item in the
        // concatenated output array
        $concatenated[$identifier] = $row;
    } else {
        // An array has already been set, append the concatenate value
        $concatenated[$identifier][$concatenateKey] .= $separator . $row[$concatenateKey];
    }
}

// Do something useful with the output
//var_dump($concatenated);
//echo json_encode($concatenated)."\n";
$fp = fopen('exemplar.csv', 'w');
foreach ($concatenated as $fields) {
    fputcsv($fp, $fields);
}
fclose($fp);
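One caveat: with an index around a gigabyte, file() and array_map() will pull the whole thing into memory. A streaming variant is sketched below, assuming the lines are already grouped by file path (which they are, since the grep loop handles one file at a time) and that the first colon on each line separates the path from the hit; the file names are the same placeholders as above.

<?php
// Streaming sketch: read one line at a time instead of loading the whole index.
$in  = fopen('text_context_search2.1.txt', 'r');
$out = fopen('exemplar.csv', 'w');

$currentPath = null;
$hits = array();

while (($line = fgets($in)) !== false) {
    $line = rtrim($line, "\r\n");
    $pos = strpos($line, ':');      // assumes the first colon ends the path
    if ($pos === false) {
        continue;                   // skip malformed lines
    }
    $path = substr($line, 0, $pos);
    $hit  = substr($line, $pos + 1);

    if ($path !== $currentPath) {
        if ($currentPath !== null) {
            fputcsv($out, array($currentPath, implode(' ', $hits)));
        }
        $currentPath = $path;
        $hits = array();
    }
    $hits[] = $hit;
}
if ($currentPath !== null) {
    fputcsv($out, array($currentPath, implode(' ', $hits)));
}

fclose($in);
fclose($out);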
I'm developing an app where users upload an Excel [.xlsx] file to dump data into a MySQL database. I have programmed it so that a LOG is created for each import, so that the user can see whether any errors occurred, etc. My script was working perfectly before implementing the log system.
After implementing the log system I can see duplicate rows inserted into the database. Also, the die() command is not working.
It just keeps looping continuously!
I have written sample code below. Please tell me what's wrong with my logging method.
Note: if I remove the logging [writing to the file], the script works correctly.
$file = fopen("20131105.txt", "a");
fwrite($file, "LOG CREATED".PHP_EOL);
foreach($hdr as $k => $v) {
$username = $v['un'];
$address = $v['adr'];
$message = $v['msg'];
if($username == '') {
fwrite($file, 'Error: Missing User Name'.PHP_EOL);
continue;
} else {
// insert into database
}
}
fwrite($file, PHP_EOL."LOG CLOSED");
fclose($file);
echo 1;
die();
First, your die statement is after your loop. It needs to be inside your loop if you want it to end the loop early.
Second, you're looping over $hdr, which isn't defined in your snippet. It has to be an array. What does it contain?
var_dump($hdr);
The documentation for foreach in the PHP manual highlights:
"Reference of a $value and the last array element remain even after the foreach loop. It is recommended to destroy it by unset()." [1]
Try unsetting the value after the foreach using unset($value). This might be the reason for the duplicate values.
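For clarity, this gotcha only bites when the loop iterates by reference; a tiny self-contained illustration (not taken from the question's code):

$rows = array('a', 'b', 'c');
foreach ($rows as &$row) {
    $row = strtoupper($row);
}
// $row is still a reference to $rows[2] at this point.
unset($row); // break the reference before $row is reused

// Without the unset(), this second loop would leave $rows[2] === 'B',
// because each assignment to $row would also overwrite $rows[2].
foreach ($rows as $row) {
    // ...
}
print_r($rows); // Array ( [0] => A [1] => B [2] => C )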
So I was hoping to be able to get by with a simple solution to read records from a database and save them to a text file that the user downloads. I have been doing this on the fly and for under 20,000 records, this works great. Over 20,000 records and I'm loading too much data into memory and PHP hits a fatal error.
My thought was to just grab everything in chunks. So I grab XX number of rows and echo them to the file and then loop to get the next XX rows until I'm done.
I am just echoing the results right now though, not building the file and then sending it for download, which I'm guessing I'll have to do.
The issue at this point, succinctly, is that with up to 20,000 rows the file builds and downloads perfectly. With more than that, I get an empty file.
The code:
header('Content-type: application/txt');
header('Content-Disposition: attachment; filename="export.'.$file_type.'"');
header('Expires: 0');
header('Cache-Control: must-revalidate');

// I do other things to check for records before, hence the do-while loop
$this->items = $model->getItems();
do {
    foreach ($this->items as $k => $item) {
        $i = 0;
        $tables = count($this->data['column']);
        foreach ($this->data['column'] as $table => $fields) {
            $columns = count($fields);
            $j = 0;
            foreach ($fields as $field => $junk) {
                if ($quote_output) {
                    echo '"'.ucwords(str_replace(array('"'), array('\"'), $item->$field)).'"';
                } else {
                    echo ''.$item->$field.'';
                }
                $j++;
                if ($j < $columns) {
                    echo $delim;
                }
            }
            $i++;
            if ($i < $tables) {
                echo $delim;
            }
        }
        echo "\n";
    }
} while ($this->items = $this->_model->getItems());
Very large tables won't work that way.
You have to output the data as you read it from the database. If you need it sorted, use the database's ORDER BY for that purpose.
So more or less
// assuming you use a var such as $query to handle the DB
while (!$query->eof())
{
    $fields = $query->read_next();
    echo $fields; // with your formatting, maybe call a function...
}
The empty result is normal. If the memory is exhausted before any echo happens then nothing was sent to the browser.
Note also that PHP has a time limit (a watchdog) that you may need to tweak. The default is defined in your php.ini. You may set it to zero if you expect the tables to grow very much.
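Since $query->eof() and read_next() above are only pseudocode, here is one concrete way to get the same streaming behaviour with plain mysqli; the connection details and table name are placeholders.

<?php
// Sketch: unbuffered mysqli query so rows go straight to the browser
// instead of piling up in PHP memory.
set_time_limit(0); // lift the watchdog for a very large export

header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="export.csv"');

$db = new mysqli('localhost', 'user', 'pass', 'dbname');
$result = $db->query('SELECT * FROM items ORDER BY id', MYSQLI_USE_RESULT);

$out = fopen('php://output', 'w');
while ($row = $result->fetch_assoc()) {
    fputcsv($out, $row); // format and send each row as it is read
}
fclose($out);
$result->close();
$db->close();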
You should swap your str_replace() for addslashes(). This will probably free some memory.
Then I suggest you save to a file and use the PHP file functions to do so: fopen() or file_put_contents().
I hope that might help you!
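To make that concrete, a rough sketch of the file-based route; the row-fetching loop is left as a stand-in (fetch_next_row() is not a real function, just a placeholder for however you read your records):

// Write the export to a temporary file first, then send it in one piece.
$tmp = tempnam(sys_get_temp_dir(), 'export_');
$fh  = fopen($tmp, 'w');
while ($row = fetch_next_row()) {  // placeholder for your own DB read loop
    fputcsv($fh, $row);
}
fclose($fh);

header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="export.csv"');
header('Content-Length: ' . filesize($tmp));
readfile($tmp); // streams the file to the client without loading it into memory
unlink($tmp);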
Actually, this might be a simple fix. If PHP is running out of memory, it's probably because the output buffer is filling up before the file is sent. If so, simply flush() at regular intervals.
This will flush after each line:
do {
    foreach ($this->items as $item) {
        // assemble and echo your output line here
        echo "\n";
        flush();
    }
} while ($this->items = $this->_model->getItems());
Flushing after each line might prove too slow, in which case add a counter and flush after every hundred, or whatever works best.
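Something along these lines, for instance; the batch size of 100 is arbitrary:

$count = 0;
do {
    foreach ($this->items as $item) {
        // assemble and echo your output line here
        echo "\n";
        if (++$count % 100 === 0) {
            flush(); // push buffered output to the browser every 100 lines
        }
    }
} while ($this->items = $this->_model->getItems());
flush(); // send whatever is left over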
I have been facing this problem for the past few days and am now getting frustrated because I have to get it done.
I need to replace my CSV file's column names with the database table's headers. My database table fields are different from the CSV file's columns. The problem is that I first want to update the column names in the CSV file with the database table headers, and then import its data into the database with field mapping.
Please help me; I don't know how to solve this.
This is my PHP code:
$file = $_POST['fileName'];
$filename = "../files/" . $file;
$list = $_POST['str'];
$array_imp = implode(',', $list);
$array_exp = explode(',', $array_imp);

$fp = fopen("../files/" . $file, "w");
$num = count($fp);
for ($c = 0; $c < $num; $c++) {
    if ($fp[c] !== '') {
        fputcsv($fp, $array_exp);
    }
}
fclose($fp);

require_once("../csv/DataSource.php");
$path = "../files/test_mysql.csv";
$dbtable = $ext[0];
$csv = new File_CSV_DataSource;
$csv->load($path);
$csvData = $csv->connect();

$res = '';
foreach ($csvData as $key) {
    print_r($key[1]);
    $myKey = '';
    $myVal = '';
    foreach ($key as $k => $v) {
        $myKey .= $k.',';
        $myVal .= "'".$v."',";
    }
    $myKey = substr($myKey, 0, -1);
    $myVal = substr($myVal, 0, -1);
    $query = "insert into tablename($myKey) values($myVal)";
    $res = mysql_query($query);
}
You have an existing file whose first line needs to be replaced.
This has been generally outlined here:
Overwrite Line in File with PHP
Some little explanation (and some tips that are not covered in the other question). Most often it's easier to operate with two files here:
The existing file (to be copied from)
A new file that temporarily will be used to write into.
When done, the old file will be deleted and the new file will be renamed to the name of the old file.
Your code does not work because you are writing the new first line straight into the old file, which you opened in write mode; that truncates the file and chops off the rest of its contents.
Also you look misguided about some basic PHP features, e.g. using count on a file-handle does not help you to get the number of lines. It will just return 1.
Here is, step by step, what you need to do:

1. Open the existing file to read from. Read just its first line to advance the file pointer past the old header (fgets).
2. Open a new file to write into. Write the new headers into it (as you already successfully do).
3. Copy all remaining data from the first file into the new, second file. PHP has a function for that; it is called stream_copy_to_stream.
4. Close both files.

Now check if the new file is what you're looking for. When this all works, you need to add some more steps:

5. Rename the original file to a new name. This can be done with rename.
6. Rename the file you've been writing to to the original filename.
7. If you want, you can then delete the file you renamed in step 5 - but only if you don't need it any longer.
And that's it. I hope this is helpful. The PHP manual contains example code for all the functions mentioned and linked. Good luck. And if you don't understand your own code, use the manual to read about it first. That reduces the places where you can introduce errors.
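Put together, a rough sketch of those steps might look like the following; the file names are placeholders, and $array_exp is assumed to already hold your new header fields as in your code.

<?php
$original = "../files/" . $file;   // the existing CSV
$tempfile = $original . ".tmp";    // temporary file to write into

// Steps 1 and 2: open both files, skip the old header, write the new one.
$in  = fopen($original, "r");
$out = fopen($tempfile, "w");
fgets($in);                  // read and discard the old header line
fputcsv($out, $array_exp);   // write the new header

// Step 3: copy everything after the old header unchanged.
stream_copy_to_stream($in, $out);

// Step 4: close both files.
fclose($in);
fclose($out);

// Steps 5 to 7: keep a backup of the original, then swap in the new file.
rename($original, $original . ".bak");
rename($tempfile, $original);
// unlink($original . ".bak"); // only once you no longer need the backup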
If you are managing to insert the table headers then you're halfway there.
It sounds to me like you need to append the data after the headers, something like:
$data = $headers;
if ($fp[c] !== '')
{
    $data .= fputcsv($fp, $array_exp);
}
Notice the dot '.' before the equals '=' in the if statement. This will append the non-blank $fp[c] values after the headers.
I have been struggling to create a simple (really simple) chat system for my website, as my knowledge of JavaScript/AJAX is limited. After gathering resources and help from many kind people I was able to create my simple chat system, but I'm left with one problem.
The messages are posted to a file called "msg.html" in this format:
<p><span id="name">$name</span><span id="Msg">$message</span></p>
Then, using PHP and AJAX, I retrieve the messages instantly from the file using the file() function and a foreach() loop within PHP. Here is the code:
<?php
$file = 'msg.html';
$data = file($file);
$max_lines = 20;
if (count($data) > $max_lines) {
    // here i want the data to be deleted from oldest until i only have 20 messages left.
}
foreach ($data as $line_num => $line) {
    echo $line_num . " . " . $line;
}
?>
My question is: how can I delete the oldest messages so that I am only left with the latest 20 messages?
How does something like this seem to you:
$file = 'msg.html';
$data = file($file);
$max_lines = 20;
// Number of oldest lines that have to go so only the latest 20 remain
$skip = count($data) - $max_lines;
foreach ($data as $line_num => $line)
{
    if ($line_num < $skip)
    {
        unset($data[$line_num]); // drop the oldest messages
    }
    else
    {
        echo $line_num . " . " . $line;
    }
}
file_put_contents($file, $data);
http://www.php.net/manual/en/function.file-put-contents.php for more info :)
I suppose you could read the file, explode it into an array, chop off everything but the last 20 entries and write it back to the file, overwriting the old one... Perhaps not the best solution, but one that comes to mind if you really can't use a database as Delan suggested.
That's called round-robin if I recall correctly.
As far as I know, you can't remove arbitrary portions of a file. You need to overwrite the file with the new contents (or create a new file and remove the old one). You could also store messages in individual files but of course that implies up to $max_lines files to read.
You should also use flock() to avoid data corruption. Depending on the platform it's not 100% reliable but it's better than nothing.
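As a rough sketch of that, assuming the trimming idea above, the rewrite could be wrapped in an exclusive lock like this (the lock is advisory, so the script that appends new messages has to use flock() as well):

$fh = fopen('msg.html', 'c+');       // open for read/write without truncating
if (flock($fh, LOCK_EX)) {           // exclusive lock while we rewrite the file
    $data = file('msg.html');
    $data = array_slice($data, -20); // keep only the latest 20 lines
    ftruncate($fh, 0);
    rewind($fh);
    fwrite($fh, implode('', $data));
    fflush($fh);
    flock($fh, LOCK_UN);
}
fclose($fh);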