How to send a file as string with phpmailer?
The file contents are stored in MySQL as a BLOB, but when sending the mail the attachment is only 2 bytes, while in the database the size is about 30 KB. Why is that?
$phpmailer->AddStringAttachment(
base64_decode($row['file_data']),
$row['file_name'],
'base64',
$row['file_type']
);
The data is fetched directly from the mysql database without any processing...
This will display the image in the browser
header('Content-type: '.$row['file_type']);
echo $row['file_data'];
Firstly, I imagine you probably meant base64_encode() rather than decode?
However, my guess is that you probably don't want to be encoding it at all -- phpMailer handles the encoding internally for you, so you shouldn't need to do any base64 encoding yourself.
So I think the correct answer is simply to pass the data to the mailer without doing any encoding at all.
$phpmailer->AddStringAttachment(
$row['file_data'],
$row['file_name'],
'base64',
$row['file_type']
);
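For reference, a minimal sketch of the whole fetch-and-attach flow, assuming a PDO connection in $pdo and a hypothetical files table; the column names are the ones from your question:
$stmt = $pdo->prepare('SELECT file_name, file_type, file_data FROM files WHERE id = ?');
$stmt->execute(array($fileId));
$row = $stmt->fetch(PDO::FETCH_ASSOC);
// Pass the raw BLOB straight through; PHPMailer base64-encodes it itself.
$phpmailer->AddStringAttachment(
$row['file_data'],
$row['file_name'],
'base64',
$row['file_type']
);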
Hope that helps.
Related
I'm generating a PDF with TCPDF and getting it as a Base64 string using
$base64String = $pdf->Output('file.pdf', 'E');
So I can send the data via AJAX
The only problem is that it comes with header information in addition to the Base64 string:
Content-Type: application/pdf;
name="FILE-31154d59f28c63efae86e4f3d6a00e13.pdf"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
filename="FILE-31154d59f28c63efae86e4f3d6a00e13.pdf"
So if I pass the resulting string to base64_decode(), or use it with phpMailer as in my case, it produces a broken file. Is it possible to remove the headers so I only have the base64 string?
(The error is that the resulting PDF can't be read by any PDF reader when opened)
I thought I'd be able to find something that solves this but I haven't found anything!!
UPDATE
This is what I've put in place to solve the issue
$base64String = preg_replace('/Content-[\s\S]+?;/', '', $base64String);
$base64String = preg_replace('/name=[\s\S]+?pdf"/', '', $base64String);
$base64String = preg_replace('/filename=[\s\S]+?"/', '', $base64String);
However it's not very elegant! So if anyone has a better solution please post it below :)
TCPDF docs are huge but unusable – it's easier to read the source code directly. It has those extra headers because you're asking for them by using the E output mode, which is intended for generating email messages.
To send the PDF data as a PHPMailer attachment, you want the straight binary PDF data as a string, as provided by the S output mode. You can pass that straight into addStringAttachment(), and PHPMailer will handle all the encoding for you. All you have to do is this:
$mail->addStringAttachment($pdf->Output('file.pdf', 'S'), 'file.pdf');
To convert the PDF binary into base64, for example to use it in a JSON string, simply pass it through base64_encode():
$base64String = base64_encode($pdf->Output('file.pdf', 'S'));
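And a quick sketch of returning that to your AJAX caller as JSON; the field names here are just illustrative:
$json = json_encode(array(
'filename' => 'file.pdf',
'pdf' => base64_encode($pdf->Output('file.pdf', 'S')),
));
header('Content-Type: application/json');
echo $json;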
I have a PHP webservice which currently returns a zip archive as its only output. I'm reading the zip archive from disk using file_get_contents and sending it back as the body of the response.
I'd like it to return some additional metadata, in a JSON format:
{
"generatedDate": "2012-11-28 12:00:00",
"status": "unchanged",
"rawData": <zip file in raw form>
}
The iOS app which talks to this service will receive this response, parse the JSON, and then store the zip file locally for its own use.
However, if I try to stuff the result of file_get_contents into json_encode, it rightfully complains that the string is not in UTF-8 format. If I UTF-8-encode it using mb_convert_encoding($rawData, 'UTF-8', mb_detect_encoding($rawData, 'UTF-8, ISO-8859-1', true));, it will encode it happily, but I can't find a way to reverse the operation on the client: calling [dataString dataUsingEncoding:NSUTF8StringEncoding] and then treating the result as a zip file fails with "Could not extract archive: Couldn't read pkzip local header".
Can anyone suggest a good way to insert a blob of raw data as one field in a JSON response?
Surely if you successfully included the raw data in the JSON then you'd have the opposite problem at the other end, when you try to decode the JSON and whatever you use to decode can't handle the raw data?
Instead, I would suggest that you send the raw data only in the response body, and use headers to send the metadata.
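A sketch of that approach; the X-* header names are made up for illustration:
header('Content-Type: application/zip');
header('X-Generated-Date: 2012-11-28 12:00:00');
header('X-Status: unchanged');
readfile($zipPath); // the raw zip bytes as the body, untouched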
Strike this question.
It turns out that UTF-8 encoding raw data like this is nonstandard at best, and the standard solution is base-64 encoding it and then using a base-64 decoder to recover it on the client:
$this->response(200, array('rawData' => base64_encode($rawData)));
...
NSString *rawDataString = [[response responseJSON] objectForKey:@"rawData"];
NSData *rawData = [Base64 decode:rawDataString];
ZIP archives are not text—they are binary files! Trying to convert your archive from ISO-8859-1 to UTF-8 makes as much sense as trying to rotate it.
There are several algorithms to serialize binary streams as text, but they all increase the file size (a rough size comparison follows the list). If that's not an issue, have a look at:
base64_encode()
bin2hex()
unpack()
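For instance, a rough comparison of the overhead of the first two (random_bytes() requires PHP 7 and is just a stand-in for your zip data):
$raw = random_bytes(30000);
echo strlen(base64_encode($raw)); // 40000 - base64 grows the data by 4/3
echo strlen(bin2hex($raw)); // 60000 - hex doubles it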
I'm retrieving data from my Postgres DB in UTF-8. The db and the client_connection settings are in UTF-8.
Then I send 2 headers to the visitor:
header("Content-Type: application/msexcel");
header("Content-Disposition: $mode; filename=export.xls");
and start outputting plain text data in a CSV manner. This will open as a simple Excel file on the visitor's desktop.
$cols = array ("col1", "col2", "col3");
echo implode("\t", $cols)."\r\n";
Works fine, until special characters like é, è etc. are encountered.
I tried changing my client_encoding while retrieving the data from the db to latin-1, which works in most cases but not for all chars. So that is not a solution.
How could I send the outputted file as UTF-8? I don't think converting the data from the db to latin-1 is possible, since some of the characters simply don't exist in latin-1, so I need Excel to treat the file as UTF-8.
I'd look into using the PHPExcel engine. It uses UTF-8 as default and it can generate a whole list of spreadsheet file types (Excel, OpenOffice, CSV, etc.).
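A minimal sketch of what that looks like, written from memory of the PHPExcel API, so treat it as a starting point rather than a verified snippet:
$excel = new PHPExcel();
$excel->getActiveSheet()->setCellValue('A1', 'é è à'); // UTF-8 strings go in as-is
header('Content-Type: application/vnd.ms-excel');
header('Content-Disposition: attachment; filename="export.xls"');
$writer = PHPExcel_IOFactory::createWriter($excel, 'Excel5');
$writer->save('php://output');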
I would recommend not sending plain-text and masquerading it as Excel. XLS files are typically binary, and while binary isn't required, the official Excel method of using non-binary data is to format it as XML.
You mention "CSV" in the title, but nothing about your problem includes anything related to CSV. I bring this up because I believe that you should actually change your tabs to commas, and then you could simply output a standard .csv file, which is read by Excel still but doesn't rely on undocumented or unstable functionality.
If you truly want to send application/msexcel, then you should use a real Excel library, because currently, you are not creating a real Excel file.
Use ; charset=UTF-8 after application/xxxxxx. I use:
header("Content-Type: application/vnd.ms-excel; charset=UTF-8");
// header("Content-Length: " . strlen($thecontent)); // this is not mandatory
header('Content-Disposition: attachment; filename="file.xls"');
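In my experience Excel often also wants a UTF-8 BOM at the very start of the body before it honours that charset; a sketch combining both:
header('Content-Type: application/vnd.ms-excel; charset=UTF-8');
header('Content-Disposition: attachment; filename="export.xls"');
echo "\xEF\xBB\xBF"; // UTF-8 BOM so Excel detects the encoding
echo implode("\t", $cols)."\r\n"; // then the tab-separated rows as before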
Try mb_convert_encoding function.
Try to use iconv, for converting string into required charset.
Have you tried utf8_encode() the string?
So something like: echo utf8_encode(implode("\t", $cols))."\r\n";
(utf8_encode() takes a string, not an array, so encode the imploded line.)
Not sure if that would work, but give it a go
I am building a data import tool for the admin section of a website I am working on. The data is in both French and English, and contains many accented characters. Whenever I attempt to upload a file, parse the data, and store it in my MySQL database, the accents are replaced with '?'.
I have text files containing data (charset is iso-8859-1) which I upload to my server using CodeIgniter's file upload library. I then read the file in PHP.
My code is similar to this:
$this->upload->do_upload()
$data = array('upload_data' => $this->upload->data());
$fileHandle = fopen($data['upload_data']['full_path'], "r");
while (($line = fgets($fileHandle)) !== false) {
echo $line;
}
This produces lines with accents replaced with '?'. Everything else is correct.
If I download my uploaded file from my server over FTP, the charset is still iso-8859-1, but a diff reveals that the file has changed. However, if I open the file in TextEdit, it displays properly.
I attempted to use PHP's stream_encoding method to explicitly set my file stream to iso-8859-1, but my build of PHP does not have the method.
After running out of ideas, I tried wrapping my strings in both utf8_encode and utf8_decode. Neither worked.
If anyone has any suggestions about things I could try, I would be extremely grateful.
It's important to see whether the corruption happens before or after the query is issued to MySQL. There are too many possible causes here to pinpoint it otherwise. Are you able to output the query you send to MySQL to check this?
Assuming that your query IS properly formed (no corruption at the stage the query is being outputted) there are a couple of things that you should check.
What is the character encoding of the database itself? (collation)
What is the Charset of the connection - this may not be set up correctly in your mysql config and can be manually set using the 'SET NAMES' command
In my own application I issue a 'SET NAMES utf8' as my first query after establishing a connection as I am unable to change the MySQL config.
See this.
http://dev.mysql.com/doc/refman/5.0/en/charset-connection.html
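For example, assuming mysqli (set_charset() is preferred over a raw query, since it also informs the client library used for escaping):
$mysqli = new mysqli('localhost', 'user', 'pass', 'dbname');
$mysqli->set_charset('utf8');
// or, as described above:
$mysqli->query("SET NAMES utf8");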
Edit: If the issue is not related to mysql I'd check the following
You say the charset of the file is iso-8859-1 - can I ask how you are sure of this?
What happens if you save the file itself as utf8 (Without BOM) and try to reprocess it?
What is the encoding of the php file that is performing the conversion? (What are you using to write your php - it may be 'managing' this for you in an undesired way)
(an aside) Are the files you are processing suitable for processing using fgetcsv instead?
http://php.net/manual/en/function.fgetcsv.php
Files uploaded to your server should be returned the same on download. That means, the encoding of the file (which is just a bunch of binary data) should not be changed. Instead you should take care that you are able to store the binary information of that file unchanged.
To achieve that with your database, create a BLOB field. That's the right column type for it. It's just binary data.
Assuming you're using MySQL, this is the reference: The BLOB and TEXT Types, look out for BLOB.
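A sketch of inserting the uploaded file into such a column, assuming mysqli and a hypothetical uploads table:
$contents = file_get_contents($data['upload_data']['full_path']);
$stmt = $mysqli->prepare('INSERT INTO uploads (file_data) VALUES (?)');
$null = null;
$stmt->bind_param('b', $null); // 'b' marks the parameter as a blob
$stmt->send_long_data(0, $contents); // stream the binary data in unchanged
$stmt->execute();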
The problem is that you are using iso-8859-1 instead of utf-8. In order to encode it in the correct charset, you should use the iconv function, like so:
$output_string = iconv("iso-8859-1", "utf-8//TRANSLIT", $input_string);
iso-8859-1 cannot represent many of the accented characters that UTF-8 can.
It would be so much better if everything were utf-8, as it handles virtually every character known to man.
I am creating a file using PHP's fwrite() and I know all my data is in UTF-8 (I have done extensive testing on this: when saving data to the db and outputting on a normal web page, everything works fine and reports as UTF-8), but I am being told the file I am outputting contains non-UTF-8 data :( Is there a command in bash (CentOS) to check the encoding of a file?
When using vim it shows the content as:
Donâ~#~Yt do anything .... Itâ~#~Ys a
great site with
everything....Weâ~#~Yve only just
launched/
Any help would be appreciated: Either confirming the file is UTF8 or how to write utf8 content to a file.
UPDATE
To clarify how I know I have data in UTF8 i have done the following:
The DB is set to utf8. When saving data to the database I run this first:
$enc = mb_detect_encoding($data);
$data = mb_convert_encoding($data, "UTF-8", $enc);
Just before I run fwrite I check the data with the following; note each piece of data returns 'IS utf-8':
if (strlen($data)==mb_strlen($data, 'UTF-8')) print 'NOT UTF-8';
else print 'IS utf-8';
Thanks!
If you know the data is in UTF-8, then you want to set up the header.
I wrote a solution answering another thread.
The solution is the following: as the UTF-8 byte-order mark is \xEF\xBB\xBF, we should add it at the start of the document.
<?php
function writeStringToFile($file, $string){
$f=fopen($file, "wb");
$file="\xEF\xBB\xBF".$file; // this is what makes the magic
fputs($f, $string);
fclose($f);
}
?>
You can adapt it to your code, basically you just want to make sure that you write a UTF8 file (as you said you know your content is UTF8 encoded).
On some systems the file routines underneath fwrite() differentiate between text and binary mode, and in text mode your data - correctly encoded or not - can get mangled, for example by line-ending translation.
To be on the safe side, you should use fopen() with the binary mode flag, that's b. Afterwards, fwrite() will save your string data "as-is", and in PHP that is binary data, because strings in PHP are binary strings.
Background: some systems differ between text and binary data. The binary flag explicitly commands PHP on such systems to use binary output. When you deal with UTF-8 you should take care that the data does not get mangled, and handling the string data as binary data prevents that.
However: if, contrary to what you told in your question, the UTF-8 encoding of the data was not preserved, then your encoding got broken earlier, and even binary-safe handling will keep the broken status. With the binary flag you still ensure that it is not the fwrite() part of your application that is breaking things.
It has rightly been written in another answer here that you cannot know the encoding when all you have is the data. However, you can check whether the data validates as UTF-8, which gives you at least some chance to verify the encoding. I've posted a PHP function which does this in a UTF-8 related question, so it might be of use if you need to debug things: see the answer to "SimpleXML and Chinese" and look for can_be_valid_utf8_statemachine, that's the name of the function.
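If you don't want to carry a custom function around, there are simpler built-in checks for "is this valid UTF-8?", assuming the mbstring extension (or PCRE) is available:
$isUtf8 = mb_check_encoding($data, 'UTF-8');
// or, using PCRE's UTF-8 mode, which fails on invalid byte sequences:
$isUtf8 = (preg_match('//u', $data) === 1);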
//add BOM to fix UTF-8 in Excel
fputs($fp, $bom = (chr(0xEF) . chr(0xBB) . chr(0xBF)));
I find this piece works for me :)
The problem is your data is double encoded. I assume your original text is something like:
Don’t do anything
with ’, i.e., not the straight apostrophe, but the right single quotation mark.
If you write a PHP script with this content and encoded in UTF-8:
<?php
//File in UTF-8
echo utf8_encode("Don’t"); //this will double encode
You will get something similar to your output.
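If the data really is double encoded, utf8_decode() reverses exactly one layer of that, recovering the original UTF-8 string; a small sketch, again assuming the source file itself is saved as UTF-8:
$double = utf8_encode("Don’t"); // double encoded, typically displays as Donâ€™t
echo utf8_decode($double); // prints: Don’t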
$handle = fopen($file, "w");
fwrite($handle, pack("CCC", 0xEF, 0xBB, 0xBF)); // write the UTF-8 BOM first
fwrite($handle, $content); // then the actual string data ($content is your text; the original snippet mistakenly wrote the filename here)
fclose($handle);
I know all my data is in UTF8 - wrong.
Encoding is not the format of a file. So, check the charset in the headers of the page you are taking the data from:
header("Content-type: text/html; charset=utf-8;");
And check whether the data really contains multi-byte characters:
// Note: this only detects the presence of multi-byte sequences;
// pure ASCII data is also valid UTF-8.
if (strlen($data)==mb_strlen($data, 'UTF-8')) print 'no multi-byte characters';
else print 'contains multi-byte characters';
There is one possible reason: the information you get from the database is not UTF-8 in the first place.
If you are sure that is not the case, use this; I always use it and it works:
$file= fopen('../logs/logs.txt','a');
fwrite($file,PHP_EOL."_____________________output_____________________".PHP_EOL);
fwrite($file,print_r($value,true));
The only thing I had to do was add a UTF-8 BOM to the CSV. The data was correct, but the file reader (an external application) couldn't read the file properly without the BOM.
Try this simple method: add the following to the top of the page, before the <body> tag:
<head>
<meta charset="utf-8">
</head>