Often, when I re-open a PHP document that has emoji characters inside it, they turn into non-emoji codes. I've set the preference for new document creation to UTF-8, and for it to apply when opening existing documents, but it hasn't helped. Does anyone have a suggestion?
So I am developing a new course format in which a picture is associated with each activity in a course and presented visually. I created the course format, overrode the renderer, etc., and that all worked fine. However, the images are supposed to be custom generated, and since this has to work for all existing and future activities, I put some additional code into the general course module form to enable an image upload.
After admittedly some struggle on my part to get the File API working, it now all works: in my course format there is an additional heading under which you can upload a single image. The image gets saved to the database fine, it is not in draft, and it is viewable in my dataroot's filedir perfectly if I follow the contenthash in the database. It even gets loaded into the form as a default. However, when I try to work with the image, all tests pass (is_valid_img() etc.) and I even get offered to download a file; but when I do, it is corrupted and my file viewer says: "Critical Error: Not a png file". Needless to say, it is not displayed on my actual course site.
When I look at the file in filedir, it very clearly is a PNG. I would be thankful for any help, since I have tried a lot and am at my wit's end.
It sounds to me like you are getting some sort of output on the page before the PNG file is sent; that output would be prepended to the file and cause it not to work as a PNG.
I would suggest you open the file in a hex editor and check the start of the file - it should look like https://en.wikipedia.org/wiki/Portable_Network_Graphics#File_header, so look for extra characters before that.
As for where the extra characters come from: they may be an obvious warning or error message (which should be easy to track down and fix). Alternatively, you may have some stray 'echo' statements (again, fairly easy to track down). The worst ones to find are extra characters before the opening '<?php' tag of a file somewhere in your install, or after the closing tag at the end of a file (which is why you should never use closing PHP tags). Finding these will come down to searching through all your customised code files to locate them.
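To complement the hex-editor suggestion, here is a small sketch (in Python, since the check is language-agnostic) that reports any stray bytes preceding the PNG signature in a downloaded file; the helper name is my own:

```python
# The 8-byte signature every valid PNG file must start with.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def leading_junk(data: bytes) -> bytes:
    """Return whatever bytes precede the PNG signature.

    An empty result means the file is clean; otherwise the returned
    bytes are exactly the stray output (warning text, an echo, a BOM,
    ...) that was sent before the image and needs to be tracked down.
    """
    index = data.find(PNG_SIGNATURE)
    if index == -1:
        raise ValueError("no PNG signature found anywhere in the data")
    return data[:index]
```

Running this on the corrupted download and printing the repr of the result usually makes the culprit obvious straight away.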
I need to generate a PDF file in PDF/X-1a:2001 format using Photoshop or InDesign, write over it using PHP (or another language) with a specific font (embedded in the PDF file), and export it as PDF/X-1a:2001 as well.
Is it possible? I googled but found nothing about it.
Anyone already did something like that?
Thanks.
I tried opening an X-1a:2001 PDF in FPDF as a source file, but when I exported it, it lost the X-1a:2001 format.
To answer your question as literally as possible: yes, it's possible.
PDF/X-1a is not magic; it's just a very well-defined subset of PDF. So, as long as the objects you add to the PDF/X-1a file comply with the specification (which, for example, says that all objects must use a few well-defined color spaces such as CMYK, gray or spot color), you won't break compliance.
Of course the second requirement is that your PDF engine (the library you end up using) does the right thing as well. It shouldn't throw away the PDF/X-1a identification in the file and it shouldn't add content that makes the file non-compliant.
By the way, don't rely on simply looking at the file's metadata to determine whether it is PDF/X-1a compliant. That metadata only says the file claims to be compliant; which has nothing to do with the file actually being compliant.
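For a quick first look, the declared conformance marker can be spotted by scanning the file for the GTS_PDFXVersion key. A rough sketch; again, this only tells you what the file claims, not whether it is actually compliant:

```python
def claims_pdfx1a_2001(data: bytes) -> bool:
    """Return True if a PDF byte stream declares PDF/X-1a:2001 conformance.

    PDF/X files carry a /GTS_PDFXVersion entry (in the Info dictionary
    and/or the XMP metadata). Its presence is only a claim; verifying
    real compliance needs a proper preflight tool.
    """
    return b"GTS_PDFXVersion" in data and b"PDF/X-1a:2001" in data
```

If your PDF library silently drops this marker when it writes the modified file, that alone is enough for downstream tools to stop treating the file as PDF/X-1a.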
I have to create a document using user-inserted data, including data from a .rtf document, in a web page layout I created (HTML+CSS, with PHP for scripting).
My problem is that I can't find any way to obtain the full content of the .rtf document.
Since it is a technical document, symbols, tables, graphs and images are very often included: with the methods I've found I could obtain the text and symbols in decent formatting, but I had no luck with the images.
So what I need is a way to obtain the full content of a .rtf file, ideally maintaining the document's formatting, so I can display and organize it in a web page; preferably in pure PHP, but using JS or executables via PHP is fine.
I've tried:
- RTF-to-HTML converters, but the best I could get was clear text and symbols, with no images;
- using the COM extension to open the .rtf in MS Word and save it as .html (I noticed that if I open the .rtf and save it as a web page in Word manually, it creates a perfect HTML page), but it only changed the extension and didn't create an HTML page;
- extracting text and images separately: this works, but again, since the document is technical, image placement is very important.
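On the third approach, it may help to know that the embedded images are not lost: they sit in the RTF source itself as hex dumps inside {\pict ...} groups. A simplified sketch of pulling them out (it ignores \bin-encoded pictures and nested property groups, which real-world RTF can contain, so treat it as a starting point):

```python
import re

def extract_rtf_images(rtf: str) -> list[bytes]:
    r"""Extract embedded pictures from RTF source text.

    RTF stores images as hexadecimal dumps inside {\pict ...} groups
    (\pngblip marks PNG data, \jpegblip marks JPEG). This simplified
    version expects the hex data to follow the control words after a
    whitespace delimiter.
    """
    images = []
    pattern = r"\\pict[^{}]*?\s((?:[0-9a-fA-F]{2}|\s)+)\}"
    for match in re.finditer(pattern, rtf):
        hex_data = re.sub(r"\s+", "", match.group(1))
        if hex_data:
            images.append(bytes.fromhex(hex_data))
    return images
```

Once extracted, the image bytes can be written out as files and referenced from the generated HTML; the harder part, as you note, is reproducing their placement.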
It's my first question here, after much research; please bear with me in case of errors.
My problem is with a PDF created with TCPDF on ZF2 and opened in Acrobat Reader.
The file is created fine (except for its size, around 500 KB) and the content is fine, but when I try to close the file, Acrobat prompts me to save changes, though no changes have been made. After saving and overwriting the file, its size drops to around 40 KB. So the file size is reduced more than tenfold, but there is no visible change in the contents or otherwise.
Closest I got to any related answer was this thread here http://forums.planetpdf.com/save-file-prompt-when-closing_topic36.html
As I understand it, the issue is related to "The xref table is malformed", but my experience with PDF is not enough to understand the root of my problem. A sample file is available here: https://dl.dropboxusercontent.com/u/29072870/test_pdf.pdf
Thanks in advance!
Only the first 7036 bytes of your file make up the actual PDF; everything thereafter is some HTML code. Thus, you should check your PDF creation code: it seems to also contain some HTML creation code (a leftover from copy & paste? Added by the framework?).
Adobe Reader shows only those leading 7 KB and eventually offers to save them as a repaired file, encoded the way the Reader prefers (expanding those 7 KB to your 40 KB).
PS: I just saw that after the HTML code there are additionally about 80 KB of null bytes.
It looks like you received a whole byte buffer 0x80000 (= 524288 decimally) bytes in size containing your PDF, some HTML, and some yet unused space.
The problem is actually not quite solved yet :)
The issue has become much stranger now. In Chrome everything works perfectly: the created PDF is solid, with no additional data. In Firefox the output of the PDF is fine, saving the file works fine, and opening the file with Acrobat is fine, but closing it produces the same prompt to save, without any changes having been made. Apparently the run of null bytes is still present at the end of the file. When using the "download as file" option of TCPDF's output, the result is correct, with no additional data after EOF. It only happens when the PDF is displayed in the browser (Firefox) and saved from there. Could it be some Firefox issue? Can one check the file for this kind of excess data and remove it somehow?
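To the last question: everything in a PDF comes before the final %%EOF marker, so trailing junk can be detected and cut off by truncating just after the last %%EOF. A sketch, assuming the junk itself doesn't happen to contain the marker:

```python
def strip_trailing_junk(data: bytes) -> bytes:
    """Truncate a PDF byte stream just after its last %%EOF marker.

    Anything after the final %%EOF (null padding, stray HTML, ...) is
    not part of the PDF; trailing junk like this is what makes Acrobat
    offer to 'repair' the file on close.
    """
    marker = b"%%EOF"
    index = data.rfind(marker)
    if index == -1:
        raise ValueError("no %%EOF marker: not a complete PDF")
    return data[: index + len(marker)]
```

The real fix is of course server-side, stopping the extra bytes from being emitted in the first place (e.g. clearing any output buffers before sending the PDF); this only diagnoses or repairs an already-saved file.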
I have two sites I'm developing (in PHP). They are using identical code to provide an XLS export (using PEAR excel) and they are running on the same local server. To rule out a problem with the actual data in the xls, I am just outputting a file with no data for now.
When I export from site A and save the file it's reported as 'ANSI' encoded within Notepad++. This file opens correctly in Excel.
When I export from site B, the file is reported as 'UTF-8' encoded and won't open in Excel. If I convert the file to ANSI, or to UTF-8 without BOM, in Notepad++, it opens just fine in Excel.
The same encoding difference is present between site A and B when I save an arbitrary page on the site, so I think it may be more fundamental than just how the Excel file is being generated (same encoding when exporting CSV/ODS formats). I've compared the http headers between site A and B during the export, they are functionally identical. Explicitly adding Charset=ISO-8859-1 to the header makes no difference. The apache virtual hosts are also functionally identical between sites. Both sites are using identical character encodings in their databases (but since I'm not exporting any data right now, this is irrelevant).
What else could be causing this which I haven't accounted for?
Thanks!
UPDATE
The Excel generation is a red herring; I've removed all of that and am simply outputting the download header and a test string. When saved, the file is still encoded differently between the sites. The code which generates the download file seems identical when I diff the various files...
I haven't been able to repeat the problem by creating a simplified test case. When I tried, both sites output files which are saved as ANSI - I don't understand what else could be going on.
The ANSI "mode" just uses the code page configured on your system to save the data; you cannot be sure the saved document will display correctly for others.
UTF-8 without BOM means UTF-8 without the byte order mark prepended (three bytes, EF BB BF, at the top of the file), which is probably what gives Excel a headache.
I always go with the without-BOM approach when I'm thinking about i18n.
Thanks for all your input into this; it's much appreciated. In the end I tracked it down: a PHP source file encoded as UTF-8 rather than ANSI (Windows-1252) was being included somewhere along the way. I don't really understand why this causes a problem, though, since that PHP include doesn't output anything. Very weird and very frustrating; I hope someone else finds my pain useful.
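A likely explanation, for anyone landing here: a UTF-8 encoded source file often carries a byte order mark, and those three BOM bytes sit before the opening <?php tag, so PHP sends them as output the moment the file is included, even though the file "doesn't output anything". A sketch of checking files for a leading BOM (an illustration, not a finished tool):

```python
# The UTF-8 byte order mark: three bytes some editors prepend on save.
UTF8_BOM = b"\xef\xbb\xbf"

def has_utf8_bom(data: bytes) -> bool:
    """Return True if the byte stream starts with a UTF-8 byte order mark.

    In a PHP include, these three bytes precede <?php and are emitted
    as page output, which is enough to change the apparent encoding of
    a download or corrupt binary output.
    """
    return data[:3] == UTF8_BOM
```

Running this over every included source file would pinpoint the offending include without diffing anything by hand.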