Creating custom fonts for TCPDF library - php

I need to add some custom fonts to the TCPDF library, yet after surfing the internet for hours, I couldn't come up with a working solution.
Generally, two main ways are offered to create new fonts for the TCPDF library. One is using online websites that do the conversion; the other is using tcpdf_addfont.php, and more specifically the addTTFfont method.
With the first way, there is a great stumbling block, as the most famous website doing the job, fonts.snm-portal, is now down. In fact, it is not down, but it no longer performs the conversion. The second website, xml-convert, just produces .php and .z files and totally ignores the .ctg.z one. I guess my custom fonts are not being recognized because this .ctg.z file is missing.
For the second way, I really couldn't get anywhere, as I don't know much about terminals. I just opened the path and ran the command:
./tcpdf_addfont.php -b -t TrueTypeUnicode -f 32 -i blah.ttf
Yet nothing special happened. The command just opened the tcpdf_addfont.php file in my IDE, and no font files were created in the fonts folder, as is supposed to happen. Something is wrong with this command line: looking inside the PHP file, one can see that if everything goes fine there must at least be some echoing, whether to show an error or to confirm that the files have been created. Yet nothing shows up, and running the command in PowerShell simply opens tcpdf_addfont.php in my PHP IDE. That's the whole story. With this introduction in mind, could you please let me know how I can get the .ctg.z file for my custom fonts, whether via the first method or the second? Any help would be welcome.
THANKS in advance.
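Edit: From what I could piece together, on Windows the script presumably has to be run through the PHP CLI (php tcpdf_addfont.php -b -t TrueTypeUnicode -f 32 -i blah.ttf) rather than as ./tcpdf_addfont.php, which just opens the file with whatever program .php is associated with. Looking inside tcpdf_addfont.php, it also appears to boil down to a call to TCPDF's own converter, so a stand-alone sketch like the following should produce the same three files, assuming TCPDF 6.x where addTTFfont() is a static method of the TCPDF_FONTS class (the include path and font file name are only examples):
<?php
// convert_font.php - run with: php convert_font.php
require_once 'tcpdf/tcpdf.php'; // adjust to the actual TCPDF path

// For a TrueTypeUnicode font this should write blah.php, blah.z and blah.ctg.z
// into TCPDF's fonts/ directory and return the font name to pass to SetFont().
$fontname = TCPDF_FONTS::addTTFfont('blah.ttf', 'TrueTypeUnicode', '', 32);
var_dump($fontname);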

Related

How to figure out what calls my php files

Before I describe the problem, here is a basic run-down of the overall process to give you a clearer picture. Additionally, I am a novice at PHP:
I have a WordPress website that uses cPanel as its hosting control panel
The WordPress website has a form (made by UFB) that has the user upload an image
The image gets directed to the upload folder (/uploads) by using image_upload.php
The image is then downloaded onto a computer, and a program is run which generates numbers about the picture (the number generator is written in Python)
After the numbers are generated, it calls on report.php and template.xlsm
Report.php gets those generated numbers and then puts them into their designated places on the xlsm file
The xlsm file is then converted into a pdf, which is then emailed to the user that submitted the picture.
I inherited all of this code from someone else who wanted me to help them on this project. Here is my problem:
I don't understand how the PHP files are being called. I have Python code ready to run the number generator online; however, I can't do this without figuring out how the PHP files are being called.
I understand what the PHP files do; I just don't understand how they are being called. I tried doing a grep search for both image_upload.php and report.php, but I came up empty. There aren't any other PHP files that seem to do an include(xyz.php), which is supposed to be how PHP files are called. I don't understand what calls image_upload.php to get the pictures moved into the /uploads folder, and I also don't understand what calls report.php to make it run. I tried looking in functions.php, where most of the other PHP files are called, but report.php and image_upload.php aren't there.
Please help me! If any clarification is needed, just comment, and I will try to provide any help I can!
An easy way to get the calling functions (including include and require calls) from any point in your PHP scripts is to get the stack trace:
// Create an exception (without throwing it) just to capture the current call stack
$e = new Exception();
var_dump($e->getTraceAsString());
You can also use a logger instead of var_dump.
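For the record, the same information is available without creating an exception, using PHP's built-in debug_backtrace():
var_dump(debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS));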
Unfortunately, a simple grep for require and include calls won't suffice for a large project like WordPress, due to the use of autoloading:
https://www.php.net/manual/en/language.oop5.autoload.php
While this resource isn't specific to your project, and things could be set up drastically differently in your case, I think the details here may provide enough hints about autoloading to get you started in the right direction toward understanding things in more depth:
https://wordpress.stackexchange.com/questions/212153/using-spl-autoloading-within-wordpress-plugin
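To illustrate why a grep for include/require can come up empty, here is a minimal, self-contained sketch of autoloading (the class and directory names are hypothetical, not taken from your project):
<?php
// No explicit include/require for MyPlugin_Report appears anywhere in the code base;
// the registered autoloader pulls the file in the first time the class is referenced.
spl_autoload_register(function ($class) {
    $file = __DIR__ . '/includes/' . strtolower($class) . '.php';
    if (is_file($file)) {
        require $file;
    }
});

// class_exists() triggers the autoloader; it returns false (instead of a fatal error)
// if includes/myplugin_report.php does not exist.
var_dump(class_exists('MyPlugin_Report'));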

phpdoc does not update my documentation

phpDocumentor v1.4.4
Fedora 24
Command line: phpdoc -d ./docsrc -t ./output
I am running phpDocumentor on Fedora 24 and have successfully generated documentation for my project one time.
I added a docblock to a function and ran phpdoc again, but the output has not been updated. I verified the timestamps of the files: they have been regenerated, but they do not reflect the changes.
I subsequently made numerous changes, and reran phpdoc after each change, but the generated documentation does not update.
I erased all the output files and renamed the directory of the input files; in short, I have done all I can to persuade phpdoc to generate new documentation that reflects the changes to my PHP files, to no avail.
It would seem that phpdoc is caching the output somewhere, but I cannot find where. I searched every path on my disk containing phpdoc, then searched for the word "cache" in each of those paths, but it does not occur.
I tried changing the template with the --template directive but it does not recognise this directive.
I have tried using the --force directive but it does not recognise this directive.
Can someone enlighten me?
Cheers,
Peter
This sounds like one of those times where I would just walk through the process from the beginning:
Am I modifying source in the ./docsrc directory tree? Verify by opening the source member in vi/vim/nano/some-other-editor just to be sure the source has changed.
Have I modified the source using the correct syntax? (Please post some code that shows documentation that isn't being updated; a minimal example of the docblock syntax I mean is sketched after this list.)
Modify documentation in another file with a simple change and see if that simple change appears when I regenerate my documentation.
Am I explicitly --ignore-ing the file or directory I'm expecting to change? (You don't appear to be)
Do I have a phpdoc.xml or phpdoc.dist.xml file with an <ignore> directive? (See the phpDocumentor configuration documentation for details.)
Do I have the necessary permissions to create/update files in the ./output directory?
After I've executed phpdoc -d ./docsrc -t ./output do I see the expected change when using vi/vim/nano/some-other-editor?
Is my browser caching previous versions of the documentation? (I know you've already ruled this out Peter, I'm just trying to make my answer complete)
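For reference, a minimal docblock of the kind phpDocumentor should pick up (the function itself is hypothetical; only the syntax matters):
<?php
/**
 * Calculate the gross price including tax.
 *
 * @param float $price   Net price
 * @param float $taxRate Tax rate as a fraction, e.g. 0.2 for 20%
 *
 * @return float Gross price
 */
function grossPrice($price, $taxRate)
{
    return $price * (1 + $taxRate);
}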
This is EXACTLY one reason why I created PHPFUI/InstaDoc! The problem with most documentation is that it is static. While that is great for libraries that don't change, if you want to document your own code, guess what? It tends to change every day! With InstaDoc, you can see the documentation instantly on your local machine before you even check it in. InstaDoc creates the documentation when you request the page. It is hands down the fastest documentation system out there. Most documentation systems create static pages and brag about how fast they can create the documentation. But guess what? Who cares? What you want is to see the documentation of your current code base right now. It turns out it only takes a few seconds to scan through all the files of the libraries you are using. InstaDoc caches that information, so you only have a long scan (and even then only seconds) the first time, or whenever you add a new library.
Once you have a library scanned, the documentation comes up instantly, since it uses PHP reflection classes to read the file and display the documentation. So the file you just modified is completely, 100% documented. Don't like the comments? Change them, refresh the page. See an issue? Correct it, refresh the page. Notice something could be better? Refresh the page. Want to check out the docs on a PR? Easy, just delete the cached index and refresh the page.
InstaDoc is open source and still young. Check it out and submit comments or PRs if it does not meet your needs, but it is the future of documentation. It will also generate static files for high-volume sites, but the most important feature is that it gives you an instant reflection of your just-edited code, and that is what makes it awesome.

Retrieving images in the uploaded pdf document in php

I am trying to display the images in the PDF document that I uploaded to the server as hyperlinks in PHP, so that if the user clicks on one of them they will get the corresponding document.
Please help me. Thanks in advance!
Use pdfimages, which comes with the open-source xpdf software package (for *nix operating systems). You'll have to call it through exec or the like, then work with the output from PHP. I am not aware of any PHP library that provides this functionality, so you're going to have to experiment.
EDIT
You mentioned that you aren't experienced with PHP... I thought I'd add that this isn't a quick-and-easy type of task; you certainly aren't going to find a bunch of tutorials around the internet for this.
To get started, you'll have to install the xpdf package on your server. There's a lot of different ways to do this depending on which OS you've got.
After that is set up, you'll be using a command line to execute a program on your server; you'll want to capture the output of that command in PHP and work with it further. So initially, you'll want to work out exactly what your command line will look like, as well as what the output looks like and means. Do this from the command line; don't worry about the PHP part yet. In this case, your output is going to be a list of the image files extracted from a given PDF, and your command-line call will look something like "pdfimages mypdf.pdf". Play around, find out what happens.
After you work out exactly what command line you need to send and what the command does, you can focus on the PHP angle. In a nutshell, you want PHP to execute the exact command that you've already worked out. Look at the manual for exec for information on how to call a command line and get the output back. Write your script to make the correct call and show the call's output.
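A minimal sketch of that step (assuming pdfimages is installed and on the server's PATH; the file names and output directory are only examples, and the extracted/ directory must already exist):
<?php
// Extract every image from mypdf.pdf into ./extracted, using the prefix "img"
// (pdfimages writes files such as img-000.ppm, img-001.ppm, ...).
$pdf    = escapeshellarg('mypdf.pdf');
$prefix = escapeshellarg('extracted/img');
exec("pdfimages $pdf $prefix 2>&1", $output, $exitCode);

if ($exitCode !== 0) {
    die('pdfimages failed: ' . implode("\n", $output));
}

// List what was produced so the rest of the script can work with it.
print_r(glob('extracted/img-*'));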
Next, move on to doing something with that output. I presume you'll want to somehow store the extracted images in a web-accessible place, put them in the database, show them to the user, etc. That is the very last stage after you've worked out the initial steps.
Good luck!

Download web page with images and stylesheets and (optionally) E-mailing it

I need to make snapshots of web pages programmatically using PHP and get them into an HTML e-mail.
I tried wget --page-requisites. It downloads everything all right, but it doesn't change the HTML page's source code to point to the downloaded files rather than the online originals. Also, that HTML is of course a long way from being displayed properly in an HTML e-mail.
I am interested to know whether there are ready-made solutions for this. I would already be happy with a solution that takes an HTML snapshot and changes the HTML accordingly. Being able to e-mail it would be the icing on the cake.
I control the web pages being snapshot, so I have the possibility to adjust the content to optimize the results.
My server-side platform is PHP, but with its very liberal settings I can execute things like wget and Perl scripts from within PHP. I do, however, not have root access and cannot install additional packages or programs.
The task is to make a snapshot of a product page each time somebody places an order, so there is documentation about what the page looked like at the time.
wget has a -k (--convert-links) option, which will convert both links and references to embedded content (like images); see the wget documentation on advanced use for examples.
For the e-mail part of your question, I'm sure you can use one of the existing libraries. For example, PHP has a PEAR package (I don't remember the exact name) to handle HTML e-mails; I'm pretty sure both Perl and Python have something similar.
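As a bare-bones sketch of the e-mail step using nothing but PHP's built-in mail() (the addresses and snapshot path are hypothetical; to embed the downloaded images you would need a multipart message, e.g. via PEAR's Mail_Mime or PHPMailer):
<?php
// Send the saved snapshot as the HTML body of an e-mail.
$html = file_get_contents('/tmp/snapshot/product-page.html');

$headers = "MIME-Version: 1.0\r\n"
         . "Content-Type: text/html; charset=UTF-8\r\n"
         . "From: shop@example.com\r\n";

mail('customer@example.com', 'Snapshot of the product page at order time', $html, $headers);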
In this case, you are trying to do website mirroring using wget. The simpler solution is to use HTTrack, a simple command-line tool. It's very powerful and configurable; try it!
HTTrack also comes with a GUI, but you don't need it; everything is possible from the command line (or from PHP).
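If it helps, a minimal HTTrack invocation might look like this (the URL and output directory are only examples; see httrack --help for the full option list):
httrack http://example.com/mypage.html -O ./mirror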

Save full webpage

I've bumped into a problem while working on a project. I want to "crawl" certain websites of interest and save them as a "full web page", including styles and images, in order to build a mirror of them. It has happened to me several times that I bookmarked a website in order to read it later, and after a few days the website was down because it got hacked and the owner didn't have a backup of the database.
Of course, I can read the files with PHP very easily with fopen("http://website.com", "r") or fsockopen(), but the main target is to save the full web pages, so in case a site goes down it can still be available to others, like a "programming time machine" :)
Is there a way to do this without reading and saving each and every link on the page?
Objective-C solutions are also welcome since I'm trying to figure out more of it also.
Thanks!
You actually need to parse the HTML and all CSS files that are referenced, which is NOT easy. However, a fast way to do it is to use an external tool like wget. After installing wget you could run, from the command line:
wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://example.com/mypage.html
This will download mypage.html and all linked CSS files, images, and the images referenced inside the CSS.
After installing wget on your system, you can use PHP's system() function to control wget programmatically.
NOTE: You need at least wget 1.12 to properly save images that are referenced through CSS files.
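A minimal sketch of driving that same command from PHP (the URL is an example; exec() is used here to capture the output, though system() as mentioned above works too):
<?php
// Mirror a single page plus its page requisites, exactly like the command line above.
$url = escapeshellarg('http://example.com/mypage.html');
$cmd = "wget --no-parent --timestamping --convert-links --page-requisites "
     . "--no-directories --no-host-directories -erobots=off $url 2>&1";

exec($cmd, $output, $exitCode);

if ($exitCode !== 0) {
    echo "wget failed:\n" . implode("\n", $output);
}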
Is there a way to do this without reading and saving each and every link on the page?
Short answer: No.
Longer answer: if you want to save every page in a website, you're going to have to read every page in a website with something on some level.
It's probably worth looking into the Linux app wget, which may do something like what you want.
One word of warning: sites often have links out to other sites, which have links to other sites, and so on. Make sure you put some kind of "stop if different domain" condition in your spider!
If you prefer an Objective-C solution, you could use the WebArchive class from WebKit.
It provides a public API that allows you to store whole web pages as a .webarchive file (like Safari does when you save a webpage).
Some nice features of the webarchive format:
completely self-contained (incl. CSS, scripts, images)
QuickLook support
easy to decompose
Whatever app is going to do the work (your code, or code that you find) is going to have to do exactly that: download a page, parse it for references to external resources and links to other pages, and then download all of that stuff. That's how the web works.
But rather than doing the heavy lifting yourself, why not check out curl and wget? They're standard on most Unix-like OSes, and do pretty much exactly what you want. For that matter, your browser probably does, too, at least on a single page basis (though it'd also be harder to schedule that).
I'm not sure if you need a programming solution to 'crawl websites' or personally need to save websites for offline viewing, but if it's the latter, there's a great app for Windows, Teleport Pro, and SiteCrawler for Mac.
You can use IDM (Internet Download Manager) for downloading full webpages; there's also HTTrack.
