PHP un-editable PDF/file export option

I am developing an application in the Kohana PHP framework that assesses performance. The end result of the process is a webpage listing the overall scoring and a color coded list of divs and results.
The original idea was to have the option to save this as a non-editable PDF file and email it to the user. After further research I have found this is not as straightforward as I had hoped.
The best solution seemed to be installing the Unix application wkhtmltopdf, but as the destination is shared hosting I am unable to install it on the server.
My question is: what is the best option for saving a non-editable review of the assessment for the user?
Thank you for your help with this.

I guess the only way to generate a snapshot, or "review" as you call it, is to store it on the server side and only grant access via a read-only protocol, so basically by offering it as a web page.
Still, everyone can save and modify the markup. But that is the case for every file you generate, regardless of the type of file. OK, maybe except DRM-infected files. But you don't want to do that, trust me.
Oh, and you could also print the files. Printouts are pretty hard to edit. Though even that is not impossible...
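If you do go the "serve it as a read-only web page" route, a minimal sketch in plain PHP could look like the following; the storage path and function names are made up for illustration, and the rendered HTML would come from your existing Kohana view:

<?php
// Hypothetical helpers: store the rendered report once, then only ever serve it back read-only.

function save_report_snapshot($assessment_id, $html)
{
    $dir = '/home/youraccount/report_snapshots';   // example path, ideally outside the web root
    if (!is_dir($dir)) {
        mkdir($dir, 0755, true);
    }
    $file = $dir . '/' . (int) $assessment_id . '.html';
    file_put_contents($file, $html);
    chmod($file, 0444);                            // read-only on disk
    return $file;
}

function output_report_snapshot($assessment_id)
{
    $file = '/home/youraccount/report_snapshots/' . (int) $assessment_id . '.html';
    if (!is_file($file)) {
        header('HTTP/1.1 404 Not Found');
        exit;
    }
    header('Content-Type: text/html; charset=utf-8');
    readfile($file);
}

Emailing the user a link to that stored page, rather than a PDF attachment, then sidesteps the PDF-generation problem on shared hosting entirely.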

I found a PHP version that is pre-built as a Kohana module: github.com/ryross/pdfview


Work as a group on a webapp using CVS

My friend and I are in different countries and have been developing a LAMP web app for several weeks. All this time we have been sharing source code over FTP, and the PHP files have become messy this way. I have heard about CVS and have been reading about it, but I still cannot figure out how exactly it works.
How could CVS help me with this?
I would much appreciate it if someone could point me in the right direction.
OK, here comes a very simple explanation of version control (VCS). After using it for a while you'll laugh at this explanation, but for now I guess it should help you.
What are the problems with your current FTP file sharing?
If two people upload the same file, one of the copies gets overwritten
After uploading, you can only see who changed the file last, but not where it was changed
You can't provide information about the changes (apart from putting comments in the files themselves)
You can't go back in time; once you upload, the old files are lost
With version control you can solve these problems:
Files either get merged into one new file or get overwritten, but the old file is still stored so you can roll back if needed
You can see who made which changes when
You can provide comments when you "upload" your files about what got changed (without storing these comments inside the files)
You can always go back in time and restore old "uploads"/changes
You can also create small side projects by branching. This basically lets you split your project into smaller pieces and work on them separately.
So at the beginning of your work you usually bring your local sources up to date by pulling in all the changes that have been made. Then you do your work, and afterwards you update the online version with your changes so that other developers can pull them and continue working on them or integrate them into their own changes.
How to implement this sorcery?
You could google for "how to set up git" or "how to set up svn", but as a beginner I would recommend using a hosted service. Here is a list of services: https://git.wiki.kernel.org/index.php/GitHosting
My personal preference for closed-source projects with a low number of developers is https://bitbucket.org/. Some of the services also provide a small wiki page and a bug-tracking tool. If you want to use Bitbucket, here is the very easy to understand documentation: https://confluence.atlassian.com/display/BITBUCKET/Bitbucket+101
Important to know:
Soon you'll learn that you don't really "upload files", as I've written multiple times, but rather change lines of code; and you don't upload them, you "commit" them.
While CVS could help, few developers would recommend it for new projects. It has largely been replaced by Subversion (SVN), but even that is falling out of favour. Many projects these days use distributed version control with Git or Mercurial (hg).
A good introduction to git can be found in the free online book Pro Git.
In any case, these things are all version control systems. They help to synchronize the code between developers, and also let you track
who changed code,
when it was changed,
why it was changed, and
how it was changed.
This is very important on projects with multiple developers, but there is value in using such a system even when working on your own.

How can I have my CMS upgrade itself?

I've built a CMS (using the Codeigniter PHP framework) that we use for all our clients. I'm constantly tweaking it, and it gets hard to keep track of which clients have which version. We really want everyone to always have the latest version.
I've written it in a way so that updates and upgrades generally only involve uploading the new version via FTP and deleting the old one - I just don't touch the /uploads or /themes directories (everything specific to the site is either there or in the database). Everything is a module, and each module (as well as the core CMS) has its own version number, plus an install and uninstall script for each version, but I have to manually FTP the files first and then run the module's install script from the control panel. I wrote and will continue to write everything personally, so I have complete control over the code.
What I'd like is to be able to upgrade the core CMS and individual modules from the control panel of the CMS itself. This is a "CMS for Dummies", so asking people to FTP or do anything remotely technical is out of the question. I'm envisioning something like a message popping up on login, or in the list of installed modules, like "New version available".
I'm confident that I can sort out most of the technical details once I get this going, but I'm not sure which direction to take. I can think of ways to attempt this with cURL (to authenticate and pull source files from somewhere on our server) and PHP's native filesystem functions like unlink(), file_put_contents(), etc. to perform the actual updates to files, or to stuff the "old" CMS in a backup directory and set up the new one, but even as I'm writing this post, it sounds like a recipe for disaster.
I don't use git/github or anything, but I have the feeling something like that could help? How should (or shouldn't) I approach this?
There are a bunch of ways to do this, but the least complicated is just to have Git installed on your client servers and set up a cron job that runs a git pull origin master every now and then. If your application uses migrations it should be easy as hell to do.
You can do this as it sounds like you are in full control of your clients. For something like PyroCMS or PancakeApp that doesn't work because anyone can have it on any server and we have to be a little smarter. We just download a ZIP which contains all changed files and a list of deleted files, which means the file system is updated nicely.
We have a list of installations which we can ping with an HTTP request so the system knows to run the download, or the client can hit "Upgrade" when they log in.
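To make that concrete, here is a rough sketch of the ZIP-based approach in plain PHP. The update URL and manifest keys (zip_url, deleted_files) are invented placeholders, not anything PyroCMS actually uses; it needs the Zip extension and allow_url_fopen (or swap in cURL):

<?php
// Sketch only: fetch a ZIP of changed files plus a list of deleted files and apply it.
function cms_self_update($current_version, $install_path)
{
    // 1. Ask the central server for the latest package (hypothetical endpoint).
    $manifest = json_decode(file_get_contents(
        'https://updates.example.com/cms/latest.json?from=' . urlencode($current_version)
    ), true);

    if (empty($manifest['zip_url'])) {
        return false;                                   // already up to date
    }

    // 2. Download the ZIP of changed files to a temporary location.
    $tmp_zip = tempnam(sys_get_temp_dir(), 'cmsupd');
    copy($manifest['zip_url'], $tmp_zip);

    // 3. Extract it over the installation (back the old tree up first in real code).
    $zip = new ZipArchive();
    if ($zip->open($tmp_zip) === true) {
        $zip->extractTo($install_path);
        $zip->close();
    }
    unlink($tmp_zip);

    // 4. Remove files the new release no longer ships, using the manifest's list.
    $deleted = isset($manifest['deleted_files']) ? (array) $manifest['deleted_files'] : array();
    foreach ($deleted as $relative) {
        $target = realpath($install_path . '/' . $relative);
        if ($target !== false && strpos($target, realpath($install_path)) === 0) {
            unlink($target);
        }
    }

    return true;   // then trigger your existing module install scripts
}

The control panel's "Upgrade" button (or a cron hit) would simply call cms_self_update() with the installed version and the CMS root path.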
You can use Git from within your CMS via Glip, without installing Git on the server; the cron job would then just hit a URL on your own system.
@Obsidian Wouldn't a DNS poisoning attack also compromise most of the methods mentioned in this thread?
Additionally, SSH could be compromised by a man-in-the-middle attack as well.
While total paranoia is a good thing when dealing with security, WordPress being a GPL codebase would make it easy to detect an unauthorized change in your code if such an attack did occur, so resolution would be easy.
SSH and Git do sound like a good solution, but what is the intended audience?
Have you taken a look at how WordPress does it?
That would seem to do what you want.
Check this page for a description of how it works.
http://tech.ipstenu.org/2011/how-the-wordpress-upgrade-works/

Convert webpage (HTML) to thumbnail/preview for output via PHP file

So I've done a lot of searching on here and on Google and haven't seemed to come across anything that is known to work or that I think will accomplish what I'm looking to do.
Basically, right now I've been using www.thumboo.com to create thumbnails via their API, however they do not support SSL, and where I'm creating the preview is inside a customer area where SSL is required.
So I'm looking to either develop something myself or find something already developed to use. I would like to create a simple "screenshot" or "thumbnail" of a website address on the fly; not sure if I want to cache it yet or not, but either way doesn't matter.
Does anybody know of any scripts out there that can accomplish this? I'm not looking to get a screenshot of the "entire" page, just what a "browser" would initially see without scrolling down, just like how it works on www.thumboo.com.
I'm not too concerned with the scripting language, but I plan on outputting the file using PHP, by pulling the file from somewhere or activating the script with Java or PHP.
Does anybody know of any other thumbnail services that may have an API that works with SSL, or any scripts that are still developed for this purpose? Everything I have found has been outdated, which makes me wonder if there is some easy way to do it now with some type of function or module I may need to add to PHP.
I am the server admin, so I can customize PHP and the server as I need to get it to work.
Thanks ahead of time!
http://www.thumbalizr.com/apitools.php
I have never used it, but it seems like this could work for you.
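Whichever service you end up with, one way around the SSL problem from the question is to fetch and cache the thumbnail server-side and serve it from your own HTTPS domain. A rough sketch; the service endpoint and its query parameters below are placeholders, not a real API:

<?php
// Sketch of proxying a third-party thumbnail service through your own SSL site.
$url       = $_GET['url'];                               // page to thumbnail; validate/whitelist in real code!
$cache_dir = __DIR__ . '/cache/thumbs';
$cache_key = $cache_dir . '/' . md5($url) . '.png';

if (!is_dir($cache_dir)) {
    mkdir($cache_dir, 0755, true);
}

// Re-use a cached copy for a day so the service isn't hit on every page view.
if (!is_file($cache_key) || filemtime($cache_key) < time() - 86400) {
    $ch = curl_init('http://thumbnail-service.example.com/api?width=320&url=' . urlencode($url));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);
    $image = curl_exec($ch);
    curl_close($ch);

    if ($image !== false) {
        file_put_contents($cache_key, $image);
    }
}

if (!is_file($cache_key)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

// Because the image is now served from your own HTTPS domain,
// the customer area never triggers a mixed-content warning.
header('Content-Type: image/png');
readfile($cache_key);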

Is there a native PHP class/library that supports fill_form with xfdf?

First, I have researched this on SO and Google already. In fact, based on this, I'm pretty skeptical that what I need exists. I'm almost at the point where I might give up the next 6 months of my life to write something from scratch, since it seems obvious that everyone wants a server-side form filler native to PHP.
Okay, sorry for the rant...
Here is my situation:
1. I need to deliver pre-filled PDF forms to my users.
Because the form is behind cookie-based authentication, Adobe Reader won't open the xfdf, but instead passes the task along to the browser. This is an issue for Linux users and other users who don't use Adobe Reader as their PDF reader.
Oh, and Adobe hasn't written a 64-bit plugin for Snow Leopard yet, so a third of my users have to switch Safari to run in 32-bit mode every time they print this form.
Given the above, delivery of the PDF already filled in so that it can be printed in Preview, Foxit, etc, is steadily becoming the most obvious solution.
2. I can't use pdftk
This is a bit silly, but since pdftk is ancient and requires gcj to compile the outdated version of itext that it uses, I can't install pdftk on my host machine. The server doesn't have gcj, and I'd rather avoid requesting that it be installed for this one case.
Also, even if I could install pdftk, I can only do passthru() and other command-line operations via CGI, which I'd also like to avoid.
3. My host currently does not have PDFLib installed,
so I can't use the PDF extension in PHP. Not that it offers this feature, but I thought maybe it could be used to add the FDF dictionary to the generic form, the same way that iText/pdftk does.
I thought all was lost until I learned about TCPDF and FPDF. It looks like TCPDF has the stronger track record, and more features, but I can't find anything on Google or their documentation about server-side form filling.
If this isn't already obvious, I don't need a library for generating FDFs or XFDFs. I already have that down. But it is proving not to be enough for my users who simply want the combined product.
So I guess my questions are:
Is there a pre-built method for outputting a new PDF that is the generic form with the data from an XFDF filled in?
If not, is there a work around that doesn't entail simply writing in the values on top of the form fields (as opposed to within the fields)? Writing the values on top of the fields means that the javascript won't validate the values and that I'd need to manually change the stream values every time.
If no to both, is there a port of pdftk I haven't found yet that works with PHP and doesn't simply call the binary via command line?
Any help with this is greatly appreciated. If anyone wants to volunteer to help me make such a library, let me know, I'm already hard at work trying to learn PDF syntax, just in case.
TCPDF allows delivering forms pre-filled. Please check out their example at http://www.tcpdf.org/examples/example_014.phps, especially the date field and radio buttons in it.
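Since you already generate the XFDF, a rough way to wire it into TCPDF might be to read the field values out of the XFDF and rebuild the form fields with those values (TCPDF won't fill an existing PDF, so the layout below is recreated and purely illustrative). The 'v'/'dv' option keys for pre-setting a value are an assumption; verify against example 14 and your TCPDF version's TextField() documentation:

<?php
// Sketch only: XFDF values in, a TCPDF AcroForm with those values out.
require_once 'tcpdf/tcpdf.php';

// 1. Pull name/value pairs out of the XFDF (standard Adobe XFDF namespace).
$xfdf = simplexml_load_file('filled.xfdf');
$xfdf->registerXPathNamespace('x', 'http://ns.adobe.com/xfdf/');
$values = array();
foreach ($xfdf->xpath('//x:field') as $field) {
    $name  = (string) $field['name'];
    $value = (string) $field->children('http://ns.adobe.com/xfdf/')->value;
    $values[$name] = $value;
}

// 2. Recreate the form fields with TCPDF, pre-set to those values.
$pdf = new TCPDF();
$pdf->AddPage();
$pdf->SetFont('helvetica', '', 10);

$y = 30;
foreach ($values as $name => $value) {
    $pdf->SetXY(20, $y);
    $pdf->Cell(45, 6, $name . ':');
    // 'v' (current value) and 'dv' (default value) are assumed option keys;
    // check your TCPDF version if the field comes out empty.
    $pdf->TextField($name, 90, 6, array(), array('v' => $value, 'dv' => $value), 70, $y);
    $y += 10;
}

$pdf->Output('prefilled.pdf', 'D');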

Save full webpage

I've bumped into a problem while working on a project. I want to "crawl" certain websites of interest and save them as "full web pages", including styles and images, in order to build a mirror for them. It has happened to me several times that I bookmarked a website in order to read it later, and after a few days the website was down because it got hacked and the owner didn't have a backup of the database.
Of course, I can read the files with PHP very easily with fopen("http://website.com", "r") or fsockopen(), but the main target is to save the full web pages, so in case one goes down it can still be available to others, like a "programming time machine" :)
Is there a way to do this without read and save each and every link on the page?
Objective-C solutions are also welcome since I'm trying to figure out more of it also.
Thanks!
You actually need to parse the HTML and all CSS files that are referenced, which is NOT easy. However, a fast way to do it is to use an external tool like wget. After installing wget you could run the following from the command line:
wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://example.com/mypage.html
This will download mypage.html and all linked CSS files, images, and the images referenced inside the CSS.
After installing wget on your system, you could use PHP's system() function to control wget programmatically.
NOTE: You need at least wget 1.12 to properly save images that are referenced through CSS files.
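For completeness, a minimal PHP wrapper around that wget call might look like this; the save directory is an arbitrary example, and escapeshellarg() keeps user-supplied URLs from breaking out of the command:

<?php
// Sketch: mirror a single page (plus its requisites) into $save_dir using wget.
function mirror_page($url, $save_dir)
{
    if (!is_dir($save_dir)) {
        mkdir($save_dir, 0755, true);
    }

    $cmd = 'cd ' . escapeshellarg($save_dir) . ' && '
         . 'wget --no-parent --timestamping --convert-links --page-requisites'
         . ' --no-directories --no-host-directories -erobots=off '
         . escapeshellarg($url) . ' 2>&1';

    exec($cmd, $output, $exit_code);

    return $exit_code === 0;   // true when wget finished without errors
}

// Example:
// mirror_page('http://example.com/mypage.html', '/var/www/mirrors/example');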
Is there a way to do this without read and save each and every link on the page?
Short answer: No.
Longer answer: if you want to save every page in a website, you're going to have to read every page in a website with something on some level.
It's probably worth looking into the Linux app wget, which may do something like what you want.
One word of warning - sites often have links out to other sites, which have links to other sites and so on. Make sure you put some kind of "stop if different domain" condition in your spider!
If you prefer an Objective-C solution, you could use the WebArchive class from WebKit.
It provides a public API that allows you to store whole web pages as a .webarchive file (like Safari does when you save a webpage).
Some nice features of the webarchive format:
completely self-contained (incl. CSS, scripts, images)
QuickLook support
Easy to decompose
Whatever app is going to do the work (your code, or code that you find) is going to have to do exactly that: download a page, parse it for references to external resources and links to other pages, and then download all of that stuff. That's how the web works.
But rather than doing the heavy lifting yourself, why not check out curl and wget? They're standard on most Unix-like OSes, and do pretty much exactly what you want. For that matter, your browser probably does, too, at least on a single page basis (though it'd also be harder to schedule that).
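If you do end up writing the crawler yourself, the "parse it for references" step could start out something like this sketch, which pulls one page and collects the URLs of its images, stylesheets, scripts and outgoing links for a download queue:

<?php
// Sketch: collect resource and link URLs from a single page with DOMDocument.
$page_url = 'http://example.com/';
$html     = file_get_contents($page_url);

$doc = new DOMDocument();
libxml_use_internal_errors(true);     // real-world HTML is rarely valid XML
$doc->loadHTML($html);
libxml_clear_errors();

$resources = array();
foreach ($doc->getElementsByTagName('img') as $img) {
    $resources[] = $img->getAttribute('src');
}
foreach ($doc->getElementsByTagName('link') as $link) {
    $resources[] = $link->getAttribute('href');   // stylesheets, favicons, ...
}
foreach ($doc->getElementsByTagName('script') as $script) {
    if ($script->getAttribute('src') !== '') {
        $resources[] = $script->getAttribute('src');
    }
}

$links = array();
foreach ($doc->getElementsByTagName('a') as $a) {
    $links[] = $a->getAttribute('href');          // candidate pages to crawl next
}

// Remember to resolve relative URLs against $page_url and to skip anything
// on a different domain before queuing it for download.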
I'm not sure if you need a programming solution to "crawl websites" or personally need to save websites for offline viewing, but if it's the latter there are great apps for this: Teleport Pro for Windows and SiteCrawler for Mac.
You can use IDM (Internet Download Manager) for downloading full webpages; there's also HTTrack.
