I have overwritten my file (a web document with the extension .php) with an old one. I tried to access the cached version from Google and found it at http://webcache.googleusercontent.com/search?q=cache:http://minnesotataxiandcarservice.com/testimonials.php&strip=0, but when I view its source code, all I can see is the HTML, as the PHP part is not visible. I am looking for a way to retrieve the whole document with its original contents, including the PHP code. If there is any way to do so, please mention it.
You can't.
Next time, you should be using a version control system, like Git.
phpDocumentor v1.4.4
Fedora 24
Command line: phpdoc -d ./docsrc -t ./output
I am running phpDocumentor on Fedora 24 and have successfully generated documentation for my project once.
I added a docblock to a function and ran phpdoc again, but the output has not been updated. I verified the timestamps of the files: they have been regenerated, but they do not reflect the changes.
I subsequently made numerous changes, and reran phpdoc after each change, but the generated documentation does not update.
I erased all the output files and renamed the directory of the input files; in short, I have done all I can to persuade phpdoc to generate new documentation that reflects the changes to my PHP files, to no avail.
It would seem that phpdoc is caching the output somewhere, but I cannot find where. I searched every path on my disk containing phpdoc, then searched for the word "cache" in each of those paths, but it does not occur.
I tried changing the template with the --template option, but phpdoc does not recognise it.
I also tried the --force option, but that is not recognised either.
Can someone enlighten me?
Cheers,
Peter
This sounds like one of those times where I would just walk through the process from the beginning:
Am I modifying source in the ./docsrc directory tree? Verify by opening the source member in vi/vim/nano/some-other-editor just to be sure the source has changed.
Have I modified the source using the correct syntax? (Please post some code that shows documentation that isn't being updated)
Modify documentation in another file with a simple change and see if that simple change appears when I regenerate my documentation.
Am I explicitly --ignore-ing the file or directory I'm expecting to change? (You don't appear to be)
Do I have a phpdoc.xml or phpdoc.dist.xml file with an <ignore> directive? (See the sketch after this list.)
Do I have the necessary permissions to create/update files in the ./output directory?
After I've executed phpdoc -d ./docsrc -t ./output do I see the expected change when using vi/vim/nano/some-other-editor?
Is my browser caching previous versions of the documentation? (I know you've already ruled this out Peter, I'm just trying to make my answer complete)
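For reference, a rough sketch of what such a config file can look like; this layout follows phpDocumentor 2's phpdoc.dist.xml, and element names may differ in other versions, so treat it as an illustration only:
<?xml version="1.0" encoding="UTF-8" ?>
<!-- Rough sketch of a phpDocumentor 2 phpdoc.dist.xml. Check the docs
     for your version, since the layout has changed between releases. -->
<phpdoc>
    <parser>
        <target>output</target>
    </parser>
    <files>
        <directory>docsrc</directory>
        <!-- Anything matched here is silently skipped. -->
        <ignore>*/tests/*</ignore>
    </files>
</phpdoc>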
This is EXACTLY one reason why I created PHPFUI/InstaDoc! The problem with most documentation is that it is static. While that is great for libraries that don't change, if you want to document your own code, guess what? It tends to change every day! With InstaDoc, you can see the documentation instantly on your local machine before you even check it in. InstaDoc creates the documentation when you request the page. It is hands down the fastest documentation system out there. Most documentation systems create static pages and brag about how fast they can create the documentation. But guess what? Who cares? What you want is to see the documentation of your current code base right now. It turns out it only takes a few seconds to scan through all the files of the libraries you are using. InstaDoc caches that information, so you only have a long scan (and even that takes just seconds) the first time, or whenever you add a new library.
Once you have a library scanned, the documentation comes up instantly, since InstaDoc uses PHP's reflection classes to read the file and display the documentation. So that file you just modified? It is completely, 100% documented. Don't like the comments? Change them and refresh the page. See an issue? Correct it and refresh the page. Notice something could be better? Refresh the page. Want to check out the docs on a PR? Easy: just delete the cached index and refresh the page.
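To be clear, the following is not InstaDoc's actual code, just a minimal sketch of the reflection mechanism described above; MyClass and its file path are hypothetical:
<?php
// Minimal illustration of reading docblocks via PHP's reflection API.
require 'src/MyClass.php'; // hypothetical class file

$class = new ReflectionClass('MyClass');
echo $class->getDocComment(), "\n"; // the class-level docblock

foreach ($class->getMethods() as $method) {
    // Each method's docblock is read live, with no pre-generated output.
    echo $method->getName(), ":\n", $method->getDocComment(), "\n";
}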
InstaDoc is open source and still young. Check it out and submit comments or PRs if it does not meet your needs, but it is the future of documentation. It will also generate static files for high-volume sites, but the most important feature is that it gives you an instant reflection of your just-edited code, and that is what makes it awesome.
What I have to be able to do is copy an entire folder from a remote source to the local server executing the PHP file. I can do that fine, except for one problem: PHP files. Obviously, I can't just go copying the source code of a PHP file using regular requests, as the server will interpret the code and give me the output instead. What I have to have is the code. Is there any way to do that?
Hope I'm clear enough; my problem isn't very hard to understand, I just want to know if it's actually possible. If not, maybe someone has an idea of an optimal way of storing the source code alongside the executable PHP? I was thinking of simply saving it as text when I'm done developing, but if there is a way to do it completely automatically, that would be much more awesome. Best case scenario, I can just copy the folder with the PHP files and then execute it locally. I need to know if that's even possible. Worst case scenario, I have to duplicate files in order to copy the text versions of them to the local server and discard the PHP ones, since the executed files are not relevant to my program. I don't want that, but I just don't know if PHP is able to do what I want.
Edit: sorry for not specifying! I am the admin of the remote server and have total access. I can, and was expecting to, make a PHP file on the server itself. That's the kind of system I have at the moment: I zip a folder and return it when requested from my local source. My only problem is the PHP being executed.
You cannot do that unless you:
have FTP access (or anything else that is not HTTP-based)
have access to a script on the server that is designed to return the source code of a given file (see the sketch below)
use an exploit such as the ?-s bug in the CGI SAPI
So you are most likely out of luck.
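If you do control the server (as your edit says), the second option is easy to set up. A minimal sketch of such a script; the whitelist and filenames are assumptions for illustration:
<?php
// Hypothetical "show source" endpoint. Only whitelisted files are
// exposed, to avoid opening an arbitrary-file-read hole.
$allowed = array('index.php', 'testimonials.php'); // adjust to your needs

$file = isset($_GET['file']) ? basename($_GET['file']) : '';
if (!in_array($file, $allowed, true)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

header('Content-Type: text/plain; charset=utf-8');
readfile(__DIR__ . '/' . $file); // returns the raw source, not the output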
Is there any tool out there which could identify the unused files in a code base?
We have a big code base (PHP, HTML, CSS, JS files), and I want to be able to remove the files that are no longer needed. Any help would be appreciated.
I'm guessing deleting files and running your PHPUnit tests is a non-starter.
If your files are not already in a version-control system - add them. Having the files in a version control system (such as svn or git) is crucial to allow you to recover from deleting any files that you thought were not being used but you later find out were.
Then, you can delete anything you think may not be being used, and if it doesn't affect the running of your application you can conclude that the files aren't used. If adverse effects show up - you can restore them from your repository with ease.
The above is most appropriate (probably) for frontend files (CSS, JS, images). Any deleted files that are still requested will show up in your webserver error log, giving you a quick reference for files that no longer exist and need to be restored.
For your PHP files, that's quite a bit more tricky. How did you arrive at a position where you have PHP files which you aren't using? Anyway, you could, for example:
Use xdebug
Enable profiling
Use append mode (one profile)
Use all the functions of your application
and you would then have a profile which includes all files you loaded. Scanning the generated profile for each php file in your codebase will give you some indication of which files you didn't use.
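With Xdebug 2, for example, those steps translate to php.ini settings along these lines (directive names are from Xdebug 2; Xdebug 3 renamed them to xdebug.mode=profile and friends):
; Enable the profiler for every request
xdebug.profiler_enable = 1
; Append to one cumulative profile instead of one file per request
xdebug.profiler_append = 1
xdebug.profiler_output_dir = /tmp/profiles
; A fixed name is needed so appending goes to a single file
xdebug.profiler_output_name = cachegrind.out.aggregate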
If you are only looking for unused files, don't be tempted to use code coverage analysis - it is very intensive and not the level of detail you're asking for.
A slightly less risky way would be to log whenever a file is loaded, e.g. by putting this at line one of each file:
<?php file_put_contents('/some/location/fileaccess.log', __FILE__ . PHP_EOL, FILE_APPEND); ?>
and simply leave your application to be used for a while (days, weeks). Thereafter, just scan that log: for any file that is named, remove the above line of code; for any file that is not, delete it (preferably after searching for the filename in your whole source code and confirming it appears nowhere).
OR: you could use a shutdown function which dumps the response of get_included_files() to a log file. This would allow you to achieve the same without editing all php files in your source tree.
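A minimal sketch of that shutdown-function approach; the log path is an assumption, and you would load this once per request, e.g. via auto_prepend_file or your front controller:
<?php
// At shutdown, append every file that was included during this request.
register_shutdown_function(function () {
    $log = '/some/location/fileaccess.log'; // assumed path, adjust as needed
    file_put_contents(
        $log,
        implode(PHP_EOL, get_included_files()) . PHP_EOL,
        FILE_APPEND | LOCK_EX
    );
});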
Caveat: Be careful deleting your php files. Whereas a missing css/js/image will probably mean your application still works, a missing php file of course will have rather more impact :).
If it is in Git, why not just do a git rm <file name>? That deletes the local file and removes it from that branch.
I agree with everything said by @AD7six.
What you might like to try with PHP is to log the use of the files in some way (logging to a flat file or database).
This technique does not have to be in place for long; you can do it with an include or require_once at the top of each file.
That technique also works for JavaScript functions: you can just print each function to the console and then unit test your site. You can probably clean out a lot of redundant code that way.
The rest is not so easy, but version tracking is the way to go.
I want to write multiple image files to an .odt file. I will be specifying a directory, and the script will take it from there through a loop. But where do I start? I have never done anything like this before!
I found this Python code, which can convert HTML to ODT, so we could parse an HTML file first and then call this one. But there is no reference on how to use it.
html2odt code
At last I found a PHP way to write ODT directly! It's well documented:
http://www.odtphp.com/
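A rough sketch of the directory-loop idea with odtphp; the template placeholders {image1}, {image2}, ... and all paths are assumptions, and the method names follow odtphp's examples, so double-check them against its docs (segments are the documented way to handle a truly variable number of images):
<?php
require_once 'odf.php'; // odtphp's main class file

// Assumes template.odt contains {image1}, {image2}, ... placeholders
// matching the number of images found in the directory.
$odf = new odf('template.odt');

$i = 1;
foreach (glob('/path/to/images/*.{jpg,png,gif}', GLOB_BRACE) as $path) {
    $odf->setImage('image' . $i++, $path); // fills the {imageN} placeholder
}

$odf->saveToDisk('result.odt');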
I have also written a complete, practical solution in PHP. You can upload multiple images and get the ODT document generated.
The code is hosted at http://code.google.com/p/images2odt/
The first post is done here.
For anyone wanting to use the Python code: you will need a Python interpreter, version 2.6 (it might also work with version 2.7). It's mainly used on Linux, but there are Windows and Mac versions as well. You will also need the files listed in the from and import statements; these files are in some of the other folders, as this looks like part of a much bigger Linux package. One last thing: Python scripts usually take their arguments from the command line.
Additional info:
I looked over the setup.py file, and it told me that this is an API library for open documents called odfpy, version 0.9.2. The link it has for the documentation is broken. A Google search for odfpy turned up a place to download a more recent version (0.9.4) as a tarball here:
http://pypi.python.org/pypi/odfpy
The documentation can be found here in an Open Office document:
https://joinup.ec.europa.eu/software/odfpy/document/api-odfpyodt
I've bumped into a problem while working on a project. I want to "crawl" certain websites of interest and save them as a "full web page", including styles and images, in order to build a mirror of them. It has happened to me several times that I bookmarked a website in order to read it later, and after a few days the website was down because it got hacked and the owner didn't have a backup of the database.
Of course, I can read the files with PHP very easily with fopen("http://website.com", "r") or fsockopen(), but the main target is to save the full web pages, so in case a site goes down it can still be available to others, like a "programming time machine" :)
Is there a way to do this without read and save each and every link on the page?
Objective-C solutions are also welcome since I'm trying to figure out more of it also.
Thanks!
You actually need to parse the HTML and all CSS files that are referenced, which is NOT easy. However, a fast way to do it is to use an external tool like wget. After installing wget, you could run from the command line:
wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://example.com/mypage.html
This will download mypage.html and all linked CSS files, images, and any images referenced inside the CSS.
After installing wget on your system, you could use PHP's system() function to control wget programmatically.
NOTE: You need at least wget 1.12 to properly save images that are referenced through CSS files.
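A minimal sketch of driving wget from PHP; the URL is a placeholder, and escapeshellarg() guards against shell injection:
<?php
$url = 'http://example.com/mypage.html'; // placeholder

// Mirror the command line shown above.
$cmd = 'wget --no-parent --timestamping --convert-links --page-requisites'
     . ' --no-directories --no-host-directories -erobots=off '
     . escapeshellarg($url);

system($cmd, $exitCode);
if ($exitCode !== 0) {
    echo "wget failed with exit code $exitCode\n";
}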
Is there a way to do this without read and save each and every link on the page?
Short answer: No.
Longer answer: if you want to save every page in a website, you're going to have to read every page in a website with something on some level.
It's probably worth looking into the Linux app wget, which may do something like what you want.
One word of warning: sites often have links out to other sites, which have links to other sites, and so on. Make sure you put some kind of "stop if different domain" condition in your spider!
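In PHP, that condition can be as simple as comparing hosts with parse_url(); the seed URL here is a placeholder:
<?php
// Skip any link whose host differs from the site we started crawling.
$seedHost = parse_url('http://example.com/', PHP_URL_HOST); // starting site

function sameDomain($url, $seedHost)
{
    $host = parse_url($url, PHP_URL_HOST);
    // Relative URLs have no host component and stay on the current site.
    return $host === null || strcasecmp($host, $seedHost) === 0;
}

var_dump(sameDomain('http://example.com/about.html', $seedHost)); // true
var_dump(sameDomain('http://other-site.com/page.html', $seedHost)); // false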
If you prefer an Objective-C solution, you could use the WebArchive class from Webkit.
It provides a public API that allows you to store whole web pages as a .webarchive file (like Safari does when you save a webpage).
Some nice features of the webarchive format:
completely self-contained (incl. CSS, scripts, images)
QuickLook support
Easy to decompose
Whatever app is going to do the work (your code, or code that you find) is going to have to do exactly that: download a page, parse it for references to external resources and links to other pages, and then download all of that stuff. That's how the web works.
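For the parsing step, a rough PHP sketch using DOMDocument; the URL is a placeholder, and a real crawler would also need to resolve relative URLs against the page's base:
<?php
// Fetch one page and list the external resources and links it references.
$url  = 'http://example.com/mypage.html'; // placeholder
$html = file_get_contents($url); // requires allow_url_fopen

$doc = new DOMDocument();
@$doc->loadHTML($html); // suppress warnings from real-world markup

$found = array();
foreach ($doc->getElementsByTagName('img') as $img) {
    $found[] = $img->getAttribute('src'); // images to download
}
foreach ($doc->getElementsByTagName('script') as $script) {
    if ($script->getAttribute('src') !== '') {
        $found[] = $script->getAttribute('src'); // external scripts
    }
}
foreach ($doc->getElementsByTagName('link') as $link) {
    if ($link->getAttribute('rel') === 'stylesheet') {
        $found[] = $link->getAttribute('href'); // stylesheets
    }
}
foreach ($doc->getElementsByTagName('a') as $a) {
    $found[] = $a->getAttribute('href'); // links for the crawl queue
}

print_r(array_unique($found));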
But rather than doing the heavy lifting yourself, why not check out curl and wget? They're standard on most Unix-like OSes, and do pretty much exactly what you want. For that matter, your browser probably does, too, at least on a single page basis (though it'd also be harder to schedule that).
I'm not sure if you need a programming solution to crawl websites or personally need to save websites for offline viewing, but if it's the latter, there's a great app for Windows, Teleport Pro, and SiteCrawler for Mac.
You can use IDM (Internet Download Manager) for downloading full webpages; there's also HTTrack.