phpdoc vs. phpxref - php

Does one have capabilities the other doesn't? Is it a problem that neither has been updated in about three years? Is there something lacking from both?

I like to think they target two different goals.
I wrote PHPXref over a decade ago as a quick hack to let me get to grips with the source code of a large project quickly and easily, without needing a lot of tools (just Perl, and not even that if you're on Windows) and without needing a remote web server. The documentation part of it was a useful side effect, but really I just wanted a decent way of reading through hyperlinked source code in a browser.
PHPDocumentor and similar tools do a much better job of generating real documentation from source in a variety of formats.
PHPXref could definitely use some updates (or a rewrite), but it should still be useful today: you can download it and have output with no configuration in a couple of minutes, so it's cheap to see if it suits your needs.

The de-facto standard is PHPDocumentor.
A rather old comparison of phpdoc and phpxref can be found in this short blog post:
http://kb.ucla.edu/articles/phpxref-vs-phpdocumentor
Since neither tool has been updated much since then, it should still reflect the facts.
Another popular documentor is doxygen.
The new kid on the block is http://github.com/theseer/phpdox


What is the difference between Rephactor and Scisr PHP refactoring tool?

Rephactor and Scisr are both automated refactoring tools for PHP. Both are under development and provide the same refactoring functions. Can anyone tell me the exact difference between the two tools?
I wrote Scisr, so perhaps I can offer some insight. The short answer is that they're simply different tools. They each have a subset of functionality, and you may find that both, one, or neither offer the exact functionality you wish for. Each one offers the set of features that the developer got around to implementing - i.e. somewhat incomplete.
The longer (and slightly more biased) answer is that they are philosophically different. Luckily, both tools offer little introductory blurbs that should give you a sense of what each developer set out to create. Here is Rephactor's and here is Scisr's.
I wrote Scisr after trying all the PHP refactoring tools I could find (including rephactor) and being dissatisfied with all of them. rephactor was definitely the most useful of all the tools I tried, but I thought it made too many demands - it wanted to hook into SVN, and testing tools, and all sorts of other stuff. These were all good processes to be following, but the need to integrate meant I couldn't pick the tool up and immediately use it on my project. So I set out to write a simpler, single-purpose tool, and Scisr was born.
IIRC I also looked at rephactor's source code and found it using regular expressions and other solutions I considered inadequate. I could be misremembering, though, so please forgive any untruth in that statement.
One other consideration should be product health. Unfortunately, both projects are more-or-less inactive. Scisr is more recently updated, and I'm still accepting pull requests, but I'm not actively working on it any more. Rephactor was last updated in 2009.
2019+ answer
Looks like both of the packages mentioned have been dead since 2013. Yet the need for instant refactoring is still here, so I'd like to share a 2019 alternative that is in active development with 200 downloads/day.
In 2017 I started a small project called Rector: https://github.com/rectorphp/rector
Rector can help you with anything that you'd otherwise do manually.
Right out of the box it supports:
Rename classes, methods, properties, namespaces, constants... anything :)
Add, replace or remove arguments
Add parameter or return type declarations without docblocks - just with static analysis
Upgrade from PHP 5.3 to PHP 7.4
Migrate from one PHP framework to another
Complete PHP 7.4 property type declarations
Turn Laravel static to Dependency Injection
...
It's a CLI tool, that is easy to use:
composer require rector/rector --dev
vendor/bin/rector process src --level code-quality
Read more in README
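Rector's behavior can also be pinned down in a rector.php configuration file instead of CLI flags. Here is a minimal sketch, assuming the RectorConfig API and SetList constants documented in Rector's README; verify the exact class and set names against the version you install:

```php
<?php
// rector.php - a minimal configuration sketch.
// The RectorConfig API and SetList::CODE_QUALITY below are assumptions
// based on Rector's documented interface; check the README for your version.

declare(strict_types=1);

use Rector\Config\RectorConfig;
use Rector\Set\ValueObject\SetList;

return static function (RectorConfig $rectorConfig): void {
    // Directories Rector should process.
    $rectorConfig->paths([__DIR__ . '/src']);

    // Pre-built rule sets, e.g. code-quality cleanups.
    $rectorConfig->sets([SetList::CODE_QUALITY]);
};
```

With a config file in place, `vendor/bin/rector process` alone is enough; the paths and rule sets come from the file.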

Is phpDocumentor dead?

Or is it just at a 'finished' state? I've used PHPDoc for many years on all my PHP projects, but I recently noticed that the last post on the PHPDoc website was from 2008. So I'm wondering if it's time to look into other alternatives like Doxygen. Are there any advantages to using something other than PHPDoc?
EDIT: Interesting post on Dev Zone today when Matthew announced the release of Zend Framework 1.11.5 he wrote:
"Mike van Riel offered to convert our API documentation generation to DocBlox. We'd already been considering it for ZF2, but on seeing the flexibility of the templating system, and, more importantly for us in terms of packaging, the speed and minimal resources it utilizes in generating the output, we were sold. (API documentation generation time was reduced from taking 80-100 minutes to less than 10.) You can view the results for yourself." http://devzone.zend.com/article/13643
This is why I'm concerned: if large projects like Zend Framework are dropping phpDoc, it seems to me the inactivity of phpDoc is not going unnoticed. 100 minutes down to 10... that's what I like to hear.
@gms8994: good call.
UPDATE: It turns out DocBlox is phpDocumentor 2 in disguise, re-branded: http://www.docblox-project.org/
Although I currently use Doxygen too, I have to post that phpDocumentor is not dead. Instead, the work done by the DocBlox project has been merged in to form the basis of phpDocumentor 2, which sports a brand-new website. These days I'm not convinced it's production-ready, but it already looks really promising.
I have recently used Doxygen for generating documentation for PHP. It is an open-source documentation tool and supports other languages too. I would say it is a good tool: it easily generates documentation as well as class diagrams, and it has lots of configurable features. It is available for Windows as well as Unix/Linux.
You can find the latest release and downloads on the Doxygen website.
Why fix something that isn't broken? PHPDoc works great, doesn't need anything else really. They're not trying to innovate, just to help create documentation. Which they did, very well.
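For reference, phpDocumentor's input is nothing but standard docblocks, so the source stays the single point of truth. A minimal sketch of the kind of block it consumes (the class and method here are made up for illustration; the @param/@return tags are the common subset):

```php
<?php
// A minimal docblock of the kind phpDocumentor parses.
// The Invoice class is a made-up example, not a real library API.

class Invoice
{
    /**
     * Compute the total including tax.
     *
     * @param float $subtotal Pre-tax amount.
     * @param float $rate     Tax rate as a fraction, e.g. 0.2 for 20%.
     *
     * @return float Total amount including tax.
     */
    public function total(float $subtotal, float $rate): float
    {
        return $subtotal * (1 + $rate);
    }
}

// The same docblock is available at runtime via reflection:
$doc = (new ReflectionMethod(Invoice::class, 'total'))->getDocComment();
```

This is also why the format has outlived any single tool: the docblocks are plain comments, readable by phpDocumentor, Doxygen, and IDEs alike.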
I had several problems with phpDocumentor. One of them was the xml export. After a few attempts to fix the code I decided to look for an alternative.
What I found and liked was: Rarangi
https://bitbucket.org/laurentj/rarangi/wiki/Home
Rarangi is a documentation generator for PHP source code.
The interesting thing about it is that it saves the extracted information in a MySQL database, so you can build your own custom reports.
phpDocumentor v3 (with proper PHP 7 support) is currently in alpha and getting really close to a stable release. The problem is that they don't have a lot of people working on it, so development is slow.
I've tested the latest alpha on one of my Symfony projects and it does the job okay, although it's missing some features and has a couple of bugs. They were all reported, of course.
https://github.com/phpDocumentor/phpDocumentor/releases
Update: phpDocumentor v3 just went beta. Check releases.

Efficiently gathering information about the inner workings of a new PHP project. Tools? Techniques? Scripts?

I am soon to join a PHP project that has been developed over the course of several years. It's going to be huge and sparsely documented, with many files and piles of code; no consistent quality level is to be expected.
How would you go about gathering as much information as possible about what is going on?
Autoloading is not to be expected, at least not extensively, so inclued might do a good job revealing the interdependencies.
Having phpDocumentor digest the project files might give an idea about which classes/methods/functions are present.
Maybe phpCallGraph for method/function relations.
Profiling some generic use cases with Xdebug to gain an idea about the hierarchies and concepts.
Inspecting important log files: checking out warnings, deprecated usages, errors.
phpinfo().
Maybe extracting all comments and processing them into an HTML file.
This doesn't cover unit tests, databases, ...
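The comment-extraction idea in the list above is straightforward with PHP's own tokenizer; a minimal sketch (the function name is mine):

```php
<?php
// Extract all comments and docblocks from a PHP source string
// using the built-in tokenizer (token_get_all).

function extract_comments(string $source): array
{
    $comments = [];
    foreach (token_get_all($source) as $token) {
        // Comments arrive as array tokens: [token_id, text, line].
        if (is_array($token)
            && in_array($token[0], [T_COMMENT, T_DOC_COMMENT], true)) {
            $comments[] = trim($token[1]);
        }
    }
    return $comments;
}

// Small demonstration input:
$src = <<<'SRC'
<?php
/** Docblock for foo(). */
function foo() {
    // inline note
}
SRC;

$found = extract_comments($src);
```

Running this over every file (via RecursiveDirectoryIterator, say) and dumping the results into an HTML page gives exactly the report described above.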
What would you do? What are your experiences with mentioned tools to get the most out of them?
You can assume any condition necessary.
What statistical information could be useful to extract?
Has somebody experience with those tools?
EDIT from "PHP Tools for quality check":
PHP Mess Detector
Copy/Paste Detector (CPD) for PHP code by Sebastian Bergmann
EDIT 2 from Bryan Waters' answer:
phploc - a tool for quickly measuring the size of a PHP project.
Inspecting Apache logs and Google Analytics data to find out about the top requested URLs and then analyze what happens using XDebug profiling and a tool like KCachegrind.
See his answer for concrete techniques.
Setting up a deployment / build / CI cycle for PHP projects - suggested by Pekka
EDIT 3
Just found this PDF of a talk by Gabriele Santini - "Statistical analysis of the code - Listen to your PHP code". This is like a gold mine.
I agree that your question already contains most of the answers.
This is what I would probably do.
I would probably start with Sebastian Bergmann's tools, especially phploc, so you can get an idea of the scope of the mess (codebase) you are looking at. It gives you class and function counts, etc., not just lines of code.
Next I would look in the Apache logs or Google Analytics and get the top 10 most requested PHP URLs. I'd set up Xdebug with profiling and run through those top 10 requests to get the files and the call tree. (You can view these with a Cachegrind tool.)
Finally, I'd read through the entire execution path of one or two of those traces, whichever is most representative of the whole. I'd use my Eclipse IDE, but printing them out and going to town with a highlighter is valid as well.
The top-10 method might fail you if there are multiple systems cobbled together. You should see quickly with Xdebug whether the top 10 are coded similarly or if each is a unique island.
I would look at the MySQL databases and try to understand what they are all for, especially looking at table prefixes; you may have a couple of different apps stacked on top of each other. If there are large parts of the DB not touched by the top 10, you need to go hunting for the sub-apps. If you find other sub-apps, run them through the Xdebug profiler and then read through one of the paths that is representative of that sub-app.
Now go back and look at your scope numbers from phploc and see what percentage of the codebase (probably counting classes or functions) went untouched during your review.
You should have a basic understanding of the most often-run code and an idea of how many nooks and crannies and closets for skeleton storage there are.
Perhaps you can set up a continuous integration environment. In this environment you could gather all the statistics you want.
Jenkins is a fine CI server with loads of plugins and documentation.
For checking problems that can be expected (duplicate code, potential bugs...), you could use some of these tools:
https://stackoverflow.com/questions/4202311/php-tools-for-quality-check
HTH
PS. I Have to say that your question, IMO, contains already a lot of great answers.
If you're into statistical stuff, have a look at the CRAP index (Change Risk Analysis and Predictions), which measures code quality.
There is a nice two-part introductory article:
First part
Second part
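The index itself is a simple formula over a method's cyclomatic complexity and its test-coverage percentage: comp(m)^2 * (1 - cov(m)/100)^3 + comp(m). A sketch of it as code, with the formula as commonly stated in those articles:

```php
<?php
// CRAP score for a method, given its cyclomatic complexity (comp)
// and line-coverage percentage (cov, 0-100):
//   CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)

function crap(int $complexity, float $coverage): float
{
    return $complexity ** 2 * (1 - $coverage / 100) ** 3 + $complexity;
}

// Fully covered code is penalized only by its own complexity:
$low  = crap(5, 100.0);  // 25 * 0 + 5  = 5.0
// Untested complex code explodes:
$high = crap(5, 0.0);    // 25 * 1 + 5  = 30.0
```

The cubic term is the interesting design choice: adding even partial coverage to a complex method drives the score down fast, which is exactly the refactoring priority signal you want on a legacy codebase.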
Having both built, and suffered from, huge spaghetti-y legacy PHP projects, I think there is only so much you will be able to do using analysis tools. Most of them will simply tell you that the project is of terrible quality :)
Unit Testing and source code documentation tools usually need some active contribution inside the code to produce usable results. That said, they all are surely worth trying out - I'm not familiar with phpCallGraph and Sebastian Bergmann's tools. Also, phpDocumentor may be able to make some sense out of at least parts of the code. PHPXref is also a cool tool to get an overview, here's a (slow) demo.
The best way to start could be just taking a simple task that needs to be done in the code base, and fight your way through the jungle, follow includes, try to grasp the library structure, etc. until the job is done. If you're joining a team, maybe have somebody nearby you can ask for guidance.
I would concentrate on making the exploring process as convenient as possible. Some (trivial) points include:
Use an IDE that can quickly take you to function/method, class and variable definitions
Have a debugger running
Absolutely keep everything under source control and commit every change
Have an environment that allows you to easily deploy a change for testing, and as easily switch to a different branch or roll everything back altogether. Here is a related question on how to set something like that up: Setting up a deployment / build / CI cycle for PHP projects

What tool can I use to generate a PHP class usage report for my application?

I have a fairly large object-oriented php 5 project, and as part of a change impact analysis, I'd like to compile a report on the usage of each existing class throughout the project.
It would help me immensely if I could find an existing tool that will analyze all the files in my project and generate some sort of report that lists, for example, all the class names of objects instantiated for each class in the project, and allow me to at least search this easily and quickly.
Any help here would be appreciated!
Check out nWire for PHP. It analyzes your code and recognizes such associations. It is built as an interactive tool, not a reporting tool, but if you insist you can still connect to its database (it uses H2, which is SQL-compatible) and use an external reporting tool.
IMO Zend has some profiling tools that do just that, or you can extrapolate this information from its Accelerator log.
Or try this with Xdebug:
Xdebug can trace your code and create code-coverage statistics. There are additional tools like Spike PHPCoverage, which can generate nicely formatted reports, but since these are intended for test coverage, they'll just give you a boolean result (e.g. a line of code is used or not used). You probably want a more detailed view (e.g. how many times it is used).
Another option is to use the function-trace feature of Xdebug. This will give you a detailed report of the actual call graph. You can determine from this which files were used the most. You'll need to write a parser for the data manually, but that shouldn't be too hard.
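Writing that parser is mostly a line-by-line regex over the trace file. A minimal sketch, assuming Xdebug's default human-readable trace format, where each call line contains `-> function()` after the time and memory columns (the function name and sample below are illustrative):

```php
<?php
// Count function calls in an Xdebug human-readable trace.
// Format assumption: call lines contain "-> name(" after indentation.

function count_calls(string $trace): array
{
    $counts = [];
    foreach (explode("\n", $trace) as $line) {
        // Capture the callee name (allows namespaces, methods, {closure}).
        if (preg_match('/-> ([A-Za-z0-9_\\\\:{}-]+)\(/', $line, $m)) {
            $counts[$m[1]] = ($counts[$m[1]] ?? 0) + 1;
        }
    }
    arsort($counts);  // most frequently called first
    return $counts;
}

// Shortened, made-up sample of a trace:
$sample = <<<'TXT'
    0.0001     392360   -> render() /app/index.php:12
    0.0002     392480     -> strlen() /app/index.php:13
    0.0003     392480     -> strlen() /app/index.php:14
TXT;

$calls = count_calls($sample);
```

Aggregating the file paths on each line instead of the function names gives the "which files are hit most" view mentioned above.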
Finally, you could do the same thing with a static call graph. There are some tools available for php. Here are a few:
http://www.doxygen.nl/
http://phpcallgraph.sourceforge.net/
http://www.bytekit.org/
Again, you probably need to do some additional manual parsing on the output from those tools, to get something that applies to your use case.
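For the specific "which classes get instantiated" report the asker wants, the tokenizer alone gets you most of the way. A minimal sketch (it only catches literal `new ClassName` expressions, not variable, namespaced, or reflection-based instantiation; the function name is mine):

```php
<?php
// Count literal "new ClassName" instantiations in a PHP source string.

function count_instantiations(string $source): array
{
    $tokens = token_get_all($source);
    $counts = [];
    for ($i = 0; $i < count($tokens) - 1; $i++) {
        if (is_array($tokens[$i]) && $tokens[$i][0] === T_NEW) {
            // Skip whitespace between "new" and the class name.
            $j = $i + 1;
            while (is_array($tokens[$j]) && $tokens[$j][0] === T_WHITESPACE) {
                $j++;
            }
            if (is_array($tokens[$j]) && $tokens[$j][0] === T_STRING) {
                $name = $tokens[$j][1];
                $counts[$name] = ($counts[$name] ?? 0) + 1;
            }
        }
    }
    return $counts;
}

$src  = '<?php $a = new Foo(); $b = new Foo(); $c = new Bar();';
$uses = count_instantiations($src);
```

Run it over every file in the project, merge the per-file arrays, and you have the usage report: class name to instantiation count, searchable with a plain `grep` or sort.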
The clever guys at Particletree, the same people behind the functionally and aesthetically gorgeous Wufoo, often publish and release their PHP toolsets and utilities, the most recent of which is their PHP Quick Profiler. As you can probably tell, I have a huge amount of respect for those guys and love the stuff that they do.
A good PHP profiler is often hard to come by, and PQP is most certainly the best I've come across. That said, nearly all of the various application frameworks have some form of profiling system, humble or otherwise, but none as in-depth and helpful as PQP. However, I usually find that the framework profiling tools are more automatically linked into the code, and if you use the framework's standard libraries then you'll have to do a lot less implementation work with the profiling tool (this is definitely the case with CodeIgniter). But if you want that extra bit of power and flexibility, PQP is great.
Let me know if you find anything better - I'd love to see it!
Jamie

How common is PEAR in the real world?

I have looked at a good deal of other people's source code and other open-source PHP software, but it seems to me that almost nobody actually uses PEAR.
How common is PEAR usage in the real world?
I was thinking that maybe the current feeling on frameworks may be affecting its popularity.
PHP programmer culture seems to have a rampant infestation of "Not Invented Here" syndrome, where everyone appears to want to reinvent the wheel themselves.
Not to say this applies to all PHP programmers, but this behavior is apparently far too normal.
Much of the time I believe it's due to a lack of education, combined with the difficulty of hosting providers offering decent PHP services.
This makes getting a workable PEAR installation so much more difficult, and it's worsened by PHP's design structure not being favorable to modular design.
(This may improve with the addition of namespaces, but that remains to be seen.)
The vast majority of PHP code I see in the wild is still classic amateur code intermixed with HTML, and the majority of cheap hosting that PHP users inevitably sign up for doesn't give you shell access.
In my (limited) experience, every PEAR project that was potentially interesting had major points against it:
Code is targeted at the widest audience possible. There are hacks in place all over the place to deal with old/unsupported PHP versions. New useful features are ignored if they can't be emulated on older versions, meaning you end up lagging behind core language development.
Any given project tends to grow until it solves everyone's problems with a single simple include. When your PHP interpreter has to process all of that source code on every page hit (because the authors may not have designed it to be opcode-cache-friendly), there is measurable overhead for processing thousands of unused lines of code.
Style was always inconsistent. It never felt like I was learning generalizable APIs like in other languages.
I used to use PEAR::DB at work. We discovered that most of our scripts spent their time inside PEAR code instead of our own code. Replacing it with a very simple wrapper around the pg_* functions significantly reduced execution time and increased runtime safety, due to the use of real prepared statements. PEAR::DB used its own (incorrect at the time) prepared-statement logic for Postgres because the native pg_* functions were too new to be relied on everywhere.
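The wrapper in question was nothing fancy. A minimal sketch of the idea, using the pgsql extension's server-side prepared statements (the class name and API shape are mine; pg_prepare/pg_execute are the real extension functions):

```php
<?php
// A thin wrapper over the pgsql extension using server-side prepared
// statements (pg_prepare / pg_execute) instead of client-side emulation.
// Class and method names are illustrative, not from PEAR::DB.

class SimplePg
{
    /** @var resource|object Connection handle from pg_connect(). */
    private $conn;
    private int $stmtCounter = 0;

    public function __construct(string $dsn)
    {
        $this->conn = pg_connect($dsn);
    }

    /** Run a query with $1, $2, ... placeholders; returns all rows. */
    public function query(string $sql, array $params = []): array
    {
        $name = 'stmt_' . $this->stmtCounter++;
        pg_prepare($this->conn, $name, $sql);            // real server-side prepare
        $result = pg_execute($this->conn, $name, $params);
        return pg_fetch_all($result) ?: [];
    }
}

// Usage sketch (connection details are placeholders):
// $db   = new SimplePg('host=localhost dbname=app');
// $rows = $db->query('SELECT * FROM users WHERE id = $1', [42]);
```

Because the parameters never touch the SQL string, quoting bugs of the kind PEAR::DB's emulation had simply cannot occur; the speedup comes from skipping PEAR's abstraction layers entirely.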
Overall, I feel like PEAR is good as a "starter library" in many cases. It is likely to be higher quality code than any individual will produce in a short amount of time. But I would certainly not use it in a popular public-facing website (at least, not without a lot of tweaking by hand... maintaining my own fork).
In my opinion, PEAR is a good project but it lacks people who want to keep working on it; most of the packages have inconsistent coding practices (I do not mean coding style) and there are lots of TODOs in the whole thing.
I find it useful sometimes for code I didn't know existed yet, like custom country-validation functions and so on; otherwise I'm better served by any available framework out there (like CodeIgniter or Zend Framework).
The PEAR library is the kind of stuff that just sits there, plugging away, with very little glory. If you are looking for something it can do, and there's nothing more specifically targeted in the framework that you are using, go use it.
I've been working on a dating site for the last two years, and there are at least 65 PEAR-sourced files I've used that are still live there today. Some, like Pager or HTML_QuickForm, will be overtaken by new code as the site is updated, but for others there's just no need.
PEAR is not common, nor popular.
I tried to use PEAR many times, but it always lacked the oomph to make me commit.
I prefer Zend Framework, which takes a loosely coupled approach: use only what you want.
PEAR is not common, nor popular — but it is good, and I'd recommend it to anyone.
(I do agree with Tom in that it doesn't feel like a single, unified, API; but then, this is PHP … one wouldn't like to see it getting above its station as an interpreted hack language now would one?!)
