Looking at tests from a different perspective.

[Image: an astronomer using a telescope]

As I mentioned before, I've been writing Protest with Phil Dawes. Protest is a Python test framework and toolset for writing programmer tests and generating useful stuff from those tests. Currently the most interesting tool is the API documentation generator.

Writing API documentation by hand is a chore that offends my hard-learned distaste for duplication. When I've written tests that specify the API's behaviour, I shouldn't have to repeat that information in documentation. Even worse, hand-written documentation cannot tell me when it's incorrect.

On the other hand, I really don't like Javadoc and similar tools for Python, such as Epydoc. Embedding documentation in the code being documented turns into a maintenance nightmare: the code becomes hard to read and hard to navigate, lost in a mass of comments or docstrings.

So, Protest takes the same approach as Testdox and generates documentation from programmer tests that follow some straightforward conventions. I've tried to make it generate higher quality documentation than Testdox, which only creates a rudimentary overview of a test suite. The generated documentation has rich navigation links and a graphical visualisation of the module structure of the documented code. Having seen the importance of concrete examples at the Scrapheap Challenge workshop, I made the documentation include the test code itself, to show how to use every feature of the documented classes. Explanatory text can be added to any example by giving the test a documentation string in reStructuredText format, keeping long comments out of the production code and in the tests, where they are more useful.
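To give a flavour of the sort of test the generator works from, here is a rough sketch. The naming convention, the docstrings and the stack under test are purely illustrative (plain unittest, with a built-in list standing in for real production code), not Protest's actual API:

```python
import unittest


class StackBehaviour(unittest.TestCase):
    """A stack hands back elements in last-in, first-out order."""

    def test_pops_the_most_recently_pushed_element_first(self):
        """Pushing two elements and then popping returns the second
        element before the first. (In a Protest-style tool, this
        docstring would supply the example's explanatory text.)"""
        stack = []
        stack.append("pushed first")
        stack.append("pushed second")
        self.assertEqual("pushed second", stack.pop())
        self.assertEqual("pushed first", stack.pop())


if __name__ == "__main__":
    unittest.main()
```

A Testdox-style generator turns a method name like that into a readable title ("pops the most recently pushed element first") and includes the test body as a worked example of the documented behaviour.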

A surprising side effect of the tool is how much it helps you improve your test code. I like to think I'm pretty good at writing tests, and I designed the documentation generator to work with the way I like to make my tests self-explanatory. However, when I saw my test code in the documentation rather than in the IDE, I found lots of room for improvement: the test names needed tweaking to generate better titles, variables needed better names to identify the roles they played in the test, test data needed to be made self-describing, and so forth. I was pleased to find that the tool did more than just generate documentation: it gave me a fresh perspective on my test code that highlighted where it needed improvement.
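To make that concrete, here is the flavour of change I mean. The Account class and the test names are invented for illustration and have nothing to do with Protest itself:

```python
import unittest


class Account:
    """A minimal account, defined only to make the example runnable."""

    def __init__(self, balance=0):
        self.balance = balance

    def transfer(self, amount, to):
        self.balance -= amount
        to.balance += amount


class AccountTransfers(unittest.TestCase):
    # Before: perfectly readable in the IDE, cryptic as documentation.
    def test_transfer(self):
        a = Account(100)
        b = Account(0)
        a.transfer(25, to=b)
        self.assertEqual(75, a.balance)

    # After: the name reads as a title and the variables name their roles.
    def test_transferring_money_debits_the_source_and_credits_the_destination(self):
        source = Account(balance=100)
        destination = Account(balance=0)
        source.transfer(25, to=destination)
        self.assertEqual(75, source.balance)
        self.assertEqual(25, destination.balance)


if __name__ == "__main__":
    unittest.main()
```

The second test says everything the first one does, but it also reads sensibly on a documentation page for someone who has never opened the test file.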

Copyright © 2006 Nat Pryce. Posted 2006-03-30.