Import Tests

Sniffing out Illegal Imports

A project I am working on hit a snag recently: all of a sudden the program stopped producing correct results for a subset of the input data. I tracked the problem down and found that the culprit was a package of calculation routines I'm importing from another project. Some regression in their code fed incorrect results into my calculation, which caused my final results to be way out of whack.

Unfortunately, my acceptance/regression suites hadn't caught the error. My acceptance tests were testing just what the business users wanted to see: that the calculation engine itself generated good results when fed good data. To test that, and to get a predictable test environment, I had captured input data and expected results in spreadsheets that were fed to the calculation engine.
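To give a flavour of the approach, here is a minimal sketch of such a data-driven acceptance test in Java/JUnit. The CalculationEngine class, its calculate method, and the numbers are all illustrative stand-ins rather than the project's real code, and in practice each row would be read from the captured spreadsheet rather than written inline:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CalculationEngineAcceptanceTest {

        // Toy stand-in for the real engine so the sketch compiles on its own.
        static class CalculationEngine {
            double calculate(double input) {
                return input * 1.05; // illustrative calculation: a 5% uplift
            }
        }

        private final CalculationEngine engine = new CalculationEngine();

        @Test
        public void reproducesResultsCapturedInTheSpreadsheet() {
            // Each row pairs a captured input with its expected result.
            double[][] capturedRows = {
                {100.0, 105.0},
                {250.0, 262.5},
                {  0.0,   0.0},
            };
            for (double[] row : capturedRows) {
                assertEquals("input " + row[0],
                             row[1], engine.calculate(row[0]), 1e-9);
            }
        }
    }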

One table of input data captured what would be calculated in production by the imported code that eventually caused the regression. Ideally I would have fed test data through that code, but it was not possible to connect it to a different source of data: the data access code was entangled with the calculation code. So I used the test data to test my calculation engine only and relied on manual eyeball tests to catch end-to-end regressions, which, eventually, they did.

As part of hunting down the bug, I wrote a test that verified that the imported code still provided the behaviour that the rest of my code required of it. My next problem was where to put this test: should I add it to my project's test suite, or to that of the project whose code I am importing?

Eventually I decided to add it to both suites. The code that I am importing is being written by another project that has its own set of acceptance criteria. With the test in their suite, they will detect when a change would regress my code. However, their project may have to meet goals that are inconsistent with mine, in which case they will change their acceptance tests, and if I relied on those tests alone I wouldn't catch the error until end-to-end testing. By keeping a copy of the test in my suite, and changing it as I need to, I define exactly what my code needs their code to do, can detect when their code no longer does it, and the failing test will point to the location of the error if a regression occurs again.
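A minimal sketch of such a test, again in Java/JUnit, might look like the following. ImportedCalculator and riskWeight are hypothetical stand-ins for the other project's code; the real test would import their actual package:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ImportTest {

        // Toy stand-in for the imported package so the sketch compiles
        // on its own; the real test would use the other project's classes.
        static class ImportedCalculator {
            double riskWeight(double exposure) {
                return exposure * 0.08; // illustrative rule
            }
        }

        @Test
        public void importedCodeStillProvidesTheBehaviourWeRelyOn() {
            ImportedCalculator calculator = new ImportedCalculator();
            // The assertion records what my calculation engine requires of
            // the imported code, independently of the other project's own
            // acceptance criteria.
            assertEquals(8.0, calculator.riskWeight(100.0), 1e-9);
        }
    }

The point of the duplication is that the copy in my suite encodes my project's requirement, while the copy in theirs acts as an early warning for them; the two copies can then drift apart deliberately if our goals diverge.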

Like a unit test with mock objects or an environment test, this test defines what one bit of code requires from another. However, its scope is larger than that of a unit test and smaller than that of an environment test. I don't know of an existing name for this kind of test, so, because it defines and tests the functionality that one assembly needs to import from another, I've called it an "Import Test."

Copyright © 2005 Nat Pryce. Posted 2005-03-21.