Fire Your Bench Guards: Eight Bad Testing Ideas #8

This is Post 8 of 8 in the Eight Bad Testing Ideas series.

Bad testing idea: Not re-examining assumptions.

There is an old story about Stalin, who, one day in the 1940s, was walking in the gardens of the Kremlin. He happened to pass by a park bench which, oddly, had an armed guard on each side. Curious, he asked around to find out why this random park bench was being heavily guarded. The captain of security confirmed that guards had always been posted there, but he did not know why. He investigated and came back to Stalin with his findings.

Apparently, one day, decades before, the Tsar of Russia had been out for a walk with friends. Someone was just about to sit down on that bench, when, at the last second, he realized that it had just been painted. The Tsar ordered that someone be posted at the bench to make sure that nobody accidentally ruined their clothes on the wet paint. And nobody had ever rescinded the order.

Well, replied Stalin, fair enough. But why were there two guards?

That one was easy, the captain replied. When the Russian Revolution broke out, the Tsar ordered the Kremlin guards doubled.

It is seldom that an entire enterprise software system is designed and built at the same time, with absolutely no subsequent changes or expansions. Inevitably, software tests will reflect the system — or the entire enterprise — as it was at one particular time. That snapshot may date from the system’s original inception, or from whenever that portion of the test suite was developed. Because of this, the tests, in either their specific parameters or their scope of coverage, will be based on whatever invisible assumptions existed then — assumptions about what the system is for, who is using it, or what its most commonplace use cases will be. Later on, those assumptions can manifest as hidden biases that render tests less useful, or even meaningless.

Examples:

• “Most of our system use comes from X type of transaction, or Y type of user interaction.”

When the application was written, almost every customer was institutional and used purchase orders or requested to be invoiced later. Credit cards were an afterthought. Later on, management decided to open sales up to the general public. Many robust use cases exist for processing purchase orders — but is credit card handling as well tested as it might be?

• “We don’t need to run the Z tests that often, because nobody actually uses Z.”

Once upon a time, the Z system was an experimental new feature that nobody really had a good business case for. There were some perfunctory “it doesn’t crash when you click on it” tests, but other cases were low priority and never got built out or were broken and disabled. A later change in management priorities made Z unexpectedly useful for certain customers, and it’s now used all the time. Does it have the test coverage it deserves?

• “Everything worked fine when we had 10% as many users.”

Performance tests are never needed… until they are. Do your test cases check to make sure the system can handle last year’s volume, or this year’s? (One way to keep such a number current is sketched just after this list.)

• “This application will always get its inputs from the J system, so we know what file type it can expect.”

In a fast-moving enterprise, reusable or adaptable software can be applied to purposes that were not foreseeable. The online purchase system gets quickly reskinned to become the online project quotation system. One more type of XML file gets squeezed into the records processing engine as a special case, just for this one thing, temporarily. A new customer comes on board and needs a set of custom flags that sometimes mean nonstandard outputs. Were your tests comprehensive and robust for what now amounts to only part of the actual use cases? (A sketch of one way to keep that assumption visible follows this list.)
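
To make that last example concrete: one way to keep the “what file types can we expect?” assumption visible is to put it in a single parameter list that every newly supported input type has to be added to. This is only a sketch, written with pytest; the records_engine module, the parse_record function, and the sample file names are hypothetical stand-ins, not anything from a real system.

    # A minimal sketch (pytest). The module, function, and file names are
    # hypothetical stand-ins for whatever your records-processing engine provides.
    import pytest

    from records_engine import parse_record  # hypothetical module under test

    # When this test was first written, "the J system only sends XML" was true,
    # so this list had a single entry. The assumption now lives here in plain
    # sight, and each newly supported input type forces a new entry.
    SAMPLE_INPUTS = [
        ("samples/order.xml", "xml"),
        ("samples/quote.json", "json"),        # added when the purchase system was reskinned
        ("samples/flagged_order.csv", "csv"),  # added for the customer with custom flags
    ]

    @pytest.mark.parametrize("path, expected_format", SAMPLE_INPUTS)
    def test_engine_handles_all_currently_supported_inputs(path, expected_format):
        record = parse_record(path)
        assert record.source_format == expected_format
        assert record.is_valid()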
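
For the transaction-volume example, the corresponding move is to stop hard-coding the user count that was realistic when the test was written. In the sketch below (the load_harness module, the process_transactions call, and the volume_targets.json file are invented names), the load test reads its target from a figure maintained alongside current business numbers, so the test ages with the business rather than with the codebase.

    # A sketch only. The harness call and the config file are hypothetical
    # stand-ins; the point is that the expected volume is not frozen into the
    # test at the moment it happened to be written.
    import json
    import time

    from load_harness import process_transactions  # hypothetical load-test harness

    def test_system_handles_current_peak_volume():
        # The business side (or a periodic job) keeps this file updated with
        # this year's numbers, not the numbers from when the test was new.
        with open("config/volume_targets.json") as f:
            targets = json.load(f)

        start = time.monotonic()
        results = process_transactions(simulated_users=targets["expected_peak_users"])
        elapsed = time.monotonic() - start

        assert all(r.succeeded for r in results)
        assert elapsed < targets["max_seconds_at_peak"]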

As with the Kremlin park bench guards, sometimes it makes sense to take a step back, look at your test suite or test plan, and ask “why is this even here?” or “is this designed for what is, or for what was?” You may find that your most comprehensive test coverage now goes to functions or systems that have become less important, or that your tests assume things about the business (or its users, or its products) that no longer reflect reality.

Suggestions:

• At some interval (ideally, quarterly), “reality check” all your individual use cases, to make sure that they are still meaningful, valid, and representative of how the software is actually used. There may be nothing wrong with the existing test cases themselves, but they may emphasize or give greater coverage to things that just don’t matter as much anymore.

• Then, look at your test suite or test plan as a whole and try to see the invisible assumptions built into it. Who does it assume your user to be, or your customer? Have transaction volumes or customer relationships changed so that the testing plan is focusing on the wrong systems or patterns of use? Even if you’re not missing anything, opportunities may exist to pare down use cases in what are now less important or less common areas of functionality, speeding up automated testing or improving manual testing efficiency. It may be wasteful to have magnificently detailed test coverage for what have become edge cases or seldom-run legacy jobs. (One lightweight way to make this kind of rebalancing visible is sketched after this list.)

• If necessary and/or feasible, ask your business analysts for their sense of which functions are now core and which are marginal. Because they are closer to the front lines of actual software system usage and future planning, they may have a better big-picture perspective on what’s really important.
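
One lightweight way to act on those reviews, sketched below with pytest markers (the marker names and test names are invented for illustration): tag each test with the business area it exercises, so that a quarterly review can count how much coverage each area is actually getting, and the fast, every-commit run can be limited to what currently matters most.

    # A sketch using pytest markers; marker and test names are illustrative.
    # Custom markers should be registered (for example in pytest.ini) so they
    # stay visible and are easy to re-count at each review:
    #
    #   [pytest]
    #   markers =
    #       core: functionality the business depends on every day right now
    #       legacy: seldom-run jobs kept for a handful of older customers
    import pytest

    @pytest.mark.core
    def test_credit_card_checkout_charges_correct_amount():
        ...

    @pytest.mark.legacy
    def test_purchase_order_batch_reconciliation():
        ...

    # The fast, every-commit run covers only what matters most today:
    #   pytest -m core
    # while the full suite, legacy jobs included, still runs on a schedule:
    #   pytest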
