
Ain’t Misbehavin’: Eight Bad Testing Ideas #7

This is post 7 of 8 in the Eight Bad Testing Ideas series.

A few months ago, I was trying to put together a filing cabinet from Ikea. Unfortunately, the instructions I had contained no writing whatsoever — they just had pictures of the different pieces, some of which weren’t even accurate! So how could I tell that I was doing things correctly? If I had simply looked at the instructions to make sure I followed them, I might have ended up with drawers facing the wrong way and no wheels on the bottom. I needed to be a bit more intuitive.

I needed to think about the behavior of the finished product to be certain I was doing things correctly.

The same thing is true when testing software. It is very natural to want to discuss a feature in terms of its implementation, but this is a bad idea. The tests we write should be very clear about what the application is intended to do, who the user is, and what problem they are trying to solve. This meaning gets lost if you focus on the details of how the application works.

For example, we have all written features that read something like this:

When I enter '1' in textfield 1
And I enter '2' in textfield 2
And I press the submit button
Then it says '3' on the screen

Now, I have a simple question for you: what does the above application do? Does it add the two numbers together? Maybe it multiplies them and adds 1. Maybe it finds the next prime in the series (assuming you live in a universe where 1 is a prime number), or just the next whole number in the sequence. Maybe it adds 1 to the second number and ignores the first. Or subtracts the first value from the square of the second. Or always returns 3, regardless of what numbers you input.

Sure, it seems so simple to write our features with static values. We end up with a larger number of reusable portions of the tests (especially if you are using a framework like Cucumber) and it’s pretty clear how to use the application. But it’s missing something: it never explains what the darned thing does. If you can’t read the functional specs to figure out what the application does, they are written wrong. After all, we don’t really care what happens when you enter 1 and 2 as the numbers; we care that whatever numbers are entered, the number you get back is the total number of digits in the value calculated by taking 10^(first number) + 10^(second number), henceforth known as Kelli’s Magic Number.

(Bet you didn’t figure that out on your own!)
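In case the rule is easier to read as code than as prose, here is a throwaway Python sketch of the calculation. The function name is mine, not anything from the actual application:

def kellis_magic_number(first, second):
    # Number of digits in 10^(first) + 10^(second).
    return len(str(10 ** first + 10 ** second))

# The 1/2/3 example from the feature: 10^1 + 10^2 = 110, which has 3 digits.
print(kellis_magic_number(1, 2))  # 3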

Such examples are not simpler at all; instead they mask the complexity of the system, making it easy to miss test cases. And regardless of what the application actually does, you need roughly the same number of test cases, so it doesn’t really save you work. And it is certainly not easier to explain — you can’t even tell what the application does! Properly-written tests can serve as excellent documentation of the application’s features; poorly-written tests can confuse the crap out of all newcomers, some of whom will no doubt be tasked with making changes to said application!

Although it does appear that there are more reusable portions of the code (for certain test frameworks, anyway), it’s worth asking how beneficial that reuse will actually turn out to be. Certainly, you will write other tests that follow the format “I enter ‘X’ into some_text_field, I press the ‘Y’ button, I see ‘Z’ on the screen.” Ah, how easy life is when you can just toss new values into old features and not have to do anything else!

But what happens when we make a change? If we rename the text fields, we have to change the test — even though the functionality has not changed. If we change the text fields to buttons and remove the submit button in favor of an ajax call, the entire test will have to be rewritten — even though the functionality has not changed. And most dangerous of all, what if a new developer takes over the application, and changes the application to simply add the two numbers together? The test will continue to pass even though the functionality has changed, potentially breaking production and leading to lots of embarrassing meetings in which you will, no doubt, be asked why your automated tests didn’t catch that.

“It still returns 3…,” you will say.

And you’ll die a little on the inside.
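To make the trap concrete: with 1 and 2 as inputs, plain addition and Kelli’s Magic Number happen to agree, so the hard-coded test cannot tell the two apart. Here is a quick Python sketch, with made-up stand-ins for the old and new implementations:

def kellis_magic_number(first, second):
    # The intended behavior: number of digits in 10^(first) + 10^(second).
    return len(str(10 ** first + 10 ** second))

def simple_addition(first, second):
    # The new developer's "simplification": just add the two numbers.
    return first + second

# The static 1/2/3 test data cannot distinguish the two implementations...
assert kellis_magic_number(1, 2) == 3
assert simple_addition(1, 2) == 3

# ...but almost any other pair of inputs exposes the regression.
assert kellis_magic_number(2, 3) == 4
assert simple_addition(2, 3) == 5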

What To Do

Always always always test the behavior and not the implementation. Try to write your tests as though you had no familiarity with how the feature is implemented:

When I enter two numbers
And I submit the values
Then I am shown Kelli's Magic Number

(Or, I am told the number of digits in the sum of the values as powers of 10.)
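The implementation details still have to live somewhere, of course; they just belong in the step definitions rather than in the feature itself. Here is a minimal sketch of what that might look like, assuming Python’s behave framework and a hypothetical context.page helper standing in for whatever UI driver you actually use:

from behave import when, then

@when("I enter two numbers")
def enter_two_numbers(context):
    # The concrete values and field names live here, not in the feature file,
    # so renaming a text field only touches this one step definition.
    context.first, context.second = 2, 3
    context.page.fill("first_number", context.first)
    context.page.fill("second_number", context.second)

@when("I submit the values")
def submit_the_values(context):
    # Whether this is a submit button or an Ajax call is an implementation
    # detail hidden behind the step.
    context.page.submit()

@then("I am shown Kelli's Magic Number")
def shown_magic_number(context):
    # Recompute the expected value from the stated rule: the number of digits
    # in 10^(first number) + 10^(second number).
    expected = len(str(10 ** context.first + 10 ** context.second))
    assert context.page.displayed_result() == expected

Now a UI rework means updating these few functions, while the feature file, and the description of the behavior, stays exactly the same.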

Now when new developers or testers come on the team, they will quickly be able to figure out what the application does. When the entire user interface changes to look like a calculator, the feature description itself will stay the same. Developers can tell quickly that they shouldn’t change what the submit button does because it’s a key component of the application… and you can tell just as quickly that you ought to use test data that is a bit less ambiguous than 1/2/3….

But best of all, despite eliminating reusability from some of your tests, you will save time in the long run by never being dragged into a production-break post-mortem to explain to the CTO that you built a functional test that does not, as such, actually test the functionality.

You’ll stop dying on the inside and instead embrace a future free of embarrassing meetings, where your test suite is praised for its reliability and everyone on your team can answer the question, “What does this thing actually do?”
