Specification-based Test Design Techniques for Enhancing Unit Tests Part 1

Written By Anton Angelov

If they write any unit tests at all, the primary goal of most developers is usually achieving 100% code coverage. In this test design how-to article, I am going to show you how to use specification-based test design techniques to cover more of your requirements through your unit tests.

I've seen a lot of unit tests, and of the ones developed by programmers, most didn't cover the requirements entirely. Consider for a second how you write your tests. Do you extract the test inputs from the application's specification documents? If not, you should! By the end of this article, you will know how to design test cases based on specifications through two specific techniques: Equivalence Partitioning and Boundary Value Analysis.

Non-Specification-Based Tests

I have written a simple class to explain the ideas of the article.

The primary goal of this static utility is to return the one-month subscription price for Sofia's public transportation lines. The client submits their age, and the resulting price varies based on it.

0 < Age <= 5 – Price = 0 lv

5 < Age <= 18 – Price = 20 lv

18 < Age < 65 – Price = 40 lv

65 <= Age <= 122 – Price = 5 lv
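The article's original C# listing isn't reproduced here. A minimal Python sketch of the same pricing rules (the function name, error messages, and string-input signature are my assumptions) might look like this:

```python
def calculate_subscription_price(age_input):
    """Return the one-month subscription price (in lv) for a given age.

    Hypothetical re-sketch of the article's C# utility: the age arrives
    as a string and invalid values are rejected with an exception.
    """
    age = int(age_input)  # raises ValueError/TypeError for non-integer input
    if age <= 0:
        raise ValueError("Age must be greater than zero.")
    if age > 122:
        raise ValueError("Age must not exceed 122.")
    if age <= 5:          # 0 < Age <= 5
        return 0
    elif age <= 18:       # 5 < Age <= 18
        return 20
    elif age < 65:        # 18 < Age < 65
        return 40
    else:                 # 65 <= Age <= 122
        return 5
```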

In my opinion, most developers tend to write tests based on their code. They first read the specification, write their code, and then design their tests based on the code itself. They aim to achieve 100% code coverage, not 100% specification coverage. When I think about this trend, I ask myself: "Why would you base your tests on code that could already contain bugs?"

In order to achieve 100% code coverage, only seven tests are needed. For the test examples, I'm going to use NUnit because of its handy attributes (you can see John's productivity tool review of Telerik's DevCraft if you want to play around with NUnit more).

Random? Really? You may be shocked, but a lot of developers use this technique in their tests. The first time I saw something like the code above, I facepalmed for at least five minutes. Using random data in your tests leads to unreliable results: with some generated values the test may pass, while with others it may fail.
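To make that flakiness concrete, here is a small Python simulation. The off-by-one bug is deliberately planted and hypothetical; it stands in for any boundary defect in real code:

```python
import random

# Hypothetical implementation with a planted off-by-one bug: the spec
# says 0 < Age <= 5 rides free, but this code excludes age 5.
def buggy_price(age):
    return 0 if 0 < age < 5 else 20

# A single randomly driven test, in the spirit of NUnit's [Random]
# attribute, only draws the one failing boundary value occasionally.
def single_random_test_passes():
    age = random.randint(1, 5)
    return buggy_price(age) == 0

outcomes = [single_random_test_passes() for _ in range(1000)]
# Most runs pass, some fail: the same test is green or red by luck.
print(any(outcomes), all(outcomes))
```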

Code-based Test Cases

[Random(min: 1, max: 5, count: 1)] then Price = 0, covers the first else-if.

[Random(min: 6, max: 18, count: 1)] then Price = 20, covers the second else-if.

[Random(min: 19, max: 64, count: 1)] then Price = 40, covers the third one.

[Random(min: 65, max: 122, count: 1)] then Price = 5, covers the senior price.

AgeInput = "invalid" validates the first exception scenario, when the user passes a non-integer value.

AgeInput = "0" covers the second defensive check.

AgeInput = "1000" causes the test to go through the last validation check, for the maximum age.

In just seven test cases, we have managed to achieve 100% code coverage. However, it's highly likely these test cases will not catch regression bugs if someone changes one of the "<", ">", ">=", or "<=" conditional operators, for example. Furthermore, this approach to writing tests doesn't guarantee that the code is correct. If the tests are based on buggy code, they won't help us deliver issue-free software. This is where specification-based test design techniques can aid us.

Specification-Based Tests: Based on Equivalence Partitioning

First, let me go over what specification-based testing means.

It is an approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g., tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

The primary goals of Equivalence Partitioning are to reduce the number of test cases to the necessary minimum and to select the right test cases to cover all possible scenarios.

Equivalence Partitioning Hypothesis

The idea is to divide the input domain into subsets of values that the system should treat in the same way. These subsets are called Equivalence Partitions or Equivalence Classes. We then pick only one value from each partition for testing. The hypothesis behind this technique is that if one condition/value in a partition passes, all others will also pass. Likewise, if one condition in a partition fails, all other conditions in that partition will also fail.

It is easy to test a small input range like 1-10, but it's hard to test a range like 2-10000. Equivalence Partitioning helps us follow one of the Seven Testing Principles:

Exhaustive testing is impossible: Testing everything, including all combinations of inputs and preconditions, is not possible. Instead of exhaustive testing, we can use risks and priorities to focus our testing efforts. For example: In an application, on one screen there are 15 input fields, each having 5 possible values. To test all the valid combinations, you would need 30,517,578,125 (5^15) tests. It is highly unlikely the project timescales would allow for this number of tests. Assessing and managing risk is one of the most important activities and reasons for testing in any project.
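The arithmetic in that example is easy to verify:

```python
# 15 independent fields with 5 possible values each:
combinations = 5 ** 15
print(combinations)  # 30517578125
```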

Sometimes it is cheap to write ten tests to cover a small range like 1-10, but most of the time it is not feasible to write hundreds of thousands or millions of tests for larger sets. So we can use specification-based test design techniques to reduce the number of test cases to the necessary minimum.

If I had to write the previously mentioned code for production and also test it, I would probably use Test-Driven Development and design the test scenarios based on the specification requirements.

As you can see, in my tests I'm using the NUnit TestCase attribute. When the test method is executed, seven tests are performed based on the values provided through the attributes. The first value represents the ageInput; the second one is the expected price.

The test cases are derived using equivalence partitions. The number of test cases hasn't increased. However, the key difference is that the tests are based on the specification requirements, not on the code itself. Also, they were written before the code.
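The NUnit listing itself isn't shown here. In Python, the same table-driven idea can be sketched as follows; the calculator is re-sketched inline with hypothetical names so the example is self-contained:

```python
# Table-driven tests in the spirit of NUnit's [TestCase] attribute.
def calculate_subscription_price(age_input):
    age = int(age_input)          # raises ValueError for non-integer input
    if age <= 0 or age > 122:
        raise ValueError("Age out of range")
    if age <= 5:
        return 0
    if age <= 18:
        return 20
    if age < 65:
        return 40
    return 5

# One representative value per valid equivalence partition, taken
# straight from the specification, plus the three invalid partitions.
valid_cases = [("3", 0), ("10", 20), ("30", 40), ("70", 5)]
invalid_inputs = ["invalid", "0", "1000"]

for age_input, expected in valid_cases:
    assert calculate_subscription_price(age_input) == expected

for age_input in invalid_inputs:
    try:
        calculate_subscription_price(age_input)
        raise AssertionError(f"{age_input!r} should have been rejected")
    except ValueError:
        pass

print("all specification-based cases pass")
```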

Equivalence Partitioning Table: Partitions Example

As you can see from the table, there are seven equivalence partitions: four valid and three invalid ones. I cover all of them with the values from the last row of the table.
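The table itself isn't reproduced here, but its content, as implied by the specification, can be captured as plain data. The representative values in the last column are my own picks, standing in for the article's last row:

```python
# (condition, kind, representative value) — four valid and three
# invalid equivalence partitions for the age input.
partitions = [
    ("non-integer input", "invalid", "invalid"),
    ("Age <= 0",          "invalid", "0"),
    ("Age > 122",         "invalid", "1000"),
    ("0 < Age <= 5",      "valid",   "3"),
    ("5 < Age <= 18",     "valid",   "10"),
    ("18 < Age < 65",     "valid",   "30"),
    ("65 <= Age <= 122",  "valid",   "70"),
]

print(len(partitions))                                         # 7
print(sum(1 for _, kind, _ in partitions if kind == "valid"))  # 4
```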

Equivalence Partitioning Errors to Keep in Mind

While this technique is relatively straightforward, people do make some common errors when applying it.

  1. The different subsets cannot have any members in common. If a value were present in two partitions, you could not define how it should behave in each case.

  2. None of the subsets may be empty. If you cannot select a test value from a set, it is not valuable for testing.

Specification-Based Tests: Based on Boundary Value Analysis

So what is Boundary Value Analysis?

It is a black-box test design technique in which test cases are designed based on boundary values. But then, what are boundary values?

Boundary values are input values or output values that are on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, such as, for example, the minimum or maximum value of a range.

This technique refines equivalence partitioning: Boundary Value Analysis is the natural next step after Equivalence Partitioning, in which the test cases are selected at the edges of the equivalence classes. The coverage criterion is that every boundary value, both valid and invalid, must be represented in at least one test.

The main difference is that there are at least two boundary values in each equivalence class. So we'll have about twice as many tests.

Do all equivalence classes have boundary values?

No, definitely not. Boundary Value Analysis applies only when the members of an equivalence class are ordered.

How Many Boundary Values Are There?

There are two views of how many boundary values should exist. Most people believe only two values should be derived from each edge of the equivalence partition. As such, for the condition 0 < Age < 6, the boundary values for the first edge are going to be 0 and 1, and for the second edge, 5 and 6.

In his book Software System Testing and Quality Assurance, Boris Beizer explains the other option: three values per boundary, where every edge is counted as a test value in addition to each of its neighbours. For the previous condition, 0 < Age < 6, the test values for the 0 edge are going to be -1, 0, and 1. For the 6 edge, the test values are going to be 5, 6, and 7.
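A trivial helper makes the two conventions easy to compare (the function names are mine):

```python
def two_point_boundaries(edge, inside_direction):
    """Two-value convention: the edge plus its single neighbour on the
    'inside' of the partition (+1 for a lower edge, -1 for an upper one)."""
    return sorted([edge, edge + inside_direction])

def three_point_boundaries(edge):
    """Beizer's three-value convention: the edge plus both neighbours."""
    return [edge - 1, edge, edge + 1]

# For the condition 0 < Age < 6:
print(two_point_boundaries(0, +1))   # [0, 1]
print(two_point_boundaries(6, -1))   # [5, 6]
print(three_point_boundaries(0))     # [-1, 0, 1]
print(three_point_boundaries(6))     # [5, 6, 7]
```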

In my career, I have experimented with both approaches, and I believe that using the second one I have been able to find more bugs. Because of that, I encourage you to use Boris Beizer's technique despite the increase in the number of test cases.

Tests Using Boundary Value Analysis

Using the Boundary Value Analysis specification-based test design technique, I created a total of 20 tests for the TransportSubscriptionCardPriceCalculator before writing the actual code, based only on the specification requirements.

In order to achieve 100% boundary value analysis coverage, you need only the first 16 tests. However, I added four more tests, because even when test values belong to a common equivalence partition, it doesn't mean they will produce the same result. So I tested CalculateSubscriptionPrice with null, string.Empty, int.MaxValue + 1, and int.MinValue - 1.

Boundary Values based on Requirements

  1. 0 < Age <= 5 – Left Edge: -1, 0, 1; Right Edge: 4, 5, 6

  2. 5 < Age <= 18 – Left Edge: 4, 5, 6; Right Edge: 17, 18, 19

  3. 18 < Age < 65 – Left Edge: 17, 18, 19; Right Edge: 64, 65, 66

  4. 65 <= Age <= 122 – Left Edge: 64, 65, 66; Right Edge: 121, 122, 123
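The C# test class isn't shown here, but the boundary set above, plus the four robustness inputs, can be sketched as one table-driven Python check. The calculator is re-sketched inline with hypothetical names; shared edges such as 5, 18, and 65 appear once in the table, although the article counts them per requirement:

```python
def calculate_subscription_price(age_input):
    age = int(age_input)  # TypeError for None, ValueError for non-numeric
    if age <= 0 or age > 122:
        raise ValueError("Age out of range")
    if age <= 5:
        return 0
    if age <= 18:
        return 20
    if age < 65:
        return 40
    return 5

# Boundary values from the four requirements, with the expected price
# (None means the input must be rejected).
boundary_cases = [
    ("-1", None), ("0", None), ("1", 0),
    ("4", 0), ("5", 0), ("6", 20),
    ("17", 20), ("18", 20), ("19", 40),
    ("64", 40), ("65", 5), ("66", 5),
    ("121", 5), ("122", 5), ("123", None),
]

# Robustness inputs, mirroring the article's null, string.Empty,
# int.MaxValue + 1, and int.MinValue - 1 (C# limits, shown as strings).
robustness_cases = [
    (None, None),
    ("", None),
    ("2147483648", None),
    ("-2147483649", None),
]

for age_input, expected in boundary_cases + robustness_cases:
    try:
        result = calculate_subscription_price(age_input)
    except (TypeError, ValueError):
        assert expected is None, f"{age_input!r} should be valid"
    else:
        assert result == expected, f"{age_input!r}: got {result}"

print("all boundary-value cases pass")
```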

Where Would You Find Boundary Values?

The boundary values of a class are often based on the specification requirements, which explain how the system should behave in different use cases. However, these values are frequently not mentioned in any existing specification document. In such cases, if it is impossible to update the requirements, you can use test oracles.

Test Oracle: A source to determine expected results to compare with the actual results of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but it should not be the code.

For example, if you develop a calculator application and don't have the full specifications about how it should behave in certain cases, you can use the Microsoft Windows built-in calculator for a test oracle.
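The oracle idea can be sketched generically in Python. Here a separate reference implementation plays the oracle's role, standing in for a trusted external source such as the Windows calculator; both function names are my own:

```python
# System under test: our own (possibly buggy) implementation.
def sut_square(x):
    return x * x

# Test oracle: an independent source of expected results. In practice
# this could be an existing benchmark system, a user manual, or an
# expert — anything but the code under test itself.
def reference_square(x):
    return x ** 2

for x in [-3, 0, 1, 12]:
    assert sut_square(x) == reference_square(x), f"mismatch at {x}"

print("SUT agrees with the oracle on all probes")
```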


You can use specification-based test design techniques to write the absolute minimum of unit tests needed to cover all requirements. Equivalence Partitioning and Boundary Value Analysis can save you from the evil practice of designing your tests based on potentially buggy code, which produces passing but not correct tests. Use your knowledge of the system, your intelligence, and your intuition to try more test values, because there are no perfect test design techniques.