1. Introduction
Spock is a great framework for writing tests, and its data-driven features make it easy to increase our test coverage.
In this tutorial, we’ll explore Spock’s data pipes and how to improve our line and branch code coverage by adding extra data to a data pipe. We’ll also look at what to do when our data gets too big.
2. The Subject of Our Test
Let’s start with a method that adds two numbers but with a twist. If the first or second number is 42, then return 42:
public class DataPipesSubject {

    int addWithATwist(final int first, final int second) {
        if (first == 42 || second == 42) {
            return 42;
        }
        return first + second;
    }
}
We want to test this method using various combinations of inputs.
Let’s see how to write and evolve a simple test to feed our inputs via a data pipe.
3. Preparing Our Data-Driven Test
Let’s create a test class with a test for a single scenario and then build on it to add data pipes.
First, let’s create our DataPipesTest class with the subject of our test:
@Title("Test various ways of using data pipes")
class DataPipesTest extends Specification {

    @Subject
    def dataPipesSubject = new DataPipesSubject()

    // ...
}
We’ve used Spock’s @Title annotation on the class to give ourselves some extra context for the upcoming tests.
We’ve also annotated the subject of our test with Spock’s @Subject annotation. Note that we should be careful to import our Subject from spock.lang rather than from javax.security.auth.
Although not strictly necessary, this syntactic sugar helps us quickly identify what’s being tested.
Now let’s create a test with our first two inputs, 1 and 2, using Spock’s given/when/then syntax:
def "given two numbers when we add them then our result is the sum of the inputs"() {
    given: "some inputs"
    def first = 1
    def second = 2

    and: "an expected result"
    def expectedResult = 3

    when: "we add them together"
    def result = dataPipesSubject.addWithATwist(first, second)

    then: "we get our expected answer"
    result == expectedResult
}
To prepare our test for data pipes, let’s move our inputs from the given/and blocks into a where block:
def "given a where clause with our inputs when we add them then our result is the sum of the inputs"() {
    when: "we add our inputs together"
    def result = dataPipesSubject.addWithATwist(first, second)

    then: "we get our expected answer"
    result == expectedResult

    where: "we have various inputs"
    first = 1
    second = 2
    expectedResult = 3
}
Spock evaluates the where block and implicitly adds any variables as parameters to the test. So, Spock sees our method declaration like this:
def "given some declared method parameters when we add our inputs then those types are used"(int first, int second, int expectedResult)
Note that when we want to coerce our data into a specific type, we declare the typed variable explicitly as a method parameter.
Since our test is very simple, let’s condense the when and then blocks into a single expect block:
def "given an expect block to simplify our test when we add our inputs then our result is the sum of the two numbers"() {
    expect: "our addition to get the right result"
    dataPipesSubject.addWithATwist(first, second) == expectedResult

    where: "we have various inputs"
    first = 1
    second = 2
    expectedResult = 3
}
Now that we’ve simplified our test, we’re ready to add our first data pipe.
4. What Are Data Pipes?
Data pipes in Spock are a way of feeding different combinations of data into our tests. This helps to keep our test code readable when we have more than one scenario to consider.
A data pipe can be fed by any Iterable – we can even create our own class, as long as it implements the Iterable interface!
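To illustrate, here’s a minimal sketch of such a custom Iterable – a hypothetical EvenNumbers class (not part of the article’s test suite), written in Java like our subject class:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical example: an Iterable that yields the first `count` even numbers.
// Because it implements Iterable, Spock could consume it directly as a data pipe,
// e.g. `first << new EvenNumbers(3)` would feed 0, 2, and 4 into three iterations.
class EvenNumbers implements Iterable<Integer> {

    private final int count;

    EvenNumbers(final int count) {
        this.count = count;
    }

    @Override
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private int emitted = 0;

            @Override
            public boolean hasNext() {
                return emitted < count;
            }

            @Override
            public Integer next() {
                if (!hasNext()) {
                    throw new NoSuchElementException();
                }
                return 2 * emitted++;
            }
        };
    }
}
```

Since Spock only iterates the pipe, anything with a conforming iterator() works – collections, ranges, streams via iterator(), or hand-rolled classes like this one.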
4.1. Simple Data Pipes
Since Groovy lists are Iterable, let’s start by converting our single values into lists and using the data pipe operator ‘<<’ to feed them into our test:
where: "we have various inputs"
first << [1]
second << [2]
expectedResult << [3]
We can add additional test cases by adding entries to each data pipe’s list.
So let’s add some data to our pipes for the scenarios 2 + 2 = 4 and 3 + 5 = 8:
first << [1, 2, 3]
second << [2, 2, 5]
expectedResult << [3, 4, 8]
To make our test a bit more readable, let’s combine our first and second inputs into a multi-variable data pipe, leaving our expectedResult separate for now:
where: "we have various inputs"
[first, second] << [
    [1, 2],
    [2, 2],
    [3, 5]
]

and: "an expected result"
expectedResult << [3, 4, 8]
Since we can refer to feeds that we’ve already defined, we could replace our expected result data pipe with the following:
expectedResult = first + second
But let’s combine it with our input pipes since the method we’re testing has some subtleties that would break a simple addition:
[first, second, expectedResult] << [
    [1, 2, 3],
    [2, 2, 4],
    [3, 5, 8]
]
4.2. Maps and Methods
When we want more flexibility, and we’re using Spock 2.2 or later, we can feed our data using a Map as our data pipe:
where: "we have various inputs in the form of a map"
[first, second, expectedResult] << [
    [
        first         : 1,
        second        : 2,
        expectedResult: 3
    ],
    [
        first         : 2,
        second        : 2,
        expectedResult: 4
    ]
]
We can also pipe in our data from a separate method:
[first, second, expectedResult] << dataFeed()
Let’s see what our map data pipe looks like when we move it into a dataFeed method:
def dataFeed() {
    [
        [
            first         : 1,
            second        : 2,
            expectedResult: 3
        ],
        [
            first         : 2,
            second        : 2,
            expectedResult: 4
        ]
    ]
}
Although this approach works, using multiple inputs still feels clunky. Let’s look at how Spock’s Data Tables can improve this.
5. Data Tables
Spock’s data table format combines two or more data pipes into a table, making them more visually appealing.
Let’s rewrite the where block in our test method to use a data table instead of a collection of data pipes:
where: "we have various inputs"
first | second || expectedResult
1     | 2      || 3
2     | 2      || 4
3     | 5      || 8
So now, each row contains the inputs and expected results for a particular scenario, which makes our test scenarios much easier to read.
As a visual cue and as a best practice, we’ve used a double pipe ‘||’ to separate our inputs from our expected result.
When we run our test with code coverage for these three iterations, we see that not all the lines of execution are covered. Our addWithATwist method has a special case when either input is 42:
if (first == 42 || second == 42) {
    return 42;
}
So, let’s add a scenario where our first input is 42, ensuring that our code executes the line inside our if statement. Let’s also add a scenario where our second input is 42 to ensure that our tests cover all the execution branches:
42    | 10     || 42
1     | 42     || 42
So here’s our final where block with iterations that give our code line and branch coverage:
where: "we have various inputs"
first | second || expectedResult
1     | 2      || 3
2     | 2      || 4
3     | 5      || 8
42    | 10     || 42
1     | 42     || 42
When we execute these tests, our test runner renders a row for each iteration:
DataPipesTest
- use table to supply the inputs
- use table to supply the inputs [first: 1, second: 2, expectedResult: 3, #0]
- use table to supply the inputs [first: 2, second: 2, expectedResult: 4, #1]
...
6. Readability Improvements
We have a few techniques that we can use to make our tests even more readable.
6.1. Inserting Variables Into Our Method Name
When we want more expressive test executions, we can add variables to our method name.
So let’s enhance our test’s method name by inserting the column header variables from our table, prefixed with a ‘#’, and also add a scenario column:
def "given a #scenario case when we add our inputs, #first and #second, then we get our expected result: #expectedResult"() {
    expect: "our addition to get the right result"
    dataPipesSubject.addWithATwist(first, second) == expectedResult

    where: "we have various inputs"
    scenario       | first | second || expectedResult
    "simple"       | 1     | 2      || 3
    "double 2"     | 2     | 2      || 4
    "special case" | 42    | 10     || 42
}
Now, when we run our test, our test runner renders the output as the more expressive:
DataPipesTest
- given a #scenario case when we add our inputs, #first and #second, then we get our expected result: #expectedResult
- given a simple case when we add our inputs, 1 and 2, then we get our expected result: 3
- given a double 2 case when we add our inputs, 2 and 2, then we get our expected result: 4
...
When we use this approach but type the data pipe name incorrectly, Spock will fail the test with a message similar to this:
Error in @Unroll, could not find a matching variable for expression: myWrongVariableName
As before, we can reference feeds that we’ve already declared in our table data, even within the same row.
So, let’s add a row that references our column header variables first and second:
scenario              | first | second || expectedResult
"double 2 referenced" | 2     | first  || first + second
6.2. When Table Columns Get Too Wide
Our IDEs may have built-in support for Spock’s tables – we can use IntelliJ’s “format code” feature (Ctrl+Alt+L) to align the columns in the table for us! Knowing this, we can add our data quickly without worrying about the layout and format it afterward.
Sometimes, however, the length of data items in our tables causes a formatted table row to become too wide to fit on one line. Usually, that’s when we have Strings in our input.
To demonstrate this, let’s create a method that takes a String as an input and simply adds an exclamation mark:
String addExclamation(final String first) {
    return first + '!';
}
Let’s now create a test with a long string as an input:
def "given long strings when our tables are too big then we can use shared or static variables to shorten the table"() {
    expect: "our addition to get the right result"
    dataPipesSubject.addExclamation(longString) == expectedResult

    where: "we have various inputs"
    longString || expectedResult
    'When we have a very long string we can use a static or @Shared variable to make our tables easier to read' || 'When we have a very long string we can use a static or @Shared variable to make our tables easier to read!'
}
Now, let’s make this table more compact by replacing the string with a static or @Shared variable. Note that our table can’t use instance variables declared in our test – a table can only reference static, @Shared, or computed values.
So, let’s declare a static and shared variable and use those in our table instead:
static def STATIC_VARIABLE = 'When we have a very long string we can use a static variable'

@Shared
def SHARED_VARIABLE = 'When we have a very long string we can annotate our variable with @Shared'

...

scenario         | longString      || expectedResult
'use of static'  | STATIC_VARIABLE || "$STATIC_VARIABLE!"
'use of @Shared' | SHARED_VARIABLE || "$SHARED_VARIABLE!"
Now our table is much more compact! We’ve also used Groovy’s String interpolation to expand the variables in our double-quoted strings in our expected result to show how that can help readability. Note that just using $ is enough for simple variable substitution, but for more complex cases we need to wrap our expression inside curly braces ${}.
Another way we can make a large table more readable is to split the table into multiple sections by using two or more underscores ‘__’:
where: "we have various inputs"
first | second
1     | 2
2     | 3
3     | 5
__
expectedResult | _
3              | _
5              | _
8              | _
Of course, we need to have the same number of rows across the split tables.
Spock tables must have at least two columns, but after we split our table, expectedResult would have been on its own, so we’ve added an empty ‘_’ column to meet this requirement.
6.3. Alternative Table Separators
Sometimes, we may not want to use ‘|’ as a separator. In such cases, we can use ‘;’ instead:
first ; second ;; expectedResult
1     ; 2      ;; 3
2     ; 3      ;; 5
3     ; 5      ;; 8
But we can’t mix and match both ‘|’ and ‘;’ column separators in the same table!
7. Conclusion
In this article, we learned how to use Spock’s data feeds in a where block. We saw how data tables give data feeds a more readable, visual representation, and how we can improve our test coverage by simply adding a row of data to a data table. We also explored a few ways of making our data more readable, especially when dealing with large data values or when our tables get too big.
As usual, the source for this article can be found over on GitHub.