Reasons not to add messages when you assert

Pitfalls to be aware of when enhancing JUnit test failure feedback

Tests fail. They're inconsiderate, failing at the most inconvenient of times (especially when there's an urgent production issue that you desperately need to release a hotfix for) with feedback that can often be irritatingly enigmatic.

When tests fail, they need to fail in a way that is helpful, obvious and, ideally, in a way that identifies a specific piece of logic that needs looking at.

What you don't want to see is this...

java.lang.AssertionError
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertTrue(Assert.java:52)
   ...location of failed assertion...

...the equivalent of a 'something failed here' marker.
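That sort of output is exactly what a bare boolean assert produces; a hypothetical example:

assertTrue(results.contains(jonDoe));

All it can report is that the condition was false; nothing about what results actually contained.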

It would be better if the failed assert provided more information. What better way to do this than by carefully crafting a suitable message and passing it to the assert method to display when it fails? Isn't that the purpose of the optional String parameter on every JUnit assert method? The assert by itself doesn't know the context and purpose of the test so it's left to the developer (who presumably does know) to provide it.

Or so the common wisdom (and PMD/SonarQube ruleset) goes.
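For reference, the message is the optional first parameter on each JUnit 4 assert overload; a minimal sketch:

// no description: the failure relies on the built-in output alone
assertEquals(5, index);

// with a description: the text is prepended to the failure output
assertEquals("index should start at 5", 5, index);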

How useful are these messages, though? Is the benefit from the extra feedback sufficient to offset the cost of adding and maintaining these plain text descriptions in a codebase?

Here are a few reasons not to add descriptions to your assertions.

1. Providing meaningful descriptions is problematic

Bad descriptions are easy to write.

They fail to provide the benefit that justified adding them in the first place, and they can actively harm a test's usefulness by misleading the developer with incorrect information. That misdirection can range from sending someone off on a wild goose chase after something completely unrelated, to stating the exact opposite of what actually failed. This is particularly easy to do with assertTrue() / assertFalse():

assertTrue("index was not less than 5", index < 5);

which always sounds a little self-contradictory when read out loud:

assert true, index was not less than 5, index is less than 5
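When the assert does fail, JUnit reports just the supplied text, roughly:

java.lang.AssertionError: index was not less than 5

leaving the reader to work out whether that sentence describes the expected condition or the actual failure.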

The inconsistent failure messages produced by the pre-JUnit 4 assertions add to the fun of crafting text that integrates correctly with the output to provide meaningful feedback.
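As a sketch of that inconsistency (assuming plain JUnit 4): assertEquals prepends the description to its built-in expected/actual report, while assertTrue uses the description as the entire failure message.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

// fails with something like: wrong index expected:<5> but was:<7>
assertEquals("wrong index", 5, index);

// fails with only: wrong index
assertTrue("wrong index", index == 5);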

2. Adds unhelpful duplication to test code

Sometimes a little duplication in tests can be useful. But not here. To provide useful assertion failure feedback, the comparison being done in the code has to be reiterated in the description.

assertTrue("jonDoe is missing from results", results.contains(jonDoe));

assertNotNull("results were null", results);

The values being asserted on occur multiple times too: as parameters to the assertion and in the description.

assertTrue("index was not less than 5", index < 5);

3. Decreases test readability

Attempting to remove the duplication by extracting the expected and actual values into variables produces a mess of string concatenation...

int expectedLessThan = 5;
assertTrue(
  "index(" + index + ") was not less than " + expectedLessThan,
  index < expectedLessThan);

Burying simple assertions under a mass of strings and variables makes it harder for the reader to parse and figure out what the code is trying to do. Compare it to the vanilla version of the assert before effort was put in to improve the failure message:

assertTrue(index < 5);

If the amount of code the reader is forced to read were related to its importance, string construction would clearly be the purpose of these tests.

4. Keeping descriptions in sync with code is painful

Descriptions are just plain text strings. They're connected to the code because they describe it. Unfortunately, because they're just strings, we don't get any of the compile-time checking we're so used to in Java. If characters within the string are added, removed or jumbled up, no automated process runs to check that the string still makes sense; it has to be done manually.
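A hypothetical before-and-after makes the point: rename index to cursor and the compiler happily accepts a description that now refers to a variable that no longer exists.

// before the rename
assertTrue("index was not less than 5", index < 5);

// after renaming index to cursor - the stale description still compiles
assertTrue("index was not less than 5", cursor < 5);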

Modern refactoring IDEs, with their over-zealous string matching, can easily and accidentally break this connection. In IntelliJ, a rename can turn up a huge number of hits in the find window, warning you that it will change all of those occurrences as well.

Typically we'll turn this off, hit the delete key to remove the search results from the refactoring step, or even disable the text search part of the refactoring by default, to protect against inadvertently messing up some unrelated text.

When it comes to assert descriptions, however, this pushes the responsibility for correctly updating any symbols embedded in the plain text onto the developer.

Constructing meaningful failure feedback for your tests is a laudable goal, and writing tailored descriptions for one or two important asserts is not a problem. However, given the brittle connection between a plain text description and its assertion, applying this repeatedly and consistently across a test suite becomes a huge pain to maintain over time.

The name of the game is feedback

Watching a test fail as part of the TDD cycle gives you the opportunity to judge how good the feedback is before it's seen 'for real'.

Put yourself in the shoes of someone (perhaps you, perhaps one of your teammates) who needs to fix this test in the future. Does it tell you exactly what you need to know to start your investigation? Can you see what the expected and actual values were? Would you need to burn time deciphering what the test failure means?

Bear in mind the cost of adding messages to asserts. There are simpler alternatives out there that give better failure feedback for comparatively little extra effort:
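One such alternative (a sketch assuming JUnit 4 and the Hamcrest core matchers it bundles) is assertThat, where the matcher builds the expected and actual values into the failure output without any hand-written description:

import static org.hamcrest.CoreMatchers.hasItem;
import static org.junit.Assert.assertThat;

// on failure, Hamcrest describes what was expected (a collection
// containing jonDoe) and what the collection actually held
assertThat(results, hasItem(jonDoe));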

Adding a description should be the last thing to try.