In my last post (Paradox of Unit Testing?) I mentioned that unit tests can give you a false sense of security. This is especially true if you don't know how and what to unit test. My examples here will be in pseudo-SQL, not tsqlt. The concepts are equally applicable to junit or nunit or whatever. The following is a list of useless unit tests that I see every day. This list is not exhaustive.
Having Too Many Tests
How many tests are too many and how many tests are too few? This is totally subjective. Assume the following code:
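The original code was a screenshot; a minimal sketch of the kind of procedure and tests I mean (the names are illustrative, and the tests are pseudo-SQL, not tsqlt):

```sql
-- Illustrative sketch: a trivial procedure that echoes a bit parameter.
CREATE PROCEDURE dbo.PrintBit
    @Value bit
AS
BEGIN
    PRINT CAST(@Value AS varchar(1));  -- "standard output"
END
GO

-- The six tests, in spirit (pseudo-SQL assertions):
-- EXEC dbo.PrintBit 1;        -- assert prints '1'
-- EXEC dbo.PrintBit 0;        -- assert prints '0'
-- EXEC dbo.PrintBit NULL;     -- assert prints nothing (NULL)
-- EXEC dbo.PrintBit 'true';   -- assert the string converts to 1, per bit conversion rules
-- EXEC dbo.PrintBit 2;        -- assert any nonzero value converts to 1, per bit conversion rules
-- EXEC dbo.PrintBit DEFAULT;  -- assert an error, since no default is defined
```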
It's very simple: it takes a bit parameter and PRINTs it to the "standard output." Note that I included six unit tests attempting to exercise exactly what happens when I pass in different things for @Value. But is any of this valuable? No. Any person with some experience with SQL Server will know how the parameter rules for stored procedures work. These six tests add no value: they do not exercise the code, and they add no documentation value. Yet you will often see procedures with hundreds of tests...and many, if not most, are tests like this. I think developers feel that the more unit tests they have, the better their code will be received by their peers. But reading hundreds of tests is not useful. When I write unit tests and find I have too many (again, "many" being subjective), I consider breaking the logic up into multiple routines, each with a smaller concern.
Commutativity and Associativity Tests
This is a contrived example but I see unit tests where the net effect of the test, usually unintentional, is testing commutativity. Given the following screenshot:
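Since the screenshot shows only a trivial addition procedure, a hedged sketch of the same idea (names are illustrative):

```sql
-- Illustrative sketch: a procedure whose result is commutative by definition.
CREATE PROCEDURE dbo.AddTwoInts
    @a int,
    @b int
AS
BEGIN
    SELECT @a + @b AS Result;
END
GO

-- The pointless test: asserting @a + @b = @b + @a.
-- EXEC dbo.AddTwoInts @a = 2, @b = 3;  -- expect 5
-- EXEC dbo.AddTwoInts @a = 3, @b = 2;  -- expect 5, "proving" commutativity
```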
You may see tests like "assert that passing in arguments in a different ordinal order generates the same results." In the case above, that kind of test is pointless. Commutativity and associativity can safely be assumed by the developer of a stored procedure or Java class. You shouldn't need to assert that the language upholds the rules of arithmetic.
Attempting to Assert an Invariant and the Null Hypothesis
There are some big words there. An "invariant" is some element of your code that cannot, or should not, change. It is impossible to test such an invariant with a unit test. The correct place to assert your invariant is directly in your code.
Why is it impossible to test that an invariant is invariant in a unit test? Simple logic dictates that if you could prove an invariant can vary, then it is NOT truly an invariant, and therefore the entire premise for what you are testing is flawed...your code is flawed, your tests are flawed, possibly even your architecture, framework, and design are flawed. A "varying invariant" can only be discovered "in the wild" and when that happens you realize your assumptions and code are flawed.
This is known as "null hypothesis testing." The null hypothesis here states that while I may not be able to prove that my code does not break in every (possibly infinite) scenario, that does not mean it is free of flaws. Instead, you need only demonstrate ONE case where my code fails to prove it is flawed. In other words, I may never be able to prove "correctness," but you should be capable of proving falseness just once.
OK. Enough theory. Assume this piece of contrived code:
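The code was a screenshot; reconstructed as a sketch, it looked something like this:

```sql
-- Contrived, but the point stands: the result is always >= 1.
CREATE PROCEDURE dbo.AbsPlusOne
    @Value int
AS
BEGIN
    SELECT ABS(@Value) + 1 AS Result;
END
GO
```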
Clearly, following the principles of mathematics, the result must always be >= 1. It can be nothing else. We know this because, a priori, an absolute value function will always return a non-negative value and adding 1 to that result must therefore give me at least 1. Knowledge that is a priori is simply knowledge that does not need experiential justification. It just "is".
So, given the laws of addition and absolute value, we know that the above procedure will always return a value >= 1. Yet you may encounter developers who insist on writing unit tests to prove that. It can't be done. The result is an invariant: it will always be >= 1. A unit test called something like "assert that the result set is always >= 1" is, first, not actually testable (you would need the null hypothesis; short of a counterexample, the assumption must hold), and second, clutter that drowns out any valuable unit tests that may exist.
Testing Language Elements (a posteriori tests)
First, a definition of a posteriori: a posteriori knowledge is experiential knowledge...you need to experience it, or empirically test it, to understand it. This is the category of useless unit tests that I see the most. On the left is our AbsPlusOne procedure from earlier. I ALWAYS see unit tests that exercise what happens when the procedure is passed the wrong data type.
You hopefully already knew that passing a string to an int parameter would generate an error, such as we see above on the right. If you really knew your TSQL esoterica, you knew this would generate Msg 8114 with the EXACT text displayed in the screenshot. Most of us don't know the esoterica that well; we simply know that we should never, ever pass a string to an int. How do we know that? Through experience: a posteriori knowledge.
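For reference, the failure mode looks roughly like this (the exact message text and state may vary by SQL Server version):

```sql
EXEC dbo.AbsPlusOne @Value = 'not a number';
-- Msg 8114, Level 16: Error converting data type varchar to int.
```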
So the question becomes, is this a valid unit test? Should a unit test assume the reader has no knowledge of how the language works? I say no. If you don't know TSQL, then the unit tests are way over your head. These unit tests are just noise to the serious developer who is trying to figure out what your procedure does by looking at your comments and unit tests.
Here is a more blatant example:
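The screenshot showed something along these lines (an illustrative sketch, with pseudo-SQL tests):

```sql
-- A ridiculously simple procedure built on IF/ELSE branching.
CREATE PROCEDURE dbo.DescribeSign
    @n int
AS
BEGIN
    IF @n > 0      PRINT 'positive';
    ELSE IF @n < 0 PRINT 'negative';
    ELSE           PRINT 'zero';
END
GO

-- Five tests that merely re-verify that IF/ELSE branching works:
-- EXEC dbo.DescribeSign 5;    -- assert prints 'positive'
-- EXEC dbo.DescribeSign -5;   -- assert prints 'negative'
-- EXEC dbo.DescribeSign 0;    -- assert prints 'zero'
-- EXEC dbo.DescribeSign 1;    -- assert prints 'positive'
-- EXEC dbo.DescribeSign -1;   -- assert prints 'negative'
```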
Here we have five tests for a ridiculously simple procedure. Simplicity of the procedure does not, by itself, mean unit tests are unnecessary; I've written many simple procs with a handful of tests that help document assumptions for the next developer. But in this case we have five tests that simply assert that "IF" branching works as advertised by Microsoft and Sybase in TSQL. Those tests are not helpful; they just clutter things up and add noise.