Paradox of Unit Testing?

A quick Google search shows no hits for "paradox of unit testing", but there are equivalents, so I don't claim to have invented the term.  What I'm about to tell you is so patently obvious that it is galling to me that no one else seems to see this and attempt to change it.  My "paradox of unit testing" is quite simple...the more we unit test, the worse our code quality is and the less software we sell.  Please don't construe this to mean I am against testing or unit tests.  I'm not.  I'm against excessive testing that shows no value.  And automated testing may actually be making your product worse!  
 
Over-Reliance on Automated (Unit) Testing Tools
For the last half decade every one of my employers has tried to automate more and more of the QA work.  True, for the small (less-than-5%) slice of the code that has automated tests, the code is essentially bug-free.  However, a machine cannot find bugs it doesn't know how to test for.  Here's a well-known story that you can google if you don't believe me.  When M$ was developing the Vista release of Windows there was an internal management push to use automated testing tools.  The reports indicated all of the tests passed.  Yet the public reception of Vista was less-than-stellar.  Users felt the interface was inconsistent, unpolished, and full of bugs.  An interface becomes aesthetically pleasing when it has consistent patterns of behavior and look-and-feel.  How do you automate testing of that?  You don't.  QA testers will only file bugs for those issues after they use features repeatedly and notice the inconsistencies.  These problems, obvious to a human, do not fit an automated testing tool's definition of a bug.  Taken in toto, these "bugs" led users to generally feel that Vista was inferior to XP.  
 
I am not at all claiming that automated testing tools should be tossed.  I'm merely saying that an over-reliance on them can be detrimental to the general perception of your software product.  
 
Goddamn Sonar and Code Coverage Metrics!
This is a family blog but occasional blasphemy is necessary when something is too egregious.  Sonar is code quality management software.  Sonar can tell you, for instance, which Java classes have unit tests and how many code branches have ZERO tests.  It can then drill down into your code and estimate where the bugs probably are based on the technical debt it finds.  This is less-than-perfect, but it gives you a good feel for where your bugs may be, and I'm all for that.  It's another tool in the toolbelt.  
 
The problem is the tool gets a bit cute with its management reporting capabilities.  For instance, let's say your "Code Coverage" is a measly 7% (i.e., 7% of your code has identifiable unit tests).  Is that bad?  If I were management, I'd be pissed.  The fact is, you don't need to unit test EVERY line of code.  Do you need to ASSERT that an "if" statement can evaluate a binary proposition and correctly switch code paths?  I think not.  If we needed a formal JUnit test for every line of code our projects would be even further behind.  
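To show what I mean, here's a hypothetical JUnit test (class and method names are invented, not from any real codebase) of the kind people write purely to make the coverage number go up:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class, invented purely for illustration.
class FeatureFlag {
    boolean enabled;

    String label() {
        if (enabled) {          // the branch Sonar flags as "uncovered"
            return "ON";
        }
        return "OFF";
    }
}

public class FeatureFlagTest {
    @Test
    public void ifStatementSwitchesPaths() {
        FeatureFlag flag = new FeatureFlag();

        flag.enabled = true;
        assertEquals("ON", flag.label());   // proves the if statement branches...

        flag.enabled = false;
        assertEquals("OFF", flag.label());  // ...which the language vendor already verified
    }
}
```

The coverage report goes green, but no bug that matters to a user was ever going to be caught here.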
 
There is another report for "Code Duplication".  It's not uncommon to see this at 25%, which again freaks out management.  Management says, "I thought you said you weren't supposed to copy/paste code.  Shouldn't that be a class or method? I don't understand you developers."  Well, the fact is that sometimes duplicated code is a good thing.  It's well known that T-SQL scalar functions (a code reusability feature) perform horrendously.  So in that case, a little duplication is good.  
 
But my favorite report is "Not enough or too many comments."  Whoa.  Is that ever subjective?  Coders:  "Uh oh.  We're only at 26.8% API documentation.  Let's spend the next sprint getting to 75%."  Does that sound like a good use of time?  
 
The bottom line is management is always looking for metrics to improve productivity.  Tools like Sonar cause developers agita as they have to refocus from meaningful work to adding possibly unnecessary tests and code comments simply to get higher Sonar scores.  Will quality improve?  Doubtful.  The only metric that management or coders should ever worry about is "software sold."  Everything else is poppycock.  
 
Wasteful unit tests that assert language constructs and invariants
 
I'm going to cover this in the next blog post with examples.  What I see is that many of the unit tests people write merely assert that an IF statement works the way the language vendor documented it.  That adds no value.  Another example is asserting that an invariant in your code has not changed.  A unit test is not the place for this.  More to come in [[Useless Unit Tests]], but here's a quick taste below.  
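As a hypothetical taste of the invariant flavor (names are made up; the real examples come in the next post):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical production constant -- names invented for illustration.
class RetryPolicy {
    static final int MAX_RETRIES = 3;
}

public class RetryPolicyTest {
    @Test
    public void maxRetriesIsStillThree() {
        // This just restates the invariant. If someone changes the constant on
        // purpose, they change this assert too; no real bug is ever caught here.
        assertEquals(3, RetryPolicy.MAX_RETRIES);
    }
}
```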
 
Management vs Developers
I hear management saying we have poor quality software.  Sometimes they'll dictate to the coders how to solve the problem..."we are a Kanban shop now.  We will follow good Kanban methods and quality shall improve."  Or, more often, management will ask the coders how to improve quality.  Unfortunately, the coders usually say, "more and better testing." The answer should really be more time and resources, but we know that ain't happening.  
 
But the solution is rarely more unit tests.  In fact, I don't even agree with the claim that software quality is poor.  The computer software you use daily is of really good quality compared to everything else in your life.  It only *seems* like software quality sucks because you are biased towards remembering only what you don't like.  Sorry if you don't agree with me.  
 

The solution to software quality problems is simple.  There is management and there are coders.  Coders want to solve software quality problems by adding more software...automation tools, TDD, more unit tests.  These tools are meant to prove to management that software is bug-free.  But how do you prove definitively that software is totally bug-free?  You can't.  The logic we coders use is flawed.  

Management doesn't care about any of this.  They want:

  1. more feature functionality
  2. fewer bugs
  3. faster to market
  4. cheaper

Management couldn't care less (or shouldn't) about TDD or dynamic logic code or more formal testing.  (But they do like the pretty Sonar graphs that they can use to show that their developers are lunkheads.)  But management does understand good economics and the Law of Diminishing Returns.  If we focus on tracking down and fixing every last bug we may get better quality, but we will have lost in the marketplace.  

Good management knows that the goal is not bug-free software; the goal is software that has just few enough bugs that a sucker, er, customer, will spend money on it.  Many coders don't realize this or have forgotten it.  Stop worrying about endless testing.  

What is the best way for a coder to test her code?

Given that you believe me and my contention that we test too much, what is the ONE thing we absolutely should do as developers to improve quality?  

Code reviews

There is no substitute.  I have spent 10 minutes looking at another person's code and pulled out handfuls of bugs.  This is code that had full unit tests.  Likewise, I've had other, more "junior" people code review my work, and within 10 minutes they've spotted bugs that I thought I had test coverage for.  
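Here's a hypothetical sketch of what I mean (names invented).  The test passes and every line is "covered," yet any reviewer would ask about the empty list inside of ten seconds:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import java.util.List;

// Hypothetical code with "full" unit test coverage -- names invented.
class Stats {
    static long average(List<Long> orderTotals) {
        long sum = 0;
        for (long total : orderTotals) {
            sum += total;
        }
        return sum / orderTotals.size();   // reviewer: what happens with an empty list?
    }
}

public class StatsTest {
    @Test
    public void averagesHappyPath() {
        // Green build, every line covered... and an ArithmeticException waiting
        // for the first empty list in production.
        assertEquals(150L, Stats.average(List.of(100L, 200L)));
    }
}
```

Sonar sees 100% coverage on that class.  A human sees a production incident.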

Always code review.