The Right Amount of Testing
If you have made a promise to a customer, you need to deliver on it. If you don’t, bad things happen.
So, if you and the customer have agreed that you’re going to deliver certain functionality in exchange for money, then the timely and complete delivery of that functionality ought to be pretty high on your to-do list.
If you test nothing else, the documented commitments you have made constitute the stuff you gotta test, one way or another. That's your lower bound right there.
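To make that lower bound concrete, here’s a minimal sketch of a documented commitment captured as an executable check. The requirement ID, the discount rule and the `price_order` function are all invented for illustration; the point is that each promise you’ve put in writing maps to at least one test that will complain if you break it.

```python
# A minimal sketch, assuming a hypothetical documented commitment:
# "REQ-017: orders over $100 receive a 10% discount."
# The requirement ID, the rule and price_order() are invented for illustration.

def price_order(subtotal: float) -> float:
    """Toy stand-in for the pricing logic you actually promised to deliver."""
    discount = 0.10 if subtotal > 100 else 0.0
    return round(subtotal * (1 - discount), 2)


def test_req_017_discount_over_threshold():
    """REQ-017: orders over $100 get 10% off."""
    assert price_order(150.00) == 135.00


def test_req_017_no_discount_at_threshold():
    """REQ-017 (boundary): exactly $100 gets no discount."""
    assert price_order(100.00) == 100.00
```

Run that with pytest and you have the beginnings of a repeatable answer to “did we deliver what we promised?”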
(You folks who came here trying to figure out the absolute bare minimum of testing you can get away with - don’t leave just yet: even that simple-sounding minimum is hard to manage when requirements keep changing.)
Achieving requirements-based test coverage is a non-trivial problem once you have more than a handful of evolving requirements. With lots of stakeholder collaboration, short iterations and developers learning ever more about what they’re implementing, requirement churn is inevitable. In an Agile project, requirement churn is pretty much the goal.
For most organisations, even maintaining a definitive description of the deliverable is difficult enough. Very few can religiously update their test plan in perfect synchronisation with ever-changing requirements (if you can, I’d love to hear how you’re doing it).
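One lightweight way to chase that synchronisation, sketched below under some assumptions: tag each test with the requirement it protects, so that when a requirement changes you can find its tests in seconds. The `requirement` marker and the REQ numbers are my own invention; pytest only supplies the custom-marker machinery.

```python
# A sketch, not a prescription: the "requirement" marker and the REQ IDs are
# invented for illustration. Register the marker in pytest.ini to silence the
# unknown-marker warning.
import pytest


def checkout_total(subtotal: float, express_shipping: bool) -> float:
    """Toy stand-in for whatever your contract actually covers."""
    return subtotal + (15.0 if express_shipping else 0.0)


@pytest.mark.requirement("REQ-021")
def test_express_shipping_adds_flat_fee():
    """REQ-021: express shipping costs a flat $15."""
    assert checkout_total(80.0, express_shipping=True) == 95.0


@pytest.mark.requirement("REQ-022")
def test_standard_shipping_is_free():
    """REQ-022: standard shipping costs nothing extra."""
    assert checkout_total(80.0, express_shipping=False) == 80.0
```

From there, a few lines in conftest.py iterating over `item.iter_markers(name="requirement")` can print a requirement-to-test map on demand, which is about as close to keeping the test plan in sync as most of us ever get.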
Before we go on, let’s address that grinding sound you’ve been hearing. That’s the TDD purists gnashing their teeth. Shouldn’t we be building our test program from the bottom up, writing a test for every bit of code before we write the code itself?
True, there is much more to testing than requirements-based testing, but let’s rip the TDD band-aid off quickly: strictly speaking, the internal detail of all your wonderful code doesn’t matter, and neither does whether you test it. From the perspective of keeping the lawyers on their chains and getting paid for your work, what matters is that you kept the promises you made to your customer.
To sink the boot a little further: if you insist on testing every single element of code individually, you will take longer and cost more. Much of the effort you pour into low-level tests becomes redundant: as your code base grows, the low-level code gets exercised anyway, many times over, by the higher-level code that uses it. You will also inhibit your ability to refactor. Developers will choose to live with ugliness and code smells rather than improve things if their good deeds are punished by a day spent wading through broken tests.
I put it to you that “strict TDD” is good if you’re getting paid by the hour. Less so if you’re the one paying for it.
Am I saying that low-level tests are useless and that we should only consider requirements-based testing? Absolutely not. In practice, lower-level automated tests serve more purposes than just the obvious ones. They cover our butts in a more time-efficient way than high-level UI-based tests. They also help us organise our thoughts, highlight design flaws and provide a quick-and-easy way to hook a debugger into a bit of code.
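For illustration, here’s the kind of low-level test I mean, built around a made-up `parse_duration` helper: it pins down a design decision, documents the expected behaviour and gives you a place to drop a breakpoint and step into the code without booting the whole application.

```python
# A sketch with a hypothetical helper. The value isn't exhaustive coverage;
# it's a cheap thinking aid and a convenient debugger entry point.

def parse_duration(text: str) -> int:
    """Convert strings like '2h30m' into a number of minutes."""
    hours, _, minutes = text.partition("h")
    return int(hours) * 60 + int(minutes.rstrip("m") or 0)


def test_parse_duration_hours_and_minutes():
    # Put a breakpoint on the next line and run just this test to poke
    # at the parser in isolation.
    assert parse_duration("2h30m") == 150


def test_parse_duration_hours_only():
    assert parse_duration("2h") == 120
```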
And functionality isn’t the only thing we deliver to a customer. We also provide quality, timely delivery, performance and usability. A rich and diverse test ecosystem is needed to meet all of those expectations.
In summary
We've established a theoretical lower bound for your test program, and we've talked about the perils of too much testing. The “right amount of testing” lies somewhere in between, whatever that means for you. I make no recommendations about how you spend your testing dollar. It can be automated, manual, UI-based, whatever. You know your budget and priorities better than I do.
But we have established that your agreed, documented requirements are the cornerstone of your test program. They are the promises you have made to your customers, and it is in your interest to protect your ability to deliver on them.