Many years ago, I was invited to an interview in Boston by a company that wanted to bring on additional QA staff. After the usual back-and-forth banter of the interview, the interviewer paused and asked me this question:
What is an acceptable defect?
I immediately replied, "There's no such thing. It's an oxymoron, something that is inherently self-contradictory."
The interviewer persisted, wanting to know what an "acceptable defect" was. I replied that there is no such thing as an "acceptable defect": once a defect becomes "acceptable", it ceases to be a defect. Or, as one of my favorite snarky QA quotes puts it: "It's not a bug, it's a feature!"
The interviewer continued to press the point. And yes, I understood that he was not really asking about defects being "acceptable"; rather, he wanted to know at what point a QA effort says it's time to stop. When have you reached the point of diminishing returns and decided to release anyway, even though there may still be some bugs left to be resolved?
I mentioned this understanding to the interviewer, noted that it was an entirely different question, and proceeded to answer it. However, I held to my original position that there is not, and should never be, such a thing as an "acceptable" defect.
Needless to say, I didn't get the job. And quite frankly, I wasn't too upset about it either. As far as I was (and still am) concerned, the idea of an "acceptable" defect reflects a warped and potentially disastrous mindset on the part of any QA team.
Why is this a problem?
Now I can hear everyone saying that I'm being too picky and pedantic; that I'm splitting hairs. And maybe that's true, but I don't think so.
And why not?
It has always been my opinion, and my position within the greater QA community - not just software QA, but any kind of QA effort - that defects must never become "acceptable". Because once a defect is categorized as "acceptable", it ceases to be an annoyance or an irritation in the back of our minds. That nagging irritation goes away, and we don't give it a second thought.
I have seen this time and time again, in hardware QA, software QA, process and manufacturing QA, and every other QA effort I have been involved in. And it has invariably been the fast lane to disaster.
I go into the consequences of this kind of complacent, devil-may-care attitude in another article I wrote, called The Cost of Complacency. Go read it.
Again and again I return to the idea of defects never becoming "acceptable". It is the responsibility of the QA community to ensure that defects, even seemingly small defects, never get "lost in the sauce", so to speak.
Even if we need to push on toward an ultimate release date, these defects should always "get in our craw", or be that annoying little pebble in our shoe. They should always remind us that they are there, and we should continue to be on the lookout for ways to mitigate them now or, if that's not possible, ways to avoid these issues in the future.
It's only by letting the seemingly "little things" get to us that we remain vigilant. It is only by this continuing attitude of eternal vigilance that we in the QA community earn the respect of those around us.
By doing this, by always letting the "little things" bother us, we ensure that the people we work with, and those who depend on us, never sink into that abyss of complacency that has been the graveyard of those who have not heeded this call.
What say ye?