
Sunday, July 28, 2013

The Uncertainty Principle
(as it applies to QA)


In 1927, Dr. Werner Heisenberg first wrote of what has become known as his "Uncertainty Principle".  Though a thorough understanding of this principle involves boatloads of mathematics and quantum physics, it can be expressed, in a very simplified form, like this:

It is not possible to know, at the same time, both where a particle is and what the particle is doing.
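For the mathematically inclined, the formal statement is remarkably compact: the uncertainty in a particle's position and the uncertainty in its momentum trade off against one another,

$$ \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} $$

where ħ is the reduced Planck constant.  The more precisely you pin down one quantity, the fuzzier the other becomes.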

This had profound implications for particle physics back in the 1920s when it was first stated, and it is still very much a part of atomic physics today.  In fact, there are certain properties of very interesting things - superconductors, for example - that cannot be expressed or discovered without using the Uncertainty Principle.



The idea for this article came while I was thinking about writing another article.  The thought path for that article led me to "garage sales", and from there to Heisenberg's Uncertainty Principle.  How thinking about garage sales led me to quantum physics is a whole 'nother story, but once I got to thinking about it, it struck me as something very interesting to write about.



Now what the heck do Heisenberg and his Uncertainty Principle have to do with QA?  Plenty.

In QA, especially software QA, we often forget that the observer and the observed interact, and that interaction can lead to results that may not hold true in real life.  A classic example of this is the "I haven't found any bugs (yet), so it must be ready for release" concept found so often as a project deadline approaches.  In this case, the seeming lack of defects has given the "observer" (the software tester) a potentially unreasonable confidence in the "observed" (the software product), setting the stage for a possible disaster come release time.

What should happen instead is that the tester seriously examines how applicable the test methodology is to the object being tested.  Are assumptions being made that may not be valid in the real world?  Are we testing deeply enough?  Have we tested enough program paths to justify that level of confidence?
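As a toy illustration (hypothetical code, not from any real product), consider a test suite that passes happily while an entire program path goes unobserved:

```python
# Hypothetical example: a pricing routine with an untested branch.
def total_price(unit_price, quantity):
    if quantity >= 10:                  # bulk-discount path
        return unit_price * quantity * 0.9
    return unit_price * quantity

def test_total_price():
    # These pass, but they only ever exercise the non-discount path.
    assert total_price(5.0, 1) == 5.0
    assert total_price(5.0, 2) == 10.0
    # A defect in the "quantity >= 10" branch (wrong rate, wrong
    # boundary) would sail through with a clean bill of health.

test_total_price()
print("All tests passed, yet one whole program path was never observed.")
```

The "observer" here has built confidence out of a test that never looked at half of the product.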

In a manual test scenario, there is a whole host of ways the observer and the observed can interact, and I am sure you can think of ten or twenty yourself.

What about automated testing?

Automated testing is probably the best example you will find in so-called "black box" testing of the observer and the observed interacting to their detriment.  And it's not just the automated test software's interaction that's the problem.

Of course, when you introduce an automated test tool into the program's logic flow, you have altered that flow - even if only infinitesimally.  Add up enough of those "infinitesimals" and you have a serious impact.
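To make that concrete, here is a minimal, self-contained sketch in plain Python (a toy loop, not any real product) showing how much merely attaching an observer - in this case Python's own sys.settrace hook - changes the behavior of the code being watched:

```python
import sys
import time

def busy_work():
    # A stand-in for "the program under test": a tight loop.
    total = 0
    for i in range(1_000_000):
        total += i
    return total

# First, run it with no observer attached.
start = time.perf_counter()
busy_work()
untraced = time.perf_counter() - start

# Now attach an observer: a do-nothing, line-level trace hook.
def tracer(frame, event, arg):
    return tracer           # trace every line, but take no action

sys.settrace(tracer)
start = time.perf_counter()
busy_work()
traced = time.perf_counter() - start
sys.settrace(None)

print(f"untraced: {untraced:.3f}s   traced: {traced:.3f}s")
# The traced run is typically many times slower.  Merely watching
# the code changed how it ran - enough to hide or create timing bugs.
```

And that is a trace hook that does literally nothing.  A real test harness does far more.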

What about the speed of the test, or of the test's input?  Automated tests can insert themselves into the program's messaging queue, supplying user input by trapping that queue and inserting messages into it.  The result is that input arrives virtually instantly, something a normal user cannot do.

In the real world, you have a user supplying, say, a phone number.  Instead of xxx-xxx-xxxx appearing instantly, maybe the user types in the area code (xxx) and then stops.  Oh!  I forgot!  Let me go look it up. . .  There may be a five or ten minute delay - or longer if he gets distracted by his wife wanting something done.  What happens then?

Is the test trapping output the same way?  Natch'.  Video?  Video always works, right?  Via automated testing alone, we have no way to tell whether the program's output visuals are corrupted, off-center, or unreadable.
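By way of illustration, here is a hedged sketch of what "human-paced" input might look like, assuming Selenium WebDriver is installed and a matching browser driver is available.  The URL and the "phone" field id are hypothetical stand-ins for your own application:

```python
import random
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/signup")      # hypothetical page
field = driver.find_element(By.ID, "phone")   # hypothetical field id

# Typical automation: the whole value arrives in one instant burst.
# field.send_keys("555-123-4567")

# Human-paced variant: one keystroke at a time, with pauses,
# including the long "let me go look that up" stall described above.
for i, ch in enumerate("555-123-4567"):
    field.send_keys(ch)
    time.sleep(random.uniform(0.05, 0.4))   # keystroke-to-keystroke gap
    if i == 2:                              # just typed the area code...
        time.sleep(5)                       # ...then stopped to look it up

driver.quit()
```

Even this only pokes at the timing problem; it still tells you nothing about what the screen actually looks like.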

In this case the interaction between the observer and the observed is woefully incomplete, yet the fact that "Automated Testing" passed with a clean bill of health may also lend itself to a false sense of confidence.

The canonical example of the Uncertainty Principle in QA is white-box testing - or even "grey-box" testing - where the "observer" has to deliberately insert instrumentation code into the product, or perform some similar kind of invasive testing.  Oh, but valid QA requires a good black-box test after the white-box testing is complete, right?  And if I had a nickel for every software product that was released based on the results of the white-box test. . .  Um, we probably shouldn't go there.
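As a toy illustration of the problem (hypothetical code, not any particular tool), here is what "inserting instrumentation code into the product" can look like in Python, and why the instrumented build is no longer quite the product you will ship:

```python
import functools

# Hypothetical white-box instrumentation: a decorator that counts calls
# and prints a trace line.  Handy during white-box testing, but it is
# now part of the very code path being observed.
def instrumented(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        print(f"[trace] {func.__name__}{args}")   # observation IS interaction
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@instrumented
def normalize(text):
    return text.strip().lower()

normalize("  QA  ")
print(f"normalize was called {normalize.calls} time(s)")
# The build you ship should be re-verified WITHOUT the tracing in place:
# a genuine black-box pass after the white-box work is done.
```

Strip the decorator out for release and you have, strictly speaking, a different program than the one you tested.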

The bottom line is this:  Any QA test plan should account for the interaction between the observer and the object being observed.

What say ye?

Jim (J.R.)
