Stand Up for Tester Pride

by Don Mills

Two things everyone knows about software testers:

  • They need no technical skills or knowledge. Anyone can do the job.
  • The job is to break the software.

Next year I’ll be celebrating 25 years in professional testing, quite a milestone.  Those beliefs about testing were pretty prevalent when I began, and while there’s been progress (particularly with the first), they’re still alarmingly common.  Even among testers.  And still completely wrong.

On tester training courses, I regularly ask, “Put your hand up if you are a technical person.”  Very few testers put up their hands, usually none at all, even on Advanced Test Manager or Advanced Test Analyst courses.  So what?  It’s the developers who are technical, isn’t it?

Very early in my testing consultancy career (after 20 years in software development), I was asked to create two testing courses for New Zealand’s Inland Revenue Department.  The first was to be four days aimed at end-user testers; the second was to be five days aimed at developers (a very enlightened idea then, and even now).  I produced outline course descriptions, which were accepted as just what was needed, provided I changed the descriptions of the target audiences.  I had described the developers’ course as aimed at “technical staff,” and the users’ course as for “non-technical staff”.  Bob, the training manager, gave me a short lecture on the subject, which I pass on in every testing course I run.  I now pass it on to you:

“All of our people are technical,” Bob said, “but some are more technical than others.  Revenue determination and collection is intensely technical, with thousands of technical rules that our users are expert in, but which are obscure to most lay-people.  Our programmers are lay-people, and we have to employ business analysts to translate the technicalities into lay-person’s language for them.”

I changed the course descriptions, and won the contract.  But I also learned a valuable lesson, with the aid of the Oxford English Dictionary on a bookshelf at home:

technical, adj.  Skilled in or practically conversant with some particular art or subject; possessing specialist knowledge and skills in some subject matter.

This gave me a totally new perspective.  I knew I’d been a technical person when I was a programmer.  I knew that, in re-engineering myself as a tester, I’d acquired specialist knowledge and skills that I hadn’t had as a programmer, and that other programmers still didn’t have.  But I hadn’t thought of my new role as “technical” in any way, and I certainly hadn’t thought of those stupid end-users as “technical”:

[Comic: How Users see Developers vs. How Developers see Users]

Now, software developers have a wide range of skills and knowledge, and as everyone knows, generalisations are always wrong sooner or later.  In general, though, developers’ notions of testing are pretty naïve.  Even some quite recent studies show that the prevalent attitude is, “It’s my job to write the software and prove it works”, which isn’t a mind-set attuned to actually finding bugs.

Probably this correlates with the fact that, while we see plenty of testers on our training courses, developers are mostly notable by their absence.  Perhaps testing isn’t part of their job?

If you know the difference between equivalence partition testing and boundary testing, if you can whip up a “covering set” of test cases (first defining what kind of coverage you need), if you can tell me the difference between a “false positive” and a “false negative”—and if you know and use all this professionally—then you are a technical person, for sure, and should take pride in the fact.
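
To put flesh on those terms, here's a minimal sketch in Python (my example, not part of the article; the ticket-pricing rules are invented purely for illustration).  The first test picks one representative value from each equivalence partition, the bands of inputs the specification treats alike; the second probes the boundaries where the bands meet, which is where off-by-one bugs like to hide.

    # Hypothetical spec: under-5s travel free, ages 5-15 pay the child
    # fare, ages 16-64 pay the full fare, 65 and over pay the senior
    # fare, and negative ages are invalid.
    def ticket_price(age: int) -> float:
        if age < 0:
            raise ValueError("age cannot be negative")
        if age < 5:
            return 0.00   # free
        if age <= 15:
            return 2.00   # child fare
        if age <= 64:
            return 5.00   # full fare
        return 3.00       # senior fare

    def test_partitions():
        # One representative value from each equivalence partition.
        assert ticket_price(3) == 0.00    # "free" partition
        assert ticket_price(10) == 2.00   # "child" partition
        assert ticket_price(40) == 5.00   # "adult" partition
        assert ticket_price(70) == 3.00   # "senior" partition

    def test_boundaries():
        # The values on either side of each partition edge.
        assert ticket_price(4) == 0.00
        assert ticket_price(5) == 2.00
        assert ticket_price(15) == 2.00
        assert ticket_price(16) == 5.00
        assert ticket_price(64) == 5.00
        assert ticket_price(65) == 3.00

    if __name__ == "__main__":
        test_partitions()
        test_boundaries()
        print("all tests passed")

Together the two sets form a covering set for the age partitions and their edges.  And the other pair of terms fits here too: a test that flags a bug where none exists is a false positive, while a test set too coarse to catch a misplaced boundary lets bugs slip through undetected, giving you false negatives.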

If you haven’t been on one of my courses yet, but will do one day, take some warning: look out for the trap questions.  Besides “Are you a technical person?”, another of my favourite traps goes like this:

“Put your hand up if you’re a tester.”

A copse of hands will go up.  (Not enough for a forest.)

“Good.  Now keep your hand up if you enjoy breaking the software.”

Mostly, the copse stays up.  Then I spring the trap (after telling everyone to put their hands down again).

“Okay, imagine you’re testing a calculator.  You type in Test Case #1: 2+2= … Anybody?”

“Four,” someone will tell me, which is a pity because,

“Oh!  The calculator’s telling me it’s five.  I must have broken it!”

Most people see the fallacy straight away: I didn’t break the imaginary calculator (how could I, just by operating it correctly?)—it was already broken when it was (imaginarily) brought to me for testing.

One of the delights of having developers on a testing course (it does happen) is seeing the light-bulbs turn on above their heads at this point.  And the point is, of course, that, despite what almost everyone believes, it’s not a tester’s job to “be destructive” while the developers are busy “being creative”.  We might almost say, “It’s actually the tester’s job to find out where the developers broke it!”

Of course, this is another of those generalisations.  For one thing, modern thinking says a tester’s proper job isn’t finding bugs, but helping to prevent them.  Also, there are often a lot of people involved in developing a piece of software, and much depends on the division of labour.  Studies usually show that only around 10% of the bugs in code are programming errors; the rest are inherited from design and requirements errors.  And other studies show that around 80% of bugs derive ultimately from management decisions anyway, like using untrained staff to write requirement specifications, giving them only half the time needed to do a decent job, and providing no quality control over their output.

But that’s another story, and the important point is that if you’re a tester, at least in a Waterfall-style environment, or in an improper “pseudo-Agile” environment (and there’re quite a lot of both), then you’ve had nothing to do with the software until it came into your hands for testing.  You may be one of the few people in the project who are able to say, “I had no hand in breaking it!”

Which brings me to test cases.  Here comes another question, and as you’ll have guessed, it’s got a trap.  Which of the following calculator test cases has failed, (A) or (B)?  (It’s a different calculator, and the actual answers produced by the calculator are shown in the brackets.)

(A)  2 + 2 = 4  (04)

(B)  3 x 3 = 9  (10)

Most people will answer, “(B)”, because they’ve been brainwashed into believing that when a test case produces the “wrong” answer, the test case has failed.  In fact, test case (B) has done exactly what it was supposed to do (revealed a bug), and therefore it’s a successful test case.  It’s the software in the calculator that’s failed, not the test case.  Test case (A) is a different story.  If the calculator’s “+” button were wired to the “multiply” or “exponentiate” functions, it would still come up with the answer (04), since 2 × 2 and 2² are both 4 as well.  That test case is a failure, because it may be concealing a bug.
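
A quick Python sketch (mine, not the article’s; the miswired calculators are hypothetical) makes the weakness concrete.  All three wirings of the “+” button agree at the inputs 2 and 2, so test case (A) passes against every one of them; a case whose inputs separate the functions, such as 2 + 3, exposes the miswiring at once.

    # Three hypothetical wirings of the calculator's "+" button.
    def correct_plus(a, b):
        return a + b          # what the button should do

    def wired_to_multiply(a, b):
        return a * b          # miswired to the multiply function

    def wired_to_exponentiate(a, b):
        return a ** b         # miswired to the exponentiate function

    for plus in (correct_plus, wired_to_multiply, wired_to_exponentiate):
        # Test case (A): 2 + 2 = 4 passes for all three wirings,
        # because 2 + 2, 2 * 2, and 2 ** 2 are all 4.
        case_a = "pass" if plus(2, 2) == 4 else "bug revealed"
        # A discriminating case: 2 + 3 = 5 only for the correct wiring.
        case_b = "pass" if plus(2, 3) == 5 else "bug revealed"
        print(f"{plus.__name__:22s}  2+2=4: {case_a:12s}  2+3=5: {case_b}")

A pass from test case (A) tells you almost nothing, because its inputs are too agreeable to discriminate between right and wrong implementations.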

So Stand Up for Tester Pride!  You are a technical person, with skills most developers don’t have.  It’s not your job to break the software, but to find out (often very creatively) where “they” broke it.  And when your test case reveals a bug, it hasn’t failed at all; everyone should pat you on the back for your success!

Next year I’ll start getting the T-shirts manufactured.  And the badges.

 
