Reviews in Agile Projects

By Don Mills

In ISTQB’s Advanced Test Manager Syllabus, the Introduction to Chapter 3, “Reviews”, contains the following bold statement:

When done properly, reviews are the single biggest, and most cost-effective, contributor to overall delivered quality.

Speaking personally, I’ve been a long-time enthusiast for formal reviews, particularly for inspections.  I participate in a lot of them and actually enjoy them.  (It takes all sorts.)  But is there a place for them in an Agile project?  Don’t they require a large number of participants, a large amount of time, and, above all, a formal document for you to formally review?

Those are all arguments I’ve heard (and read), and they’re all wrong-headed and based in ignorance.  The fact is that the syllabus is right: there’s nothing as cost-effective as a good review for keeping the bugs out of your software.

But how do you achieve it when you’ve nothing much to review?  I’ll come back to that question shortly.  Let’s get some facts established first.

What are reviews for?

Lots of things, potentially; but here I’m concerned with reviews as a form of testing, which means that they perform verification and validation on work products.  Work products in IT projects include plans, requirements, designs, code, and of course tests.  Whether written down or not, all of them are ultimately the product of human activity (even if “automated”), and therefore all of them are subject to the “Pobody’s Nerfect” rule, which is the underlying reason we have to test things in the first place.

Some decades ago, Barry Boehm taught us that, when we’re testing something, we should ask two fundamental questions: “Is this thing being built right?”, and “Is this the right thing being built?”  The first is verification, which essentially means checking that a work product is built “to spec”.  But specifications aren’t necessarily perfect (all too often, they’re the reverse), and hence the need for the validation question, which translates as, “Is this work product actually fit for use?”

Who should do reviews?

In principle, anyone can compare a document against a standard, say, and verify whether the document conforms to its own “specification”.  Anyone able to read a design specification could look at one and ask, “Where’s your design for Function X?  It was in the requirements, but I don’t see it here!”  All you need is a modicum of technical knowledge (see sidebar) that spans the gap between the product document and the source document(s).  (So most testers, for example, would be lost if asked to verify source code against a design specification.)

Technical knowledge
Despite a common belief in the IT world to the contrary, testers are “technical people”.  So are end users.  “Technical” doesn’t mean “able to write code”—it means “having specialist knowledge and skills”.  If you’re a tester, don’t let the developers tell you otherwise.  If you’re a developer, learn to respect the technical skills and knowledge of your testers and end users.

The best people to validate whether a product is fit for use are the people who have to use it.  This is illustrated in the “Verification and Validation” illustration below, where all the reviewers are asking themselves the same question as the tester: “How easily could I use this Software Requirements Spec. to do my job?”

On the face of it, the illustration shows a non-agile environment with a traditional “Business Requirements Specification” and traditional “Software Requirements Specification”.  But the same principles apply in the world of user stories and Agile documentation: verify each product backwards against its specifications and inputs, and validate it forwards against its intended uses.

How many people do we need?

Good question.  What do we want to achieve?  V&V, yes, but not at just any price!  We need a cost-effective balance.

[Illustration: Verification and Validation]

If it’s a peer review, then all of the participants will be on (very) roughly the same level of pay.  As we add more people to a review team, the cost goes up as a (roughly) straight line.  Also as we add more people, the number of issues they’ll find between them goes up, but it’s subject to the Law of Diminishing Returns, because there’ll be more and more overlap between their issues lists.  Using people from different disciplines, who’ll be looking for different kinds of issue (as in the diagram), will reduce this effect.  Even then, as repeated experiments have shown, with fewer than five people you’ll probably miss some important issues, while with more than seven any extra issues found may not be worth the cost of finding them.  So the best balance between cost and effectiveness is around 5 to 7 people, including the Author of the work product being reviewed—which, coincidentally, is about the size of many an Agile team.

Pairing is a popular (almost mandatory) approach to quality control in Agile projects, and for sure, it’s way better than nothing.  What it means, though, is that essentially only one person at a time (other than the Author) is responsible for QC.  Those legendary experiments I mentioned show that a single (trained and experienced) reviewer won’t find more than about one bug in three, working on their own, whereas a team of about six people may find as many as nine bugs out of ten, after some practice.
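
For anyone who likes to see the arithmetic, here’s a minimal sketch of why that’s plausible.  It assumes, purely for illustration, that each reviewer finds issues independently of the others (a generous assumption, since overlap between reviewers is exactly what drives the diminishing returns described above), and it uses the “one bug in three” figure as the individual detection rate.

    # Rough model: the chance that at least one reviewer spots a given issue,
    # assuming each reviewer independently finds about one issue in three.
    # The independence assumption is a simplification made for this sketch;
    # real reviewers overlap, so real numbers will be a little lower.

    SINGLE_REVIEWER_RATE = 1 / 3  # roughly what the experiments report

    def team_detection_rate(team_size: int, individual_rate: float = SINGLE_REVIEWER_RATE) -> float:
        """Chance that a team of `team_size` independent reviewers finds a given issue."""
        return 1 - (1 - individual_rate) ** team_size

    for n in range(1, 9):
        print(f"{n} reviewer(s): ~{team_detection_rate(n):.0%} of issues found")

The output climbs steeply at first (one reviewer about 33%, three about 70%) and then flattens out: six reviewers reach roughly 91%, close to the “nine bugs out of ten” figure, while a seventh or eighth adds only a few percentage points.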

How long does it take?

In this case, that really is a “How long is a piece of string?” question.  Like the answer to the question, “How much testing is enough?”, it all depends on balancing the degree of risk against constraints of time, people, skills, materials, and other resources.

If you were inspecting a traditional requirements specification for an “average”-risk product, the recommended rate of work would be 300 words per hour.  Perhaps half an A4 page.  Per participant.  Per hour.

High-risk situations (products for life-, safety-, and mission-critical environments) would go slower—say, 100 words per hour.  Low-risk situations might go faster, up to say 1,000 words per hour.  Forty years of study and experiment with inspections have confirmed that these are cost-effective rates of work: they make inspections go slower, but can as much as triple the rate of all the development and testing work that depends on them, because everything downstream is so much clearer and freer of bugs.

But Agile projects don’t have “traditional” requirements specifications.  How “big” is a user story?  I don’t know of any research that’ll answer that question on an empirical basis, but 25 to 100 words seems to hit the extremes, so let’s say around 50 words on average.  To “inspect” that at the 300-words-per-hour rate would take about 10 minutes per user story.
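
If you want to play with the arithmetic, here’s a small sketch that turns the words-per-hour rates quoted above into minutes per user story.  The rates and the 50-word average are the figures used in this article, not universal constants; substitute your own.

    # Convert inspection rates (words per hour, per participant) into
    # minutes per user story.  Rates and story size are the figures quoted
    # in this article -- adjust them to suit your own context.

    RATES_WORDS_PER_HOUR = {
        "high risk": 100,
        "average risk": 300,
        "low risk": 1000,
    }

    def minutes_per_story(words_in_story: int, words_per_hour: int) -> float:
        return words_in_story / words_per_hour * 60

    for risk, rate in RATES_WORDS_PER_HOUR.items():
        print(f"{risk:>12}: ~{minutes_per_story(50, rate):.0f} minutes for a 50-word story")

At the “average” rate that gives the 10 minutes mentioned above; a high-risk story would take around 30 minutes, and a low-risk one around 3.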

Is that the “right” speed for an Agile environment?  Nobody knows (as far as I know), and anyway it might not be the best speed for your projects.  The best way to find out would be the way a new team finds out its velocity—experiment!

What can we review, in an Agile project?

Agile projects apparently don’t “do” reviews, except for retrospectives at the end of a sprint or a release cycle.

So what’s happening during Release Planning and Iteration Planning, when the team members are seeking to understand the user stories and their implications for designing, coding, and testing the software?  Even more, what’s happening in a User Story Workshop, if your team runs them?

What should be happening is that every team member is wearing their Critical Thinking Cap (the one all testers get given on a really good training course), and testing what they read and what they’re told.  Is this user story consistent with what we heard/read before?  If not, why not?  Is there enough information here for me to be able to design/write/test exactly the software that’s needed?  What else do I need to know?

This doesn’t just go for the user stories, but for the acceptance criteria; the ATDD and/or BDD examples (test cases); the unit test cases; and (for those who can read it) the code.  Not to mention, for those following AMDD and similar practices, formal or semi-formal software design models.

In traditional development environments, the trick is to look at everything with a critical eye.  In Agile development, the trick is to look and listen to everything with a Critical Thinking Cap on.

Getting the requirements right

In 1969, I travelled to New Zealand to take a position as a software support technician with a computer manufacturer.  Amongst the few things I took with me was an article with the title, “Getting the Requirements Right”.  Besides plans of various sorts (and there’s a whole breeding-farm of potential bugs right there), software projects generate five broad types of deliverable: requirements, designs, code, tests, and user documentation.  The article claimed that 55% of all the bugs delivered with software, and found by its users, originate in requirements specifications.  Almost fifty years later, studies still show the same general ratio: between 45% and 75% of the bugs delivered at the end of the project are there in the requirements at the start.  Agile code reviews are a popular topic for papers and articles, but it’s “Getting the Requirements Right” that I’ll concentrate on for the remainder of this article.

Wisely, Agile projects practise a “test-first” philosophy based on the old engineering principle, “It’s not safe to build it before you know how to test it.”  The tester’s job is to demonstrate clearly, unambiguously, and cost-effectively whether a product meets its users’ requirements—not just its specification.

Requirements versus specifications
For many people, these words mean the same thing.  But there’s a vital difference.  Requirements are what’s needed.  Specifications are what’s asked for.  Substantial differences between the two account for most software bugs.  In Agile projects, the Conversation activity of the 3C user story process (Card, Conversation, Confirmation) is supposed to help you bridge the gap between the two.

The point here is that everything testers need to know about requirements in order to test whether a product will satisfy its users is also what the developers need to know in order to build such a product.  If the information available isn’t adequate for testers to do their job, it isn’t adequate for developers to do theirs.  Both disciplines need to cooperate with one another and the customer representatives on this.  But it helps to have some tools for the job.  Enough theory: here’s the practical part of this essay.

Tooling up: The Elephant’s Child

Formal inspection provides lots of formal tools, but the most important one for our purposes is checklists.  A checklist of things to look for is a great aid to finding whether they’re there or not, particularly if you can keep it in your head!

This short checklist is based on the closing poem in Rudyard Kipling’s story, The Elephant’s Child (look it up), with a small amendment.  For a complete picture of a requirement, we need to know:

  • WHAT the needed capability is
  • WHO needs it (what types of user)
  • WHY it’s needed (what are the immediate and ultimate objectives)
  • WHEN the capability is needed (both triggering conditions, and delivery requirements)
  • WHERE it has to operate (context of use)
  • HOW WELL it has to work (the non-functional requirements)

While reviewing a user story, individually or collectively, or listening to a Product Owner’s explanation of a user story, stay alert for whether these aspects are specified or not, and how clearly they’re specified (more on that below).
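
As an illustration only, here’s one way a team might keep the checklist in front of them during refinement or review: a throwaway sketch that records what’s known about a hypothetical user story against each aspect, and flags the gaps.  The story and the answers are invented for the example; the value is in the prompts, not in any automation.

    # The "Elephant's Child" checklist as a set of prompts, applied to one
    # hypothetical user story.  Both the story and the recorded answers are
    # invented for illustration; None marks an aspect nobody could answer yet.

    ELEPHANTS_CHILD = {
        "WHAT":     "What is the needed capability?",
        "WHO":      "Who needs it (what types of user)?",
        "WHY":      "Why is it needed (immediate and ultimate objectives)?",
        "WHEN":     "When is it needed (triggering conditions, delivery)?",
        "WHERE":    "Where does it have to operate (context of use)?",
        "HOW WELL": "How well does it have to work (non-functional requirements)?",
    }

    story_answers = {
        "WHAT": "Customer can reset a forgotten password",
        "WHO": "Registered customers of the web shop",
        "WHY": "Reduce password-related calls to the help desk",
        "WHEN": None,        # triggering and delivery conditions not yet discussed
        "WHERE": "Public website, desktop and mobile browsers",
        "HOW WELL": None,    # no response-time or security expectations agreed
    }

    for aspect, question in ELEPHANTS_CHILD.items():
        answer = story_answers.get(aspect)
        status = answer if answer else "OPEN ISSUE: raise it in the Conversation"
        print(f"{aspect:>8}: {question}")
        print(f"{'':>8}  -> {status}")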

“HOW MUCH” could be added to the list as an important consideration in various ways: how much risk, how much effort, how much will it cost, how much resource is available, including funding—and how much does the customer need or want it!

But missing from the list is “HOW will it work?”  One of the biggest mistakes when building a new product (and often when modifying an existing one) is to specify details of the product design as customer requirements.  Sometimes this is appropriate, but what the requirements conversation should mostly concentrate on is who needs to achieve what outcomes with what inputs—not the “mechanism” that sits between inputs and outputs.  That’s what the software development process is for.

Tooling up: the SCUTA heuristic

“Keeping things simple” is one of the founding principles of Agile development, and one of the first rules of writing specifications.  It’s also the first element of the SCUTA heuristic, a simple set of guidelines for writing good-quality technical documents (such as requirements), which can also be used to evaluate the quality of documents that have been written.  Here are the SCUTA “good practice” rules of thumb (heuristics); they all apply to each of the “Elephant’s Child” rules:

  • Simple: Complex descriptions are difficult to follow and easy to misread, besides causing other possible problems.  A “complex” requirement usually can be (and should be) broken down into a set of simpler requirements, as epics may be broken down into themes, themes into user stories, user stories into acceptance criteria, and acceptance criteria into behavioural examples, a.k.a. test cases.
  • Consistent: Related requirements should be consistent with one another—for example, fine-grained user stories that all derive from the same ancestral theme or epic.  “Descendant” work products should be consistent with their “ancestral” work products (a verification issue, of course)—for example, design or test requirements that are derived from a given user story.  If any inconsistencies are noted, they need to be investigated and resolved (which version is “right”?), and the inconsistency repaired in the documentation.
  • Unambiguous: Each of the “Elephant’s Child” aspects needs to be expressed, sooner or later, in such a way that any member of the team will get the same understanding as any other.  This requires precise language (no woolly statements such as “if applicable”, for example), and where possible, precise measurements for non-functional properties such as capacity and response times.  And the need for clarity extends beyond the present needs of the team: the software may need maintenance in the future, and if things turn sour, it may be necessary to defend the software—and the development and testing processes—in court.
  • Testable: In a way, this rule sums up all the others.  Whoever tests the implementation of a requirement, whether at the unit level or at the application level, must be able to demonstrate clearly, unambiguously, and cost-effectively whether the requirement is met, in all its aspects (there’s a small illustration after this list).  It’s impossible to prove that an unclear or ambiguous requirement has been met (only that it might have been met), but it’s also important to be sure that the cost of the proof—in money, time, or effort—won’t exceed project constraints on testing.
  • Appropriate: This is a form of consistency.  If we understand the differences between requirements, design, code, and test cases, we should be sure that what we specify in each is appropriate to (consistent with) the purpose of the thing we’re defining.  Design ideas, for example, are often annotated against user stories, but this should be done in a way that makes it clear that they are part of the developers’ design, and not part of the user/customer requirements.  What we say about the software, and what we write about it, should be appropriate to the context of understanding and using the information.
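
To make “Unambiguous” and “Testable” concrete, here’s a minimal sketch of the difference between a woolly non-functional statement and one a tester can actually demonstrate.  The two-second limit, the catalogue size, and the search_products function are all invented for this example; real figures should come out of the Conversation with the customer.

    # Woolly version:   "Search should respond quickly, if applicable."
    # Testable version: "Search returns results within 2 seconds for a
    #                    catalogue of up to 10,000 products."
    # Everything below is invented for illustration; in a real project the
    # function under test would be the actual system, and the limits would
    # be the ones agreed with the customer.

    import time

    CATALOGUE = [f"Product {i}" for i in range(10_000)]

    def search_products(term: str) -> list[str]:
        """Stand-in for the real search; replace with the system under test."""
        return [p for p in CATALOGUE if term.lower() in p.lower()]

    def test_search_responds_within_two_seconds():
        start = time.perf_counter()
        results = search_products("product 9")
        elapsed = time.perf_counter() - start
        assert elapsed < 2.0, f"search took {elapsed:.2f}s; requirement is under 2s"
        assert results, "expected at least one match in the test catalogue"

    if __name__ == "__main__":
        test_search_responds_within_two_seconds()
        print("Requirement demonstrably met for this catalogue size.")

A vague statement like “respond quickly” can’t be turned into an assertion at all, which is precisely the point of the Testable rule.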

Scuta

SCUTA is Latin for “shields”.  The SCUTA rules help shield you against creating, or using, “buggy” work products.

With the SCUTA Rule Set in your head, you will be ready to contribute actively to helping define clear, implementable, testable user stories.  Apply it to what you write, and apply it against what you read; apply it to what you say, and apply it against what you hear.  Apply it to find issues when discussing or reviewing requirements, designs, tests, or code; but apply it, too, to help prevent such issues from arising in the first place.

An issue is a potential defect.  A defect, when developing any kind of work product, is a violation of a rule of Good Practice for such a work product.  Sooner or later, it’ll cause the work product—document, oral statement, or executable software product—to fail to satisfy some legitimate need of whoever’s trying to work with it.

Other resources might give you ideas for expanding the SCUTA heuristic rule set (and the fun of finding a new acronym for your expanded set).  One additional rule often mentioned is Complete (giving us “SCUTAC”, perhaps?).  This one has fish-hooks because of the possibility of “unknown unknowns”; how can you ever be certain there’s nothing you’ve missed?  Consequently, it’s often included under “Consistent”: something is incomplete if the sum of what it says is less than (and so inconsistent with) the sum of what’s known from other sources, even though what we know may actually be incomplete.

Conclusions

Reportedly, growing numbers of business organisations who have adopted Agile software development are reverting to writing traditional requirements documents (at least in the USA), because of the business and project risks involved in not having them.  (They are then used as the basis for writing user stories.)  Reportedly also, up-front design is re-emerging within Agile projects, because of the difficulty of retro-fitting important characteristics like performance or reliability, after the product code has been written.

If you are working in such an environment, it may be that full-blown inspection is appropriate, especially for business requirements specifications.  In the world of user stories, with their Cards, Conversations, and Confirmations, however, a more Agile approach is appropriate.  This essay has addressed the potential usefulness of reviews, as well as possible contexts of use in Agile projects, and has provided two practical tools in the form of easily-memorised checklists.

Other Agile checklists exist, such as Bill Wake’s well-known INVEST heuristic for promoting, and evaluating, well-constructed user stories, which says they must be:

  • Mutually Independent,
  • Negotiable as ideas and circumstances evolve,
  • Valuable to the user/customer,
  • Estimable in terms of effort to implement and test them,
  • Small enough to plan/prioritise/implement/test with confidence,
  • and Testable as already discussed.

In Agile projects, the emphasis is on continued striving for excellence.  It’s more a matter of mind-set than anything else, but applying simple tools like these checklists can help ensure that what’s written about the software, and what’s said about it, is tested, verified, and validated—and corrected where it needs to be.
