by Don Mills
There are five fundamental activities in any project intended to create and deliver some kind of product. They might have different names in different contexts, and might occur in slightly different orders, but the five activities are Planning, Requirements definition, Design, Construction/Coding, and Testing, giving us the handy acronym PRDCT.
The question is, when it’s a software project, where do the bugs crawl through the PRDCT process and into its products?
There are many facets to an answer, all depending on the ultimate truth that (as it says on a T-shirt I own) “Pobody’s Nerfect”. In other words, mistakes can be made at any point in any given PRDCT cycle. But another ultimate truth is that, when activities in a sequence depend on one another, the earlier a mistake is made, the more powerful its effects on later activities if it’s not put right straight away.
That proposition puts “Planning” squarely in the spotlight. But while it’s true that a typical Project Plan which allocates only 5% of effort to defining the problem (“the requirements”) in the first place, and gives that critical job to someone with little or no training (a typical Business Analyst)—while it’s true that such a Project Plan is going to create problems for the whole of the project, this isn’t an essay about how most bugs can be traced back to management decisions.
However true that may be.
The problem I want to tackle is that of the requirements themselves—that’s to say, the real needs for which a customer or set of end-users requests gratification in some end product. There are several types. Functional requirements (FRs) are about “what” the product should do. Non-functional requirements (NFRs) are about “how well” it should do it. Constraints such as delivery deadlines, budgets, standards, and regulations prevent the solution team from having a completely free hand in designing, building, and even testing the solution product.
A large number of studies, over several decades, have all converged on the general notion that around half of all the defects delivered to the users in a typical software application originated in the process of requirements definition: defining “the problem”.
The great Edsger Dijkstra once lamented that Software Engineering was the only engineering discipline largely driven by what’s fashionable, rather than by what’s actually known to work. “Testable requirements” are a popular fashion nowadays, and the thing is, it seems that they do work.
A testable requirement is one that’s expressed so clearly, so unambiguously, so completely, that there can only be one interpretation of what’s actually required, and from which test cases could be designed which would demonstrate clearly, unambiguously, and (just as important) cost-effectively whether the requirement is met.
Such a requirement will mean the same thing to the customer, the product developers, and the testers; so that, if the test cases and the product features accurately reflect the requirement (“pobody’s nerfect”), the software based on it will pass every test case based on it, the first time the test cases are run.
Software that’s “right first time, and needs no correction” (Tom Gilb).
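As a rough sketch of what “testable” can mean in practice (the requirement wording, the `search` function, and the figures below are all invented for illustration), a quantified requirement translates directly into an executable check:

```python
import time

def search(catalogue, term):
    # Hypothetical product function under test.
    return [item for item in catalogue if term in item]

def test_search_requirement():
    # Invented, testable requirement: "A search of a 10,000-item
    # catalogue shall return all matching items within 2 seconds."
    catalogue = [f"item-{i}" for i in range(10_000)]
    start = time.perf_counter()
    results = search(catalogue, "item-9999")
    elapsed = time.perf_counter() - start
    assert results == ["item-9999"], "wrong results returned"
    assert elapsed < 2.0, f"took {elapsed:.3f}s; the limit is 2s"

test_search_requirement()
```

Because the requirement names a measurable limit, the test’s verdict is unambiguous; “the search shall be fast” permits no such check.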
So what are the attributes of a “testable” requirement, and how do you know if your requirements have those attributes? Below are ten good requirements attributes, with descriptions to help you recognise them (or their lack).
A defective requirement breaks one or more of these rules. If your site uses formal reviews such as inspections, you’ll be looking for such defects in “traditional” written specifications. If your software development style is “Agile”, you should be alert for these problems in user stories, and especially in the conversations around what user stories mean, and how they should be implemented and tested.
The ten attributes are:
1. Complete

A requirement (or a requirements document) is complete when:
- It contains all the detail necessary to express the customer need; and
- It contains sufficient detail to meet the purposes of everybody who has to use that requirement to carry out a job.
A Business Requirement, for example, may be used by the business, to verify that it expresses their need; by a software designer, for obvious purposes; by a tester, for similarly obvious purposes; and by a technical writer, who will have to create user documentation for the eventual product. Requirements reviews should include at least one person from each group that’s going to have to use the requirements to do their jobs, so they can identify the gaps that are important to them.
Studies suggest that, in traditional-style projects at least, initial requirements capture misses around 50% of the actual requirements, and expresses the ones that are captured in unclear, ambiguous, and essentially incomplete terms. They lack the necessary detail for everyone to be agreed on what they really mean.
Every problem in this area forces designers and developers, not to mention testers, to guess what the requirement really is. Being intelligent people, they sometimes guess right, but they’re more likely to guess wrong. As Steve McConnell once put it, “It’s terrifying to contemplate how much business policy gets set by software developers who can’t understand what the specification means, and have to make up their own interpretation.”
In Agile projects, “completeness” may be expressed by elaborating user stories into acceptance criteria, and acceptance criteria into test cases or examples. Look out for ways in which any of those might be incomplete!
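As a hedged sketch (the story, the discount rule, and the figures are all invented), Agile-style “completeness” might end up as an examples table that doubles as test cases, with the boundary value spelled out explicitly:

```python
# Hypothetical user story: "As a shopper, I want a discount applied
# to orders over $100, so that large orders are rewarded."
# Assumed acceptance criterion: orders strictly over $100.00
# receive a 10% discount; all other orders pay full price.

def price_with_discount(order_total):
    """Apply the assumed discount rule to an order total."""
    if order_total > 100.00:
        return round(order_total * 0.90, 2)
    return order_total

# Each (input, expected) pair is one example, and one test case.
examples = [
    (100.00, 100.00),  # boundary: exactly $100 gets no discount
    (100.01, 90.01),   # just over the boundary gets 10% off
    (50.00, 50.00),    # well under: full price
    (200.00, 180.00),  # well over: 10% off
]

for total, expected in examples:
    assert price_with_discount(total) == expected, (total, expected)
```

Note how the examples force the boundary question (“does exactly $100 qualify?”) to be answered before coding starts, rather than left incomplete.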
Unknown requirements are very dangerous. What you don’t know can lead to project failure.
2. Correct

To state the obvious, a requirement is correct when it’s error-free.
But how would we know?
That’s a question with no Ultimate Answer, but stakeholders (including testers and developers) and Subject Matter Experts (“SMEs”) can check requirements for errors by reviewing them for consistency, including:
- Internal consistency within the document;
- Consistency with source materials such as Marketing Plans or prior levels of specification;
- Consistency with internal and/or external standards; and
- Consistency with what the reviewers know (or believe) to be true, and necessary for them to do their jobs.
A requirement may be accepted as correct when it has passed all the above tests.
Correctness is related to several other attributes – ambiguity, consistency, verifiability, etc.—which I discuss below.
Testing a product against incorrect requirements is a WOMBAT—a Waste Of Money, Brains, And Time. So was building it.
3. Feasible

A requirement is feasible if it can be satisfied, and can be proved to have been satisfied, by one or more developed products, at acceptable cost. For example, a requirement that a solution product should have no bugs is probably not feasible in terms of construction, and certainly not feasible in terms of testing—it would require effectively infinite time to prove!
In some cases, requirements may have been proven “feasible” in previous products, either your own or someone else’s. “Evolutionary” or “breakthrough” requirements may be shown to be feasible through analysis and prototyping.
Essentially, this attribute sets a test of the practicality of the numerical value(s) in a requirement, including the relationship between factors such as the required scope and quality of the product, on the one hand, and constraints of time, money, and other resources, on the other. A requirement passes the “feasibility test” when we’re certain it can be satisfied in such a way that the associated technology costs fall within the cost constraints of the program.
The development team, with the testers engaged, should prototype early and often to gather product knowledge.
4. Necessary

A requirement is necessary under any of the following conditions:
- It’s dictated by business goals, strategy, roadmaps, or sustainability needs;
- It can be traced to a need expressed by a customer, end user, or other stakeholder;
- Its inclusion is based on Market Segment Analysis or lateral benchmarking, to be market competitive;
- It establishes a new product differentiator or usage model; or
- Deleting it will leave a hole which no other capability can fill.
Requirements must have demonstrable customer, end user, or business benefit, or else they are just cost-added, not value-added. There are no free features!
5. Prioritised

All requirements are in competition for limited resources, including test resources. Projects routinely attempt much more than can be accomplished with the available resources; prioritisation helps enable good practices like:
- Scope management
- Planning for implementation
- Risk management
- Efficient and effective use of resources
—for all development activities, and especially for all testing activities. Prioritisation is very much a requirements testability issue.
There are many possible ways to prioritise, including:
- Customer Value
- Development Risk
- Cost or Effort
- Competitive Analysis
You recognise good prioritisation when the requirements are distributed realistically among the priority levels; when multiple dimensions have been considered, such as cost, customer value, and development risk; and when all product stakeholders (or all types of stakeholder) have provided input to the prioritisation process.

Several scales can be used for prioritisation, such as “Essential” versus “Desirable”, “must, ought, or may”, “High, Medium, Low”, or ordinal (numeric) scales for cost, value, etc.
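One illustrative way to combine several such dimensions (the requirement IDs, scores, and weights below are all assumptions, not a recommended formula) is a simple weighted score:

```python
# A minimal sketch of multi-dimensional prioritisation. Each
# requirement is scored 1-5 for customer value and development
# risk, with an estimated cost in days; a weighted score orders them.

requirements = [
    # (id, customer_value, development_risk, cost_in_days)
    ("REQ-001", 5, 2, 3),
    ("REQ-002", 3, 4, 8),
    ("REQ-003", 4, 1, 2),
]

def priority_score(value, risk, cost, w_value=2.0, w_risk=1.0):
    # Higher value and higher risk push a requirement up the list
    # (risky items benefit from early feedback); higher cost
    # pushes it down. The weights are assumptions to tune.
    return (w_value * value + w_risk * risk) / cost

ranked = sorted(
    requirements,
    key=lambda r: priority_score(r[1], r[2], r[3]),
    reverse=True,
)
for req_id, value, risk, cost in ranked:
    print(req_id, round(priority_score(value, risk, cost), 2))
```

The point is not this particular formula, but that the dimensions and weights are explicit, so the resulting ordering can itself be reviewed and challenged.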
Unprioritised testing may waste time and money testing unimportant features, and increase risk by running out of time to deal with more important ones.
6. Unambiguous

A requirement is unambiguous when it possesses a single, clear interpretation.
Ambiguity is often dependent on the background of the reader. It may be necessary to reduce ambiguity by providing definitions in a glossary, and to test for it by seeking feedback from the target audience to demonstrate, in their own words, what they have understood the requirement to mean.
“Weak” terms such as “easy”, “fast”, “adequate”, “sometimes”, “and so on”, “etc.”, etc., are open to any interpretation a reader may wish to put on them. They must be complete no-nos in any form of requirement specification.
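Such weak terms can even be flagged mechanically. A minimal sketch (the word list here is illustrative, not exhaustive):

```python
import re

# An illustrative (not exhaustive) list of weak terms that invite
# interpretation rather than pin a requirement down.
WEAK_TERMS = ["easy", "fast", "adequate", "sometimes",
              "and so on", "etc", "user-friendly", "flexible"]

def flag_weak_terms(requirement_text):
    """Return the weak terms found in a requirement statement."""
    found = []
    for term in WEAK_TERMS:
        # Word-boundary match so "fast" doesn't flag "breakfast".
        if re.search(r"\b" + re.escape(term) + r"\b",
                     requirement_text, re.IGNORECASE):
            found.append(term)
    return found

print(flag_weak_terms("The report screen shall be fast and easy to use."))
# → ['easy', 'fast']
```

A checker like this can’t judge meaning, of course; it only draws a reviewer’s eye to statements that probably need quantifying.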
It may be necessary, on some occasions at least, to enhance natural-language specifications with less ambiguous forms such as tables, diagrams, or special-purpose “specification languages” such as Structured English.
Ambiguity—the possibility of more than one meaning, with no clear way of telling which meaning is correct—is often related to lack of clarity, usually caused by the failure to provide necessary detail in the mistaken belief that the reading audience will understand it anyway. (See my remarks on Completeness above.) The tester’s problem is that a test based on an ambiguous requirement is likely to produce a result which will indicate a “pass” status to some people, but a “fail” status to others. What’s a poor tester to do? (And spare a thought for the developers, busy making up their own answers.)
An ambiguous requirement may result in an ambiguous product, and will result in ambiguous testing. Did it pass? Or did it fail?
7. Consistent

Requirements that are consistent don’t contradict or duplicate other requirements in the same specification, or in any related document. You recognise a “consistent requirement” when:
- There’s only one definition of it, in only one specification, which is referenced by name wherever the requirement is needed;
- It’s internally consistent with other requirements at its level (marketing, business, product, testing …);
- It’s externally consistent with requirements at other levels (testing, product, business, marketing, …); and
- It uses the same terms for the same concepts used in other requirements.
Consistent requirements use a consistent vocabulary, for which it may be necessary to introduce a standard glossary—especially if different groups within the same business use different terminology for the same ideas, activities, or things. Lack of consistency is a form of ambiguity: how is a tester to know which version is correct?
You may improve consistency (and maintainability) of requirements by referring to original statements where needed, rather than by repeating the same statements in multiple documents.
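A consistent vocabulary can also be checked mechanically. A minimal sketch, assuming an invented glossary of preferred terms:

```python
import re

# Hypothetical glossary mapping: non-preferred synonym -> the
# standard term the business has agreed to use instead.
PREFERRED = {
    "client": "customer",
    "purchaser": "customer",
    "sign-in": "log in",
}

def flag_synonyms(requirement_text):
    """Return non-preferred terms found in a requirement statement."""
    # Tokenise on letters and hyphens so "sign-in." matches "sign-in".
    words = re.findall(r"[a-z]+(?:-[a-z]+)*", requirement_text.lower())
    return sorted(w for w in set(words) if w in PREFERRED)

print(flag_synonyms("The client may sign-in before checkout."))
# → ['client', 'sign-in']
```

Even this trivial check surfaces the ambiguity described above: if “client” and “customer” both appear, which one is the tester to believe?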
8. Traceable

A requirement is traceable if it has a unique and persistent identifier, and its source is recorded. Amongst others, the following two varieties are recognised:
- Forward traceability: where does this requirement get used?
- Backward traceability: where did this feature originate?
Requirements traceability both helps to ensure the product is built as specified, and enables impact analysis for changes. Conciseness is a related issue here: large, rambling paragraphs are difficult to trace to or from. Crisp, concise, individually-identified statements are easiest.
There are many types of traceability, but the ability to trace requirements to test cases is a great place to start introducing it.
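A minimal sketch of that starting point (the requirement and test-case IDs are invented): record, for each test case, the requirement it verifies, and the untested requirements fall out immediately:

```python
# Requirement-to-test-case traceability in its simplest form.
# Each test case records which requirement it verifies.

requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

test_cases = {
    "TC-101": "REQ-001",  # test case -> requirement it verifies
    "TC-102": "REQ-001",
    "TC-103": "REQ-003",
}

# Forward traceability: where is each requirement used?
forward = {req: [] for req in requirements}
for tc, req in test_cases.items():
    forward[req].append(tc)

# Requirements with no test case are a coverage gap.
untested = sorted(req for req, tcs in forward.items() if not tcs)
print("Untested requirements:", untested)
# → Untested requirements: ['REQ-002', 'REQ-004']
```

Real traceability tools add many more link types, but even a table this small supports impact analysis: change REQ-001, and you know TC-101 and TC-102 need revisiting.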
Untraceable requirements are difficult to verify. So are test cases built on them.
9. Concise

A concise requirement definition includes only one requirement, expressed with the minimum number of words (or other elements, such as graphic components) necessary to define it.
Often, it’s helpful to improve understanding by introducing ancillary elements which are not part of the actual requirement, such as explanations, examples, illustrations, or comments. These secondary elements must be presented separately from the primary elements, the actual requirements, and clearly identified as what they are: not requirements!
Given two statements that carry identical meaning, the shorter is to be preferred.
10. Verifiable

I have left this one till last because it brings us full-circle to the issue of “testability”.
There’s a good definition of requirements testability in the IEEE standard 1233 (“Guide for Developing System Requirements”):
testability. The degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met.
A requirement is verifiable if it can be proved that the requirement was implemented in the end product. “Proof” may be by demonstration, by analysis, by inspection, or by test execution.
“Verifiability” relates to most of the prior attributes. Unverifiable requirements may be ambiguous, imprecise, not worth the cost to verify … For example:
- “The system shall not fail during the first 5 years of normal operation”
- “The product shall be easy to use”
- “Use of the device shall reduce fatalities by 30% in the event of a catastrophic failure”.
To be verifiable, each requirement must be expressed unambiguously, and a way must exist (such as those listed above) to prove unambiguously whether or not it has been met. It mustn’t require unreasonable time or cost to verify, and non-functional requirements in particular must be quantified on a defined, numeric scale of measure.
Failures in any of the requirements attributes are likely to result in costly difficulties with planning, designing, implementing, executing, and evaluating tests and test cases. Not to mention designing and building the software.
Failures in verifiability may make testing impossible.