Software Test Automation by Mark Fewster and Dorothy Graham
Published by Addison-Wesley
Reviewed by Peter Morgan
Date: 28 June 1999
Paperback; 600 pp
Do not assume that the cost of an automation tool is the total cost of “automation”, or even the major cost. This is one of the major themes of this book, and although it is about automating the test process, automation sometimes takes a back seat to the idea of testing as a process. The authors are widely respected in the wider testing community, and they bring a wealth of experience to illustrate the idea that buying an automated test tool is only a very small part of “automation”.
Examples abound in the text; the second part of the book comprises guest chapters, with success and failure stories from real-life companies (and names are named). Other examples in the first part are labelled ‘experience reports’, and vary from three or four lines to perhaps three-quarters of a page, all bringing the reader back to reality. Time and again, the importance of a good test process is emphasised, in preference to automation purely for its own sake.
A test automation tool can be part of the testing process. The aim of testing is to aid the delivery of quality software in a timely and cost-effective manner. Therefore, unless automation aids at least one of quality (improvement), cost or timescale (reduction, in these last two cases), it has no place in the tester’s arsenal. However, “automation” does not just mean the use of capture/playback during test execution. In manual testing, execution and the analysis of deviations (from the expected results) go hand in hand. Not so in automated testing. There is a need to make sure that test practitioners don’t spend needless time analysing failed tests where the failure is the same as that of another failed test. There is a suggested strategy for this, so that even if no automation takes place, the test process as a whole improves. Examples of extending automation are given, ranging from file comparison tools to the production of management reports directly from the automation driver.
In automation, there is a fourfold pattern: excitement, disenchantment, disillusionment, and finally gain. It is only if the path is pursued that the gain is enjoyed. Many give up too soon, which results in “shelf-ware”. The authors readily admit that the automation process gives confidence, rather than finding new defects. It is widely acknowledged that 60–80% of defects are found during the first runs of tests, before the real value of any automation has begun. That, however, does not invalidate automation.
There are very good sections on measurement, including a good explanation of Defect Detection Percentage (DDP), and on both tool selection and implementing the chosen tool. A recurring theme is the need to plan and organise before you DO. I found the section on testware artefacts too long; after the need to make sure that there is a place for everything, the same explanation was repeated for each sub-directory or type of process. There is no real detail of the types of test tools that are available. This is both an advantage (because it does not date) and a disadvantage, but there are pointers to where some of this detail can be found.
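For readers unfamiliar with the metric, DDP compares the defects a test effort found against the total eventually known (including those that escaped to later phases or to the field). A minimal sketch follows; the function name and interface are my own illustration, not taken from the book:

```python
# Defect Detection Percentage (DDP): of all the defects eventually known,
# what percentage did this test effort actually find?
def ddp(found_by_testing: int, found_later: int) -> float:
    """DDP = defects found by testing / total defects eventually known, as a %."""
    total = found_by_testing + found_later
    if total == 0:
        raise ValueError("no defects recorded")
    return 100.0 * found_by_testing / total

# Example: testing found 90 defects; users later reported 10 more.
print(ddp(90, 10))  # 90.0
```

Note that DDP can only be calculated retrospectively, once later phases or live use have had a chance to reveal the defects that testing missed.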
The book is nearly four years old, and is still the best available on automation. Assumptions are challenged (should a test have only two possible outcomes, ‘pass’ or ‘fail’? What about ‘expected failure’?), and even if automation is either not considered, or (as a result of the selection process detailed herein) not implemented, the book will do you good.
May I leave you with a quote? After introducing the notion of a ‘tool champion’ as part of the tool implementation process, the authors have the following to say about this key position: “Having a good champion does not guarantee a successful implementation, but not having a champion probably guarantees an unsuccessful one.”
Published on-line on Unicom’s Testing Bulletin, Edition 7, February 2003
Reviewed by Peter Morgan, Principal Consultant