What do you put in your test plan?

Theme event

This evening, January 18, I will be on stage during the TestNet theme event about Context-Driven Testing. The evening will start with a duo presentation by James Bach and Michael Bolton, followed by a series of short presentations, lightning talks and discussions. In one of those lightning talks I will share a personal experience report describing how I changed the way I make test plans. Since most of you will not be able to be there (or might never have heard about TestNet*) I am sharing my experience with you in this post.


Even if a lightning talk lasts only five minutes, it still requires some preparation. So to prepare a little extra I put my experience into perspective and posted the following message on Twitter:

“@Arborosa: Question to my tweeps: What items do you put in a test plan? I’ll put the results on my Blog. (please retweet)”

This resulted in the following responses:

Rob van Steenbergen (@rvansteenbergen)
Scope of testproject, context of product (with mindmap), product risks and qlty attributes and risk approach, planning, who tests, stakeholders, testing tools, explanations abt testing for orgs that are still learning. TP is also promotion material for testing.

Stephan Kämper (@S_2K)
Well, what to put in a plan? A (current) goal of what you’re planning. The major way you’ll follow to reach said goal. A ‘Plan B’. (Known) risks – What’s the risk of following the plan? …the risk of *not* following it? Tools & Techniques? Not sure about these.

Nitin Hatekar (@nhatekar)
Entry and exit criteria for each test phase and specific test approach for each phase. Scope of testing and the estimates for completion of in-scope test efforts. A section for assumptions, risks & blockers as well.

Rik Marselis (@rikmarselis)
For your testplans take IEEE829 (1998) as a starting point. And see tmap.net for templates 😉 (And after a reprimand by Huib Schoots to be more serious) Don’t start with making the testplan. First make the outline of the testreport. That’s your deliverable! The testreport outline must be discussed with stakeholders. Then you have startingpoint for your testplan.

Jesper L. Ottosen (@jlottoosen)
Generally answers and descriptions to “how” – to the level required of the context. ie #itdepends 😉

Jan Jaap Cannegieter (@jjcannegieter)
Write in your testplan the info your stakeholders need. So ask your stakeholders what kind of info they need. Write it for them!

Generally I see two trains of thought here. On one side there is the idea of having more or less fixed items in a test plan: things like scope, approach and (product) risks. On the other side there is the idea of not starting with fixed items or a template, but asking the stakeholders what information they need to have in the test plan. As you will see, this more or less mirrors the change I made.
From old to new

Historically my organization has approached software testing by following a standardized test approach based on TMap. Similarly, test planning is, or rather was, based on an extended TMap-style “Master Test Plan” template. The raw template alone runs to 24 pages when empty, although it does include some examples and explanations. The idea is to fill in all items in the template completely, see the list below, and get it signed off by the principal stakeholders.

In short the template was as follows (chapters with their paragraphs and sub-paragraphs):

  • Colophon
  • Management summary
    • Goals
    • Preconditions
    • Budget and milestones
    • Assignment
  • Introduction
    • Assignment
    • Client
    • Assignee
    • Scope of the assignment
    • Test basis
    • Objective
    • Preconditions
    • Starting points
    • Release from assignment
  • Test strategy
    • Product Risk Analysis
    • Test goals
    • Component per characteristic
    • Test goal vs component matrix
    • Strategy
    • Test levels
    • Entry – exit criteria
    • Test objects
    • Scope
    • Dependencies
    • Project risks
  • Communication & Procedures
    • Reporting
    • Meetings
    • Procedures
  • Test product / Deliverables
    • Project documentation
    • Testware
  • Test Infrastructure
    • Workplace
    • Test environment
  • Budget, planning & organization
    • Budget
    • Planning
    • Team composition

I have to admit that all of these items are in themselves in some way relevant to testing software. But one can question the usefulness of some of them, and even more so the usefulness of having them all together in one document.

The latter is best illustrated by a remark my mentor made when, after three months of being a professional tester, I was writing my first Master Test Plan. He said: “Don’t waste time. Take one of my plans. Ctrl-H the project name, change the stakeholders, check if there is any mention of specifics not relevant to your project and change those. All else you can leave the same.” So, even if I resisted the idea, like my colleagues I learned to do the drill: fill in the template in copy-and-paste style. Only occasionally did a stakeholder question what was in it and distract me from actual testing.

Some five years ago I changed departments and found myself in a place where I was not only free to use just those elements I felt were useful, but could also start changing the template, and the use of it, entirely. But there was resistance from both the testers and the stakeholders. The testers, I think, because some of them now had to think and communicate more; the stakeholders because this broke with the standard process and they too would have to get involved and think more. To break the deadlock I started with an experiment. I filled in the template not just completely but to the letter of the “law”. I ended up with a 36-page document which I immediately sent out to all stakeholders, with an invitation to meet the next week, the request to thoroughly check it in the meantime, and the expectation that they be ready to sign off the document during the meeting.

At the meeting the stakeholders sat silently, sighing at the thought of having to go through all 36 pages. I didn’t do that. Instead I asked how many of them had read the document. With 6 out of 8 I was actually impressed. I then asked how many of them had reviewed it. Still 4 out of the 6. I then asked who had found the document a pleasure to read, who fully understood its content and thought it was of value to the project. As I had hoped, all the attendees broke out in comments on the document: its length, its irrelevance, the difficulty of the content, and so on.

I then decided to pull the rabbit out of the hat and said: “I agree with you all. I too think it’s basically of no use. There is no point in reviewing it. But we still need to write a test plan. So why don’t you tell me what you actually do want to know about testing your product.”

We spent an hour or so discussing what they wanted to know about testing. We agreed that, since we are a financial institution, we still had to follow certain rules, regulations and guidelines, and that I would deliver a new document the same week.

I ended up writing a document that was still 24 pages long. But now it not only adhered to the documentation standards; of those 24 pages, 11 were purely about testing as a way to mitigate risks and provide information about the product for this project, and another 4 were about testing and test heuristics in general. The original document had no such explanation and only 8 pages related to actual testing.


Approach writing a test plan as you would approach any test activity. Figure out what information your stakeholders need and whether there are other things to consider, like rules, regulations or standards. Use your personal experience and any other references you think useful, and then write a plan that suits your model of the context; verify and confirm it with your stakeholders.
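If it helps to make this concrete, the idea of deriving the plan from stakeholder questions plus mandatory rules can be sketched in a few lines of code. This is purely illustrative; all names, questions and topics below are hypothetical examples, not a template:

```python
# Purely illustrative sketch: treat the test plan as the answer to the
# stakeholders' questions, plus whatever rules or standards apply.
# Every name and question here is a made-up example.

stakeholder_needs = {
    "product owner": ["Which product risks are covered?", "When are we done?"],
    "auditor": ["Which regulations does the test process satisfy?"],
    "developers": ["How do we hand over builds and report bugs?"],
}

# Topics nobody asked about but that rules still require,
# e.g. for a financial institution.
mandatory_topics = ["regulatory compliance"]

def plan_outline(needs, mandatory):
    """Derive the plan's sections from questions instead of a fixed template."""
    sections = [q for questions in needs.values() for q in questions]
    sections += [f"Mandatory: {topic}" for topic in mandatory]
    return sections

for section in plan_outline(stakeholder_needs, mandatory_topics):
    print("-", section)
```

The point of the sketch is the direction of derivation: the sections fall out of what people need to know, rather than the needs being squeezed into pre-existing sections.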


Is testing dead?

“Test is dead”

Last year, 2011, Alberto Savoia presented the opening keynote at GTAC, titled “Test is dead”. Alberto started with an all too recognizable “Old Testmentality”:

  • Top down
    • Thou shalt follow the spec
  • Rigid
    • Thou shalt not deviate from the plan
  • Distinct roles and responsibilities
    • Developers shalt develop
    • Testers shalt test
    • And never the twain shalt meet
  • Do not release until ready
    • Thou shalt not sell wine before it’s time

He concludes by declaring that, in essence, the focus is on building it right.

Alberto then takes an elaborate detour to arrive at the conclusion that in the New Testmentality the focus is on building the right it. Along the way he concludes that success, in the post-Agile era, does not depend on testing or on quality, but on building the right it at the highest speed (to get the best realistic marketing edge). And so you do not actually need to test at all. Well… at least at the start, in the right environment… I could go on, but I think Mark Tomlinson’s blog post says just the right it.

The reactions

Alberto’s keynote, and later presentations like James Whittaker’s at EuroSTAR, have spawned a load of critical reactions from the testing community. (At least the part that I am following.) Most of them were downright negative and dismissive. As an example, a quote from one of my fellow DEWTs: “Ik vind het ‘Test is Dead’-paradigma in ieder geval tijdloze flauwekul” (in English: “In my opinion the ‘Test is Dead’ paradigm is to be considered timeless nonsense”). I can understand these reactions; based on the stories, and without critically thinking through what the paradigm was saying, I had a similar mindset.


In short, the ‘Test is dead’ paradigm argues that testing will disperse in two directions: it will either move down to the developers or up to the users. The arguments for the movement down to development are that software in general has gotten better; that there are more possibilities to quickly fix issues in production; that software is able to self-repair; and that there are more standards, better software languages, and software that is no longer localized but available in the, more stable, cloud. The arguments for moving testing up to the users are that current testing slows down the development process and merely imitates user behaviour. Since speed has more value than quality, and users can do a better job at being users than testers can, this kind of testing is no longer necessary.

I agree with the observation. It is true that software nowadays is different from the software that was produced when the first major test approaches were established. The shift to web-based software (delivery) and the public’s growing knowledge and acceptance of software updates have changed the playing field. I think a lot of what testers (still) do nowadays can be done by developers as well. Particularly the stuff I heard a fellow tester sigh about during a TestNet event: “Come on, do I still have to test input fields and buttons for correctness? Why don’t those lazy programmers write unit tests as they are supposed to?” Oh, and yes, there are loads of testers who go and sit behind their computer and punch keys as if they were users. But are they really testers?
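Those routine input-field checks are exactly the kind of thing developers can fold into unit tests. A minimal sketch of what I mean, in which the `validate_amount` function and its rules are entirely hypothetical and stand in for any simple field validation:

```python
# Hypothetical input-field validator such as a developer might cover
# with unit tests, instead of leaving field checks to a tester.

def validate_amount(text):
    """Accept a whole amount between 1 and 10000; reject anything else."""
    if not text.strip().isdigit():
        return False
    value = int(text)
    return 1 <= value <= 10_000

# The boundary and malformed-input checks a tester would otherwise punch in:
assert validate_amount("1") is True
assert validate_amount("10000") is True
assert validate_amount("0") is False        # below lower boundary
assert validate_amount("10001") is False    # above upper boundary
assert validate_amount("12.50") is False    # decimals not allowed
assert validate_amount("-5") is False       # minus sign is not a digit
assert validate_amount("abc") is False
print("all input-field checks passed")
```

Automating this level of checking is cheap for a developer and frees a tester to look for the bugs nobody imagined.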

I do not agree with the conclusion. When it comes to testing business logic, calculations, multi-state or highly integrated programs, security, or usability, more and bigger unit tests really do not cut it. Yes, they take out the more or less obvious bugs, but they still leave the less obvious and unimagined ones mostly untouched. If you want to catch those, you need sapient testers who are able to use their investigative skills; who are able to cooperate with and understand developers, business analysts and users; testers who adjust their skills, and the use of those skills, to the context in which they work. Only these kinds of testers can smoke out bugs that would otherwise have gotten away, and provide information that allows others to make the right decisions. Even Alberto Savoia himself limits his argument when he says during his keynote that eventually you have to build it right as well, and that security-sensitive, risky or regulated software still needs a testing process.

So is testing dead?

Yes, if you mean factory-style testing: mindlessly following standards and pre-scripted tests.

No, if you mean sapient, critical, skillful and context-driven testing.

or in other words

Testing is dead, long live testing!

250 hours of testing practice

The promise

On January 3, 2011 Phil Kirkham posted a question on the Software Testing Club:

“so if you were to set a target of doing 2 hours practice a week every week this year, how would you spend your 100 hours ?”

Having missed the post initially, I only read it in the week before Christmas, so I really did not have enough time left to get to 100 hours in 2011. After reading the post and the comments, however, I felt that Phil was making a valid point: one should spend time and effort on practicing. So in my comment on his post I made the following promise:

“As 2012 is on the brink of starting I will try to put this into practice. As two hours seems a bit low I will spend 5 hours per week on practicing and some extra time on logging and writing short (monthly) posts about it.”


So today is January 1st and I am starting to live up to my promise. Every week of this year, except for the summer holidays, I will try to practice for at least 5 hours and log what I do. At the end of every month I will write a post sharing my activities, providing short reviews and formulating my insights.

I started practicing earlier today by reading “Essential Software Test Design” by Torbjörn Ryber, a book I had previously downloaded as a PDF and of which, after a number of pages and some comments from fellow DEWTs, I found I wanted the hard copy. Later today I will make time to listen to TWiST #76. I am not yet sure what I will do for the rest of the month, but as said before, I will keep you posted.

For now I wish all of you a wonderful, successful and entertaining 2012 and I hope to meet lots of you in person this year!