Is testing dead?

“Test is dead”

Last year, 2011, Alberto Savoia presented the opening keynote at GTAC, titled “Testing is dead”. Alberto started with an all too recognizable Old Testmentality:

  • Top down
    • Thou shalt follow the spec
  • Rigid
    • Thou shalt not deviate from the plan
  • Distinct roles and responsibilities
    • Developers shalt develop
    • Testers shalt test
    • And never the twain shalt meet
  • Do not release until ready
    • Thou shalt not sell wine before its time

He concludes this by declaring that, in essence, the focus is on building it right.

Alberto then takes an elaborate detour to arrive at the conclusion that in the New Testmentality the focus is on building the right it. Along the way he concludes that success, in the post-Agile era, does not depend on testing or on quality but on building the right it at the highest speed (to get the best realistic marketing edge). And so you do not actually need to test at all. Well… at least not at the start, in the right environment… I could go on, but I think Mark Tomlinson’s blog post says just the right it.

The reactions

Alberto’s keynote, and later presentations like James Whittaker’s at EuroSTAR, have spawned a load of critical reactions from the testing community. (At least the part that I am following.) Most of them were even downright negative and dismissive. As an example, a quote from one of my fellow DEWTs: “Ik vind het ‘Test is Dead’-paradigma in ieder geval tijdloze flauwekul” (in English: “In my opinion the ‘Test is Dead’ paradigm is to be considered timeless nonsense”). I can understand these reactions; based on the stories alone, and without critically thinking about what the paradigm was actually saying, I had a similar mindset.

Requital

In short, the ‘Test is dead’ paradigm argues that testing will disperse in two directions: it will either move down to the developers or up to the users. The arguments for the movement down to development are that software in general has gotten better; that there are more possibilities to quickly fix issues in production; that software is able to self-repair; that there are more standards and better programming languages; and that software is no longer localized but available in the more stable cloud. The arguments for moving testing up to the users are that current testing slows down the development process and that it merely imitates user behaviour. Since speed has more value than quality, and users can do a better job at being users than testers can, this kind of testing is no longer necessary.

I agree with the observation. It is true that software nowadays is different from the software that was produced when the first major test approaches were established. The shift to web-based software (delivery) and the public’s growing familiarity with and acceptance of software updates have changed the playing field. I think a lot of what testers (still) do nowadays can be done by developers as well. Particularly the stuff I heard a fellow tester sigh about during a TestNet event: “Come on, do I still have to test input fields and buttons for correctness? Why don’t those lazy programmers write unit tests as they are supposed to?”. And yes, there are loads of testers who sit behind their computer and punch keys as if they were users. But are they really testers…
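To make that sigh concrete: the input-field correctness checks that tester was complaining about are exactly the kind of thing a developer-written unit test can take over. A minimal sketch, where the validate_age function and its acceptance rules are entirely hypothetical, purely for illustration:

```python
# Hypothetical input-field validation that a developer could cover
# with unit tests instead of handing it to a tester to punch in by hand.

def validate_age(value: str) -> bool:
    """Accept whole numbers between 0 and 130, reject everything else."""
    if not value.strip().isdigit():
        return False
    return 0 <= int(value) <= 130

# Boundary and malformed-input checks, automated once instead of re-tested manually.
assert validate_age("0") is True
assert validate_age("130") is True
assert validate_age("131") is False
assert validate_age("-1") is False   # minus sign -> not a digit string
assert validate_age("abc") is False
assert validate_age("") is False
```

Checks like these run on every build, which is precisely why routinely re-testing them by hand adds so little value.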

I do not agree with the conclusion. When it comes to testing business logic, calculations, multi-state or highly integrated programs, security, or usability, more and bigger unit tests really do not cut it. Yes, they take out the more or less obvious bugs, but they still leave the less obvious and unimagined ones mostly untouched. If you want to catch those, you need sapient testers who are able to use their investigative skills. Who are able to cooperate with and understand developers, business analysts and users. Testers who adjust their skills, and the use of those skills, to the context in which they work. Only this kind of tester can smoke out bugs that otherwise would have gotten away and provide information that allows others to make the right decisions. Even Alberto Savoia himself limits his arguments when he says during his keynote that eventually you have to build it right as well, and that security-sensitive, risky or regulated software still needs a testing process.

So is testing dead?

Yes, if you mean factory-style testing that mindlessly follows standards and pre-written scripts.

No, if you mean sapient, critical, skillful, context-driven testing.

or in other words

Testing is dead, long live testing!

Testing is of some value to someone who matters

Concern

I have a concern. We online testers have one thing in common: we care enough about our craft to take the time to read these blogs. That’s all very fine. However, most testers, and this is based on my perception rather than research, do not read blogs, or articles, or books, or go to conferences or workshops, or follow a course. Well, some of those testers do, but only when they think their (future) employer wants them to. And when they do, they go out for a certificate that proves they did so.

Becoming a tester

Regardless of where we start our career, be it in software engineering, on the business side or somewhere else, most testers start out with some kind of introductory test training. In the Netherlands, where I live, that most of the time means a TMap, or sometimes an ISTQB, training. And my presumption is that everywhere you get a similar message on what testing is:

  • Establishing or updating a test plan
  • Writing test cases (design; procedures; scripts; situation-action-expected result)
  • Define test conditions (functions; features; quality attributes; elements)
  • Define exit criteria (generic and specific conditions, agreed with stakeholders, to know when to stop testing)
  • Test execution (running a test to produce actual result; test log; defect administration)

But there are likely to be exceptions, for instance the Florida Institute of Technology, where Cem Kaner teaches testing.

Granted, neither TMap nor ISTQB limits testing solely to this. For instance, TMap starts off by defining testing as “Activities to determine one or more attributes of a product, process or service”, and up to here all goes well, but then they add “according to a specified procedure”. And that is where things start to go wrong. In essence the TMaps of the world hold the key to start you testing software seriously. But instead of handing down the knowledge and guiding you to gather your own experiences, they supply you with fixed roadmaps, procedures, process steps and artifacts. All of which are obviously easy to reproduce in a certification exam. And even this could still be, as these methods so often promote, a good starting point from which to move on and develop your skills. Unfortunately, for most newbies all support and coaching stops once they have passed the exam. Sometimes they even face discouragement to look beyond what they have learned.

Non-testers

To make matters worse, the continuing effort to make testing a serious profession has sent line managers, project managers, other team roles and customers the message that these procedures, processes and artifacts not only matter, but are what testing is about. These items (supposedly) show the progress, result and quality of the testing being undertaken. Line and process managers find it easy to accept these procedures, processes and artifacts as measurement items, as they are similar to what they use themselves according to their own standards. So if the measurements show that progress is being made and that all artifacts have been delivered, they are pleased with the knowledge that testing is completed. Customers and end users go along with this, but their belief in these measurements has limits, as they actually experience the end product. Like testers, they are more aware that testing never really ends, and that it is about actual information about the product, not about testing artifacts.

So?!

New methods and approaches such as agile testing have brought the development roles closer together and have created a better understanding of the need for, and content of, testing among both the team and the stakeholders. Other approaches, like context-driven testing, focus more on enhancing the intellectual craftsmanship of testing, emphasizing the need for effective and efficient communication, the influence of context on all of this, and the fact that software development is aimed at solving something. And thus the aim of testing is shifting from checking and finding defects to supplying information about the product. Regardless of this, however, and in spite of how much I adhere to these approaches, I think they have a similar flaw to the traditional approaches. Like TMap or ISTQB, neither of them steps outside of its testing container enough to change the management and customer perception. They still let management measure testing by the same old standards of counting artifacts and looking at the calendar.

Challenge

I think we as a profession should seek ways of changing the value of testing to our stakeholders. We should make them understand that testing is not about the process or procedure by which it is executed, nor about its artifacts, but about the information it supplies and the value of that information in achieving business goals.

I myself cannot give you a prêt-à-porter solution, so I challenge you, my peers, to discuss with me whether you agree with this vision and, if you do, to form a new approach together. I will gladly facilitate such a discussion and deliver its (intermediate) results to a workshop or conference at some point.

What’s in a name – Part 3, I am a tester

This third post on the use of titles for software testing focuses on being a tester. Not that, if you are a software test engineer or in software quality assurance (previous posts), you do not test or cannot think of yourself as a tester. In my model, however, I am making a distinction. I believe that being called, or calling yourself, a tester affects the way you see your function or role in such a way that it differs from being a test engineer or quality assurer.

So what is a tester?

Well, to be honest, it differs. To start off, I see two sorts of distinctions to be made in defining what a “tester” is. The first has to do with how you approach the title. From an employer’s perspective, for instance, looking for a “tester” often means that the employer has not felt the need to be more specific. So a tester in this situation can be anything from a software engineer or a quality assurer down to someone pushing keys for a test that someone else scripted. As uninformative as the job title might be, the job description that goes with it is usually much more descriptive. It will tell you whether the company favors testing as engineering, as quality assurance, as something tailored to their needs or simply as any other job.

That “any other job” approach to being a tester forms the first part of my idea of how testers see themselves. To some testers, being a tester really does not mean much more than doing what the “boss” asks you to do. Perhaps they have an affinity for IT or a knack for analyzing stuff, but generally they could just as likely have been doing a completely different job to make a living or to build a career.

This section of the model represents the any-job paradigm

The any-job paradigm does not mean you are bad at what you do. It’s just that you are not doing it to be a tester. You do it because you need the money, you are en route to something higher, you happened to end up there, or for whatever other reason. And now that you are a software tester, you will do that for as long as nothing better comes along. You will most likely even educate yourself to what you perceive as essential, or to a level someone who matters tells you to. But since your heart is not in it, you go for “proof of knowledge” by showing attendance lists or certificates, and likely value that over the actual use of practical knowledge.

My kind of tester

In contrast to this I see a tester who, even if he became a software tester by chance, now that he is one puts his heart into it. This kind of tester educates himself to get better; he follows trends by reading magazines, blogs and testing books. He visits conferences and workshops and discusses testing with his peers. This tester thinks of his job as a skilled craft. A craft that continuously requires his skills to be sharpened.

This section of the model represents the craftsmanship paradigm

I am sure there are test engineers and quality assurers who perceive what they do as skilled, and they too educate themselves and go to conferences. The essential difference, however, is that a software testing craftsman goes the extra mile. He does not limit his focus to a technical or process approach. A software testing craftsman has a wider view upon the world. He gathers technical and quality-assurance skills, but just as likely analytical, social, management or other skills. Furthermore, he does not value one over the other. He lets their usefulness depend on the problem to solve and the context in which it occurs.

Currently I like to think of myself as a context-driven software testing craftsman. At what level of craftsmanship I am, I leave for others to judge.

I do not see myself as a master in software testing yet.

I am however aiming to get there….