Testing is some value to someone who matters


I have a concern. We online testers have one thing in common: we care enough about our craft to take the time to read these blogs. That is all very well. However, most testers (and this is based on my perception, not research) do not read blogs, articles or books, and do not go to conferences, workshops or courses. Well, some of them do, but only when they think their (future) employer wants them to. And when they do, they go for a certificate that proves they did so.

Becoming a tester

Regardless of where we start our career, be it in software engineering, on the business side or somewhere else, most testers start out with some kind of introductory test training. In the Netherlands, where I live, that usually means a TMap training, or sometimes an ISTQB one. And my presumption is that everywhere you get a similar message about what testing is:

  • Establishing or updating a test plan
  • Writing test cases (design; procedures; scripts; situation-action-expected result)
  • Defining test conditions (functions; features; quality attributes; elements)
  • Defining exit criteria (generic and specific conditions, agreed with stakeholders, to know when to stop testing)
  • Executing tests (running a test to produce an actual result; test log; defect administration)
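To make the "situation-action-expected result" pattern from the list above concrete, here is a minimal sketch of a scripted test case in Python. The `apply_discount` function and its discount rule are hypothetical, invented purely for this example:

```python
# A minimal "situation-action-expected result" test case sketch.
# The system under test is hypothetical: orders over 100.00 are
# assumed to get a 10% discount.

def apply_discount(order_total):
    """Hypothetical function under test: 10% off orders over 100.00."""
    if order_total > 100.00:
        return round(order_total * 0.90, 2)
    return order_total

def test_discount_applied_above_threshold():
    order_total = 150.00                   # situation: order over the threshold
    result = apply_discount(order_total)   # action: run the function under test
    expected = 135.00                      # expected result: 10% discount applied
    assert result == expected, f"expected {expected}, got {result}"

def test_no_discount_at_threshold():
    order_total = 100.00                   # situation: order exactly at the threshold
    result = apply_discount(order_total)   # action
    assert result == 100.00                # expected result: no discount

test_discount_applied_above_threshold()
test_no_discount_at_threshold()
print("both test cases passed")
```

Each test spells out the situation, the action and the expected result, which is exactly the structure the introductory trainings teach.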

There are likely to be exceptions, of course. The Florida Institute of Technology, where Cem Kaner teaches testing, is one example.

Granted, neither TMap nor ISTQB limits testing solely to this. TMap, for instance, starts off by defining testing as “activities to determine one or more attributes of a product, process or service”, and up to there all goes well. But then they add “according to a specified procedure”, and that is where things start to go wrong. In essence the TMaps of the world hold the key to getting you started testing software seriously. But instead of handing down the knowledge and guiding you to gather your own experience, they supply you with fixed roadmaps, procedures, process steps and artifacts, all of which are obviously easy to reproduce in a certification exam. Even this could still be, as these methods so often promote, a good starting point from which to move on and develop your skills. Unfortunately, for most newbies all support and coaching stops once they have passed the exam. Sometimes they even face discouragement from looking beyond what they have learned.


To make matters worse, the continuing drive to make testing a serious profession has sent line managers, project managers, other team roles and customers the message that these procedures, processes and artifacts not only matter, but are what testing is about. These items (supposedly) show the progress, results and quality of the testing being undertaken. Line and process managers find it easy to accept these procedures, processes and artifacts as measurements, as they are similar to what they use themselves according to their own standards. So if the measurements show that progress is being made and that all artifacts have been delivered, they are pleased in the knowledge that testing is complete. Customers and end users go along with this, but their belief in these measurements has limits, because they actually experience the end product. Like testers, they are more aware that testing is never really finished and that it is about actual information about the product, not about testing artifacts.


New methods and approaches such as agile testing have brought the development roles closer together and have created a better understanding of the need for and content of testing, for both the team and the stakeholders. Other approaches, like context-driven testing, focus more on enhancing the intellectual craftsmanship of testing, emphasizing the need for effective and efficient communication, the influence of context on all of this, and the fact that software development is aimed at solving something. And thus the aim of testing is shifting from checking and finding defects to supplying information about the product. Regardless of this, however, and in spite of how much I adhere to these approaches, I think they share a flaw with the traditional approaches. Like TMap or ISTQB, neither of them steps outside its testing container far enough to change the perception of management and customers. They still let management measure testing by the same old standards: counting artifacts and looking at the calendar.


I think we as a profession should seek ways to change the value of testing to our stakeholders: to make them understand that testing is not about the process or procedure by which it is executed, nor about its artifacts, but about the information it supplies and the value of that information in achieving business goals.

I myself cannot give you a prêt-à-porter solution, so I challenge you, my peers, to discuss with me whether you agree with this vision and, if you do, to form a new approach for this together. I will gladly facilitate such a discussion and present its (intermediate) results at a workshop or conference at some point.


What’s in a name – Part 2, I am a quality assurer

This second post on the use of titles for software testing focuses on software quality assurance. Quality assurance, or QA, as such is not exclusive to software development; QA is practised in almost every industry, such as car manufacturing or medicine.

Quality Assurance

Quality assurance in essence involves monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with according to those same standards and procedures. Quality assurance is often seen as an activity to be executed independently of its subject.


The following two principles are included in the aim of Quality Assurance:

  • Fit for purpose; the product should be suitable for its intended purpose
  • Right the first time; faults in the product should be eliminated

The guiding principle behind Quality Assurance is that quality is malleable. It is achieved by working hierarchically according to detailed plans, using accurate planning and control, rationalising processes, and standardising and making things uniform. In this sense quality assurance is a continuation of Frederick W. Taylor’s ideas of Scientific Management. In line with this, all skill and knowledge is transferred into standard processes, procedures, rules and templates, and thus removed from the individual workers.

Software quality assurance

By and large, software quality assurance follows the same lines of thought as quality assurance in general.

This part of the model then represents the assurance paradigm.

As in quality assurance, a tester in software quality assurance seeks the use of rationalised processes, standards and uniformity. She prefers the use of specific standards and methodologies; IEEE, ISTQB and TMap are well-known examples. These methods provide the tester with (seemingly) clear guidelines to follow and artifacts to use, so that, if used properly, software testing itself should provide adequate information about the state of the product and its readiness for use.

There is, however, a distinct difference between software quality assurance and quality assurance in general. In contrast with quality assurance in the traditional sense, software offers nothing tangible to measure. This is partly overcome with the use of software-related quality attributes. Quality attributes are used to help guide the focus of testing and consist of characteristics that are considered typical of software development.

SQA – testing

Software quality assurance has the advantage that it is an understandable and recognizable proposition. To testers the appeal is that it provides a uniform solution for “all” situations and that the handbook approach can be followed from the get-go. To managers the appeal is that the procedures are recognizable in relation to mainstream management ideas and that, in combination with a phased interpretation of software development and testing, e.g. the V-model, it is highly useful for project planning.

Even software quality assurance itself admits that following the process and filling in templates alone does not produce tests. So software quality assurance has teamed up with software engineering, in that it uses test design techniques to identify and describe tests. In turn, software quality assurance has developed means to limit the scope of testing, using a combination of quality attributes and risk assessment to order and, if necessary, limit the number of tests to be designed and/or executed.
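To illustrate what such a test design technique looks like in practice, here is a sketch of boundary value analysis, one of the techniques commonly taught in these methodologies. The `is_valid_age` function and its 18-to-65 rule are hypothetical, chosen only to make the example self-contained:

```python
# A sketch of one common test design technique, boundary value analysis,
# applied to a hypothetical rule: ages 18..65 (inclusive) are valid.

def is_valid_age(age):
    """Hypothetical function under test."""
    return 18 <= age <= 65

def boundary_values(low, high):
    """Derive boundary test inputs for an inclusive [low, high] range."""
    return [low - 1, low, high, high + 1]

# Risk-based limiting in miniature: instead of testing every age,
# we design tests only around the boundaries, where defects cluster.
for age in boundary_values(18, 65):
    expected = 18 <= age <= 65
    actual = is_valid_age(age)
    assert actual == expected, f"age {age}: expected {expected}, got {actual}"

print("boundary tests passed for ages:", boundary_values(18, 65))
```

The technique reduces an unbounded input space to four deliberate test inputs, which is exactly the kind of scope limiting described above.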


In practice, the use of software quality assurance, together with test engineering, has admittedly helped to establish software testing as a craft of its own. It has put software testing on the map, and in general software testing is now seen as part of the development process. But the use of software quality assurance as such has been no guarantee against the development of ‘bad’ software. By bad software I mean software that did not provide a solution for the problem it was intended to solve.

In response to this limited success, several lines of improvement have evolved. One of them is to quality-assess software quality assurance itself with models like TMMi or Test Process Improvement. These might help to forge smoother adherence to the standards, procedures and templates, but the emergence of better software seems to be more a side effect than a result of these models themselves.

A second response is to put more stress on quality assurance of software as an independent activity, performed in separate (QA) departments that in effect police software development. Although this helps with adherence to external regulations and audit rules, it has little effect in the pursuit of building good software that solves the problem it is intended for.

These models, and others like them, have widened the gap between quality assurance through process and the act of testing software through skill. So much so that, increasingly, quality assurance and testing are considered different activities that oppose each other. In my opinion we need both aspects of software development to do a good job. How we mix them, however, should depend on the context in which we apply them. But in whatever context, the testing of software should have the purpose of delivering useful information about the product. To do this, in my opinion, testing skills come first, and knowledge and use of standards or procedures follow, to aid or enhance the act of delivering useful information.