Test Types – A

This sixth post in the series of software testing overviews introduces the first of some 80+ test types. That number is somewhat arbitrary: while searching for and investigating testing definitions I found well over a hundred definitions of test types, and I have chosen to leave a number of ‘test types’ out. I did so because some of them really described a test level or a test technique, and I could not see how to make a useful test type out of them.

So what then do I call a test type?

To me a test type is a particular subject of testing whose approach, goal and/or use of oracle(s) provides information typical to that test type.

While going through my overview you might find that some of the test types I mention do not entirely fit the narrow description of a test type provided above. They are included anyway because they are so often referred to as test types that they deserve a mention in this post for that reason alone.

This post differs somewhat from the earlier posts in that the definitions are often rewritten by me into a single aggregate definition, as many different ones exist for the same term. Where useful I have added comments as additional information. And since a post with over 80 descriptions would be too long, I have split the overview into alphabetical sections, beginning with the letter A.

Just as a reminder: these are not necessarily my definitions but a collection of definitions I encountered. Finally, there are no sources or attributions for the individual test types, as adding them would make this a totally different exercise.

A/B testing

A/B testing originates in marketing research, where it is used to find the more effective of two possible solutions by presenting each to a different group and measuring the ‘profits’. In software testing it is mostly used to compare two versions of a program or website, where typically one version differs from the other on only one or a few controllable criteria.
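
To make the mechanics concrete, here is a minimal Python sketch of the two ingredients of an A/B test: deterministically splitting users over the two variants and comparing the measured ‘profits’, here a conversion rate. All names and numbers are hypothetical.

```python
import hashlib


def assign_variant(user_id: str, experiment: str = "exp-1") -> str:
    """Deterministically split users 50/50 over variants 'A' and 'B'."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"


def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors if visitors else 0.0


# Hypothetical results after running both variants side by side.
observed = {"A": (1000, 110), "B": (1000, 135)}
for variant, (visitors, conversions) in observed.items():
    print(variant, f"{conversion_rate(conversions, visitors):.1%}")
```

Whether the difference between the two rates is meaningful is a statistical question of sample size and significance, which is exactly where the marketing-research roots of the technique show.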

Acceptance testing
During acceptance testing, requirements, variables, parts of a program or specific behaviour of a program are compared against measurable aspects of predetermined acceptance criteria. This requires at least four things.
First, identification of the requirements, variables and parts or behaviour to cover (coverage). Second, expressing these in measurable aspects.
Third, the aspects need to represent the defining elements of the acceptance criteria. Finally, the acceptance criteria themselves should represent the needs and wants of the stakeholders. The goal of this interpretation of acceptance testing is to give stakeholders the possibility to accept the software, and to provide the possibility of sign-off.

A more pragmatic way to look at acceptance testing is to allow the stakeholders, often end users, to evaluate the software and see if it meets their expectations and (operational) needs. In practice this is often done by proxy through stakeholder representatives. Sometimes the testers are selected as the representatives. I do not think that is a good idea, as testers then step outside their boundary of information provider, are often committed to the solution and to their role in creating it, and, more essentially, testers are not the end users.

Active testing
Testing a program by triggering actions and events in it and studying the results. To be honest, I do not consider this a test type, as in my opinion it describes nearly all types of testing.

Ad-hoc testing

Ad-hoc testing is software testing performed without explicit prior planning or documentation on the direction of the test, on how to test or on which oracles to use.
Some definitions see it as informal, unstructured and not reproducible. Informal and unstructured (in the sense of unprepared) it certainly is, as that is the point of doing it ad hoc. Whether it is reproducible depends on whether you care to record the test progress and test results; something that, in my opinion, is not inherently attached to doing something ad hoc but is highly advisable. Why else would you be testing, if not to be able to tell the testing story?

Age testing
Age testing is described as a technique that evaluates a system’s ability to perform in the future: it measures how significantly performance might drop as the system gets older.
To be honest, I found only one reference to this test type, but I find the idea interesting.

Agile testing
Agile testing is often mentioned as a test type or test approach, but I have added no definition or description here. In my opinion agile testing is not a software test type. Agile testing is rather a particular context in which testing is performed, one that may have its particular challenges for test execution, for how tests are approached and for choices of test tooling, but it is not a specific test type.

Alpha testing
Alpha testing is an in-house (full) integration test of the near-complete product, executed by others than the development team but still executed in a development environment. Alpha testing simulates the product’s intended use and helps catch design flaws and operational bugs.
One could argue that this is more of a test level than a test type. I choose to view it as a test type because it is more about the type of use and its potential to discover new information than about being a part of the software development itself. I specifically disagree with the idea that this is an intermediary step towards, or part of, handing over software to a Test/QA group, as some definitions propose. In my opinion testing is integrated right from the start of development up until it stops, because the product ends its life-cycle or due to some other stopping heuristic.

API testing
API testing involves testing individual or combined inputs, outputs and business rules of the API under investigation.
Essentially an API is a device-independent, or component-independent, access provider that receives, interprets/transforms and sends messages so that different parts of a computer or programs can use each other’s operations and/or information. Testing an API is similar to testing in general, albeit that an API has a smaller scope and has, or should have, specific contracts and definitions that describe the API’s specific variables, value ranges and (business) rules. Testing an API should however not be limited to the API alone. Sources, destinations (end-points), web services (e.g. REST, SOAP), message types (e.g. JSON, XML), message formats (e.g. SWIFT, FIX, EDI, CSV), transport protocols (e.g. HTTP(S), JMS, MQ) and communication protocols (e.g. TCP/IP, SMTP, MQTT, TIBCO Rendezvous) all influence the overall possibilities and functionality of the API in relation to the system(s) that use(s) the API. Typically API testing is semi- or fully automated and requires sufficient knowledge of tools, message types, and transport and communication protocols to be executed well.
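
As an illustration, here is a minimal pytest-style sketch in Python of testing an API in the sense just described. The endpoint, fields and business rule are hypothetical, invented for the example; the point is the mechanics of checking status, message format and rules against the API’s contract.

```python
import requests

BASE_URL = "https://api.example.test/v1"  # hypothetical endpoint


def test_get_account():
    """Check status code, message type and a simple business rule."""
    resp = requests.get(f"{BASE_URL}/accounts/42", timeout=5)
    assert resp.status_code == 200
    assert resp.headers["Content-Type"].startswith("application/json")

    account = resp.json()
    # Contract (assumed): these fields and value ranges come from the API's
    # definitions and business rules.
    assert set(account) >= {"id", "balance", "currency"}
    assert account["currency"] in {"EUR", "USD"}
    assert account["balance"] >= 0  # assumed rule: account may not be overdrawn
```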

An alternative meaning of API testing is testing by using an API. In this case the API is a means to an end: a way of gaining access to the subject under test, feeding it data or instructions, and gathering its responses.

Test Levels! Really?!

Next in the series of software terminology lists is “Test Levels”. But there is something strange about test levels. Up until now almost every tester I have worked with is familiar with the concept of software test levels. But I wonder if they really are. What some call a test level, say Unit Testing, I would call a test type. With a level like Component Testing, however, I am not so sure. It seems only one level up from Unit Testing, but now I am inclined to see it more as a test level. In my experience I am not alone in this confusion.

Sogeti’s brand TMap was one of the main contributors to establishing the concept of test levels (or at least so in the Netherlands). But since last year Sogeti acknowledges the confusion: in their article “Test Levels? Test Types? Test Varieties!” they propose to rename them Test Varieties. Even ISTQB and ISO do not explicitly mention test levels (or test phases, if you like).

But test levels are a term with some historic relevance and as such they are part of my series of software testing lists, even if nowadays I never use them anymore.

Acceptance Testing

  • Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • A formal test conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. (Cunningham & Cunningham, Inc.; http://c2.com/cgi/wiki?AcceptanceTest)
  • Acceptance testing is the process of comparing the program to its initial requirements and the current needs of its end users. (G. Myers, The Art of Software Testing (2nd edition) [2004])

Chain Test

  • A chain test tests the interaction of the system with the interfacing systems. (Derk-Jan de Grood; Test Goal, 2008)

Claims Testing

  • The product should behave the way some document, artifact, or person says it should. The claim might be made in a specification, a Help file, an advertisement, an email message, or a hallway conversation, and the person or agency making the claim has to carry some degree of authority to make the claim stick. (Michael Bolton; Testing without a map, 2005)
  • The object of a claim test is to evaluate whether a product lives up to its advertising claims. (Derk-Jan de Grood; Test Goal, 2008)

Component Testing

  • The testing of individual software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Function Testing

  • Function testing is a process of attempting to find discrepancies between the program and the external specification. An external specification is a precise description of the program’s behavior from the point of view of the end user. (G. Myers, The Art of Software Testing (2nd edition) [2004])

Functional Acceptance Test

  • The functional acceptance test is carried out by the accepter to demonstrate that the delivered system meets the required functionality. The functional acceptance test tests the functionality against the system requirements and the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • The functional acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the functional requirements. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

Hardware-software Integration Testing

  • Testing performed to expose defects in the interfaces and interaction between hardware and software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Integration Testing

  • Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Module Test

  • Module tests focus on the elementary building blocks in the code. They demonstrate that the modules meet the technical design. (Derk-Jan de Grood; Test Goal, 2008)
  • Module testing (or unit testing) is a process of testing the individual subprograms, subroutines, or procedures in a program. (G. Myers, The Art of Software Testing (2nd edition) [2004])

Module Integration Test

  • Module integration tests focus on the integration of two or more modules. (Derk-Jan de Grood; Test Goal, 2008)


Pilot

  • The pilot simulates live operations in a safe environment so that the live environment is not disrupted if the pilot fails.

Production Acceptance Test

  • The system owner uses the PAT to determine that the system is ready to go live and can go into maintenance. (Derk-Jan de Grood; Test Goal, 2008)
  • The production acceptance test is a test carried out by the future administrator(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements set by system management. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Test / System Testing

  • Testing an integrated system to verify that it meets specified requirements. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • The system test demonstrates that the system works according to the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • System testing is not limited to systems. If the product is a program, system testing is the process of attempting to demonstrate how the program, as a whole, does not meet its objectives. (G. Myers, The Art of Software Testing (2nd edition) [2004])
  • System testing, by definition, is impossible if there is no set of written, measurable objectives for the product. (G. Myers, The Art of Software Testing (2nd edition) [2004])
  • A system test is a test carried out by the supplier in a (manageable) laboratory environment, with the aim of demonstrating that the developed system, or parts of it, meet with the functional and non-functional specifications and the technical design. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Integration Test

  • A system integration test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that (sub)system interface agreements have been met, correctly interpreted and correctly implemented. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006) 

Unit Test

  • A unit test is a test carried out in the development environment by the developer, with the aim of demonstrating that a unit meets the requirements defined in the technical specifications. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)
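
As a minimal illustration of such a unit test, here is a sketch in Python using unittest; the unit and its ‘technical specification’ are invented for the example.

```python
import unittest


def net_price(gross: float, discount_pct: float) -> float:
    """Unit under test; assumed spec: apply a percentage discount and
    reject discounts outside 0-100."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct must be between 0 and 100")
    return round(gross * (1 - discount_pct / 100), 2)


class NetPriceTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(net_price(100.0, 15), 85.0)

    def test_invalid_discount_is_rejected(self):
        with self.assertRaises(ValueError):
            net_price(100.0, 120)


if __name__ == "__main__":
    unittest.main()
```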

Unit Integration Test

  • A unit integration test is a test carried out by the developer in the development environment, with the aim of demonstrating that a logical group of units meets the requirements defined in the technical specifications. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

User Acceptance Test

  • The user acceptance test is primarily a validation test to ensure the system is “fit for purpose”. The test checks whether the users can use the system, how usable the system is and how the system integrates with the workflow and processes. (Derk-Jan de Grood; Test Goal, 2008)
  • The user acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements of the users. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)


One on one

A couple of days ago I was in one of those meetings with my manager where we discuss, one on one, how things are going and whether we have questions or problems with which we can help each other. Since things are going well enough, we mostly spent our time talking about work in general, project progress and personal interests. At the end of this particular meeting my manager asked me a surprising question:

“Why do you call yourself a senior test analyst?”

I fell silent for a moment. Officially my job title states Test Analyst C, so he had a point there. My response had several aspects. First of all, my official job title doesn’t have any meaning outside of the office, and I often meet or interview people from outside of work. So I feel more comfortable calling myself a senior test analyst then.

“Okay, I understand the test analyst part, but why senior?” he responded.

Well, I stated, because I believe that I am senior based on my amount of experience, my skill and the effort I put into my work. And using this title makes it easy to recognize what I do and at what level I do it.

“I see,” he said. “But don’t you think that others can see that you are senior based on what you say, what you do and what you mean to our community and to the test community in general? Nobody calls themselves junior or medior. I only see people using senior. So why don’t you think about why you use it and whether you really need it.”

Does it matter?

Well, he got me thinking. Last year I published a series of three posts, https://arborosa.org/2011/08/17/whats-in-a-name-part-3/, on using names and titles. Part of the message in those posts is that the title you carry influences how others perceive your work, but also how you perceive and execute your work yourself. I ended the last post with the following remark:

“Currently I like to think of myself as a, context driven, software testing craftsman. At what level of craftsmanship I am I leave for others to judge.” 

Even my personal business card simply states “Software Tester”. So why did I start using Senior Test Analyst when I talk about myself?

Having thought about it, I have come to the following. It is not that I feel under-appreciated at work (although I wouldn’t mind a pay rise ;) nor that I feel the need to emphasize my level of ability or knowledge. It is more my perception of how non-testers value the profession. And for some reason adding the senior in front of test analyst seems to add some level of authority and importance.

How about you?

It is my opinion that I am not the only one who feels that (good) testers, by whatever name they go, do not get the appreciation they deserve. So my question to you is:

What do you do to convince non testers of the importance, skill and effort that go into your job? Or do you think that it’s all their problem?


Testing is some value to someone who matters


I have a concern. We online testers have one thing in common: we care enough about our craft to take the time to read these blogs. That’s all very fine. However most testers, and this is based on my perception, not research, do not read blogs, articles or books, nor do they go to conferences or workshops or follow a course. Well, some of those testers do, but only when they think their (future) employer wants them to. And when they do, they go out for a certificate that proves they did so.

Becoming a tester

Regardless of where we start our career, be it in software engineering, on the business side or somewhere else, most testers start out with some kind of introductory test training. In the Netherlands, where I live, most of the time that means you’re getting a TMap or sometimes an ISTQB training. And my presumption is that everywhere you get a similar message on what testing is:

  • Establishing or updating a test plan
  • Writing test cases (design; procedures; scripts; situation-action-expected result; a minimal sketch follows this list)
  • Defining test conditions (functions; features; quality attributes; elements)
  • Defining exit criteria (generic and specific conditions, agreed with stakeholders, to know when to stop testing)
  • Executing tests (running a test to produce an actual result; test log; defect administration)
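
As a sketch of what such a scripted test case boils down to, here is the situation-action-expected-result structure in Python; the example and all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ScriptedTestCase:
    """A classic scripted test case: situation, action, expected result."""
    situation: str
    action: Callable[[], object]
    expected: object


def execute(case: ScriptedTestCase) -> str:
    """The 'test execution' step: produce an actual result and log a verdict."""
    actual = case.action()
    verdict = "PASS" if actual == case.expected else "FAIL"
    return f"{verdict}: {case.situation} (expected {case.expected!r}, got {actual!r})"


# Hypothetical case: logging in on a locked account should be refused.
case = ScriptedTestCase(
    situation="account is locked",
    action=lambda: "access denied",  # stand-in for driving the real program
    expected="access denied",
)
print(execute(case))
```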

But there are likely to be exceptions. For instance at the Florida Institute of Technology where Cem Kaner teaches testing.

Granted, neither TMap nor ISTQB limit testing solely to this. For instance TMap starts off by defining testing as: “Activities to determine one or more attributes of a product, process or service” and up to here all goes well, but then they add “according to a specified procedure”. And that is where things start to go wrong. In essence the TMaps of the world hold the key to getting you started testing software seriously. But instead of handing down the knowledge and guiding you to gather your own experiences, they supply you with fixed roadmaps, procedures, process steps and artifacts. All of which are obviously easy to reproduce in a certification exam. And even this could still be, as these methods so often promote, a good starting point to move on and develop your skills. Unfortunately for most newbies all support and coaching stops once they have passed the exam. Sometimes they even face discouragement from looking beyond what they have learned.


To make matters worse, the continuing drive to make testing a serious profession has brought line managers, project managers, other team roles and customers the message that these procedures, processes and artifacts not only matter, but are what testing is about. These items (supposedly) show the progress, result and quality of the testing being undertaken. Line and process managers find it easy to accept these procedures, processes and artifacts as measurement items, as they are similar to what they use themselves according to their own standards. So if the measurements show that progress is being made and that all artifacts have been delivered, they are pleased with the knowledge that testing is completed. Customers and end users go along with this, but their belief in these measurements has its limits, as they actually experience the end product. Like testers, they are more aware that testing never really ends and that it is about actual information about the product, not about testing artifacts.


New methods and approaches such as agile testing have brought the development roles closer together and have created a better understanding of the need for, and content of, testing among both the team and the stakeholders. Other approaches, like context-driven testing, focus more on enhancing the intellectual craftsmanship of testing, emphasizing the need for effective and efficient communication, the influence of context on all of this, and the fact that software development is aimed at solving something. And thus the aim of testing is shifting from checking and finding defects to supplying information about the product. Regardless of this, however, and in spite of how much I adhere to these approaches, I think they have a similar flaw to the traditional approaches. Like TMap or ISTQB, neither of them goes outside its testing container enough to change the management and customer perception. They still let management measure testing by the same old standards of counting artifacts and looking at the calendar.


I think we as a profession should seek ways of changing the value of testing to our stakeholders. To make them understand that testing is not about the process or procedure by which it is executed, nor about its artifacts, but about the information it supplies and the value of that information in achieving business goals.

I myself cannot give you a prêt-à-porter solution, so I challenge you, my peers, to discuss with me whether you agree with this vision, and if you do, to form a new approach for this together. I will gladly facilitate such a discussion and deliver its (intermediate) results to a workshop or conference at some point.

What’s in a name – Part 2, I am a quality assurer

This second post on the use of titles for software testing focuses on software quality assurance. Quality assurance, or QA, as such is not exclusive to software development; QA is practised in almost every industry, e.g. car manufacturing or medicine.

Quality Assurance

Quality Assurance in essence involves monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with according to those same standards and procedures. Quality assurance is often seen as an activity to be executed independent of its subject.


The following two principles are included in the aim of Quality Assurance:

  • Fit for purpose; the product should be suitable for its intended purpose
  • Right the first time; faults in the product should be eliminated

The guiding principle behind Quality Assurance is that quality is malleable. It is achieved by working hierarchically according to detailed plans, using accurate planning and control, rationalising processes, and standardising and making uniform. In this sense quality assurance is a continuation of Frederick W. Taylor’s ideas of Scientific Management. In line with this, all skill and knowledge is transferred into standard processes, procedures, rules and templates, and thus removed from the individual workers.

Software quality assurance

By and large software quality assurance follows the same lines of thought as quality assurance.

This part of the model then represents the assurance paradigm.

As in quality assurance, a tester in software quality assurance seeks the use of rationalized processes, standards and uniformity. She prefers the use of specific standards and methodologies; IEEE, ISTQB and TMap are well-known examples of this. These methods provide the tester with (seemingly) clear guidelines to follow and artifacts to use, so that, if used properly, software testing itself should provide adequate information about the state of the product and its readiness for use.

There is however a distinct difference between software quality assurance and quality assurance. In contrast with quality assurance in the traditional sense, software does not have something tangible to measure. This is partly overcome with the use of software-related quality attributes. Quality attributes are used to help guide the focus of testing and consist of characteristics that are considered typical of software development.

SQA – testing

Software quality assurance has the advantage that it is an understandable and recognizable proposition. To testers the appeal is that it provides a uniform solution for “all” situations and the handbook approach can be followed from the get go. To managers the appeal is that the procedures are recognizable in relation to mainstream management ideas and in combination with a phased interpretation of software development and testing, e.g. the V-Model, it is highly useful for project planning.

Even software quality assurance admits that following the process and filling in templates alone does not produce tests. So software quality assurance has teamed up with software engineering in that it uses test design techniques to identify and describe tests. In turn, software quality assurance has developed means to limit the scope of testing, using a combination of quality attributes and risk assessment to order and, if necessary, limit the number of tests to be designed and/or executed.


In practice the use of software quality assurance, together with test engineering, has admittedly helped to establish software testing as a craft of its own. It has put software testing on the map, and in general software testing is now seen as part of the development process. But the use of software quality assurance as such has been no guarantee against the development of ‘bad’ software. By bad software I mean software that did not provide a solution for the problem it was intended for.

In response to this limited success, several lines of improvement have evolved. One of them is to quality-assess the software quality assurance itself with the use of models like TMMi or Test Process Improvement. These might help to forge a smoother adherence to the standards, procedures and templates, but the emergence of better software seems to be more of a side effect than a result of these new models themselves.

A second response is to stress more strongly that quality assurance of software is an independent activity, performed in separate (QA) departments that in effect police software development. Although this helps to adhere to external regulations and audit rules, it has little effect in the pursuit of building good software that solves, or is fit to be used for, the problem it is intended for.

These models, and others like them, have widened the gap between quality assurance through process and the act of testing software through skill. So much so that quality assurance and testing are increasingly considered as different activities that oppose each other. In my opinion we need both aspects of software development to do a good job. How we mix them, however, should depend on the context in which they are applied. But in whichever context, the testing of software should have the purpose of delivering useful information about the product. To do this, in my opinion, testing skills come first, and knowledge and use of standards or procedures follow to aid or enhance the act of delivering useful information.

What’s in a name – Part 1, I am an engineer

Ever since I started software testing I have been surprised by the different names that test functions have, ranging from say “Software Development Engineer in Test” to just “Tester”. At the time I did not pay much attention to this, as testing to me seemed to be similar regardless of the title you have while doing it. Some time later, however, after listening to Stephen R. Covey’s 7 Habits, I was reminded of these different titles. Covey mentions, in the chapter on paradigms, the shifts in paradigm that can come with a change in the title you carry. At that time my idea that all software testers basically performed the same activity had also changed, and the subject of the title under which you test intermittently came back to mind. And now, some years later, I have found both the time and the inspiration to use it as the subject of a number of posts. In this small series of posts I will start to shape a model of my perception that the title under which you perform your testing activities has consequences for how you and others perceive and approach your work in software testing.

This first post shapes the part of the model that relates to the origin of the computer itself.

The first computers were mostly the product of scientists, mathematicians and engineers (like Leibniz, Babbage, Atanasoff or Zuse) and others, building more or less programmable hardware in an attempt to both solve calculations and speed up their solution. The application-specific machines that emerged out of this eventually calculated not only based on their physical configuration but also based on data and settings fed into them.

The design, implementation and modification of this data, known as software, eventually was, and still is, seen as an engineering activity known as Software Engineering. (The origin of the term is linked to a NATO report in 1968.)

This part of the model then represents the engineering paradigm.

In the engineering paradigm, testing is seen as a specialisation or sub-discipline of software engineering. In this paradigm software test engineers see themselves as closely related to software engineering, and by doing so they like to share the perception that their work is based on the types of theoretical foundations and practical disciplines that are traditional in the established branches of engineering. Like their counterparts, their focus lies first and foremost on the functional correctness of the software engineering product: the code. The code itself and its syntax or grammar (patterns, coding practices, etc.) is however mostly left to the developers or programmers.

The software test engineer’s responsibility is to check whether or not the software:

  • Works; does the software or section of software work once installed
  • Contains all functionality; checking that all functionality that is described in the specifications is present
  • Is functionally correct; does the function do what is described in the specification of that function

For a lot of people, whether or not they are involved in software development, or even in software testing themselves, the following thus describes what testing is about:

Verify that it works and is built as specified.

Obviously software testing has evolved since the early days of building computers using switches, relays and vacuum tubes. The consistent use of the engineer part in titles for software testing, however, signifies, in my opinion, an implicit focus on the correctness and completeness of the software as built, rather than more user- or quality-attribute-oriented views.