Test Types – A

This sixth post in the series of software testing overviews introduces the first of some 80+ different test types. That number itself is somewhat arbitrary. While searching for and investigating testing definitions I found over a hundred definitions of test types, and I have chosen to leave a number of ‘test types’ out. My reason for doing so is that some of them describe a test level or a test technique rather than a test type, and I could not see how to make a useful test type out of them.

So what then do I call a test type?

To me a test type is a particular subject of testing whose approach, goal and/or use of oracle(s) provides information that is typical to that test type.

While going through my overview you might find that some of the test types I mention do not entirely fit the narrow description of a test type given above. They are included in spite of this because they are so often referred to as test types that they deserve a place in this post for that reason alone.

This post differs somewhat from the earlier posts in that the definitions used are often rewritten by me into a single aggregate definition, as many different ones for the same term exist. Where useful I have added comments as additional information. Also, since a post with over 80 descriptions would be too long, I have split the overview into alphabetical sections. To begin with: the letter A.

Just as a reminder: these are not necessarily my definitions but a collection of definitions I encountered. Finally, there are no sources or attributions for the individual test types, as adding them would make this a totally different exercise.

A/B testing

A/B testing originates from marketing research, where it is used to investigate which of two possible solutions is the more effective by presenting them to two different groups and measuring the ‘profits’. In software testing it is mostly used to compare two versions of a program or website that differ only on a single or a few controllable criteria.
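As a minimal sketch of the mechanics, the example below deterministically assigns users to one of two variants and compares the observed conversion rates. All names and numbers are hypothetical illustrations, not a prescribed implementation.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to variant 'A' or 'B' by hashing their id,
    so the same user always sees the same version."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors if visitors else 0.0

# Hypothetical observed results for the two variants.
rate_a = conversion_rate(conversions=120, visitors=2400)  # version without the change
rate_b = conversion_rate(conversions=156, visitors=2400)  # version with the change
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}")  # A: 5.0%  B: 6.5%
```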

Acceptance testing
During acceptance testing, requirements, variables, parts of a program or specific behaviour of a program are compared against measurable aspects of predetermined acceptance criteria. This requires at least four things.
First, identification of the requirements, variables and parts or behaviour (coverage). Second, expressing these in measurable aspects.
Third, the aspects need to represent the defining elements of the acceptance criteria. Finally, the acceptance criteria themselves should represent the needs and wants of the stakeholder. The goal of this interpretation of acceptance testing is to give stakeholders the possibility to accept the software, and to enable sign-off.
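A minimal sketch of the ‘measurable aspects’ idea: each acceptance criterion is written down as a check against an observed measurement. The criteria, names and thresholds below are hypothetical examples.

```python
# Each acceptance criterion expressed as a measurable check; all names,
# thresholds and measurements are hypothetical examples.
criteria = {
    "checkout completes within 3 seconds": lambda m: m["checkout_seconds"] <= 3.0,
    "search succeeds for at least 95% of queries": lambda m: m["search_hit_rate"] >= 0.95,
}

measurements = {"checkout_seconds": 2.4, "search_hit_rate": 0.97}  # observed values

for description, check in criteria.items():
    print(f"{'PASS' if check(measurements) else 'FAIL'}: {description}")
```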

A more pragmatic way to look at acceptance testing is to allow the stakeholders, often end users, to evaluate the software and see if it meets their expectations and (operational) needs. In practice this is often done by proxy by stakeholder representatives. Sometimes the testers are selected as the representatives. I do not think that is a good idea, as testers then step outside of their boundary of information provider, are often committed to the solution and their role in creating it and, more essentially, testers are not the end users.

Active testing
Testing the program by triggering actions and events in it and studying the results. To be honest, I do not consider this a test type, as in my opinion it describes nearly all types of testing.

Ad-hoc testing

Ad-hoc testing is software testing performed without explicit prior planning or documentation of the direction of the test, of how to test or of which oracles to use.
Some definitions see it as informal, unstructured and not reproducible. Informality and being unstructured (if seen as unprepared) are certainly true, as that is the point of doing it ad hoc. The not-reproducible part depends on whether you care to record the test progress and test results. That is, in my opinion, not inherently attached to doing something ad hoc, but it is highly advisable. Why else would you be testing, if not to be able to tell the testing story?

Age testing
Age testing is a technique that evaluates a system’s ability to keep performing in the future. How significantly the performance might drop as the system gets older is what is measured in age testing.
To be honest I found only one reference to this test type but I find the idea interesting. 

Agile testing
Agile testing is often mentioned as a test type or test approach, but I have added no definition or description here. In my opinion agile testing is not a software test type. Rather, agile testing is a particular context in which testing is performed, one that may pose its own challenges for test execution, for how tests are approached and for choices of test tooling, but it is not a specific test type.

Alpha testing
Alpha testing is an in-house (full) integration test of the near-complete product that is executed by people other than the development team, but still in a development environment. Alpha testing simulates the product’s intended use and helps catch design flaws and operational bugs.
One could argue that this is more of a test level than a test type. I choose to view it as a test type because it is more about the type of use and its potential to discover new information than about being a phase of the software development itself. I specifically disagree with the idea that this is an intermediary step towards, or part of, handing software over to a Test/QA group, as some definitions propose. In my opinion testing is integrated right from the start of development up until it stops, because the product ends its life-cycle or due to some other stopping heuristic.

API testing
API testing involves testing individual or combined inputs, outputs and business rules of the API under investigation.
Essentially an API is a device-independent, or component-independent, access provider that receives, interprets/transforms and sends messages so that different parts of a computer or different programs can use each other’s operations and/or information. Testing an API is similar to testing in general, albeit that an API has a smaller scope and has, or should have, specific contracts and definitions that describe the API’s specific variables, value ranges and (business) rules. Testing an API should however not be limited to the API alone. Sources, destinations (end-points), web services (e.g. REST, SOAP), message types (e.g. JSON, XML), message formats (e.g. SWIFT, FIX, EDI, CSV), transport protocols (e.g. HTTP(S), JMS, MQ) and communication protocols (e.g. TCP/IP, SMTP, MQTT, TIBCO Rendezvous) all influence the overall possibilities and functionality of the API in relation to the system(s) that use(s) the API. Typically API testing is semi- or fully automated and requires sufficient knowledge of tools, message types, and transport and communication protocols to be executed well.
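As a minimal sketch, assuming a hypothetical REST endpoint that returns JSON, a single API check could look like the test below, covering transport-level expectations as well as the contract’s fields, value ranges and a business rule. The URL, fields and rules are illustrative, not taken from any real API.

```python
import requests  # assumes the 'requests' library is installed

BASE_URL = "https://api.example.com"  # hypothetical endpoint

def test_get_order_respects_its_contract():
    response = requests.get(f"{BASE_URL}/orders/42", timeout=5)

    # Transport-level expectations.
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")

    # Contract: required fields, value ranges and a business rule.
    order = response.json()
    assert {"id", "status", "total"} <= order.keys()
    assert order["total"] >= 0                             # totals are never negative
    assert order["status"] in {"open", "paid", "shipped"}  # allowed value range
```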

An alternative meaning of API testing is testing by using an API. In this case the API is a means to an end: it provides access to the subject under test for feeding it data or instructions and gathering responses.

Regression Testing

As a follow-up in the testing definition series it was my intention to continue with covering test types. Initial investigation showed what I had already feared: such a post would become a Herculean task and probably my longest post ever. So I will continue with that particular endeavor sometime later, tackling it one step at a time. This post, for starters, covers one of the most common but also one of the most peculiar types of testing:

“Regression Testing”

Regression testing is so common as a testing type that the majority of books about software testing, and agile for that matter, that I know mention regression testing. Almost as common, however, is that most of them do not tell what regression testing is, do not tell how one should actually go about doing regression testing, or both. To be fair, an exception to the latter is that quite a few, particularly the ones with an agile demeanor, tell that regression testing is done by having automated tests, but that is hardly any more informative, is it?

Before I go further into what makes regression testing peculiar, first, in line with the previous posts, a list of regression testing definitions:

  • Checking that what has been corrected still works. (Bertrand Meyer; Seven Principles of Software Testing 2008)
  • Regression testing involves reuse of the same tests, so you can retest (with these) after change. (Cem Kaner, James Bach, Bret Pettichord; Lessons learned in Software Testing 2002)
  • Regression testing is done to make sure that a fix does what it’s supposed to do (Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software 2006)
  • Regression testing is the selective retesting of an application or system that has been modified to insure that no previously working components, functions, or features fail as a result of the repairs. (John E. Bentley; Software Testing Fundamentals Concepts, Roles, and Terminology 2005)
  • Retesting to detect faults introduced by modification (ISO/IEC/IEEE 24765:2010)
  • Saving test cases and running them again after changes to other components of the program (Glenford J. Myers; The art of software testing 2nd Edition 2004)
  • Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements (ISO/IEC/IEEE 24765:2010)
  • Testing following modifications to a test item or to its operational environment, to identify whether regression failures occur (ISO/IEC/IEEE 29119-1:2013)
  • Testing if what was tested before still works (Egbert Bouman; SmarTEST 2008)
  • Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed. (Standard glossary of terms used in Software Testing Version 2.2, 2012)
  • Testing required to determine that a change to a system component has not adversely affected functionality, reliability or performance and has not introduced additional defects (ISO/IEC 90003:2014)
  • Tests to make sure that the change didn’t disturb anything else. Test the overall integrity of the program. (Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software 2006)

Looking at the above definitions the general idea about regression testing seems to be:

“To ensure that, except for the parts of the areas* that were intentionally changed, no other parts of these areas or other areas of the software are impacted by those changes, and that these still function and behave as before.”
(*Area is used here as a general expression for function, feature, component, or any other dimensional division of the subject under test.)

The peculiar thing now is that, however useful and logical such a definition is, it only provides the intention of this type, or should I say activity, of testing. In practice regression testing could still encompass any other testing type.

To know what to do you first need to establish which areas are knowingly affected by the changes, and then which areas have the highest likelihood of being unknowingly affected by them. Next to that, there probably are areas in your software where you do not want to take the risk of them being affected by the changes. In his presentation at EuroSTAR in 2005, Peter Zimmerer addressed the consequences of this in his test design poster by pointing out that the wider you throw out your net for regression effects, the larger the effort will be (a small selection sketch follows the list):

  • Parts which have been changed – 1
  • Parts which are influenced by the change – 2
  • Risky, high priority, critical parts – 3
  • Parts which are often used – 4
  • All – 5
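A minimal sketch of how such widening scopes could drive test selection; the test names and scope tags are entirely hypothetical.

```python
# Each test is tagged with the narrowest Zimmerer scope it belongs to
# (1 = changed parts ... 5 = everything). All names are hypothetical.
TESTS = [
    ("test_changed_discount_calculation", 1),  # part that was changed
    ("test_invoice_totals",               2),  # influenced by the change
    ("test_payment_processing",           3),  # risky, high-priority, critical
    ("test_login",                        4),  # often used
    ("test_report_export",                5),  # everything else
]

def select_regression_tests(net_width: int) -> list[str]:
    """Return the tests that fall within the chosen net width (1-5)."""
    return [name for name, scope in TESTS if scope <= net_width]

print(select_regression_tests(net_width=2))
# ['test_changed_discount_calculation', 'test_invoice_totals']
```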

Once you have identified the areas you want to regression test, you still need to figure out how to test those areas for the potential impact of the change. The general idea to solve this, in theory at least, seems to be to rerun previous tests that cover these areas. As this might mean running numerous tests for lengthy periods of time, many books and articles propose running automated tests. This will however only work if there are automated tests covering these areas to begin with. And even if there are, you still need to evaluate the results of every failed test, and there is no clear indication of how long that may take.

How do you know that these existing tests actually test for the impact of the change? After all, they were not designed to do so. For all you know they might or might not fail due to changes to the area they cover. Either result could therefore be right or wrong in light of the changes. The test itself could also be influenced by an impact of the change on the test (positive or negative) that has not been considered or identified yet.

All in all, regression testing is easily considered to be necessary, not so easy to scope, difficult to evaluate for success and considerably more work than many people think. Even so, next to writing new tests it probably is the best solution for checking whether changes bring about unwanted functionality or behavior in your software. My suggestion to you is to at least change the test data, so that these existing tests have a better chance of finding new bugs.
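To illustrate that last suggestion, here is a minimal sketch of rerunning an existing regression check with fresh but reproducible test data. The function under test and its invariant are hypothetical stand-ins.

```python
import random

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test, standing in for real production code."""
    return round(price * (1 - percent / 100), 2)

def test_discount_with_varied_data(seed: int) -> None:
    """The same regression check, but fed with seeded random data each run."""
    rng = random.Random(seed)
    for _ in range(100):
        price = round(rng.uniform(0.01, 10_000), 2)
        percent = rng.uniform(0, 100)
        result = apply_discount(price, percent)
        assert 0 <= result <= price, f"failed for price={price}, percent={percent}"

test_discount_with_varied_data(seed=42)  # record the seed so a failure can be replayed
```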

Test Levels! Really?!

Next in the series of software terminology lists is “Test Levels”. But there is something strange about test levels. Up until now almost every tester that I have worked with was familiar with the concept of software test levels. But I wonder if they really are. What some call a test level, say Unit Testing, I would call a test type. However, with a level like Component Testing I am not so sure. It seems only one level up from Unit Testing, but now I am inclined to see it more as a test level. In my experience I am not alone in this confusion.

Sogeti’s brand TMap was one of the main contributors to establishing the concept of test levels (or at least so in the Netherlands). But since last year Sogeti acknowledges the confusion in their article “Test Levels? Test Types? Test Varieties!” and proposes to rename them Test Varieties. Even ISTQB and ISO do not explicitly mention test levels (or test phases, if you like).

But test levels are a term with some historic relevance, and as such they are part of my series of software testing lists. Even if nowadays I no longer use them.

Acceptance Testing

  • Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • A formal test conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. (Cunningham & Cunningham, Inc.; http://c2.com/cgi/wiki?AcceptanceTest)
  • Acceptance testing is the process of comparing the program to its initial requirements and the current needs of its end users. (G. Myers, The art of software testing (2nd edition) [2004])

Chain Test

  • A chain test tests the interaction of the system with the interfacing systems. (Derk-Jan de Grood; Test Goal, 2008)

Claims Testing

  • The product should behave the way some document, artifact, or person says it should. The claim might be made in a specification, a Help file, an advertisement, an email message, or a hallway conversation, and the person or agency making the claim has to carry some degree of authority to make the claim stick. (Michael Bolton; Testing without a map, 2005)
  • The object of a claim test is to evaluate whether a product lives up to its advertising claims. (Derk-Jan de Grood; Test Goal, 2008)

Component Testing

  • The testing of individual software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Function Testing

  • Function testing is a process of attempting to find discrepancies between the program and the external specification. An external specification is a precise description of the program’s behavior from the point of view of the end user. (G. Myers, The art of software testing (2nd edition) [2004])

Functional Acceptance Test

  • The functional acceptance test is carried out by the accepter to demonstrate that the delivered system meets the required functionality. The functional acceptance test tests the functionality against the system requirements and the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • The functional acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the functional requirements. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

Hardware-software Integration Testing

  • Testing performed to expose defects in the interfaces and interaction between hardware and software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Integration Testing

  • Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Module Test

  • Module tests focus on the elementary building blocks in the code. They demonstrate that the modules meet the technical design. (Derk-Jan de Grood; Test Goal, 2008)
  • Module testing (or unit testing) is a process of testing the individual subprograms, subroutines, or procedures in a program. (G. Myers, The art of software testing (2nd edition) [2004])

Module Integration Test

  • Module integration tests focus on the integration of two or more modules. (Derk-Jan de Grood; Test Goal, 2008)

Pilot

  • The pilot simulates live operations in a safe environment so that the live environment is not disrupted if the pilot fails.

Production Acceptance Test

  • The system owner uses the PAT to determine that the system is ready to go live and can go into maintenance. (Derk-Jan de Grood; Test Goal, 2008)
  • The production acceptance test is a test carried out by the future administrator(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements set by system management. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Test / System Testing

  • Testing an integrated system to verify that it meets specified requirements. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • The system test demonstrates that the system works according to the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • System testing is not limited to systems. If the product is a program, system testing is the process of attempting to demonstrate how the program, as a whole, does not meet its objectives. (G. Myers, The art of software testing (2nd edition) [2004])
  • System testing, by definition, is impossible if there is no set of written, measurable objectives for the product. (G. Myers, The art of software testing (2nd edition) [2004])
  • A system test is a test carried out by the supplier in a (manageable) laboratory environment, with the aim of demonstrating that the developed system, or parts of it, meet with the functional and non-functional specifications and the technical design. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Integration Test

  • A system integration test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that (sub)system interface agreements have been met, correctly interpreted and correctly implemented. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006) 

Unit Test

  • A unit test is a test carried out in the development environment by the developer, with the aim of demonstrating that a unit meets the requirements defined in the technical specifications (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

Unit Integration Test

  • A unit integration test is a test carried out by the developer in the development environment, with the aim of demonstrating that a logical group of units meets the requirements defined in the technical specifications (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

User Acceptance Test

  • The user acceptance test is primarily a validation test to ensure the system is “fit for purpose”. The test checks whether the users can use the system, how usable the system is and how the system integrates with the workflow and processes. (Derk-Jan de Grood; Test Goal, 2008)
  • The user acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements of the users. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)