Test Types – D, E, F

Test Type
A particular type of testing with an approach, goal and/or use of oracle(s) that provides information typical of that test type.

This is the third post in a (sub) series on Test Types. This post covers test types beginning with D, E and F. Please add any additions or remarks in the comment section.

Data integrity testing
Data integrity testing focuses on verifying and validating that the data within an application and its databases remains accurate, consistent and intact throughout its lifecycle as it is processed (CRUD), retrieved and stored.
I fear this is an area of testing that is often overlooked. While it gets some attention when functionality is first tested, the attention paid to the behavior of data drops over time.
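A data integrity check can be as small as a CRUD round trip that asserts nothing changed except what was intended. Below is a minimal sketch of that idea against an in-memory SQLite table; the table and field names are purely illustrative assumptions, not taken from any real application.

    import sqlite3

    def test_customer_record_survives_crud_roundtrip():
        # Hypothetical schema; a real test would target the application's own tables.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

        # Create
        conn.execute("INSERT INTO customers VALUES (1, 'Alice', 'alice@example.com')")

        # Update only the email field
        conn.execute("UPDATE customers SET email = 'alice@example.org' WHERE id = 1")

        # Read back: everything except the intentionally changed field must be untouched
        row = conn.execute("SELECT id, name, email FROM customers WHERE id = 1").fetchone()
        assert row == (1, "Alice", "alice@example.org")

        # Delete and verify the record is really gone
        conn.execute("DELETE FROM customers WHERE id = 1")
        assert conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0] == 0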

Dependency testing
Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Destructive testing
Testing to determine the point of failure of the software, or of its individual components or features, when put under stress.
This seems very similar to Load Testing but I like the emphasis on individual stress points.

Development testing
It is an existing term with its own Wikipedia page but it doesn’t bring anything useful to software testing as such.

Documentation testing
Testing of documented information, definitions, requirements, procedures, results, logging, test cases, etc.

Dynamic testing
Testing the dynamic behavior of the software.
Almost all testing falls under this definition. So in practice a more specific identification of a test type should be chosen.

End-to-End testing
Testing the workflow of a single system or a chain of systems with regard to its inputs, outputs and the processing of these, and with regard to availability, capacity, compatibility, connectivity, continuity, interoperability, modularity, performance, reliability, robustness, scalability, security, supportability and traceability.
In theory end-to-end testing seems simple: enter some data and check that it is processed and handed over throughout all systems until the end of the chain. In practice end-to-end testing is very difficult. The long list of quality characteristics mentioned above serves as an indication of what could go wrong along the way.

Endurance testing
Endurance testing examines the ability to handle continuous load, under both normal and difficult/unpleasant conditions, over a longer period of time.

Error handling testing
Use specific input or behavior to generate known, and possibly unknown, errors.
Documented error and exception handling is a great source for test investigations. It often reveals undocumented requirements and business logic. It is also interesting to see whether the exceptions and errors actually occur in the described situations.
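As a minimal sketch of this, assume a hypothetical transfer() function whose documented behavior is to raise a ValueError for negative amounts; the test deliberately feeds the known bad input and checks that the documented error actually occurs.

    import pytest

    def transfer(amount):
        # Hypothetical stand-in for the function under test.
        if amount < 0:
            raise ValueError("amount must be non-negative")
        return amount

    def test_negative_amount_raises_documented_error():
        # Documented behavior: negative amounts are rejected with a ValueError.
        with pytest.raises(ValueError, match="non-negative"):
            transfer(-10)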

Error guessing
Error guessing is based on the idea that experienced, intuitive or skillful testers are able to find bugs based on their abilities, and that it can be used next to more formal techniques.
As such one could argue that it is not as much a test type as it is an approach to testing. 

Exploratory testing
Exploratory testing is a way of learning about and investigating a system through concurrent design, execution, evaluation, re-design and reporting of tests, with the aim of finding answers to currently known and as yet unknown questions whose answers enable individual stakeholders to make decisions about the system.
Exploratory testing is an inherently structured approach to testing that seeks to go beyond the obvious requirements and uses heuristics and oracles to determine test coverage areas and test ideas and to determine the (relative) value of test results. Exploratory testing is often executed on the basis of Session Based Test Management, using charter-based and time-limited sessions.
It is noteworthy that, in theory at least, all of the test types mentioned in this series could be part of exploratory testing if deemed appropriate to use. 

Failover testing
Failover testing investigates the system's ability to successfully fail over, recover or re-allocate resources after a hardware, software or network malfunction, such that no data is lost, data integrity remains intact and no ongoing transactions fail.

Fault injection testing
Fault injection testing is a method in which hardware faults, compile time faults or runtime faults are ‘injected’ into the system to validate its robustness.
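A minimal runtime fault-injection sketch, assuming a hypothetical get_price_or_default() helper: unittest.mock forces the underlying network call to fail, and the test checks that the caller degrades gracefully instead of crashing.

    from unittest.mock import patch
    import urllib.request

    def get_price_or_default(url, default=0.0):
        # Hypothetical function under test: falls back to a default on network trouble.
        try:
            with urllib.request.urlopen(url) as response:
                return float(response.read())
        except OSError:
            return default

    def test_injected_network_fault_is_handled_gracefully():
        # Inject the fault: every urlopen() call now raises instead of touching the network.
        with patch("urllib.request.urlopen", side_effect=OSError("injected fault")):
            assert get_price_or_default("http://example.com/price") == 0.0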

Functional testing
Functional testing is testing aimed to verify that the system functions according to its requirements.
There are many definitions of functional testing and the one above seems to capture most. Interestingly, some definitions hint that testing should also aim at covering boundaries and failure paths even if these are not specifically mentioned in the requirements. Others mention design specifications, or written specifications. For me, functional testing is initially about conformity to the published requirements, and then about investigating in which ways this conformity could be broken.

Fuzz testing
Fuzz testing or fuzzing is a software testing technique, often automated or semi-automated, that involves providing invalid, unexpected, or random data to the inputs of a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks.
Yes, this is more or less the Wikipedia definition.
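As a tiny illustration, with Python's json module standing in for the program under test: random byte strings are thrown at the parser, the expected "clean rejection" is tolerated, and any other exception or crash would surface as a finding. The iteration count and seed are arbitrary choices.

    import json
    import random

    def fuzz_json_parser(iterations=1000, seed=42):
        rng = random.Random(seed)
        for _ in range(iterations):
            # Generate a random blob of 1-63 bytes and feed it to the parser.
            data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
            try:
                json.loads(data)
            except (ValueError, UnicodeDecodeError):
                pass  # invalid input rejected cleanly: the expected outcome
            # Any other exception (or an interpreter crash) is a potential bug.

    if __name__ == "__main__":
        fuzz_json_parser()
        print("fuzzing run completed without unexpected exceptions")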

Regression Testing

As a follow-up in the testing definition series, it was my intention to continue by covering Test Types. Initial investigation showed what I had already feared: such a post would become a Herculean task and probably my longest post ever. So I will continue with that particular endeavor sometime later, tackling it one step at a time. For starters, this post covers one of the most common but also one of the most peculiar types of testing:

“Regression Testing”

Regression Testing is so common as a testing type that the majority of books about software testing, and agile for that matter, that I know mention regression testing. Almost as common, however, is that most of them do not explain what regression testing is, do not explain how one should actually go about doing it, or both. To be fair, an exception to the latter is that quite a few, particularly the ones with an agile demeanor, state that regression testing is done by having automated tests, but that is hardly any more informative, is it?

Before I go further into why regression testing is peculiar, first, in line with the previous posts, a list of regression testing definitions:

  • Checking that what has been corrected still works. (Bertrand Meyer; Seven Principles of Software Testing 2008)
  • Regression testing involves reuse of the same tests, so you can retest (with these) after change. (Cem Kaner, James Bach, Bret Pettichord; Lessons learned in Software Testing 2002)
  • Regression testing is done to make sure that a fix does what it’s supposed to do (Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software 2006)
  • Regression testing is the selective retesting of an application or system that has been modified to insure that no previously working components, functions, or features fail as a result of the repairs. (John E. Bentley; Software Testing Fundamentals Concepts, Roles, and Terminology 2005)
  • Retesting to detect faults introduced by modification (ISO/IEC/IEEE 24765:2010)
  • Saving test cases and running them again after changes to other components of the program (Glenford J. Myers; The art of software testing 2nd Edition 2004)
  • Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements (ISO/IEC/IEEE 24765:2010)
  • Testing following modifications to a test item or to its operational environment, to identify whether regression failures occur (ISO/IEC/IEEE 29119-1:2013)
  • Testing if what was tested before still works (Egbert Bouman; SmarTEST 2008)
  • Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed. (Standard glossary of terms used in Software Testing Version 2.2, 2012)
  • Testing required to determine that a change to a system component has not adversely affected functionality, reliability or performance and has not introduced additional defects (ISO/IEC 90003:2014)
  • Tests to make sure that the change didn’t disturb anything else. Test the overall integrity of the program. (Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software 2006)

Looking at the above definitions the general idea about regression testing seems to be:

“To ensure that, except for the parts of the areas* that were intentionally changed, no other parts of these areas or other areas of the software are impacted by those changes, and that these still function and behave as before”.
(*Area is used here as a general expression for function, feature, component, or any other dimensional division of the subject under test that is used)

The peculiar thing now is that, however useful and logical such a definition is, it only provides the intention of this type, or should I say activity, of testing. Regression testing could still encompass any other testing type in practice.

To know what to do, you first need to establish which areas are knowingly affected by the changes, and then which areas have the most likelihood of being unknowingly affected by the change. Next to that, there probably are areas in your software where you do not want to take the risk of them being affected by the changes. In his presentation at EuroSTAR in 2005, Peter Zimmerer addressed the consequences of this in his test design poster by pointing out that the wider you cast your net for regression effects, the larger the effort will be:

  • Parts which have been changed – 1
  • Parts which are influenced by the change – 2
  • Risky, high priority, critical parts – 3
  • Parts which are often used – 4
  • All – 5

Once you have identified the areas you want to regression test, you still need to figure out how to test those areas for the potential impact of the change. The general idea to solve this, in theory at least, seems to be to rerun previous tests that cover these areas. As this might mean running numerous tests for lengthy periods of time, many books and articles propose running automated tests. This will, however, only work if there are automated tests to use for testing these areas to begin with. And even if there are, you still need to evaluate the results of any failed test, and there is no clear indication of how long that may take.

How do you know that these existing tests do test for the impact of the change? After all, they were not designed to do so. For all you know they might or might not fail due to changes to the area that they test. Either result could therefore be right or wrong in light of the changes. The test itself could be influenced by an impact of the change on the test (positive or negative) that has not yet been considered or identified.

All in all, regression testing is easily considered to be necessary, not so easy to determine, difficult to evaluate for success, and considerably more work than many people think. Even so, next to writing new tests it probably is the best solution to check whether changes bring about unwanted functionality or behavior in your software. My suggestion to you is to at least change the test data so that these existing tests have a better chance of finding new bugs.
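A minimal sketch of that suggestion, assuming a hypothetical discount() function: the same regression check is parameterized over a wider, rotating set of inputs, so an otherwise unchanged test still gets a fresh chance to hit new bugs.

    import pytest

    def discount(order_total):
        # Hypothetical function under test: 10% discount above 100, otherwise none.
        return order_total * 0.9 if order_total > 100 else order_total

    @pytest.mark.parametrize("total, expected", [
        (50, 50),           # original regression case
        (100, 100),         # boundary value added later
        (100.01, 90.009),   # just over the boundary
        (250, 225),         # new, larger value
    ])
    def test_discount_regression(total, expected):
        assert discount(total) == pytest.approx(expected)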

Test Levels! Really?!

Next in the series of software terminology lists is “Test Levels”. But there is something strange with test levels. Up until now almost every tester that I have worked with has been familiar with the concept of software test levels. But I wonder if they are. What some call a test level, say Unit Testing, I would call a test type. However, with a level like Component Testing I am not so sure. It seems only one level up from Unit Testing, but now I am inclined to see it more as a test level. In my experience I am not alone in this confusion.

Sogeti’s brand TMap was one of the main contributors in establishing the concept of test levels (or at least so in the Netherlands). But since last year Sogeti acknowledges the confusion in their article “Test Levels? Test Types? Test Varieties!” and proposes to rename them Test Varieties. Even ISTQB and ISO do not mention test levels (or test phases, if you like) explicitly.

But test levels are a term with some historic relevance, and as such they are part of my series of software testing lists, even if nowadays I never use them anymore.

Acceptance Testing

  • Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • A formal test conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. (Cunningham & Cunningham, Inc.; http://c2.com/cgi/wiki?AcceptanceTest)
  • Acceptance testing is the process of comparing the program to its initial requirements and the current needs of its end users. (G. Myers, The art of software testing (2nd edition) [2004])

Chain Test

  • A chain test tests the interaction of the system with the interfacing systems. (Derk-Jan de Grood; Test Goal, 2008)

Claims Testing

  • The product should behave the way some document, artifact, or person says it should. The claim might be made in a specification, a Help file, an advertisement, an email message, or a hallway conversation, and the person or agency making the claim has to carry some degree of authority to make the claim stick. (Michael Bolton; Testing without a map, 2005)
  • The object of a claim test is to evaluate whether a product lives up to its advertising claims. (Derk-Jan de Grood; Test Goal, 2008)

Component Testing

  • The testing of individual software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Function Testing

  • Function testing is a process of attempting to find discrepancies between the program and the external specification. An external specification is a precise description of the program’s behavior from the point of view of the end user. (G. Myers, The art of software testing (2nd edition) [2004])

Functional Acceptance Test

  • The functional acceptance test is carried out by the accepter to demonstrate that the delivered system meets the required functionality. The functional acceptance test tests the functionality against the system requirements and the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • The functional acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the functional requirements. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

Hardware-software Integration Testing

  • Testing performed to expose defects in the interfaces and interaction between hardware and software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Integration Testing

  • Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Module Test

  • Module tests focus on the elementary building blocks in the code. They demonstrate that the modules meet the technical design. (Derk-Jan de Grood; Test Goal, 2008)
  • Module testing (or unit testing) is a process of testing the individual subprograms, subroutines, or procedures in a program. (G. Myers, The art of software testing (2nd edition) [2004])

Module Integration Test

  • Module integration tests focus on the integration of two or more modules. (Derk-Jan de Grood; Test Goal, 2008)

Pilot

  • The pilot simulates live operations in a safe environment so that the live environment is not disrupted if the pilot fails.

Production Acceptance Test

  • The system owner uses the PAT to determine that the system is ready to go live and can go into maintenance. (Derk-Jan de Grood; Test Goal, 2008)
  • The production acceptance test is a test carried out by the future administrator(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements set by system management. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Test / System Testing

  • Testing an integrated system to verify that it meets specified requirements. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • The system test demonstrates that the system works according to the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • System testing is not limited to systems. If the product is a program, system testing is the process of attempting to demonstrate how the program, as a whole, does not meet its objectives. (G. Myers, The art of software testing (2nd edition) [2004])
  • System testing, by definition, is impossible if there is no set of written, measurable objectives for the product. (G. Myers, The art of software testing (2nd edition) [2004])
  • A system test is a test carried out by the supplier in a (manageable) laboratory environment, with the aim of demonstrating that the developed system, or parts of it, meet with the functional and non-functional specifications and the technical design. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Integration Test

  • A system integration test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that (sub)system interface agreements have been met, correctly interpreted and correctly implemented. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006) 

Unit Test

  • A unit test is a test carried out in the development environment by the developer, with the aim of demonstrating that a unit meets the requirements defined in the technical specifications (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

Unit Integration Test

  • A unit integration test is a test carried out by the developer in the development environment, with the aim of demonstrating that a logical group of units meets the requirements defined in the technical specifications (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

User Acceptance Test

  • The user acceptance test is primarily a validation test to ensure the system is “fit for purpose”. The test checks whether the users can use the system, how usable the system is and how the system integrates with the workflow and processes. (Derk-Jan de Grood; Test Goal, 2008)
  • The user acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements of the users. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)