Test Types – D, E, F

Test Type
A particular type of testing with an approach, goal and/or use of oracle(s) that provides information typical to that test type.

This is the third post in a (sub) series on Test Types. This post covers test types beginning with D, E and F. Please add any additions or remarks in the comment section.

Data integrity testing
Data integrity testing focuses on verifying and validating that the data within an application and its databases remains accurate, consistent and retained throughout its lifecycle, while it is processed (CRUD), retrieved and stored.
I fear this is an area of testing that is often overlooked. While it gets some attention when functionality is tested initially, the attention on the behavior of data drops over time.
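A minimal sketch of such a check, assuming a simple SQLite store; the table and column names are hypothetical: data is created, read back and compared, so nothing should be lost or altered on the round trip.

```python
import sqlite3

def check_roundtrip_integrity(records):
    """Create, read back and compare records to verify that nothing is
    lost or altered while the data is stored and retrieved."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, balance REAL)"
    )
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", records)
    conn.commit()
    # Read: the data should come back exactly as it went in.
    stored = conn.execute(
        "SELECT id, name, balance FROM customers ORDER BY id"
    ).fetchall()
    conn.close()
    return stored == sorted(records)

print(check_roundtrip_integrity([(1, "Ada", 10.5), (2, "Grace", 0.0)]))  # True
```

A real data integrity suite would repeat checks like this over time and after updates and deletes, which is exactly the attention that tends to drop off.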

Dependency testing
Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Destructive testing
Testing to determine the point of failure of the software, or of its individual components or features, when put under stress.
This seems very similar to Load Testing but I like the emphasis on individual stress points.

Development testing
Development testing is an existing term with its own Wikipedia page, but it does not add anything useful to software testing as such.

Documentation testing
Testing of documented information, definitions, requirements, procedures, results, logging, test cases, etc.

Dynamic testing
Testing the dynamic behavior of the software.
Almost all testing falls under this definition. So in practice a more specific identification of a test type should be chosen.

End-to-End testing
Testing the workflow of a single system or a chain of systems with regard to its inputs, outputs and the processing of these, and with regard to its availability, capacity, compatibility, connectivity, continuity, interoperability, modularity, performance, reliability, robustness, scalability, security, supportability and traceability.
In theory end-to-end testing seems simple: enter some data and check that it is processed and handed over throughout all systems until the end of the chain. In practice end-to-end testing is very difficult. The long list of quality characteristics mentioned above serves as an indication of what could go wrong along the way.

Endurance testing
Endurance testing examines the ability to handle a continuous load, under both normal and difficult/unpleasant conditions, over a longer period of time.

Error handling testing
Use specific input or behavior to generate known, and possibly unknown, errors.
Documented error and exception handling is a great source for test investigations. It often reveals undocumented requirements and business logic. It is also interesting to see whether the exceptions and errors actually occur in the described situations.
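A minimal sketch of such a test: feed input that, according to the documented behavior, must raise a specific error, and check that the described situation actually produces it. The `withdraw` function and its rules are hypothetical stand-ins for documented error handling.

```python
def withdraw(balance, amount):
    """Hypothetical function with documented error situations."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def expect_error(fn, *args):
    """Return the error message raised by fn, or None if no error occurred."""
    try:
        fn(*args)
    except ValueError as exc:
        return str(exc)
    return None

# Check that the documented error situations occur as described.
print(expect_error(withdraw, 100, -5))   # amount must be positive
print(expect_error(withdraw, 100, 500))  # insufficient funds
print(expect_error(withdraw, 100, 50))   # None: valid input, no error
```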

Error guessing
Error guessing is based on the idea that experienced, intuitive or skillful testers are able to find bugs based on their abilities, and that this can be used alongside more formal techniques.
As such one could argue that it is not as much a test type as it is an approach to testing. 

Exploratory testing
Exploratory testing is a way of learning about and investigating a system through concurrent design, execution, evaluation, re-design and reporting of tests, with the aim of finding answers to currently known and as yet unknown questions whose answers enable individual stakeholders to take decisions about the system.
Exploratory testing is an inherently structured approach to testing that seeks to go beyond the obvious requirements, and uses heuristics and oracles to determine test coverage areas and test ideas and to determine the (relative) value of test results. Exploratory testing is often executed on the basis of Session Based Test Management, using charter-based and time-limited sessions.
It is noteworthy that, in theory at least, all of the test types mentioned in this series could be part of exploratory testing if deemed appropriate to use. 

Failover testing
Failover testing investigates the system's ability to successfully fail over, recover or re-allocate resources after a hardware, software or network malfunction, such that no data is lost, data integrity remains intact and no ongoing transactions fail.

Fault injection testing
Fault injection testing is a method in which hardware faults, compile-time faults or runtime faults are ‘injected’ into the system to validate its robustness.
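A minimal sketch of runtime fault injection: a faulty dependency is ‘injected’ in place of the real one to validate that the caller degrades gracefully. All names below are hypothetical.

```python
def robust_fetch_price(source, default=0.0):
    """Caller under test: should absorb a failing dependency."""
    try:
        return source()
    except ConnectionError:
        return default  # degrade gracefully instead of crashing

def injected_fault():
    """Injected stand-in for the real data source."""
    raise ConnectionError("simulated network malfunction")

print(robust_fetch_price(injected_fault))  # 0.0: the fault is absorbed
```

The same idea scales up to tools that inject faults at the network, disk or memory level; the test's question stays the same: does the system stay robust when a dependency misbehaves?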

Functional testing
Functional testing is testing aimed to verify that the system functions according to its requirements.
There are many definitions of functional testing, and the one above seems to capture most. Interestingly, some definitions hint that testing should also aim to cover boundaries and failure paths even if these are not specifically mentioned in the requirements. Others mention design specifications, or written specifications. For me, functional testing initially conforms to the published requirements, and then investigates the ways in which this conformity could be broken.
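That two-step approach can be sketched as follows; the requirement (ages 18 to 65 are accepted) is hypothetical.

```python
def is_eligible(age):
    """Hypothetical requirement: ages 18 to 65 inclusive are accepted."""
    return 18 <= age <= 65

# Step 1: conform to the published requirement.
assert is_eligible(30)

# Step 2: investigate how the conformity could be broken - the boundaries.
assert is_eligible(18) and is_eligible(65)          # exactly on the boundary
assert not is_eligible(17) and not is_eligible(66)  # just outside it
print("all functional checks passed")
```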

Fuzz testing
Fuzz testing or fuzzing is a software testing technique, often automated or semi-automated, that involves providing invalid, unexpected, or random data to the inputs of a computer program. The program is then monitored for exceptions such as crashes, or failing built-in code assertions or for finding potential memory leaks.
Yes, this is more or less the Wikipedia definition.
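A minimal sketch of the idea: throw random data at a target and watch for exceptions beyond the documented ones. The parser below is a hypothetical stand-in for the real program under test.

```python
import random

def parse_age(text):
    """Hypothetical target: rejects junk with ValueError (documented)."""
    value = int(text)  # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def fuzz(target, runs=1000, seed=42):
    """Feed random printable strings to target; collect unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = "".join(
            chr(rng.randrange(32, 127)) for _ in range(rng.randrange(0, 8))
        )
        try:
            target(data)
        except ValueError:
            pass  # documented, expected rejection
        except Exception as exc:  # anything else is a finding
            crashes.append((data, exc))
    return crashes

print(len(fuzz(parse_age)))  # 0: no undocumented crashes found
```

Real fuzzers (AFL, libFuzzer and the like) add coverage feedback and input mutation, but the monitoring loop is essentially this.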


Test Types – B, C

Test Type
A particular type of testing with an approach, goal and/or use of oracle(s) that provides information typical to that test type.

This is the second post in a (sub) series on Test Types. This post covers test types beginning with B and C.

Backward compatibility testing
This testing can be done on several levels. On platform level it means investigating whether an application/product developed with a previous version of a platform still works on a newer version of that platform.
On document level it means investigating whether a document created with a previous version of the product still works with the new version of the product.
On feature level it means investigating how the input, usage and result of a feature in the previous version compare to the input, usage and result in the newer version.

Benchmark testing
Benchmark testing is the act of running (parts of the) software, in some circumstance, to assess its relative performance in comparison to an existing ‘benchmark’.
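A minimal sketch of a benchmark, timing a candidate implementation against an existing baseline under the same circumstances; the two sum implementations are hypothetical examples.

```python
import timeit

def baseline_sum(n):
    """Existing 'benchmark' implementation."""
    total = 0
    for i in range(n):
        total += i
    return total

def candidate_sum(n):
    """Candidate: closed-form alternative to compare against the baseline."""
    return n * (n - 1) // 2

# Run both under identical circumstances and compare relative performance.
baseline_t = timeit.timeit(lambda: baseline_sum(10_000), number=200)
candidate_t = timeit.timeit(lambda: candidate_sum(10_000), number=200)
print(f"candidate takes {candidate_t / baseline_t:.2%} of the baseline time")
```

The point of a benchmark is the comparison, not the absolute numbers: the same workload, the same environment, a relative result.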

Beta testing
Beta testing is the follow-up to alpha testing. During beta testing the software is released outside of the development team and offered to (limited) groups of people within the same organization, and possibly to a limited set of (potential) end users. Beta testing gives the software a chance to engage in real-world scenarios and to receive early feedback with regard to bugs, usability, competitiveness, etc.

Big bang integration testing
In Big Bang integration testing all components or modules are integrated simultaneously, after which everything is tested as a whole.
I am not a big fan of this type of integration, as it is very difficult to isolate the causes of failures and bugs. Also, test coverage is only high level. It is interesting, however, to see that, while not on purpose, this type of testing is often implicitly done when continuous integration is implemented and the integration interval, and thus the test interval, is a day or part of a day.

Black box testing
In science, computing, and engineering, a black box is a device, system or object which can be viewed in terms of its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings. Its implementation is “opaque” (black). Hence black box testing can be described as approaching testing without taking any knowledge of the internal workings of the software into account.
Personally I believe that software testing has progressed beyond the notion that one could do with mere black box testing. I do think it can be a valuable approach alongside grey box or white box testing, although in practice I never seem to make that distinction anymore when I define a test approach.

Bottom up integration testing
Bottom up integration testing is an approach to integrated testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
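A minimal sketch of that order of work; the two components are hypothetical. The lowest-level component is tested first, then reused as-is to facilitate testing the component built on top of it.

```python
class Store:
    """Lowest-level component: a simple key/value store."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class Cart:
    """Higher-level component, built on top of Store."""
    def __init__(self, store):
        self.store = store
    def add(self, item, qty):
        self.store.put(item, (self.store.get(item) or 0) + qty)
    def count(self, item):
        return self.store.get(item) or 0

# Step 1: test the lowest-level component in isolation.
store = Store()
store.put("apple", 3)
assert store.get("apple") == 3

# Step 2: use the already-tested Store to facilitate testing Cart.
cart = Cart(Store())
cart.add("apple", 2)
cart.add("apple", 1)
print(cart.count("apple"))  # 3
```

The process then repeats upward: once `Cart` is trusted, it in turn supports testing whatever sits above it in the hierarchy.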

Change related testing
Execution of tests that are related to the code and/or functional changes.
Some view this as being regression testing. While regression testing is probably part of change related testing, that view potentially ignores the tests that are targeted at confirming and validating the changes as such.

Compatibility testing
Compatibility testing is testing executed to determine whether or not the software is compatible to run on:

    • Different types of hardware
    • Different browsers
    • Different devices
    • Different networks
    • Different versions
    • Different operating systems
    • Different configurations
    • Etc.

It is basically testing the application or product against its (intended) environments.

Competitive analysis testing
Testing targeted on how sales or customer experience are impacted:

  • If a feature or functionality is not available
  • If a feature or functionality is not working
  • If a feature or functionality is not available but is available at a competitor
  • If a feature or functionality is not working but is working at a competitor
  • If a feature or functionality is available and working but a competitor has a different version (cooler / less cool; cheaper / more expensive; faster / slower; etc.)

While this may seem more marketing than testing, (context driven) testers are often front runners in identifying these differences.

Compliance testing
Testing focused on establishing whether the software is compliant with known criteria such as:

  • Open standards (to ensure interoperability with other software or interfaces using the same open standard)
  • Internal processes and standards
  • International standards (e.g. IEEE, ISO)
  • Regulatory standards

Component testing
Testing of separate software components without integrating them to other components.
A component can be any portion of the system under test (e.g. module, program, class, method, etc.). It is advisable to select components of a similar abstraction level.

Concurrent testing
This term has two very different meanings:

  • Testing throughout an iteration, sprint or development cycle, concurrent with all other development activities. Here concurrent means simultaneous, cooperative and collaborative.
  • Testing activity that determines the stability of a system or application under test during normal activity.

Conformance (or conformity) testing
I see three meanings of conformance testing:

  • Testing meant to provide the users of conforming products some assurance or confidence that the product behaves as expected, performs functions in a known manner, or has an interface or format that is known.
  • Testing to determine whether a product or system or just a medium complies with the requirements of a specification, contract or regulation.
  • A synonym of compliance testing

Conversion testing
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.