A test type is a particular type of testing with an approach, goal and/or use of oracle(s) that provides information typical to that test type.
This is the second post in a (sub) series on Test Types. This post covers test types beginning with B and C.
Backward compatibility testing
This testing can be done on several levels. At the platform level it means investigating whether an application or product developed with a previous version of a platform still works on a newer version of that platform.
At the document level it means investigating whether a document created with a previous version of the product still works in the new version of the product.
At the feature level it means investigating how the input, usage and result of a feature in the previous version compare to the input, usage and result in the newer version.
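The feature-level comparison can be sketched as a small check. This is a minimal sketch with hypothetical functions (`discount_v1`, `discount_v2` and the flat 10% discount are invented for illustration): the same inputs are fed to the previous and the new version of a feature and the results are compared.

```python
def discount_v1(price):
    # previous version of the feature: flat 10% discount
    return round(price * 0.90, 2)

def discount_v2(price):
    # new version: refactored, but expected to keep the old behaviour
    return round(price - price * 0.10, 2)

def is_backward_compatible(inputs):
    # the feature is backward compatible if every old input
    # yields the same result in the new version
    return all(discount_v1(p) == discount_v2(p) for p in inputs)
```

In practice the "old" results would come from a recorded baseline rather than from keeping the old code around.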
Benchmark testing is the act of running (parts of) the software, under some circumstances, to assess its relative performance in comparison to an existing ‘benchmark’.
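A benchmark comparison can be sketched with the standard library's `timeit` module. The two sort implementations below are invented stand-ins: the slow insertion sort plays the role of the existing 'benchmark', and the ratio expresses the candidate's relative performance.

```python
import timeit

def baseline_sort(data):
    # existing 'benchmark' implementation: naive insertion sort
    out = []
    for x in data:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def candidate_sort(data):
    # new implementation under test
    return sorted(data)

def relative_performance(data, runs=50):
    # ratio < 1.0 means the candidate is faster than the benchmark
    base = timeit.timeit(lambda: baseline_sort(data), number=runs)
    cand = timeit.timeit(lambda: candidate_sort(data), number=runs)
    return cand / base
```

The point is the comparison against a fixed reference, not the absolute numbers, which vary per machine.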
Beta testing is the follow-up to alpha testing. During beta testing the software is released outside of the development team and offered to (limited) groups of people within the same organization and possibly a limited set of (potential) end users. Beta testing offers the possibility for the software to engage in real-world scenarios and receive early feedback with regard to bugs, usability, competitiveness, etc.
Big bang integration testing
In Big Bang integration testing all components or modules are integrated simultaneously, after which everything is tested as a whole.
I am not a big fan of this type of integration as it is very difficult to isolate the causes of failures and bugs. Also, test coverage is only high level. It is interesting, however, to see that, while not on purpose, this type of testing is often implicitly done when continuous integration is implemented and the integration interval, and thus the test interval, is a day or part of a day.
Black box testing
In science, computing, and engineering, a black box is a device, system or object which can be viewed in terms of its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings. Its implementation is “opaque” (black). Hence black box testing can be described as approaching testing without taking any knowledge of the internal workings of the software into account.
Personally I believe that software testing has progressed beyond the notion that one could do with mere black box testing. I do think it can be a valuable approach alongside grey box or white box testing, although in practice I never seem to make that distinction anymore when I define a test approach.
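A black box test can be sketched as a list of pure input/output pairs. The `slugify` function below is a hypothetical example; the tests know only its public contract, not how it is implemented.

```python
import re

def slugify(title):
    # implementation details are irrelevant to the black box checks below
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def black_box_checks():
    # each case is purely (input, expected output); the internals of
    # slugify could be rewritten entirely without touching these cases
    cases = [
        ("Hello World", "hello-world"),
        ("  Test Types: B & C  ", "test-types-b-c"),
        ("", ""),
    ]
    return all(slugify(given) == expected for given, expected in cases)
```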
Bottom up integration testing
Bottom up integration testing is an approach to integrated testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
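The bottom-up order can be sketched with a small hypothetical hierarchy (the layer names `parse_line`, `load_records` and `total_amount` are invented for illustration): the lowest-level component is tested first, then reused to facilitate testing the level above it, up to the top.

```python
def parse_line(line):
    # lowest-level component: one "name,amount" line -> (name, amount)
    name, amount = line.split(",")
    return name.strip(), int(amount)

def load_records(lines):
    # higher-level component, built on parse_line
    return [parse_line(line) for line in lines]

def total_amount(lines):
    # top of the hierarchy, built on load_records
    return sum(amount for _, amount in load_records(lines))

# step 1: exercise the lowest level in isolation
assert parse_line("alice, 10") == ("alice", 10)
# step 2: use it to facilitate testing the next level up
assert load_records(["alice, 10", "bob, 5"]) == [("alice", 10), ("bob", 5)]
# step 3: repeat until the component at the top of the hierarchy is tested
assert total_amount(["alice, 10", "bob, 5"]) == 15
```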
Change related testing
Execution of tests that are related to the code and/or functional changes.
Some view this as being regression testing. While this is probably part of change related testing it potentially ignores the tests that are targeted on confirming and validating the changes as such.
Compatibility testing is testing executed to determine whether or not the software is compatible to run on:
- Different types of hardware
- Different browsers
- Different devices
- Different networks
- Different versions
- Different operating systems
- Different configurations
It is basically testing the application or the product against its (intended) environments.
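Such a compatibility matrix can be sketched by running the same check over every combination of environments. The browser and operating system lists and the `check_render` function are hypothetical placeholders for real environment checks.

```python
import itertools

BROWSERS = ["firefox", "chrome", "safari"]
OPERATING_SYSTEMS = ["windows", "linux", "macos"]

def check_render(browser, os_name):
    # placeholder for the real compatibility check; here we simply
    # flag one known-unsupported (invented) combination
    return not (browser == "safari" and os_name == "linux")

def compatibility_report():
    # run the same check for the full matrix:
    # -> {(browser, os): passed?}
    return {
        (b, o): check_render(b, o)
        for b, o in itertools.product(BROWSERS, OPERATING_SYSTEMS)
    }
```

Test frameworks typically express this as a parametrized test, one case per cell of the matrix.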
Competitive analysis testing
Testing targeted on how sales or customer experience are impacted:
- If a feature or functionality is not available
- If a feature or functionality is not working
- If a feature or functionality is not available but is available at a competitor
- If a feature or functionality is not working but is working at a competitor
- If a feature or functionality is available and working but a competitor has a different version (cooler / less cool; cheaper / more expensive; faster / slower; etc.)
While this may seem more marketing than testing, (context-driven) testers are often front runners in identifying these differences.
Compliance testing
Testing focused on establishing whether the software complies with known criteria such as:
- Open standards (to ensure interoperability with other software or interfaces using the same open standard)
- Internal processes and standards
- International standards (e.g. IEEE, ISO)
- Regulatory standards
Component testing
Testing of separate software components without integrating them with other components.
A component can be any portion of the system under test (e.g. module, program, class, method, etc.). It is advisable to select components of a similar abstraction level.
Concurrent testing
This term has two very different meanings:
- Testing throughout an iteration, sprint or development cycle, concurrent with all other development activities. Here concurrent means simultaneous, cooperative and collaborative.
- Testing activity that determines the stability of a system or application under test during normal activity.
Conformance (or conformity) testing
I see three meanings of conformance testing:
- Testing meant to provide the users of conforming products some assurance or confidence that the product behaves as expected, performs functions in a known manner, or has an interface or format that is known.
- Testing to determine whether a product or system or just a medium complies with the requirements of a specification, contract or regulation.
- A synonym of compliance testing
Conversion testing
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
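A conversion check can be sketched as follows. The legacy record layout, the `convert` function and its target format are all hypothetical: the point is to verify that converted records are complete and that nothing is lost along the way.

```python
def convert(legacy):
    # hypothetical legacy record "LASTNAME;FIRSTNAME;YYYYMMDD"
    # -> record format of the replacement system
    last, first, born = legacy.split(";")
    return {
        "name": f"{first.title()} {last.title()}",
        "birth_date": f"{born[:4]}-{born[4:6]}-{born[6:]}",
    }

def conversion_checks(legacy_records):
    converted = [convert(r) for r in legacy_records]
    # record counts must match and no field may end up empty
    counts_match = len(converted) == len(legacy_records)
    nothing_lost = all(rec["name"] and rec["birth_date"] for rec in converted)
    return counts_match and nothing_lost
```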