A test type is a particular kind of testing with an approach, goal and/or use of oracle(s) that provides information typical of that type.
This is the fifth post in a (sub) series on Test Types. Please add any additions or remarks in the comment section.
Keyword-driven testing
Keyword-driven testing is a methodology used for both manual and automated testing, but it is best known in combination with automated testing. In this method the documentation and design of testing are separated from the execution. Keywords or action words are defined for each of the actions to be executed. These words are then used in a pre-defined format (often a table) so that a combination of actions can be executed (and possibly evaluated) by either a person or a previously set up test automation framework.
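As a minimal sketch of the idea (the keywords and the small banking domain below are made up for illustration, not from any real framework): each row of the table pairs an action word with its arguments, and a tiny driver looks the keyword up and executes it.

```python
# Hypothetical keyword-driven sketch: a keyword table maps action words
# to functions, and a driver executes the rows of a test table in order.
accounts = {}

def do_open(name):
    accounts[name] = {"balance": 0}

def do_deposit(name, amount):
    accounts[name]["balance"] += int(amount)

def do_check_balance(name, expected):
    assert accounts[name]["balance"] == int(expected), name

KEYWORDS = {
    "open account": do_open,
    "deposit": do_deposit,
    "check balance": do_check_balance,
}

# The "pre-defined format": a table of (keyword, *arguments) rows,
# readable by non-programmers and executable by the driver below.
test_table = [
    ("open account", "alice"),
    ("deposit", "alice", "100"),
    ("deposit", "alice", "50"),
    ("check balance", "alice", "150"),
]

for keyword, *args in test_table:
    KEYWORDS[keyword](*args)

print("all rows passed")
```

Because the table is plain data, the same rows could equally be read from a spreadsheet and executed by a framework rather than a person.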
Load testing
Load testing is the process of putting demand on a software system or computing device and measuring its response.
Although closely related to stress testing, load testing aims to measure behaviour under different loads, not necessarily peak loads.
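A toy sketch of that idea (the service function is a stand-in that simulates work; a real load test would drive an actual system): fire batches of requests at different load levels and measure wall time and throughput for each.

```python
# Illustrative load-run sketch: measure response for different loads.
import time
from concurrent.futures import ThreadPoolExecutor

def service(request_id):
    time.sleep(0.01)  # simulated processing time of the system under test
    return request_id

def run_load(n_requests, concurrency):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(service, range(n_requests)))
    elapsed = time.perf_counter() - start
    return elapsed, n_requests / elapsed  # wall time, throughput

for load in (10, 50):  # behaviour at different load levels, not just peak
    elapsed, rps = run_load(load, concurrency=10)
    print(f"{load} requests: {elapsed:.2f}s, {rps:.0f} req/s")
```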
Localization testing
Localization is the process of adjusting (internationalized) software for a specific region or language by adding locale-specific components and translating text.
Localization testing, then, checks and validates that the translation and locale-specific components are functionally correct and understandable.
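One small, automatable slice of such checking can be sketched as follows (the message catalogs below are hypothetical): verify that every message in the base locale has a translation, and that format placeholders match so the translated text remains functionally correct.

```python
# Sketch of a locale-consistency check over hypothetical message catalogs.
import string

catalogs = {
    "en": {"greeting": "Hello, {name}!", "items": "{count} items"},
    "nl": {"greeting": "Hallo, {name}!", "items": "{count} artikelen"},
}

def placeholders(msg):
    # Collect the {field} names used in a format string.
    return {field for _, field, _, _ in string.Formatter().parse(msg) if field}

base = catalogs["en"]
for lang, messages in catalogs.items():
    missing = set(base) - set(messages)
    assert not missing, f"{lang}: missing keys {missing}"
    for key in base:
        # A translation that drops or renames a placeholder would break at runtime.
        assert placeholders(messages[key]) == placeholders(base[key]), (lang, key)

print("catalogs consistent")
```

Whether the translations are actually understandable, of course, still needs a human reviewer per locale.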
Manual scripted testing
Manual scripted testing is the process of manually executing previously designed test scripts in search of software behaviour that does not match the behaviour described in the script.
On paper this type of software testing is still relatively popular, as it is based on the early descriptions of structured testing and as such appeals to non-testers. A fair number of testing practitioners, however, consider executing test scripts inefficient, ineffective and boring.
Manual support testing
Oddly enough there are two definitions in circulation for this.
“Testing technique that involves testing of all the functions performed by the people while preparing the data and using these data for automated system.”
This first definition I believe should be discarded, as it describes part of test automation preparation, although I have to admit that at the start of test automation this activity is often underestimated. The second definition I think is much more useful.
“Testing manual support systems involves all the functions performed by people in preparing data for, and using data from, automated applications. The objectives of testing the manual support systems are to verify that the manual support procedures are documented and complete, that support responsibility has been assigned, that the manual support people are adequately trained, and that the manual support and the automated segment are properly interfaced.”
Memory profiling
Memory profiling is the process of investigating and analyzing a program’s behavior to determine how to optimize the program’s memory usage.
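A small sketch of what that investigation can look like, using Python’s standard-library tracemalloc module (the workload function is a made-up allocator for illustration): snapshot allocations around a workload and report where the memory came from.

```python
# Memory-profiling sketch with the standard-library tracemalloc module.
import tracemalloc

def workload():
    # Deliberately allocates a noticeable amount of memory.
    return [str(i) * 10 for i in range(10_000)]

tracemalloc.start()
data = workload()
snapshot = tracemalloc.take_snapshot()
tracemalloc.stop()

# Top allocation sites: file, line, total size and allocation count.
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```

The statistics point back to source lines, which is exactly the information needed to decide where optimizing memory usage would pay off.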
Migration testing
Migration testing is the activity of testing data, functionality or behaviour of software after migration to a new platform, database or program, looking for both intended and unintended differences.
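The data side of this can be sketched as a simple before/after comparison (the record sets below are hypothetical): diff the source data against the migrated data and flag every difference, so intended changes can be confirmed and unintended ones caught.

```python
# Sketch of a data-migration check over hypothetical record sets.
source = {
    1: {"name": "alice", "active": True},
    2: {"name": "bob", "active": False},
}
migrated = {
    1: {"name": "alice", "active": True},
    2: {"name": "bob", "active": True},  # an unintended difference
}

def diff(old, new):
    problems = []
    for key in sorted(old.keys() | new.keys()):
        if key not in new:
            problems.append(f"record {key} lost in migration")
        elif key not in old:
            problems.append(f"record {key} appeared in migration")
        elif old[key] != new[key]:
            problems.append(f"record {key} changed: {old[key]} -> {new[key]}")
    return problems

for problem in diff(source, migrated):
    print(problem)
```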
Model based testing
Model based testing is the automatic generation of software test procedures, using models of system requirements and behavior. Once created these test procedures can be run repeatedly.
Model based testing has been around for several years now but has never really made a big impact on testing as such. The reasons for this, I believe, are that the “automatic generation” and repeatability of the tests require a considerable technical and time investment and are not nearly as easily achieved as portrayed by model based testing tool vendors. Beyond this, the success and quality of the tests are highly dependent on the ability to formulate the software behaviour as a model and on the quality of the resulting model itself. In practice model based testing will find bugs, but not as many as often promised and certainly not all.
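A toy sketch of the core mechanism (the door state machine and its actions are invented for illustration): describe the behaviour as a state-transition model, enumerate action sequences the model allows, and replay each sequence against the system under test.

```python
# Model-based test generation from a toy state-machine model of a door.
MODEL = {  # state -> {action: next state}
    "closed": {"open": "open", "lock": "locked"},
    "open": {"close": "closed"},
    "locked": {"unlock": "closed"},
}

def generate_paths(start, depth):
    """Enumerate every action sequence of the given length allowed by the model."""
    if depth == 0:
        return [[]]
    paths = []
    for action, nxt in MODEL[start].items():
        for tail in generate_paths(nxt, depth - 1):
            paths.append([(start, action, nxt)] + tail)
    return paths

class Door:  # stand-in for the system under test
    def __init__(self):
        self.state = "closed"
    def apply(self, action):
        # Here the SUT just follows the model; a real SUT has its own logic,
        # which is exactly what the generated tests would check.
        self.state = MODEL[self.state][action]

tests = generate_paths("closed", 2)
for path in tests:
    door = Door()
    for state, action, expected in path:
        door.apply(action)
        assert door.state == expected, (action, door.state)
print(f"{len(tests)} generated tests passed")
```

Note how the hard part is not the generation loop but the model itself, which mirrors the point above: the tests are only as good as the model they are generated from.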
Mutation testing
Mutation testing (or mutation analysis or program mutation) is used to design new software tests and to evaluate the quality of existing software tests. Mutation testing involves modifying a program in small ways. Each mutated version is called a mutant; a test kills a mutant when it makes the mutant’s behavior differ from that of the original version. Test suites are measured by the percentage of mutants they kill, and new tests can be designed to kill additional mutants. Mutants are based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as dividing each expression by zero). The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution. (Wikipedia)
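A toy sketch of the whole loop (the program, the single mutation operator and the two tests are all invented for illustration): hold the program as source text, apply an operator that swaps “+” for “-”, and score the test suite by whether it kills the resulting mutant.

```python
# Toy mutation-testing sketch with one mutation operator.
ORIGINAL = "def add(a, b):\n    return a + b\n"

def run_tests(fn):
    try:
        assert fn(2, 3) == 5
        assert fn(-1, 1) == 0
        return True   # all tests pass
    except AssertionError:
        return False  # a test failed

def build(source):
    namespace = {}
    exec(source, namespace)
    return namespace["add"]

assert run_tests(build(ORIGINAL))  # the suite passes on the original program

mutant_source = ORIGINAL.replace("+", "-", 1)  # the mutation operator
mutant = build(mutant_source)

killed = not run_tests(mutant)  # a killed mutant fails at least one test
print("mutant killed" if killed else "mutant survived")
```

If the suite had contained only a test like `fn(0, 0) == 0`, this mutant would have survived, and a surviving mutant is exactly the signal that a new test is needed.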