Test Types – N, O, P

Test Type
A particular type of testing that has an approach, goal and/or use of oracle(s) that provides information that is typical to that test type.

This is the sixth post in a (sub) series on Test Types. Please add any additions or remarks in the comment section.

Negative testing
Negative testing ensures that your application can gracefully handle invalid input or unexpected user behavior. It is the process of validating the application against invalid data.
Although most testers will agree on the importance of negative testing, it is sometimes difficult to put the necessary time into it, especially when many developers or business stakeholders argue against too many “hypothetical” tests in favor of confirmation tests.
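As a minimal sketch, a negative test exercises invalid input and checks that it is rejected rather than silently accepted. The `parse_age` function below is a hypothetical example for illustration, not from a real application:

```python
def parse_age(value):
    """Parse a user-supplied age; reject anything outside 0-150."""
    age = int(value)  # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

# Negative tests: each invalid input must be rejected, not accepted.
for bad in ["abc", "", "-1", "999"]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: graceful rejection
    else:
        raise AssertionError(f"{bad!r} was wrongly accepted")
```

The confirmation tests (valid input gives the expected result) would sit alongside these; negative testing deliberately focuses on the rejecting paths.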

Neighbour testing
Testing the connectivity and data exchange between the application and its direct ‘neighbours’.
A colleague of mine used this type of testing in a Chain Test Plan that he wrote. I found it a useful expression to describe limiting the (initial) scope of a chain test to its immediate surroundings.

Network conditions simulation testing
Testing traffic, exchange of data and behaviour of systems and interfaces while simulating different network conditions.
Think of introducing latency, interruptions, data packet loss, etc.
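Without dedicated tooling, a simple way to get started is to wrap the call that crosses the network in something that injects latency and loss. The `unreliable` wrapper below and its parameters are illustrative assumptions:

```python
import random
import time

def unreliable(send, latency=0.001, loss_rate=0.2, seed=42):
    """Wrap a send() callable so calls suffer simulated latency and loss."""
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    def wrapped(payload):
        time.sleep(latency)               # simulated network latency
        if rng.random() < loss_rate:      # simulated packet loss
            raise TimeoutError("simulated packet loss")
        return send(payload)
    return wrapped

sent = []
flaky_send = unreliable(sent.append)
delivered = dropped = 0
for i in range(20):
    try:
        flaky_send(i)
        delivered += 1
    except TimeoutError:
        dropped += 1
```

The interesting test is then how the system under test behaves when `TimeoutError` (or its real-world equivalent) occurs mid-exchange.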

Non-functional testing
Testing how a software application or system operates.
A very short description that covers a world of possibilities. While behaviour is the subject of many requirements documents, user stories and the like, non-functionals are described much less intensively. Often they are only identified by a number of constraints and boundaries describing what the system should be able to handle or what it is specifically not allowed to do. Don’t be fooled by their absence: non-functionals can influence the behaviour of applications so that behaviour looks correct or incorrect while it is not. Moreover, non-functionals can have a major impact on user experience. There are many quality attributes and testing types connected to non-functional testing that can be used for investigation.

Operational testing
Operational testing is testing focussed on usage of the application or system in its intended environment, its intended usage and by its intended users.
Environment here extends beyond the hardware, tooling, user interface and such; it includes applicable standards, procedures, processes and culture. Similarly, users are not only consumers or end users but also operations, customer service, engineers, etc. And usage extends beyond day-to-day use to maintenance, backup and restore, etc.

Packaged application testing
Testing application packages or application suites on the interaction of their separate parts and their behaviour as a whole.
There are a number of suppliers offering packaged application testing services for products like SAP and Oracle. Although I see the benefit of testing process flow and behaviour throughout such a group of applications that more or less work together, I am not quite sure how this differs from any other testing.

Penetration testing
Investigating a system and its environment for possibilities of gaining unwanted access to an environment or system, or of retrieving information that should not be disclosed.
Penetration testing, together with performance testing (see next item), is in my opinion one of the specialist fields in software testing. While every tester should have some basic understanding and skill in this area, it takes continuously renewed skill, knowledge and practice to excel in penetration testing.

Performance testing
Determining how a system performs in terms of responsiveness and stability under various workloads.
While the focus is often on high or peak loads, special attention should also be given to low loads, intermittent loads and loads addressing only specific parts of the system. Also, “load” should cover both the load placed on the system and the load produced by the system.
There are several sub-areas under performance testing that each address a specific aspect of performance: Load testing, Stress testing, Soak testing, Configuration testing, Spike testing, Isolation testing, etc.
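A bare-bones sketch of measuring response times under different loads might look like the following. The `fake_request` stand-in and the worker counts are illustrative assumptions; a real test would call the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure(task, n_requests, n_workers):
    """Run task() n_requests times across n_workers threads; return timings."""
    def timed(_):
        start = time.perf_counter()
        task()
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(timed, range(n_requests)))

def fake_request():
    time.sleep(0.01)  # stand-in for a real call to the system under test

for load in (1, 5):  # a low load and a higher load, as argued above
    timings = measure(fake_request, n_requests=20, n_workers=load)
    p95 = sorted(timings)[int(0.95 * len(timings)) - 1]  # 95th percentile
```

Comparing percentiles (rather than averages) across the different loads is usually what reveals the interesting stability and responsiveness differences.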

Test Types – J, K, L, M

This is the fifth post in a (sub) series on Test Types. Please add any additions or remarks in the comment section.

Keyword-driven testing

Keyword-driven testing is a methodology used for both manual and automated testing but is best known in combination with automated testing. In this method of testing the documentation and design of testing is separated from the execution. Keywords or action words are defined for each of the actions to be executed. These words are then used in a pre-defined format (often a table) to enable a combination of actions to be executed (and possibly evaluated) by either a person or a previously set up test automation framework.
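A minimal sketch of the idea: the test is a table of keyword/argument rows, and a small interpreter maps keywords to actions. All keywords, actions and the application state below are hypothetical:

```python
# Toy keyword-driven framework: the table drives execution entirely.
state = {"logged_in": False, "cart": []}

def do_login(user):       state["logged_in"] = True
def do_add_item(item):    state["cart"].append(item)
def do_check_cart(count): assert len(state["cart"]) == int(count)

KEYWORDS = {"login": do_login, "add_item": do_add_item, "check_cart": do_check_cart}

test_table = [            # the "documentation and design" side
    ("login", "alice"),
    ("add_item", "book"),
    ("add_item", "pen"),
    ("check_cart", "2"),
]

for keyword, arg in test_table:   # the "execution" side
    KEYWORDS[keyword](arg)
```

The separation is the point: the table can be written (and reviewed) by someone who never touches the interpreter, and the same table can be executed by a person or by an automation framework.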

Load testing

Load testing is the process of putting demand on a software system or computing device and measuring its response.
Although closely related to stress testing, load testing aims to measure behaviour for different loads, not necessarily for peak loads.

Localization testing

Localization is the process of adjusting (internationalized) software for a specific region or language by adding locale-specific components and translating text.
Localization testing then is checking and validating that the translation and locale-specific components are functionally correct and understandable.

Manual scripted testing

Manual scripted testing is the process of manually executing previously designed test scripts in search of software behaviour that does not match the behaviour described in the script.
On paper this type of software testing is still relatively popular, as it is based on the early descriptions of structured testing and as such appeals to non-testers. A fair number of testing practitioners, however, judge executing test scripts to be inefficient, ineffective and boring.

Manual support testing

Oddly enough there are two definitions in circulation for this.
“Testing technique that involves testing of all the functions performed by the people while preparing the data and using these data for automated system.”
This one I believe should be discarded, as it is part of test automation preparation. Although I have to admit that at the start of test automation this activity is often underestimated. The second one I think is much more useful:
“Testing manual support systems involves all the functions performed by people in preparing data for, and using data from, automated applications. The objectives of testing the manual support systems are to verify that the manual support procedures are documented and complete, that support responsibility has been assigned, that the manual support people are adequately trained, and that the manual support and the automated segment are properly interfaced.”

Memory profiling

Memory profiling is the process of investigating and analyzing a program’s behavior to determine how to optimize the program’s memory usage.
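In Python, the standard library’s `tracemalloc` module gives a quick impression of what memory profiling looks like in practice; the allocation under investigation here is just an illustrative list:

```python
import tracemalloc

tracemalloc.start()
data = [str(i) * 10 for i in range(10_000)]   # allocation under investigation
current, peak = tracemalloc.get_traced_memory()  # bytes, current and peak
snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("lineno")[0]        # biggest allocation site
tracemalloc.stop()
```

The snapshot statistics point at the source lines responsible for the most memory, which is exactly the information needed to decide where optimizing usage will pay off.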

Migration testing

Migration testing is the activity of testing data, functionality or behaviour of software after migration to a new platform, database or program for intended and unintended differences.

Model based testing

Model based testing is the automatic generation of software test procedures using models of system requirements and behaviour. Once created, these test procedures can be run repeatedly.
Model based testing has been around for several years now but has never really made a big impact on testing as such. The reasons for this, I believe, are that the “automatic generation” and repeatability of the tests require a considerable technical and time investment and are not nearly as easily achieved as portrayed by model based testing tool vendors. Next to this, the success and quality of the tests is highly dependent on the ability to formulate the software behaviour into a model and on the quality of the created model itself. In practice model based testing will find bugs, but not as many as often promised and certainly not all.
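To make the idea concrete, here is a toy sketch: the model is a small state machine of a hypothetical document workflow, and test sequences (paths through the model) are generated from it automatically:

```python
# Model: state -> {action: next_state}. A real model would be far richer.
MODEL = {
    "draft":     {"submit": "review"},
    "review":    {"approve": "published", "reject": "draft"},
    "published": {},
}

def generate_paths(state, depth):
    """Enumerate all action sequences from `state` up to `depth` steps."""
    if depth == 0 or not MODEL[state]:
        yield []
        return
    for action, nxt in MODEL[state].items():
        for rest in generate_paths(nxt, depth - 1):
            yield [action] + rest

paths = list(generate_paths("draft", depth=3))
# Each generated path is a candidate test procedure to execute
# repeatedly against the system under test.
```

Note how the quality of the generated tests stands or falls with the quality of `MODEL`, which is exactly the dependency argued above.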

Mutation testing

Mutation testing (or Mutation analysis or Program mutation) is used to design new software tests and evaluate the quality of existing software tests. Mutation testing involves modifying a program in small ways. Each mutated version is called a mutant and tests detect and reject mutants by causing the behavior of the original version to differ from the mutant. This is called killing the mutant. Test suites are measured by the percentage of mutants that they kill. New tests can be designed to kill additional mutants. Mutants are based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as dividing each expression by zero). The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution. (Wikipedia)
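A toy illustration of the mechanics: mutate an operator in the source of a small function and check whether the test suite kills the resulting mutants. Everything here is deliberately minimal; real tools generate mutants from well-defined mutation operators automatically:

```python
# Original program under test, as source text so we can mutate it.
SOURCE = "def price(total, discount):\n    return total - discount\n"

def run_tests(fn):
    """The test suite being evaluated; returns True if all checks pass."""
    return fn(100, 10) == 90 and fn(50, 0) == 50

# Two hand-made mutants: swap the subtraction operator.
mutants = [SOURCE.replace("-", "+"), SOURCE.replace("-", "*")]

killed = 0
for mutant_src in mutants:
    ns = {}
    exec(mutant_src, ns)              # build the mutant
    if not run_tests(ns["price"]):
        killed += 1                   # a test failed: mutant killed

mutation_score = killed / len(mutants)
```

A surviving mutant would point at a weakness in the test suite and suggest a new test to design.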

Test Types – G, H, I

This is the fourth post in a (sub) series on Test Types. Please add any additions or remarks in the comment section.

Glass box testing
Testing or test design using knowledge of the details of the internals of the program (code and data) (BBST definition)
To many this might sound similar to white box testing. I think however that the perspective is quite different: here the perspective is that of a tester rather than a developer.

Gorilla testing
Gorilla testing is testing a particular module, component or functionality heavily and with great variety.

Grey box testing
Grey box testing is testing with limited knowledge of, and access to, the code and internal structure and details of the software.
It is my opinion that most testers, save pure programmers and pure black box testers, are doing grey box testing. As such they stand somewhere on a scale between the two extremes. Their place is sometimes determined by (enforced) choices of role or scope, but more often by their programming and design knowledge and capabilities.

GUI testing
Graphical User Interface testing is testing the application’s user interface to detect whether the functionality of the interface itself, and the functionality directly influenced by or dependent on the user interface, works correctly.
With the current growing attention to automated testing, and especially to testing at API level, the GUI is easily overlooked in its importance. It remains however, now more than ever, the first point of contact of any user with the system.

Incremental integration testing
Incremental integration testing is an approach in which you first test each module or component individually, then add them together one by one and test the integration. You can do this top down, bottom up or functionally incremental.

Installation testing
One side of installation testing is aimed at ensuring that all the necessary components are installed properly and are working as required once installed. The other side of installation testing focusses on what users need to do to install and set up new software successfully.

Integration testing
Integration testing is testing where, previously tested, individual modules and components are combined and tested as a group. It tests not only interactions between individual components but also between different sets of components and parts of the system within its direct environment. Integration testing focusses on different aspects such as functionality, performance, design and reliability.

Inter systems testing
Inter systems testing focusses on testing the interconnection and integration points of separate systems that work together.

Interface testing
Interface testing is performed to evaluate whether systems or components pass data and control correctly to one another. It verifies that all interactions between these modules work properly and that errors are handled properly.

Internationalization testing
Internationalization is designing software systems in such a way that they can be adapted to different languages and regions without engineering changes, loss of functionality, loss of data or integrity issues. Internationalization testing is aimed at uncovering these potential problems.

Test Types – D, E, F

This is the third post in a (sub) series on Test Types. This post covers test types beginning with D, E and F. Please add any additions or remarks in the comment section.

Data integrity testing
Data integrity testing focusses on verifying and validating that the data within an application and its databases remains accurate and consistent, and is retained, during its lifecycle while it is processed (CRUD), retrieved and stored.
I fear this is an area of testing that is often overlooked. While it gets some attention when functionality is tested initially, the attention on the behavior of data drops over time.
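A minimal round-trip check illustrates the idea: data written through the storage layer must come back accurate and consistent after processing. The sketch below uses an in-memory SQLite table as a stand-in for a real database; the schema and values are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# Create: values include non-ASCII text, a classic integrity pitfall.
original = [(1, "Ada"), (2, "Grace"), (3, "Édith")]
conn.executemany("INSERT INTO customers VALUES (?, ?)", original)
conn.commit()

# Update, then Read back: data must stay accurate and consistent.
conn.execute("UPDATE customers SET name = ? WHERE id = ?", ("Ada L.", 1))
retrieved = conn.execute(
    "SELECT id, name FROM customers ORDER BY id"
).fetchall()
```

Repeating such round-trips later in the data’s lifecycle, not just at initial functional testing, is exactly the attention that tends to drop over time.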

Dependency testing
Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Destructive testing
Testing to determine the point of failure of the software, or of its individual components or features, when put under stress.
This seems very similar to load testing, but I like the emphasis on individual stress points.

Development testing
It is an existing term with its own Wikipedia page but it doesn’t bring anything useful to software testing as such.

Documentation testing
Testing of documented information, definitions, requirements, procedures, results, logging, test cases, etc.

Dynamic testing
Testing the dynamic behavior of the software.
Almost all testing falls under this definition. So in practice a more specific identification of a test type should be chosen.

End-to-End testing
Testing the workflow of a single system or a chain of systems with regard to its inputs, outputs and the processing of these, and with regard to availability, capacity, compatibility, connectivity, continuity, interoperability, modularity, performance, reliability, robustness, scalability, security, supportability and traceability.
In theory end-to-end testing seems simple: enter some data and check that it is processed and handed over throughout all systems until the end of the chain. In practice end-to-end testing is very difficult. The long list of quality characteristics mentioned above serves as an indication of what could go wrong along the way.

Endurance testing
Endurance testing is testing the ability to handle continuous load, under normal and under difficult/unpleasant conditions, over a longer period of time.

Error handling testing
Use specific input or behaviour to generate known, and possibly unknown, errors.
Documented error and exception handling is a great source to use for test investigations: it often reveals undocumented requirements and business logic. It is also interesting to see whether the exceptions and errors actually occur in the described situations.
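A sketch of what such a test can look like: each documented error condition is exercised and the documented exception must actually occur. The `withdraw` function and its error conditions are hypothetical:

```python
class InsufficientFunds(Exception):
    pass

def withdraw(balance, amount):
    """Documented errors: ValueError for non-positive amounts,
    InsufficientFunds when the balance is too low."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise InsufficientFunds(f"short by {amount - balance}")
    return balance - amount

# Each row: (input, the documented exception it should trigger).
documented_errors = [
    ((100, -5), ValueError),
    ((100, 500), InsufficientFunds),
]

for (balance, amount), expected in documented_errors:
    try:
        withdraw(balance, amount)
        raise AssertionError("documented error did not occur")
    except expected:
        pass  # the documented situation produced the documented error
```

Checking the error *type* (and ideally the message) rather than just “something failed” is what turns documented error handling into a usable oracle.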

Error guessing
Error guessing is based on the idea that experienced, intuitive or skillful testers are able to find bugs based on their abilities, and that it can be used next to more formal techniques.
As such one could argue that it is not so much a test type as an approach to testing.

Exploratory testing
Exploratory testing is a way of learning about and investigating a system through concurrent design, execution, evaluation, re-design and reporting of tests, with the aim to find answers to currently known and as yet unknown questions whose answers enable individual stakeholders to take decisions about the system.
Exploratory testing is an inherently structured approach to testing that seeks to go beyond the obvious requirements and uses heuristics and oracles to determine test coverage areas and test ideas and to determine the (relative) value of test results. Exploratory testing is often executed on the basis of Session Based Test Management, using charter based and time limited sessions.
It is noteworthy that, in theory at least, all of the test types mentioned in this series could be part of exploratory testing if deemed appropriate to use. 

Failover testing
Failover testing investigates the system’s ability to successfully failover, recover or re-allocate resources after hardware, software or network malfunction, such that no data is lost, data integrity is intact and no ongoing transactions fail.

Fault injection testing
Fault injection testing is a method in which hardware faults, compile time faults or runtime faults are ‘injected’ into the system to validate its robustness.
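A minimal runtime fault-injection sketch: a dependency is replaced by a variant that always fails, and the system under test must degrade gracefully. All names here are illustrative; real fault injection frameworks do this at much lower levels as well:

```python
class PriceService:
    """Hypothetical dependency; the real one would make a network call."""
    def fetch(self, symbol):
        return 101.5

class FaultyPriceService(PriceService):
    """Fault-injected variant: every call fails at runtime."""
    def fetch(self, symbol):
        raise ConnectionError("injected fault")

def display_price(service, symbol):
    """System under test: must survive a failing dependency."""
    try:
        return f"{symbol}: {service.fetch(symbol)}"
    except ConnectionError:
        return f"{symbol}: unavailable"

ok = display_price(PriceService(), "ACME")              # normal behaviour
degraded = display_price(FaultyPriceService(), "ACME")  # fault injected
```

The robustness claim being validated is precisely that the injected fault changes the output but never crashes the system.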

Functional testing
Functional testing is testing aimed to verify that the system functions according to its requirements.
There are many definitions of functional testing and the one above seems to capture most. Interestingly some definitions hint that testing should also aim at covering boundaries and failure paths even if not specifically mentioned in the requirements. Others mention design specifications, or written specifications. For me functional testing initially conforms to the published requirements, and then investigates in which ways this conformity could be broken.

Fuzz testing
Fuzz testing or fuzzing is a software testing technique, often automated or semi-automated, that involves providing invalid, unexpected, or random data to the inputs of a computer program. The program is then monitored for exceptions such as crashes, or failing built-in code assertions or for finding potential memory leaks.
Yes this is more or less the Wikipedia definition. 
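A bare-bones fuzzer is only a few lines: feed random bytes to a parser and treat any exception outside the documented ones as a finding. The toy parser below is an assumption for illustration:

```python
import random

def parse_record(data: bytes):
    """Toy parser under test: expects b'name:age'."""
    text = data.decode("utf-8")   # may raise UnicodeDecodeError
    name, age = text.split(":")   # may raise ValueError
    return name, int(age)         # may raise ValueError

rng = random.Random(0)            # fixed seed: reproducible fuzz run
crashes = []
for _ in range(1000):
    fuzz = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 12)))
    try:
        parse_record(fuzz)
    except (UnicodeDecodeError, ValueError):
        pass                      # documented rejections are fine
    except Exception as exc:      # anything else is a finding
        crashes.append((fuzz, exc))
```

Real fuzzers add coverage feedback, input mutation and crash triage, but the monitoring principle (only *undocumented* failures count) is the same.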

Test Types – B, C

This is the second post in a (sub) series on Test Types. This post covers test types beginning with B and C.

Backward compatibility testing
This testing can be done on several levels. On platform level it means investigating whether an application/product developed using a previous version of a platform still works on a newer version of that platform.
On document level it means investigating whether a document created with a previous version of the product still works with the new version of the product.
On feature level it means investigating how the input, usage and result of a feature in the previous version compare to the input, usage and result in the newer version.

Benchmark testing
Benchmark testing is the act of running (parts of the) software, in some circumstance, to assess its relative performance in comparison to an existing ‘benchmark’.
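As a sketch, benchmarking in code can be as simple as timing a candidate implementation against an existing reference implementation and reporting the ratio. Both functions below are illustrative stand-ins:

```python
import timeit

def candidate(n=1000):
    """New implementation being assessed."""
    return sum(i * i for i in range(n))

def benchmark(n=1000):
    """Existing reference implementation: the 'benchmark'."""
    total = 0
    for i in range(n):
        total += i * i
    return total

# min() of several repeats reduces noise from the machine itself.
t_candidate = min(timeit.repeat(candidate, number=200, repeat=3))
t_benchmark = min(timeit.repeat(benchmark, number=200, repeat=3))
relative = t_candidate / t_benchmark  # < 1.0 means faster than the benchmark
```

The relative figure, not the absolute timings, is the benchmark result: it stays meaningful across machines in a way raw seconds do not.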

Beta testing
Beta testing is the follow-up of alpha testing. During beta testing the software is released outside of the development team and offered to (limited) groups of people within the same organization and possibly a limited set of (potential) end-users. Beta testing offers the possibility for the software to engage in real world scenarios and receive early feedback with regard to bugs, usability, competitiveness etc.

Big bang integration testing
In Big Bang integration testing all components or modules are integrated simultaneously, after which everything is tested as a whole.
I am not a big fan of this type of integration as it is very difficult to isolate causes of failures and bugs. Also test coverage is only high level. It is interesting however to see that, while not on purpose, this type of testing is often implicitly done when continuous integration is implemented and the integration interval, and thus the test interval, is a day or part of a day.

Black box testing
In science, computing, and engineering, a black box is a device, system or object which can be viewed in terms of its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings. Its implementation is “opaque” (black). Hence black box testing can be described as approaching testing without taking any knowledge of the internal workings of the software into account.
Personally I believe that software testing has progressed beyond the notion that one could do with mere black box testing. I do think that it can be a valuable approach alongside grey box or white box testing, although in practice I never seem to make that distinction anymore when I define a test approach.

Bottom up integration testing
Bottom up integration testing is an approach to integrated testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Change related testing
Execution of tests that are related to the code and/or functional changes.
Some view this as being regression testing. While regression testing is probably part of change related testing, that view potentially ignores the tests that are targeted at confirming and validating the changes as such.

Compatibility testing
Compatibility testing is testing executed to determine whether or not the software is compatible to run on:

    • Different types of hardware
    • Different browsers
    • Different devices
    • Different networks
    • Different versions
    • Different operating systems
    • Different configurations
    • Etc.

It is basically testing the application or product against its (intended) environments.

Competitive analysis testing
Testing targeted on how sales or customer experience are impacted:

  • If a feature or functionality is not available
  • If a feature or functionality is not working
  • If a feature or functionality is not available but is available at a competitor
  • If a feature or functionality is not working but is working at a competitor
  • If a feature or functionality is available and working but a competitor has a different version (cooler / less cool; cheaper / more expensive; faster / slower; etc.)

While this may seem more marketing than testing, (context driven) testers are often among the front runners in identifying these differences.

Compliance testing
Testing focussed on establishing whether the software is compliant with known criteria such as:

  • Open standards (to ensure interoperability with other software or interfaces using the same open standard)
  • Internal processes and standards
  • International standards (e.g. IEEE, ISO)
  • Regulatory standards

Component testing
Testing of separate software components without integrating them with other components.
A component can be any portion of the system under test (e.g. module, program, class, method, etc.). It is advisable to select components of a similar abstraction level.
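A small sketch of a component test: the component is exercised on its own, with its collaborator replaced by a stub. The converter and rate provider below are hypothetical:

```python
class StubRateProvider:
    """Stands in for the real exchange-rate service dependency."""
    def rate(self, currency):
        return {"EUR": 2.0}[currency]

class Converter:
    """The component under test, taken in isolation."""
    def __init__(self, provider):
        self.provider = provider

    def convert(self, amount, currency):
        return amount * self.provider.rate(currency)

# No other components are integrated; the stub keeps the test focused
# on the Converter's own behaviour.
converter = Converter(StubRateProvider())
result = converter.convert(10, "EUR")
```

When the real provider is later integrated, any new failures point at the integration, not at the component itself, which is the point of testing components separately first.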

Concurrent testing
This has two very different meanings:

  • Testing throughout an iteration, sprint or development cycle, concurrent with all other development activities. Here concurrent means simultaneous, cooperative and collaborative.
  • Testing activity that determines the stability of a system or application under test during normal activity.

Conformance (or conformity) testing
I see three meanings of conformance testing:

  • Testing meant to provide the users of conforming products some assurance or confidence that the product behaves as expected, performs functions in a known manner, or has an interface or format that is known.
  • Testing to determine whether a product or system or just a medium complies with the requirements of a specification, contract or regulation.
  • A synonym of compliance testing

Conversion testing
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Reference test

Regression Testing

A while ago a new developer approached me to discuss some changes he wanted to deploy to the test environment. He had re-factored parts of the application workflow management and redesigned a number of reports. We talked about it and he stressed that these were major changes and a lot of work to test. While we continued discussing I commented: “That’s okay, it will take some time, but we have a regression test set just for this that we can use to work with and that will help us…”

He looked at me in total bewilderment and uttered: “Didn’t you get anything of what I said? These are major changes. No regression test will cover that! They will all fail!!”

I looked at him and realized…

He was right

His understanding is based on a common perception where regression tests mean something like running the same step by step tests as before to see that they have the same step by step results as before. And if that were the case he would obviously be right. There was no way I could execute the tests the same way as before simply because the workflow method and GUI functionality had changed dramatically. Nor could the results be exactly the same given the design changes he had made.

But I was right too

His reaction made me realize that over the years my understanding of a regression test set has changed dramatically. I don’t see test cases as something that provides a step by step description. I see test cases as a doable and executable follow-up of a test idea. A test case to me consists of the following elements:

  • A test idea; what do you want to learn, verify/validate or investigate?
  • A feature; which part of the test object do you investigate?
  • Test activities; a way to exercise your test idea and see how the software behaves
  • Trigger data; data that should trigger the behavior you want to see
  • An oracle; something that tells you how to value the behavior’s information/data
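The five elements above could be sketched as a small data structure. The names are illustrative, not from an existing framework:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestCase:
    test_idea: str                     # what do you want to learn?
    feature: str                       # which part is investigated?
    activities: Callable[[Any], Any]   # a way to exercise the idea
    trigger_data: Any                  # data that triggers the behaviour
    oracle: Callable[[Any], bool]      # how to value the behaviour

    def run(self) -> bool:
        return self.oracle(self.activities(self.trigger_data))

# When the workflow changes, only `activities` and `trigger_data` need
# adjusting; the idea, feature and oracle can stay the same.
case = TestCase(
    test_idea="order totals survive the reworked workflow",
    feature="order totals",
    activities=lambda items: sum(items),  # stand-in for the real steps
    trigger_data=[10, 20, 12],
    oracle=lambda outcome: outcome == 42,
)
```

The separation makes explicit which elements a refactoring invalidates and which it leaves intact.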

With that understanding I looked at the changes and concluded that the existing test cases were still useful. Even though the workflow management was re-factored, I could still apply the test ideas we had used before. In essence the same features were affected and my oracles should still be valid. Adjusting the test activities and trigger data, in this case to match the workflow changes, is something I tend to do anyway. With these adjustments I try to make re-running test cases more effective, as variation adds new chances of finding new or different bugs.

With the reports the differentiation was slightly different. Test ideas, features, test activities and trigger data could stay more or less the same, but my oracles, the current template reports, had changed. The reports now had a new layout, while their content had changed only minimally.

Reference test

To avoid future confusion I came up with a new name for my tests which, as far as I am aware, has not been used in software testing before. I will call these tests Reference Tests.

A Reference Test then is a test where, under similar circumstances, similar input results in a similar outcome when evaluated against an identical oracle.

I am aware that ‘similar’ and ‘identical’ in this definition are relative concepts, and I have chosen them on purpose. It expresses that each time such a test is used, the tester needs to be aware of which information it tries to show or uncover, how it is doing that, and to what purpose. This discourages mindless repetition or pass/fail blindness. It encourages thoughtful selection and execution of tests and deliberate evaluation of test results.

Exploratory Testing is not the antithesis of structured testing

I am getting tired of fellow software testers telling, stating, writing that exploratory testers do not like structured testing, or that exploratory testing is the opposite of structured testing, or that exploratory testing can only be done based on experience or domain knowledge, and on and on….

Exploratory testing is, just like ‘traditional testing’, based on risk analysis, requirements, quality attributes and test design techniques. It does not ignore or oppose these approaches. Exploratory testing just reaches beyond that and is also based on modeling, using oracles, using heuristics, time management, providing stakeholder relevant information, and much more.

Exploratory testing doesn’t spend unnecessary time on writing specific stepwise test cases in advance. It rather works with test ideas, which it critically investigates while keeping an open mind on what is observed during execution. Exploratory testing then uses that information to create new and additional test ideas, change direction or raise bugs. But it always aims to use the results to provide relevant information to stakeholders that enables them to take decisions or meet their targets. That can be a verbal account or a brief note, but is more likely to be stakeholder specific test execution accounts, showing test results related to achieving the stakeholders’ acceptance criteria, (business) goals and mission. It accounts for how much is done, what could not be done and how much still needs to be done, both in terms of progress and coverage.

Exploratory testing is no free pass to click around in the software. Exploratory testing is both highly structured and flexible, and it is flexible enough to change along the way so it can provide the most valuable information possible to the stakeholders.


To do exploratory testing well you have to work structured, disciplined and flexible at the same time. That’s what makes exploratory testing hard to do but lots of fun at the same time.

You don’t just have to take my word for it. Many have written about it before (see some examples below), but the best way to get convinced is to learn and experience it. So I challenge you to go out and do it, seriously and with engagement. If you don’t know how, many colleagues, or I myself, are more than happy to show you.

Further reading


Best book:

And just to prevent wrong ideas.
Test ideas can, depending on context, be more or less detailed and almost look like scripts, even while their execution is not.
Also, testers who prefer exploratory testing can use checklists or scripts if that better serves the stakeholders’ need for information. Although I think information transfer is better served by putting relevant detail in the reports and not in the test cases.