About Arborosa

Arborosa is a software tester, walking fanatic, father of two and husband living in Utrecht in the Netherlands. This blog is intended to display my thoughts and opinions on software testing, books, blogs, experiences and anything else that I find interesting enough to write and publish about.

Test Types – A

This sixth post in the series of software testing overviews introduces the first of some 80+ different test types. That number is completely arbitrary. While investigating test types I found over a hundred definitions, but I have chosen to leave out a number of those ‘test types’. I did so because, in my interpretation, they described a test level or a test technique rather than a test type, and I could not see how to make a useful test type out of them.

So what is a test type?

To me a test type is a particular way of testing with an approach, goal and/or use of oracle(s) that provides information typical to that test type.

While going through my overview you might find that some of the test types I mention do not fit the above description. They are included despite this because they are mentioned as a test type so often that I felt they deserved a place in this post for that reason alone.

This post differs somewhat from the earlier posts, as the definitions used are mostly my own and for some of them I have added comments as additional information. Also, since a post with over 80 descriptions would be too long, I will split the overview into alphabetical sections. To begin with: the A.

A/B testing
A/B testing originates from marketing research, where it is used to find the more effective of two possible solutions by presenting them to two different groups and measuring the ‘profits’. In software testing it is mostly used to compare two versions of a program or website, of which often only one contains changes to one or a few controllable criteria.
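The comparison behind A/B testing can be sketched with a standard two-proportion z-test. The visitor counts and conversion numbers below are invented for illustration; a real experiment would also need proper randomisation and a pre-agreed sample size.

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score: how unlikely is the observed difference
    between variant A and variant B if both convert equally well?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: variant B (the changed page) converts 120/1000,
# variant A (the control) converts 100/1000.
z = ab_z_score(100, 1000, 120, 1000)
print(round(z, 2))  # 1.43: below the usual 1.96 cut-off, so not significant
```

In other words, even a visibly higher conversion rate may not be a real ‘profit’ until enough visitors have been measured.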

Acceptance testing
During acceptance testing, requirements, variables, parts or behaviour of a program are compared against measurable aspects of predetermined acceptance criteria for those requirements, variables, parts or behaviour. This requires at least four things. First, identification of the requirements, variables, parts or behaviour (coverage). Second, expressing these in measurable aspects. Third, the aspects need to represent the defining elements of the acceptance criteria. Finally, the acceptance criteria themselves should represent the needs and wants of the stakeholder.
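As a sketch of turning an acceptance criterion into measurable aspects, assume a hypothetical criterion for an ordering program: a total may never be negative and must include 21% VAT, rounded to whole cents. The criterion and the function names are invented for illustration.

```python
def order_total(net_cents):
    """Hypothetical function under acceptance: net amount in cents -> gross
    total including 21% VAT, rounded to whole cents."""
    if net_cents < 0:
        raise ValueError("net amount may not be negative")
    return round(net_cents * 1.21)

def acceptance_check():
    # Each measurable aspect of the criterion becomes an explicit check.
    assert order_total(1000) == 1210   # 21% VAT on EUR 10.00
    assert order_total(0) == 0         # boundary: an empty order
    try:
        order_total(-1)                # negative totals must be rejected
    except ValueError:
        return True
    return False

print(acceptance_check())  # True
```

The point of the sketch is the translation step: the stakeholder’s wording (“totals must include VAT”) only becomes testable once it is expressed as concrete, checkable values.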

Active testing
Testing the program by triggering actions and events in the program and studying the results. To be honest, I do not consider this a test type, as in my opinion it describes nearly all types of testing.

Ad-hoc testing
Ad-hoc testing is software testing performed without prior planning or documentation of the direction of the test, of how to test, or of the oracles used. Some definitions see it as informal, unstructured and not reproducible. Informal and unstructured (if read as unprepared) is certainly true, as that is the point of doing it ad hoc. The not-reproducible part depends on whether you care to record the test progress and test results, something that in my opinion is not inherently attached to doing something ad hoc.

Age testing
Age testing evaluates a system’s ability to keep performing in the future, and is usually carried out by test teams. What is measured is how significantly performance might drop as the system gets older. To be honest, I found only one reference to this test type, but I find the idea interesting.

Agile testing
Agile testing is often mentioned as a test type or test approach, but I have added no definition or description here. In my opinion agile testing is not a software test type. Rather, it is a particular context in which testing is performed, one that may pose its own challenges for test execution, for how tests are approached and for choices of test tooling, but it is not a specific test type.

Alpha testing
Alpha testing is an in-house (full) integration test of the near-complete product that is executed by others than the development team, but still in a development environment. Alpha testing simulates the product’s intended use and helps catch design flaws and operational bugs. One could argue that this is more of a test level than a test type. I choose to view it as a test type because it is more about the type of use and its potential to discover new information than about being part of the software development itself. I specifically disagree with the idea that this is an intermediary step towards, or part of, handing over software to a Test/QA group. In my opinion testing is integrated right from the start of development up until it stops, either because the product ends its lifecycle or due to some other stopping heuristic.

API testing
API testing involves testing individual or combined inputs, outputs and business rules of the API under investigation. Essentially an API is a device-independent or component-independent access provider that receives, interprets/transforms and sends messages so that different parts of a computer or different programs can use each other’s operations and/or information. Testing an API is similar to testing in general, albeit that an API has a smaller scope and has, or should have, specific contracts and definitions that describe the API’s specific variables, value ranges and (business) rules. Testing an API should, however, not be limited to the API alone. Sources, destinations (end-points), web services (e.g. REST, SOAP), message types (e.g. JSON, XML), message formats (e.g. SWIFT, FIX, EDI, CSV), transport protocols (e.g. HTTP(S), JMS, MQ) and communication protocols (e.g. TCP/IP, SMTP, MQTT, TIBCO Rendezvous) all influence the overall possibilities and functionality of the API in relation to the system(s) that use(s) it. Typically API testing is semi- or fully automated and requires sufficient knowledge of tools, message types, and transport and communication protocols to be executed well.
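One way to make those contracts and definitions testable is to encode the expected fields, types and value ranges and check every message against them. The contract below, a payments-style JSON message with invented field names and rules, is purely illustrative:

```python
import json

# Hypothetical contract: field name -> (expected type, value-range rule).
CONTRACT = {
    "currency": (str, lambda v: len(v) == 3),  # e.g. a three-letter code
    "amount":   (int, lambda v: v > 0),        # amount in cents, positive
    "debtor":   (str, lambda v: bool(v)),      # non-empty name
}

def violations(raw_message):
    """Return a list of contract violations for one JSON message."""
    msg = json.loads(raw_message)
    problems = [f"missing field: {f}" for f in CONTRACT if f not in msg]
    for field, (ftype, rule) in CONTRACT.items():
        if field in msg:
            if not isinstance(msg[field], ftype):
                problems.append(f"wrong type: {field}")
            elif not rule(msg[field]):
                problems.append(f"out of range: {field}")
    return problems

good = '{"currency": "EUR", "amount": 1250, "debtor": "ACME"}'
bad = '{"currency": "EURO", "amount": -5}'
print(violations(good))  # []
print(violations(bad))   # missing debtor, currency too long, amount negative
```

The same idea scales up: the checks stay the same whether the message arrived over HTTP, JMS or MQ, which is exactly why the transport layer needs its own testing attention on top of this.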

Regression Testing

As a follow-up in the testing definition series it was my intention to continue with covering Test Types. Initial investigation showed what I had already feared: such a post would become a Herculean task and probably my longest post ever. So I will continue with that particular endeavor sometime later, tackling it one step at a time. For starters, this post covers one of the most common but also one of the most peculiar types of testing:

“Regression Testing”

Regression testing is so common a testing type that the majority of books about software testing, and agile for that matter, that I know mention regression testing. Almost as common, however, is that most of them do not tell what regression testing is, do not tell how one should actually go about doing regression testing, or both. To be fair, an exception to the latter is that quite a few, particularly the ones with an agile demeanor, tell that regression testing is done by having automated tests, but that is hardly any more informative, is it?

Before I go further into what makes regression testing peculiar, first, in line with the previous posts, a list of regression testing definitions:

  • Checking that what has been corrected still works. (Bertrand Meyer; Seven Principles of Software Testing 2008)
  • Regression testing involves reuse of the same tests, so you can retest (with these) after change. (Cem Kaner, James Bach, Bret Pettichord; Lessons learned in Software Testing 2002)
  • Regression testing is done to make sure that a fix does what it’s supposed to do (Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software 2006)
  • Regression testing is the selective retesting of an application or system that has been modified to insure that no previously working components, functions, or features fail as a result of the repairs. (John E. Bentley; Software Testing Fundamentals Concepts, Roles, and Terminology 2005)
  • Retesting to detect faults introduced by modification (ISO/IEC/IEEE 24765:2010)
  • Saving test cases and running them again after changes to other components of the program (Glenford J. Myers; The art of software testing 2nd Edition 2004)
  • Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements (ISO/IEC/IEEE 24765:2010)
  • Testing following modifications to a test item or to its operational environment, to identify whether regression failures occur (ISO/IEC/IEEE 29119-1:2013)
  • Testing if what was tested before still works (Egbert Bouman; SmarTEST 2008)
  • Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed. (Standard glossary of terms used in Software Testing Version 2.2, 2012)
  • Testing required to determine that a change to a system component has not adversely affected functionality, reliability or performance and has not introduced additional defects (ISO/IEC 90003:2014)
  • Tests to make sure that the change didn’t disturb anything else. Test the overall integrity of the program. (Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software 2006)

Looking at the above definitions the general idea about regression testing seems to be:

“To ensure that except for the parts of the areas* that were intentionally changed no other parts of these areas or other areas of the software are impacted by those changes and that these still function and behave as before”.
(*Area is used here as a general expression for function, feature, component, or any other dimensional divisions of the subject under test that is used)

The peculiar thing now is that, however useful and logical such a definition is, it only provides the intention of this type, or should I say activity, of testing. In practice regression testing could still encompass any other testing type.

To know what to do you first need to establish which areas are knowingly affected by the changes, and then which areas are most likely to be unknowingly affected by them. Next to that, there probably are areas in your software where you do not want to take the risk of them being affected by the changes. In his presentation at EuroSTAR in 2005, Peter Zimmerer addresses the consequences of this in his test design poster by pointing out that the wider you throw your net for regression effects, the larger the effort will be:

  • Parts which have been changed – 1
  • Parts which are influenced by the change – 2
  • Risky, high priority, critical parts – 3
  • Parts which are often used – 4
  • All – 5
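Zimmerer’s widening net can be sketched as a simple scope selection: tag every regression test with the innermost ring (1–5) it belongs to, and picking a scope selects every ring up to and including that number. The test names and tags below are invented for illustration.

```python
# Hypothetical regression suite: test name -> innermost ring (1-5).
SUITE = {
    "test_changed_discount_rule": 1,  # the part that was changed
    "test_invoice_totals":        2,  # influenced by the change
    "test_payment_processing":    3,  # risky, high priority, critical part
    "test_login":                 4,  # often used
    "test_report_layout":         5,  # everything else
}

def select(scope):
    """Return the regression tests to run for a given scope (1..5)."""
    return sorted(name for name, ring in SUITE.items() if ring <= scope)

print(select(2))       # only the changed and the influenced parts
print(len(select(5)))  # 5: the full net, and the full effort
```

The sketch makes the trade-off explicit: each extra ring buys more confidence at the cost of running and evaluating more tests.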

Once you have identified the areas you want to regression test, you still need to figure out how to test those areas for the potential impact of the change. The general idea, in theory at least, seems to be to rerun previous tests that cover these areas. As this might mean running numerous tests for lengthy periods of time, many books and articles propose running automated tests. This will, however, only work if there are automated tests covering these areas to begin with. And even if there are, you still need to evaluate the results of any failed test, and there is no clear indication of how long that may take.

How do you know that these existing tests test for the impact of the change? After all, they were not designed to do so. For all you know they might or might not fail due to changes to the area they cover. Either result could therefore be right or wrong in light of the changes. The test itself could also be influenced by an impact of the change on the test (positive or negative) that has not been considered or identified yet.

All in all, regression testing is easily considered necessary, not so easy to scope, difficult to evaluate for success and considerably more work than many people think. Even so, next to writing new tests, it probably is the best solution to check whether changes bring about unwanted functionality or behavior in your software. My suggestion is to at least change the test data, so that these existing tests have a better chance of finding new bugs.
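That suggestion can be sketched as a regression check that regenerates its input from a seed instead of hard-coding it: the assertions stay the same from run to run, but the data does not. The function under test and the data generator are invented for illustration.

```python
import random

def normalize(name):
    """Hypothetical function under regression test: collapse whitespace
    and title-case a name."""
    return " ".join(name.split()).title()

def regression_check(seed):
    """The same old check, but with freshly generated test data, so the
    test is not permanently blind to inputs it has never seen."""
    rng = random.Random(seed)  # seeded, so any failure is reproducible
    parts = [rng.choice(["anna", "de", "vries", "jan"]) for _ in range(3)]
    messy = "  ".join(parts) + "   "
    result = normalize(messy)
    # Oracle-style assertions that hold for any generated input:
    assert result == " ".join(p.title() for p in parts)
    assert "  " not in result and not result.endswith(" ")
    return result

print(regression_check(1))
print(regression_check(42))  # same assertions, different data per seed
```

Recording the seed keeps the run reproducible, which answers the earlier objection that varying anything makes a regression test non-repeatable.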

Test Levels! Really?!

Next in the series of software terminology lists is “Test Levels”. But there is something strange about test levels. Up until now almost every tester I have worked with is familiar with the concept of software test levels. But I wonder if they are. What some call a test level, say Unit Testing, I would call a test type. However, with a level like Component Testing I am not so sure. It seems only one level up from Unit Testing, but now I am inclined to see it more as a test level. In my experience I am not alone in this confusion.

Sogeti’s brand TMap was one of the main contributors to establishing the concept of test levels (or at least so in the Netherlands). But since last year Sogeti acknowledges the confusion in their article “Test Levels? Test Types? Test Varieties!” and proposes to rename them Test Varieties. Even ISTQB and ISO do not explicitly mention test levels (or test phases, if you like).

But test levels are a term with some historic relevance, and as such they are part of my series of software testing lists, even if nowadays I never use them anymore.

Acceptance Testing

  • Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • A formal test conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. (Cunningham & Cunningham, Inc.; http://c2.com/cgi/wiki?AcceptanceTest)
  • Acceptance testing is the process of comparing the program to its initial requirements and the current needs of its end users. (G. Meyers, The art of software testing (2nd edition) [2004])

Chain Test

  • A chain test tests the interaction of the system with the interfacing systems. (Derk-Jan de Grood; Test Goal, 2008)

Claim Testing

  • The object of a claim test is to evaluate whether a product lives up to its advertising claims. (Derk-Jan de Grood; Test Goal, 2008)

Component Testing

  • The testing of individual software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Function Testing

  • Function testing is a process of attempting to find discrepancies between the program and the external specification. An external specification is a precise description of the program’s behavior from the point of view of the end user. (G. Meyers, The art of software testing (2nd edition) [2004])

Functional Acceptance Test

  • The functional acceptance test is carried out by the accepter to demonstrate that the delivered system meets the required functionality. The functional acceptance test tests the functionality against the system requirements and the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • The functional acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the functional requirements. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

Hardware-software Integration Testing

  • Testing performed to expose defects in the interfaces and interaction between hardware and software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Integration Testing

  • Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Module Test

  • Module tests focus on the elementary building blocks in the code. They demonstrate that the modules meet the technical design. (Derk-Jan de Grood; Test Goal, 2008)
  • Module testing (or unit testing) is a process of testing the individual subprograms, subroutines, or procedures in a program. (G. Meyers, The art of software testing (2nd edition) [2004])

Module Integration Test

  • Module integration tests focus on the integration of two or more modules. (Derk-Jan de Grood; Test Goal, 2008)

Pilot

  • The pilot simulates live operations in a safe environment so that the live environment is not disrupted if the pilot fails.

Production Acceptance Test

  • The system owner uses the PAT to determine that the system is ready to go live and can go into maintenance. (Derk-Jan de Grood; Test Goal, 2008)
  • The production acceptance test is a test carried out by the future administrator(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements set by system management. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Test / System Testing

  • Testing an integrated system to verify that it meets specified requirements. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • The system test demonstrates that the system works according to the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • System testing is not limited to systems. If the product is a program, system testing is the process of attempting to demonstrate how the program, as a whole, does not meet its objectives. (G. Meyers, The art of software testing (2nd edition) [2004])
  • System testing, by definition, is impossible if there is no set of written, measurable objectives for the product. (G. Meyers, The art of software testing (2nd edition) [2004])
  • A system test is a test carried out by the supplier in a (manageable) laboratory environment, with the aim of demonstrating that the developed system, or parts of it, meet with the functional and non-functional specifications and the technical design. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Integration Test

  • A system integration test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that (sub)system interface agreements have been met, correctly interpreted and correctly implemented. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006) 

Unit Test

  • A unit test is a test carried out in the development environment by the developer, with the aim of demonstrating that a unit meets the requirements defined in the technical specifications (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

Unit Integration Test

  • A unit integration test is a test carried out by the developer in the development environment, with the aim of demonstrating that a logical group of units meets the requirements defined in the technical specifications (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

User Acceptance Test

  • The user acceptance test is primarily a validation test to ensure the system is “fit for purpose”. The test checks whether the users can use the system, how usable the system is and how the system integrates with the workflow and processes. (Derk-Jan de Grood; Test Goal, 2008)
  • The user acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements of the users. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

A collection of quality characteristics

Following the earlier posts listing software testing and bug definitions, this post also has a, very large, listing. This time it is a list of quality characteristics. Like the earlier posts, this list of common definitions reflects views on software testing. But unlike the earlier posts, every item in itself is also a different way to look at your software and its use, and a way to divide up the way you test. Therein lies a challenge for you as a reader.

Would you be able to create a test idea for each and every one of them?
Or at least for those that matter for your software?

Probably not, but go ahead and try anyway,
and if you really can’t come up with a test idea, try to think why not.
Is it not applicable to your application?
Is it not applicable to your context?
Do you not know how?

Would that mean you might miss some valuable information about the software?

A lot of them may not apply directly to your current context, but it is good to browse through them and pick the ones that are useful now, then revisit them later to re-evaluate your choices and pick the (different) ones that apply then.

Accessibility

  • Usability of a product, service, environment or facility by people with the widest range of capabilities (ISO/IEC 25062:2006) (ISO/IEC 26514:2008)
  • Degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use (ISO/IEC 25010:2011)
  • Extent to which products, systems, services, environments and facilities can be used by people from a population with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use (ISO/IEC 25064:201) Note: [ISO 9241-171:2008] Although “accessibility” typically addresses users who have disabilities, the concept is not limited to disability issues. The range of capabilities includes disabilities associated with age. Accessibility for people with disabilities can be specified or measured either as the extent to which a product or system can be used by users with specified disabilities to achieve specified goals with effectiveness, efficiency, freedom from risk and satisfaction in a specified context of use, or by the presence of product properties that support accessibility [ISO 25063:2014] Context of use includes direct use or use supported by assistive technologies.
  • Capable of being reached, capable of being used or seen. (IAIDQ – Martin Eppler)
  • The characteristic of being able to access data when it is required. (IAIDQ – Larry P. English)

Accountability

  • Degree to which the actions of an entity can be traced uniquely to the entity (ISO/IEC 25010:2011)

Accuracy

  • Degree of conformity of a measure to a standard or a true value. Level of precision or detail. (IAIDQ – Martin Eppler)
  • The capability of the software product to provide the right or agreed results or effects with the needed degree of precision (ISTQB Glossary 2015)

Accuracy to reality

  • A characteristic of information quality measuring the degree to which a data value (or set of data values) correctly represents the attributes of the real-world object or event. (IAIDQ – Larry P. English)

Accuracy to surrogate source

  • A measure of the degree to which data agrees with an original, acknowledged authoritative source of data about a real world object or event, such as a form, document, or unaltered electronic data received from outside the organisation. See also Accuracy. (IAIDQ – Larry P. English)

Adaptability

  • Degree to which a product or system can effectively and efficiently be adapted for different or evolving hardware, software or other operational or usage environments (ISO/IEC 25010:2011) Note: Adaptability includes the scalability of internal capacity, such as screen fields, tables, transaction volumes, and report formats. Adaptations include those carried out by specialized support staff, business or operational staff, or end users. If the system is to be adapted by the end user, adaptability corresponds to suitability for individualization as defined in ISO 9241-110. See also: flexibility
  • The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered (ISTQB Glossary 2015)

Analyzability

  • Degree of effectiveness and efficiency with which it is possible to assess the impact on a product or system of an intended change to one or more of its parts, or to diagnose a product for deficiencies or causes of failures, or to identify parts to be modified (ISO/IEC 25010:2011) Note: Implementation can include providing mechanisms for the product or system to analyze its own faults and provide reports before or after a failure or other event. Syn: analysability See also: modifiability
  • The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified (ISTQB Glossary 2015)

Applicability

  • The characteristic of information to be directly useful for a given context, information that is organised for action. (IAIDQ – Martin Eppler)

Appropriateness recognizability

  • Degree to which users can recognize whether a product or system is appropriate for their needs (ISO/IEC 25010:2011)

Attractiveness

  • The capability of the software product to be attractive to the user (ISTQB Glossary 2015)

Authenticity

  • Degree to which the identity of a subject or resource can be proved to be the one claimed (ISO/IEC 25010:2011)

Availability

  • Ability of a service or service component to perform its required function at an agreed instant or over an agreed period of time (ISO/IEC/IEEE 24765c:2014)
  • The degree to which a system or component is operational and accessible when required for use (ISO/IEC 25010:2011) Note: Availability is normally expressed as a ratio or percentage of the time that the service or service component is actually available for use by the customer to the agreed time that the service should be available. Availability is a combination of maturity (which reflects the frequency of failure), fault tolerance and recoverability (which reflect the length of downtime following each failure). See also: error tolerance, fault tolerance, reliability, robustness
  • A percentage measure of the reliability of a system indicating the percentage of time the system or data is accessible or usable, compared to the amount of time the system or data should be accessible or usable. (IAIDQ – Larry P. English)
  • The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. (ISTQB Glossary 2015)
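The ratio mentioned in the notes above can be computed directly; the downtime figure below is invented for illustration.

```python
def availability(downtime_minutes, period_minutes):
    """Availability as a percentage: actually-available time over the
    agreed service time, per the definitions above."""
    return 100 * (period_minutes - downtime_minutes) / period_minutes

# Hypothetical figures: 43.2 minutes of downtime in a 30-day month of
# agreed 24/7 service time.
month = 30 * 24 * 60  # 43,200 minutes
print(round(availability(43.2, month), 2))  # 99.9
```

Note how unforgiving the percentages are: "three nines" (99.9%) still allows only about three quarters of an hour of downtime per month.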

Benchmark

  • Standard against which results can be measured or assessed (ISO/IEC 25010:2011)
  • Procedure, problem, or test that can be used to compare systems or components to each other or to a standard (ISO/IEC/IEEE 24765:2010 Systems and software engineering–Vocabulary)
  • Reference point against which comparisons can be made (ISO/IEC 29155-1:2011)

Capability

  • Can the product perform valuable functions?
    • Completeness: all important functions wanted by end users are available.
    • Accuracy: any output or calculation in the product is correct and presented with significant digits.
    • Efficiency: performs its actions in an efficient manner (without doing what it’s not supposed to do.)
    • Interoperability: different features interact with each other in the best way.
    • Concurrency: ability to perform multiple parallel tasks, and run at the same time as other processes.
    • Data agnosticism: supports all possible data formats, and handles noise.
    • Extensibility: ability for customers or 3rd parties to add features or change behavior.

Capacity

  • Degree to which the maximum limits of a product or system parameter meet requirements (ISO/IEC 25010:2011) Note: Parameters can include the number of items that can be stored, the number of concurrent users, the communication bandwidth, throughput of transactions, and size of database.

Changeability

  • The capability of the software product to enable specified modifications to be implemented. (ISTQB Glossary 2015)

Charisma

  • Does the product have “it”?
    • Uniqueness: the product is distinguishable and has something no one else has.
    • Satisfaction: how do you feel after using the product?
    • Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
    • Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
    • Curiosity: will users get interested and try out what they can do with the product?
    • Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
    • Hype: should the product use the latest and greatest technologies/ideas?
    • Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
    • Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
    • Directness: are (first) impressions impressive?
    • Story: are there compelling stories about the product’s inception, construction or usage? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Clarity

  • Void of obscure language or expression, ease of understanding, interpretability. (IAIDQ – Martin Eppler)

Co-existence

  • Degree to which a product can perform its required functions efficiently while sharing a common environment and resources with other products, without detrimental impact on any other product (ISO/IEC 25010:2011) Syn: coexistence

Comfort

  • Degree to which the user is satisfied with physical comfort (ISO/IEC 25010:2011)

Compatibility

  • Degree to which a product, system or component can exchange information with other products, systems or components, or perform its required functions, while sharing the same hardware or software environment (ISO/IEC 25010:2011)
  • The ability of two or more systems or components to exchange information (ISO/IEC/IEEE 24765:2010)
  • The capability of a functional unit to meet the requirements of a specified interface without appreciable modification (ISO/IEC 2382-1:1993)
  • How well does the product interact with software and environments?
    • Hardware Compatibility: the product can be used with applicable configurations of hardware components.
    • Operating System Compatibility: the product can run on intended operating system versions, and follows typical behavior.
    • Application Compatibility: the product, and its data, works with other applications customers are likely to use.
    • Configuration Compatibility: product’s ability to blend in with configurations of the environment.
    • Backward Compatibility: can the product do everything the last version could?
    • Forward Compatibility: will the product be able to use artifacts or interfaces of future versions?
    • Sustainability: effects on the environment, e.g. energy efficiency, switch-offs, power-saving modes, telecommuting.
    • Standards Conformance: the product conforms to applicable standards, regulations, laws or ethics. (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Completeness

  • A characteristic of information quality measuring the degree to which all required data is known.
    • Fact completeness is a measure of data definition quality expressed as a percentage of the attributes about an entity type that need to be known to assure that they are defined in the model and implemented in a database. For example, “80 percent of the attributes required to be known about customers have fields in a database to store the attribute values.”
    • Value completeness is a measure of data content quality expressed as the percentage of the columns or fields of a table or file that should have values in them and in fact do so. For example, “95 percent of the columns for the customer table have a value in them.” Also referred to as Coverage.
    • Occurrence completeness is a measure of the percentage of records an information collection actually has, relative to all the occurrences of the real-world objects it should know about. For example, does a Department of Corrections have a record for each Offender it is responsible to know about? (IAIDQ – Larry P. English)
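
These completeness measures are simple ratios. As a minimal sketch (not from the source; the customer data and helper name are invented for illustration), value completeness could be computed like this:

```python
# Hedged sketch: "value completeness" as the percentage of required
# fields, across all rows, that actually hold a value. The customer
# records and the function name are invented for this example.
def value_completeness(rows, required_fields):
    """Percent of required fields, across all rows, that are filled in."""
    total = len(rows) * len(required_fields)
    if total == 0:
        return 100.0  # nothing is required, so nothing is missing
    filled = sum(
        1
        for row in rows
        for field in required_fields
        if row.get(field) not in (None, "")
    )
    return 100.0 * filled / total

customers = [
    {"name": "Ada", "email": "ada@example.com", "phone": None},
    {"name": "Bob", "email": "", "phone": "555-0100"},
]
print(value_completeness(customers, ["name", "email", "phone"]))  # 4 of 6 fields filled
```

The same ratio idea carries over to fact completeness (attributes defined vs. attributes needed) and occurrence completeness (records held vs. real-world occurrences).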

Complexity

  • The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. (ISTQB Glossary 2015)

Compliance

  • The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. (ISTQB Glossary 2015)

Component

  • An entity with discrete structure, such as an assembly or software module, within a system considered at a particular level of analysis (ISO/IEC 19770-2:2009)
  • One of the parts that make up a system (IEEE 1012-2012)(IEEE 829)
  • Object that encapsulates its own template, so that the template can be interrogated by interaction with the component (ISO/IEC 10746-2:2009)
  • Specific, named collection of features that can be described by an IDL component definition or a corresponding structure in an interface repository (ISO/IEC 19500-3:2012)
  • Functionally or logically distinct part of a system (ISO/IEC 19506:2012) Note: A component may be hardware or software and may be subdivided into other components. Component refers to a part of a whole, such as a component of a software product or a component of a software identification tag. The terms module, component, and unit are often used interchangeably or defined to be subelements of one another in different ways depending upon the context. The relationship of these terms is not yet standardized. A component may or may not be independently managed from the end-user or administrator’s point of view.

Comprehensiveness

  • The quality of information to cover a topic to a degree or scope that is satisfactory to the information user. (IAIDQ – Martin Eppler)

Conciseness

  • Marked by brevity of expression or statement, free from all elaboration and superfluous detail. (IAIDQ – Martin Eppler)

Concurrency

  • A characteristic of information quality measuring the degree of timing equivalence of data stored in redundant or distributed database files. The measure of data concurrency may describe the minimum, maximum, and average information float time between when data is available in one data source and when it becomes available in another. Or it may consist of the relative percent of data from a data source that is propagated to the target within a specified time frame. (IAIDQ – Larry P. English)

Confidentiality

  • Degree to which a product or system ensures that data are accessible only to those authorized to have access (ISO/IEC 25010:2011)

Connectivity

  • The ease with which a link with a different information system or within the information system can be made and modified. (TMap Next)

Consistency

  • A measure of information quality expressed as the degree to which a set of data is equivalent in redundant or distributed databases. (IAIDQ – Larry P. English)
  • The condition of adhering together, the ability to be asserted together without contradiction. (IAIDQ – Martin Eppler)
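
As a minimal sketch of English’s first definition (not from the source; the record keys and values are invented), consistency could be measured as the share of records that are equivalent between a primary store and a redundant copy:

```python
# Hedged sketch: consistency as the percentage of records in a primary
# store whose values match a redundant copy. All data is invented.
def consistency_percentage(primary, replica):
    """Percent of keys in the primary whose value matches the replica."""
    if not primary:
        return 100.0
    matching = sum(
        1 for key, value in primary.items() if replica.get(key) == value
    )
    return 100.0 * matching / len(primary)

primary = {"c1": "Ada", "c2": "Bob", "c3": "Eve"}
replica = {"c1": "Ada", "c2": "Bobby", "c3": "Eve"}
print(consistency_percentage(primary, replica))  # two of three records agree
```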

Context completeness

  • Degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in all the specified contexts of use (ISO/IEC 25010:2011) Note: Context completeness can be specified or measured either as the degree to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, freedom from risk and satisfaction in all the intended contexts of use, or by the presence of product properties that support use in all the intended contexts of use.

Context coverage

  • Degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in both specified contexts of use and in contexts beyond those initially explicitly identified (ISO/IEC 25010:2011) Note: Context of use is relevant to both quality in use and some product quality (sub) characteristics.

Continuity

  • The certainty that data processing will continue uninterruptedly, which means that it can be resumed within a reasonable period of time even after serious interruptions. (TMap Next)

Controllability

  • The ease with which the correctness and completeness of the information (in the course of time) can be checked. (TMap Next)

Convenience

  • The ease-of-use or seamlessness by which information is acquired. (IAIDQ – Martin Eppler)

Correctness

  • The functionality matches the specification. (McCall, 1977)
  • Conforming to an approved or conventional standard, conforming to or agreeing with fact, logic, or known truth. (IAIDQ – Martin Eppler)

Currency

  • A characteristic of information quality measuring the degree to which data represents reality from the required point in time. For example, one information view may require data currency to be the most up-to-date point, such as stock prices for stock trades, while another may require data to be the last stock price of the day, for stock price running average. (IAIDQ – Larry P. English)
  • The quality or state of information of being up-to-date or not outdated. (IAIDQ – Martin Eppler)

Data deficiency

  • An unconformity between the view of the real-world system that can be inferred from a representing information system and the view that can be obtained by directly observing the real-world system. (IAIDQ – Martin Eppler)

Database integrity

  • The characteristic of data in a database in which the data conforms to the physical integrity constraints, such as referential integrity and primary key uniqueness, and is able to be secured and recovered in the event of an application, software, or hardware failure. Database integrity does not imply data accuracy or other information quality characteristics not able to be provided by the DBMS functions. (IAIDQ – Larry P. English)
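
A quick way to see a DBMS enforcing the referential integrity this definition mentions is SQLite (a hedged sketch; all table and column names are invented):

```python
# Hedged sketch: SQLite rejecting a row that violates referential
# integrity. Table and column names are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK checks off by default
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
con.execute(
    "CREATE TABLE orders ("
    " id INTEGER PRIMARY KEY,"
    " customer_id INTEGER NOT NULL REFERENCES customer(id))"
)
con.execute("INSERT INTO customer (id) VALUES (1)")
con.execute("INSERT INTO orders VALUES (10, 1)")  # valid reference

fk_violation_rejected = False
try:
    con.execute("INSERT INTO orders VALUES (11, 99)")  # customer 99 does not exist
except sqlite3.IntegrityError:
    fk_violation_rejected = True
print("foreign-key violation rejected:", fk_violation_rejected)
```

Note how the constraint guarantees only structural validity; as the definition stresses, it says nothing about whether the stored values are accurate.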

Degradation possibilities

  • The ease with which the core of the information system can continue after a part has failed. (TMap Next)

Ease-of-use

  • The quality of an information environment to facilitate the access and manipulation of information in a way that is intuitive. (IAIDQ – Martin Eppler)

Economic risk mitigation

  • Degree to which a product or system mitigates the potential risk to financial status, efficient operation, commercial property, reputation, or other resources in the intended contexts of use (ISO/IEC 25010:2011)

Effectiveness

  • The capability of producing an intended result. (ISTQB Glossary 2015)

Efficiency

  • System resource (including cpu, disk, memory, network) usage. (McCall, 1977)
  • Optimum use of system resources during correct execution. (Boehm, 1978)
  • A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
    • Time behavior; response times for a given throughput, i.e. transaction rate.
    • Resource behavior; resources used, i.e. memory, cpu, disk and network usage. (ISO-9126)
  • The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions. (ISTQB Glossary 2015)

Entity integrity

  • The assurance that a primary key value will identify no more than one occurrence of an entity type, and that no attribute of the primary key may contain a null value. Based on this premise, the real-world entities are uniquely distinguishable from all other entities. (IAIDQ – Larry P. English)
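
The same guarantee can be seen in any relational engine. A small sketch (not from the source; the table name borrows the Offender example from the Completeness entry):

```python
# Hedged sketch: entity integrity in action -- a primary key value may
# identify at most one row, so a duplicate key is rejected. The table
# and column names are invented for this example.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE offender (id INTEGER PRIMARY KEY NOT NULL, name TEXT)")
con.execute("INSERT INTO offender VALUES (1, 'first record')")

duplicate_rejected = False
try:
    con.execute("INSERT INTO offender VALUES (1, 'second record')")  # reused key
except sqlite3.IntegrityError:
    duplicate_rejected = True
print("duplicate primary key rejected:", duplicate_rejected)
```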

Environmental risk mitigation

  • Degree to which a product or system mitigates the potential risk to property or the environment in the intended contexts of use (ISO/IEC 25010:2011)

External measure of software quality

  • Measure of the degree to which a software product enables the behavior of a system under specified conditions to satisfy stated and implied needs for the system (ISO/IEC 25010:2011) Note: Attributes of the behavior can be verified or validated by executing the software product during testing and operation. See also: external software quality, internal measure of software quality

Extensibility

  • The ability to dynamically augment a database (or data dictionary) schema with knowledge worker-defined data types. This includes addition of new data types and class definitions for representation and manipulation of unconventional data such as text data, audio data, image data, and data associated with artificial intelligence applications. (IAIDQ – Larry P. English)

Fault tolerance

  • The ability of a system or component to continue normal operation despite the presence of hardware or software faults (ISO/IEC 25010:2011)
  • Pertaining to the study of errors, faults, and failures, and of methods for enabling systems to continue normal operation in the presence of faults (ISO/IEC/IEEE 24765:2010) See also: error tolerance, fail safe, fail soft, fault secure, robustness
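
One common fault-tolerance tactic is retrying an operation so the system keeps operating despite transient faults. A hedged sketch (not from the source; the flaky function simulates a fault and is invented for the example):

```python
# Hedged sketch: retrying a flaky operation so a transient fault does
# not interrupt normal operation. The failing function is simulated.
import time

def with_retries(func, attempts=3, delay=0.0):
    """Call func, retrying up to `attempts` times on any exception."""
    last_error = None
    for _ in range(attempts):
        try:
            return func()
        except Exception as err:
            last_error = err
            time.sleep(delay)
    raise last_error

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("transient fault")  # simulated fault
    return "ok"

print(with_retries(flaky))  # succeeds on the third attempt
```

Retries only mask transient faults; tolerating permanent faults needs redundancy or failover, which is what the recoverability and degradation entries elsewhere in this list describe.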

Flexibility

  • The ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed (ISO/IEC/IEEE 24765:2010)
  • Degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in contexts beyond those initially specified in the requirements (ISO/IEC 25010:2011 ) Note: Flexibility enables products to take account of circumstances, opportunities and individual preferences that had not been anticipated in advance. If a product is not designed for flexibility, it might not be safe to use the product in unintended contexts. Flexibility can be measured either as the extent to which a product can be used by additional types of users to achieve additional types of goals with effectiveness, efficiency, freedom from risk and satisfaction in additional types of contexts of use, or by a capability to be modified to support adaptation for new types of users, tasks and environments, and suitability for individualization. See also: adaptability, extendibility, maintainability
  • The ability to make changes required as dictated by the business. (McCall, 1977)
  • The ease of changing the software to meet revised requirements. (Boehm 1978)
  • A characteristic of information quality measuring the degree to which the information architecture or database is able to support organisational or process reengineering changes with minimal modification of the existing objects and relationships, only adding new objects and relationships. (IAIDQ – Larry P. English)
  • The degree to which the user may introduce extensions or modifications to the information system without changing the software itself. (TMap Next)

Freedom from risk

  • Degree to which a product or system mitigates the potential risk to economic status, human life, health, or the environment (ISO/IEC 25010:2011)

Functional appropriateness

  • Degree to which the functions facilitate the accomplishment of specified tasks and objectives (ISO/IEC 25010:2011) Note: Functional appropriateness corresponds to suitability for the task.

Functional completeness

  • Degree to which the set of functions covers all the specified tasks and user objectives (ISO/IEC 25010:2011)

Functional correctness

  • Degree to which a product or system provides the correct results with the needed degree of precision (ISO/IEC 25010:2011)

Functional suitability

  • Degree to which a product or system provides functions that meet stated and implied needs when used under specified conditions (ISO/IEC 25010:2011) Note: Functional Suitability is only concerned with whether the functions meet stated and implied needs, not the functional specification.

Functionality

  • A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
    • Suitability; the appropriateness (to specification) of the functions of the software.
    • Accuracy; the correctness of the functions.
    • Interoperability; the ability of a software component to interact with other components or systems.
    • Compliance; the compliant capability of software.
    • Security; prevention of unauthorized access to the software functions. (ISO-9126)
  • Functionality
    • The value-added purpose of the product. Also…
    • Connectivity – protocols (e.g. Bluetooth), or re-sync of offline clients
    • Interoperability – inter-app platform and language independence
    • Extensibility, Expandability – plugins, late binding
    • Composability – service or message oriented considerations, governance
    • Manageability – administration of fielded product
    • Licensing (FURPS+)

Health and safety risk mitigation

  • Degree to which a product or system mitigates the potential risk to people in the intended contexts of use (ISO/IEC 25010:2011)

Immunity

  • Degree to which a product or system is resistant to attack (ISO/IEC 25010:2011) See also: integrity

Indirect user

  • Person who receives output from a system, but does not interact with the system (ISO/IEC 25010:2011) See also: direct user, secondary user

Information quality

  • Consistently meeting all knowledge worker and end-customer expectations in all quality characteristics of the information products and services required to accomplish the enterprise mission (internal knowledge worker) or personal objectives (end customer). (IAIDQ – Larry P. English)
  • The degree to which information consistently meets the requirements and expectations of all knowledge workers who require it to perform their processes. (IAIDQ – Larry P. English)
  • The fitness for use of information; information that meets the requirements of its authors, users, and administrators. (IAIDQ – Martin Eppler)

Installability

  • Degree of effectiveness and efficiency with which a product or system can be successfully installed or uninstalled in a specified environment (ISO/IEC 25010:2011)
  • The capability of the software product to be installed in a specified environment. (ISTQB Glossary 2015)

Integrity

  • Degree to which a system or component prevents unauthorized access to, or modification of, computer programs or data (ISO/IEC 25010:2011) See also: immunity
  • Protection from unauthorized access. (McCall, 1977)

Interactivity

  • The capacity of an information system to react to the inputs of information consumers, to generate instant, tailored responses to a user’s actions or inquiries. (IAIDQ – Martin Eppler)

Interpretation

  • The process of assigning meaning to a constructed representation of an object or event. (IAIDQ – Martin Eppler)

Internal measure of software quality

  • Measure of the degree to which a set of static attributes of a software product satisfies stated and implied needs for the software product to be used under specified conditions (ISO/IEC 25000:2014) (ISO/IEC 25010:2011) Note: Static attributes include those that relate to the software architecture, structure and its components. Static attributes can be verified by review, inspection, simulation, or automated tools. See also: external measure of software quality

Interoperability

  • Degree to which two or more systems, products or components can exchange information and use the information that has been exchanged (ISO/IEC 25010:2011)
  • The ability for two or more ORBs to cooperate to deliver requests to the proper object (ISO/IEC 19500-2:2012)
  • The capability to communicate, execute programs, and transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units. (ISO/IEC 2382-1:1993)
  • Capability of objects to collaborate, that is, the capability mutually to communicate information in order to exchange events, proposals, requests, results, commitments and flows (ISO/IEC 10746-2:2009) Note: Interoperability is used in place of compatibility in order to avoid possible ambiguity with replaceability. See also: compatibility
  • The extent, or ease, to which software components work together. (McCall, 1977)
  • The capability of the software product to interact with one or more specified components or systems. (ISTQB Glossary 2015)

IT-bility

  • Is the product easy to install, maintain and support?
    • System requirements: ability to run on supported configurations, and handle different environments or missing components.
    • Installability: product can be installed on intended platforms with appropriate footprint.
    • Upgrades: ease of upgrading to a newer version without loss of configuration and settings.
    • Uninstallation: are all files (except user’s or system files) and other resources removed when uninstalling?
    • Configuration: can the installation be configured in various ways or places to support customer’s usage?
    • Deployability: product can be rolled-out by IT department to different types of (restricted) users and environments.
    • Maintainability: are the product and its artifacts easy to maintain and support for customers?
    • Testability: how effectively can the deployed product be tested by the customer? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Learnability

  • Degree to which a product or system can be used by specified users to achieve specified goals of learning to use the product or system with effectiveness, efficiency, freedom from risk and satisfaction in a specified context of use (ISO/IEC 25010:2011) Note: Can be specified or measured either as the extent to which a product or system can be used by specified users to achieve specified goals of learning to use the product or system with effectiveness, efficiency, freedom from risk and satisfaction in a specified context of use, or by product properties corresponding to suitability for learning as defined in ISO 9241-110.
  • The quality of information to be easily transformed into knowledge. (IAIDQ – Martin Eppler)
  • The capability of the software product to enable the user to learn its application. (ISTQB Glossary 2015)

Maintainability

  • Ease with which a software system or component can be modified to change or add capabilities, correct faults or defects, improve performance or other attributes, or adapt to a changed environment (ISO/IEC/IEEE 24765:2010)
  • Ease with which a hardware system or component can be retained in, or restored to, a state in which it can perform its required functions (ISO/IEC/IEEE 24765:2010)
  • Capability of the software product to be modified (IEEE 14764-2006)
  • Average effort required to locate and fix a software failure (ISO/IEC/IEEE 24765:2010)
  • Speed and ease with which a program can be corrected or changed (IEEE 982.1-2005)
  • Degree of effectiveness and efficiency with which a product or system can be modified by the intended maintainers (ISO/IEC 25010:2011) Note: Maintainability includes installation of updates and upgrades. Modifications may include corrections, improvements or adaptation of the software to changes in environment, and in requirements and functional specifications. Modifications include those carried out by specialized support staff, and those carried out by business or operational staff, or end users. See also: extendability, flexibility
  • Can the product be maintained and extended at low cost?
    • Flexibility: the ability to change the product as required by customers.
    • Extensibility: will it be easy to add features in the future?
    • Simplicity: the code is not more complex than needed, and does not obscure test design, execution and evaluation.
    • Readability: the code is adequately documented and easy to read and understand.
    • Transparency: Is it easy to understand the underlying structures?
    • Modularity: the code is split into manageable pieces.
    • Refactorability: are you satisfied with the unit tests?
    • Analyzability: ability to find causes for defects or other code of interest.
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The ability to find and fix a defect (McCall, 1977)
  • The characteristic of an information environment to be manageable at reasonable costs in terms of content volume, frequency, quality, and infrastructure. If a system is maintainable, information can be added, deleted, or changed efficiently. (IAIDQ – Martin Eppler)
  • A set of attributes that bear on the effort needed to make specified modifications.
    • Analyzability; the ability to identify the root cause of a failure within the software.
    • Changeability; the sensitivity to change of a given system that is the negative impact that may be caused by system changes.
    • Testability; the effort needed to verify (test) a system change. (ISO-9126)
  • The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. (ISTQB Glossary 2015)
  • The ease of adapting the information system to new demands from the user, to changing external environments, or in order to correct defects. (TMap Next)

Manageability

  • The ease with which to get and keep the information system in its operational state. (TMap Next)

Maturity

  • Degree to which a system, product or component meets needs for reliability under normal operation (ISO/IEC 25010:2011) Note: The concept of maturity can be applied to quality characteristics to indicate the degree to which they meet required needs under normal operation.
  • The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. (ISTQB Glossary 2015)
  • The capability of the software product to avoid failure as a result of defects in the software. (ISTQB Glossary 2015)

Modifiability

  • Ease with which a system can be changed without introducing defects (ISO/IEC/IEEE 24765:2010)
  • Degree to which a product or system can be effectively and efficiently modified without introducing defects or degrading existing product quality (ISO/IEC 25010:2011) See also: analyzability, maintainability, and modularity

Modularity

  • Degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components (ISO/IEC 25010:2011)
  • Software attributes that provide a structure of highly independent components (ISO/IEC/IEEE 24765:2010) See also: cohesion, coupling, and modifiability

Non-repudiation

  • Degree to which actions or events can be proven to have taken place, so that the events or actions cannot be repudiated later (ISO/IEC 25010:2011)
  • The ability to provide proof of transmission and receipt of electronic communication. (IAIDQ – Larry P. English)

Operability

  • Degree to which a product or system has attributes that make it easy to operate and control (ISO/IEC 25010:2011) Note: Operability corresponds to controllability, (operator) error tolerance, and conformity with user expectations as defined in ISO 9241-110.

Operational reliability

  • The degree to which the information system remains free from interruptions. (TMap Next)

Performance

  • Is the product fast enough?
    • Capacity: the many limits of the product, for different circumstances (e.g. slow network).
    • Resource Utilization: appropriate usage of memory, storage and other resources.
    • Responsiveness: the speed of which an action is (perceived as) performed.
    • Availability: the system is available for use when it should be.
    • Throughput: the product’s ability to process many, many things.
    • Endurance: can the product handle load for a long time?
    • Feedback: is the feedback from the system on user actions appropriate?
    • Scalability: how well does the product scale up, out or down? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. (ISTQB Glossary 2015)

Performance efficiency

  • Performance relative to the amount of resources used under stated conditions (ISO/IEC 25010:2011) Note: Resources can include other software products, the software and hardware configuration of the system, and materials (e.g. print paper, storage media).

Pleasure

  • Degree to which a user obtains pleasure from fulfilling personal needs (ISO/IEC 25010:2011) Note: Personal needs can include needs to acquire new knowledge and skills, to communicate personal identity and to provoke pleasant memories.

Portability

  • Ease with which a system or component can be transferred from one hardware or software environment to another (ISO/IEC/IEEE 24765:2010)
  • Capability of a program to be executed on various types of data processing systems without converting the program to a different language and with little or no modification (ISO/IEC 2382-1:1993)
  • Degree of effectiveness and efficiency with which a system, product, or component can be transferred from one hardware, software or other operational or usage environment to another (ISO/IEC 25010:2011)
  • Property that the reference points of an object allow it to be adapted to a variety of configurations (ISO/IEC 10746-2:2009) Syn: transportability See also: machine-independent
  • Is transferring of the product to different environments enabled?
    • Reusability: can parts of the product be re-used elsewhere?
    • Adaptability: is it easy to change the product to support a different environment?
    • Compatibility: does the product comply with common interfaces or official standards?
    • Internationalization: is it easy to translate the product?
    • Localization: are all parts of the product adjusted to meet the needs of the targeted culture/country?
    • User Interface-robustness: will the product look equally good when translated?
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The ability to transfer the software from one environment to another. (McCall, 1977)
  • The extent to which the software will work under different computer configurations (i.e. operating systems, databases etc.). (Boehm, 1978)
  • A set of attributes that bear on the ability of software to be transferred from one environment to another.
    • Adaptability; the ability of the system to change to new specifications or operating environments.
    • Installability; the effort required to install the software.
    • Conformance
    • Replaceability; how easy it is to exchange a given software component within a specified environment. (ISO-9126)
  • The ease with which the software product can be transferred from one hardware or software environment to another. (ISTQB Glossary 2015)
  • The diversity of the hardware and software platforms on which the information system can run, and how easy it is to transfer the system from one environment to another. (TMap Next)

Possibility of diversion

  • The ease with which (part of) the information system can continue elsewhere. (TMap Next)

Quality

  • Degree to which a system, component, or process meets specified requirements (IEEE 829-2008)
  • Ability of a product, service, system, component, or process to meet customer or user needs, expectations, or requirements (ISO/IEC/IEEE 24765:2010)
  • Degree to which the system satisfies the stated and implied needs of its various stakeholders, and thus provides value (ISO/IEC 25010:2011)
  • Degree to which a system, component, or process meets customer or user needs or expectations (IEEE 829-2008)
  • The degree to which a set of inherent characteristics fulfills requirements (A Guide to the Project Management Body of Knowledge (PMBOK(R) Guide) — Fifth Edition)

Quality in use (measure)

  • Extent to which a product used by specific users meets their needs to achieve specific goals with effectiveness, productivity, safety and satisfaction in specific contexts of use (ISO/IEC 25000:2014)
  • Degree to which a product or system can be used by specific users to meet their needs to achieve specific goals with effectiveness, efficiency, freedom from risk and satisfaction in specific contexts of use (ISO/IEC 25000:2014) (ISO/IEC 25010:2011) Note: This definition of quality in use is similar to the definition of usability in ISO 9241-11. Before the product is released, quality in use can be specified and measured in a test environment designed and used exclusively by the intended users for their goals and contexts of use, e.g. User Acceptance Testing Environment. See also: usability

Quality measure

  • Measure that is defined as a measurement function of two or more values of quality measure elements (ISO/IEC 25010:2011)
  • Derived measure that is defined as a measurement function of two or more values of quality measure elements (ISO/IEC 25021:2012) Syn: QM See also: software quality measure

Quality measure element (QME)

  • Measure defined in terms of a property and the measurement method for quantifying it, including optionally the transformation by a mathematical function (ISO/IEC 25000:2014) (ISO/IEC 25021:2012)
  • Measure defined in terms of an attribute and the measurement method for quantifying it, including optionally the transformation by a mathematical function (ISO/IEC 25010:2011) Note: The software quality characteristics or sub characteristics of the entity are derived afterwards by calculating a software quality measure.

Quality property

  • Measurable component of quality (ISO/IEC 25010:2011)

Recoverability

  • Degree to which, in the event of an interruption or a failure, a product or system can recover the data directly affected and re-establish the desired state of the system (ISO/IEC 25010:2011) See also: survivability
  • The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. (ISTQB Glossary 2015)
  • The ease and speed with which the information system can be restored after an interruption. (TMap Next)

Reliability

  • The ability of a system or component to perform its required functions under stated conditions for a specified period of time (ISO/IEC/IEEE 24765:2010)
  • Degree to which a system, product or component performs specified functions under specified conditions for a specified period of time (ISO/IEC 25010:2011) Note: Dependability characteristics include availability and its inherent or external influencing factors, such as availability, reliability (including fault tolerance and recoverability), security (including confidentiality and integrity), maintainability, durability, and maintenance support. Wear or aging does not occur in software. Limitations in reliability are due to faults in requirements, design, and implementation, or due to contextual changes. See also: availability, MTBF
  • Can you trust the product in many and difficult situations?
    • Stability: the product shouldn’t cause crashes, unhandled exceptions or script errors.
    • Robustness: the product handles foreseen and unforeseen errors gracefully.
    • Stress handling: how does the system cope when exceeding various limits?
    • Recoverability: it is possible to recover and continue using the product after a fatal error.
    • Data Integrity: all types of data remain intact throughout the product.
    • Safety: the product will not be part of damaging people or possessions.
    • Disaster Recovery: what if something really, really bad happens?
    • Trustworthiness: is the product’s behavior consistent, predictable, and trustworthy? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The extent to which the system fails. (McCall, 1977)
  • The extent to which the software performs as required, i.e. the absence of defects. (Boehm, 1978)
  • A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
    • Maturity; the frequency of failure of the software.
    • Fault tolerance; the ability of software to withstand (and recover) from component, or environmental, failure.
    • Recoverability; ability to bring back a failed system to full operation, including data and network connections. (ISO-9126)
  • Reliability
    • Accuracy – the correctness of output
    • Availability – mean time between failures
    • Recoverability – from partial system failures
    • Verifiability – (contractual) runtime reporting on system health
    • Survivability – continuous operations through disasters (earthquake, war, etc.) (FURPS+)
  • The characteristic of an information infrastructure to store and retrieve information in an accessible, secure, maintainable, and fast manner. (IAIDQ – Martin Eppler)
  • The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. (ISTQB Glossary 2015)

Replaceability

  • Degree to which a product can replace another specified software product for the same purpose in the same environment (ISO/IEC 25010:2011) Note: Replaceability of a new version of a software product is important to the user when upgrading. Replaceability will reduce lock-in risk, so that other software products can be used in place of the present one. See also: adaptability, installability
  • The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. (ISTQB Glossary 2015)

Resource utilization

  • Degree to which the amounts and types of resources used by a product or system, when performing its functions, meet requirements (ISO/IEC 25010:2011) Note: Human resources are included as part of efficiency. See also: efficiency

Reusability

  • Degree to which an asset can be used in more than one system, or in building other assets (IEEE 1517-2010)
  • In a reuse library, the characteristics of an asset that make it easy to use in different contexts, software systems, or in building different assets (IEEE 1517-2010) See also: generality
  • The ease of using existing software components in a different context. (McCall, 1977)
  • The degree to which parts of the information system, or the design, can be reused for the development of different applications. (TMap Next)

Risk

  • An uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives (A Guide to the Project Management Body of Knowledge (PMBOK(R) Guide) — Fifth Edition)
  • Combination of the probability of an abnormal event or failure and the consequence(s) of that event or failure to a system’s components, operators, users, or environment. (IEEE 1012-2012)
  • Combination of the probability of an event and its consequence (ISO/IEC 16085:2006)
  • Measure that combines both the likelihood that a system hazard will cause an accident and the severity of that accident. (IEEE 1228-1994 (R2002))
  • Function of the probability of occurrence of a given threat and the potential adverse consequences of that threat’s occurrence (ISO/IEC 25010:2011)
  • Combination of the probability of occurrence and the consequences of a given future undesirable event (IEEE 1012-2012) Note: See ISO/IEC Guide 51 for issues related to safety.
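Nearly all of the definitions above combine two factors: the probability of an event and its consequence. A tiny sketch of that combination as a classic risk matrix; the category names and 1-3 scales are assumptions, not taken from any of the cited standards.

```python
# Sketch of the "risk = likelihood x consequence" combination that the
# definitions above share. Scales and category names are assumed.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"minor": 1, "moderate": 2, "severe": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Combine probability and consequence into a single priority score."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

print(risk_score("likely", "severe"))   # 9 -> highest priority
print(risk_score("rare", "moderate"))   # 2 -> low priority
```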

Robustness

  • The degree to which the information system proceeds as usual even after an interruption. (TMap Next)

Satisfaction

  • Freedom from discomfort and positive attitudes towards the use of the product (ISO/IEC 25062:2006)
  • User’s subjective response when using the product (ISO/IEC 26513:2009)
  • Degree to which user needs are satisfied when a product or system is used in a specified context of use (ISO/IEC 25010:2011)

Scalability

  • The capability of the software product to be upgraded to accommodate increased loads. (ISTQB Glossary 2015)

Security

  • Protection of information and data so that unauthorized persons or systems cannot read or modify them and authorized persons or systems are not denied access to them (ISO/IEC 12207:2008)
  • The protection of computer hardware or software from accidental or malicious access, use, modification, destruction, or disclosure. Security also pertains to personnel, data, communications, and the physical protection of computer installations. (IEEE 1012-2012)
  • All aspects related to defining, achieving, and maintaining confidentiality, integrity, availability, non-repudiation, accountability, authenticity, and reliability of a system (ISO/IEC 15288:2008)
  • Degree to which a product or system protects information and data so that persons or other products or systems have the degree of data access appropriate to their types and levels of authorization (ISO/IEC 25010:2011) Note: Security also pertains to personnel, data, communications, and the physical protection of computer installations.
  • Does the product protect against unwanted usage?
    • Authentication: the product’s identifications of the users.
    • Authorization: the product’s handling of what an authenticated user can see and do.
    • Privacy: ability to not disclose data that is protected to unauthorized users.
    • Security holes: product should not invite to social engineering vulnerabilities.
    • Secrecy: the product should under no circumstances disclose information about the underlying systems.
    • Invulnerability: ability to withstand penetration attempts.
    • Virus-free: product will not transport virus, or appear as one.
    • Piracy Resistance: no possibility to illegally copy and distribute the software or code.
    • Compliance: security standards the product adheres to. (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • Security: Confidentiality Preservation, Access Control, Non-repudiation (Integrity Verification, Authenticity Verification – PKI), Identity Verification (logon paradigm), Availability of Service, Auditing Evidence. (FURPS+)
  • Testing to determine the security of the software product. (ISTQB Glossary 2015)
  • The certainty that data can be viewed and changed only by those who are authorized to do so. (TMap Next)

Software quality

  • Capability of a software product to satisfy stated and implied needs when used under specified conditions (ISO/IEC 25000:2014)
  • Degree to which a software product satisfies stated and implied needs when used under specified conditions (ISO/IEC 25010:2011)
  • Degree to which a software product meets established requirements (IEEE 730-2014) Note: Quality depends upon the degree to which the established requirements accurately represent stakeholder needs, wants, and expectations. This definition differs from the ISO 9000:2000 quality definition mainly because the software quality definition refers to the satisfaction of stated and implied needs, while the ISO 9000 quality definition refers to the satisfaction of requirements. In SQuaRE standards software quality has the same meaning as software product quality.

Software quality requirement

  • Requirement that a software quality attribute be present in software (ISO/IEC 25010:2011)

Stability

  • The capability of the software product to avoid unexpected effects from modifications in the software. (ISTQB Glossary 2015)

Suitability

  • The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. (ISTQB Glossary 2015)

Suitability of infrastructure

  • The suitability of hardware, network, systems software and DBMS for the application concerned and the degree to which the elements of this infrastructure interrelate. (TMap Next)

Supportability

  • Supportability
    • Maintainability (i.e. “build-time” issues)
      • Testability – at unit, integration, and system levels
      • Buildability – fast build times, versioning robustness
      • Portability – minimal vendor or platform dependency
      • Reusability – of components
      • Brandability – OEM and partner support
      • Internationalization – prep for localization
    • Serviceability (i.e. “run-time” issues)
      • Continuity – administrative downtime constraints
      • Configurability/Modifiability – of fielded product
      • Installability, Updateability – ensuring application integrity
      • Deployability – mode of distributing updates
      • Restorability – from archives
      • Logging – of event or debug data (FURPS+)
  • Can customers’ usage and problems be supported?
    • Identifiers: is it easy to identify parts of the product and their versions, or specific errors?
    • Diagnostics: is it possible to find out details regarding customer situations?
    • Troubleshootable: is it easy to pinpoint errors (e.g. log files) and get help?
    • Debugging: can you observe the internal states of the software when needed?
    • Versatility: ability to use the product in more ways than it was originally designed for. (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Survivability

  • Degree to which a product or system continues to fulfill its mission by providing essential services in a timely manner in spite of the presence of attacks (ISO/IEC 25010:2011) See also: recoverability

Testability

  • Extent to which an objective and feasible test can be designed to determine whether a requirement is met (ISO/IEC 12207:2008)
  • Degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met (IEEE 1233-1998 (R2002))
  • Degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met (ISO/IEC/IEEE 24765:2010)
  • Degree of effectiveness and efficiency with which test criteria can be established for a system, product, or component and tests can be performed to determine whether those criteria have been met (ISO/IEC 25010:2011)
  • Is it easy to check and test the product?
    • Traceability: the product logs actions at appropriate levels and in usable format.
    • Controllability: ability to independently set states, objects or variables.
    • Observability: ability to observe things that should be tested.
    • Monitorability: can the product give hints on what/how it is doing?
    • Isolateability: ability to test a part by itself.
    • Stability: changes to the software are controlled, and not too frequent.
    • Automation: are there public or hidden programmatic interfaces that can be used?
    • Information: ability for testers to learn what needs to be learned…
    • Auditability: can the product and its creation be validated?
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The ability to Validate the software requirements. (McCall, 1977)
  • Ease of validation, that the software meets the requirements. (Boehm, 1978)
  • The ease with which the functionality and performance level of the system (after each modification) can be tested and how fast this can be done. (TMap Next)
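Two of the testability attributes named above, controllability (the test can set states independently) and observability (the test can observe what should be tested), can be sketched with a simple design choice: inject the clock instead of hard-coding it. All names in this example are illustrative.

```python
# Minimal sketch of controllability and observability as testability
# attributes: an injectable clock lets a test control time, and an
# explicit state query lets it observe the outcome. Names are assumed.

import datetime

class SessionTimeout:
    def __init__(self, limit_seconds: int, clock=datetime.datetime.now):
        self.limit = limit_seconds
        self.clock = clock          # injectable clock -> controllability
        self.started = self.clock()

    def expired(self) -> bool:      # explicit state query -> observability
        return (self.clock() - self.started).total_seconds() > self.limit

# In a test, inject a fake clock instead of waiting in real time:
times = iter([datetime.datetime(2024, 1, 1, 12, 0, 0),
              datetime.datetime(2024, 1, 1, 12, 10, 0)])
session = SessionTimeout(limit_seconds=300, clock=lambda: next(times))
print(session.expired())  # True: 600 s elapsed on the fake clock
```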

Time behavior

  • Degree to which the response and processing times and throughput rates of a product or system, when performing its functions, meet requirements (ISO/IEC 25010:2011)
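In a test, "meet requirements" typically means measuring elapsed time for an operation and comparing it against a stated budget. A minimal sketch; the operation and the 50 ms budget are assumed examples, not values from the standard.

```python
# Tiny sketch of checking time behaviour: measure the response time of
# an operation and compare it against a requirement. The operation and
# the 50 ms budget are assumptions for illustration.

import time

def respond():
    return sorted(range(10_000))    # stand-in for the operation under test

budget_seconds = 0.05
start = time.perf_counter()
respond()
elapsed = time.perf_counter() - start
print(f"elapsed {elapsed:.4f}s, within budget: {elapsed <= budget_seconds}")
```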

Timeliness

  • A characteristic of information quality measuring the degree to which data is available when knowledge workers or processes require it. (IAIDQ – Larry P. English)
  • Coming early or at the right, appropriate or adapted to the times or the occasion. (IAIDQ – Martin Eppler)

Traceability

  • The ability to identify related items in documentation and software, such as requirements with associated tests. (ISTQB Glossary 2015)

Trust

  • Degree to which a user or other stakeholder has confidence that a product or system will behave as intended (ISO/IEC 25010:2011)

Understandability

  • The extent to which the software is easily comprehended with regard to purpose and structure. (Boehm, 1978)
  • The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. (ISTQB Glossary 2015)

Usability

  • Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use (ISO/IEC 25064:2013)
  • Degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use (ISO/IEC 25010:2011) Note: Usability can either be specified or measured as a product quality characteristic in terms of its sub characteristics, or specified or measured directly by measures that are a subset of quality in use. See also: reusability
  • Usability. Is the product easy to use?
    • Affordance: product invites to discover possibilities of the product.
    • Intuitiveness: it is easy to understand and explain what the product can do.
    • Minimalism: there is nothing redundant about the product’s content or appearance.
    • Learnability: it is fast and easy to learn how to use the product.
    • Memorability: once you have learnt how to do something you don’t forget it.
    • Discoverability: the product’s information and capabilities can be discovered by exploration of the user interface.
    • Operability: an experienced user can perform common actions very fast.
    • Interactivity: the product has easy-to-understand states and possibilities of interacting with the application (via GUI or API).
    • Control: the user should feel in control over the proceedings of the software.
    • Clarity: is everything stated explicitly and in detail, with a language that can be understood, leaving no room for doubt?
    • Errors: there are informative error messages, difficult to make mistakes and easy to repair after making them.
    • Consistency: behavior is the same throughout the product, and there is one look & feel.
    • Tailorability: default settings and behavior can be specified for flexibility.
    • Accessibility: the product is possible to use for as many people as possible, and meets applicable accessibility standards.
    • Documentation: there is a Help that helps, and matches the functionality.
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • Ease of use. (McCall, 1977) (Boehm, 1978)
  • Usability
    • Ergonomics – human factors engineering
    • Look and Feel – along with branding instancing
    • Accessibility – special needs accommodation
    • Localization – adding language resources
    • Documentation (FURPS+)
  • The characteristic of an information environment to be user-friendly in all its aspects (easy to learn, use, and remember). (IAIDQ – Martin Eppler)
  • A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
    • Understandability; the ease with which the system’s functions can be understood
    • Learnability; learning effort for different users, i.e. novice, expert, casual etc.
    • Operability; ability of the software to be easily operated by a given user in a given environment.
    • Attractiveness; (ISO-9126)
  • The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. (ISTQB Glossary 2015)

Usefulness

  • Degree to which a user is satisfied with perceived achievement of pragmatic goals, including the results of use and the consequences of use (ISO/IEC 25010:2011)
  • The quality of having utility and especially practical worth or applicability. (IAIDQ – Martin Eppler)

User

  • Individual or organization that uses the system or software to perform a specific function (ISO/IEC 25000:2014)
  • Person who interacts with a system, product or service (ISO/IEC 25064:2013)
  • Individual or organization who uses a software-intensive system in daily work activities or recreational pursuits (IEEE 1362-1998 (R2007))
  • The person (or persons) who operates or interacts directly with a software intensive system
  • Individual or group that benefits from a system during its utilization (ISO/IEC 15288:2008) (ISO/IEC 15939:2007)
  • Any person or thing that communicates or interacts with the software at any time (ISO/IEC 19761:2011) (ISO/IEC 20926:2009) (ISO/IEC 14143-1:2007)
  • Person (or instance) who uses the functions of a CBSS via a terminal (or an equivalent machine-user-interface) by submitting tasks and receiving the computed results (ISO/IEC 14756:1999)
  • Person who derives engineering value through interaction with a CASE tool (IEEE 1175.2-2006)
  • Individual or group that interacts with a system or benefits from a system during its utilization (ISO/IEC 25010:2011)
  • Individual or group that benefits from a ready to use software product during its utilization (ISO/IEC 25051:2014)
  • Person who performs one or more tasks with software; a member of a specific audience (ISO/IEC 26514:2008) Note: The user may perform other roles such as acquirer or maintainer. The role of user and the role of operator may be vested, simultaneously or sequentially, in the same individual or organization. [ISO 25063:2014] A person who uses the output or service provided by a system. For example, a bank customer who visits a branch, receives a paper statement, or carries out telephone banking using a call centre can be considered a user. See also: developer, end user, functional user, indirect user, operator, secondary user

User error protection

  • Degree to which a system protects users against making errors (ISO/IEC 25010:2011)
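In code, protecting users against errors usually means rejecting malformed or out-of-range input rather than silently accepting it. A hedged sketch; the field name and the limits are assumptions for illustration.

```python
# Sketch of user error protection: constrain and check input so a slip
# by the user cannot corrupt state. Field name and limits are assumed.

def set_transfer_amount(raw: str) -> float:
    """Reject malformed or out-of-range amounts instead of accepting them."""
    try:
        amount = float(raw)
    except ValueError:
        raise ValueError(f"not a number: {raw!r}")
    if not (0 < amount <= 10_000):
        raise ValueError("amount must be between 0 and 10,000")
    return amount

print(set_transfer_amount("250.00"))  # 250.0
```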

User-friendliness

  • The ease with which end-users use the system. (TMap Next)

User interface aesthetics

  • Degree to which a user interface enables pleasing and satisfying interaction for the user (ISO/IEC 25010:2011) Note: refers to properties of the product or system that increase the pleasure and satisfaction of the user, such as the use of color and the nature of the graphical design

Utility

  • The usefulness of information to its intended consumers, including the public. (OMB 515) (IAIDQ – Larry P. English)

Validity

  • A characteristic of information quality measuring the degree to which the data conforms to defined business rules. Validity is not synonymous with accuracy, which means the values are the correct values. A value may be a valid value, but still be incorrect. For example, a customer date of first service can be a valid date (within the correct range) and yet not be an accurate date. (IAIDQ – Larry P. English)
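English's distinction between valid and accurate is easy to show in code: a value can pass every business rule and still be wrong. The business rule below (first-service date between company founding and today) is an assumed example.

```python
# Illustration of "valid is not accurate": a date can satisfy every
# defined business rule yet still be the wrong date. The rule and the
# dates below are assumptions for illustration.

import datetime

FOUNDED = datetime.date(1995, 3, 1)

def is_valid_first_service(d: datetime.date) -> bool:
    """Business rule: service started after founding and not in the future."""
    return FOUNDED <= d <= datetime.date.today()

recorded = datetime.date(2001, 6, 15)   # what the database says
actual = datetime.date(2001, 6, 5)      # when service really began

print(is_valid_first_service(recorded))  # True: valid per the rules...
print(recorded == actual)                # False: ...but not accurate
```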

Original copyright messages, applicable to all definitions with an ISO/IEC 25010:yyyy reference:

IEEE Computer Society, Software and Systems Engineering Vocabulary

This definition is copyrighted, 2012, by the IEEE. The reader is granted permission to copy the definition as long as the statement “Copyright, 2012, IEEE. Used by permission.” remains with the definition. All other rights are reserved. Copyright 2012 ISO/IEC.

In accordance with ISO/IEC JTC 1/SC 7 N2882 and N2930, this definition is made publicly available. Permission is granted to copy the definition providing that its source is cited.

Material reprinted with permission from Project Management Institute, A Guide to the Project Management Body of Knowledge (PMBOK) Guide – Fourth Edition, 2008. Copyright and all rights reserved. PMI is a service and trademark of the Project Management Institute, Inc. which is registered in the United States and other nations. PMBOK is a trademark of the Project Management Institute, Inc. which is registered in the United States and other nations.

It’s all about bugs, right? So what are they?

The previous blog post contained a number of definitions of software testing that present finding bugs as the purpose of testing. That is not so strange, since many people see finding, or rather calling out, bugs as the most visible part of software testing. Whether you agree with that or not, one thing seems certain to me: bugs are an important subject in software testing. This post, then, is all about bugs, or rather about the many definitions of (software) bugs.

  • “This thing gives out and then that. ‘Bug’—as such little faults and difficulties are called—show themselves, and months of anxious watching, study, and labor are requisite before commercial success—or failure—is certainly reached.” Edison letter to Pushkas [Edison, 1878]
  • “A software error is present when the program does not do what its end user reasonably expects it to do” Myers, Software Reliability: Principles and Practices, p. 6 [1976]
  • “There can never be an absolute definition for bugs, nor an absolute determination of their existence. The extent to which a program has bugs is measured by the extent to which it fails to be useful. This is a fundamentally human measure” (Beizer, p. 12), Software System Testing and Quality Assurance [1984]
  • Fault: “An incorrect step, process, or data definition in a computer program. Note: This definition is used primarily by the fault tolerance discipline. In common usage, the terms ‘error’ and ‘bug’ are used to express this meaning.”
    Failure: “The inability of a system or component to perform its required functions within specified performance requirements. Note: The fault tolerance discipline distinguishes between a human action (a mistake), its manifestation (a hardware or software fault), the result of the fault (a failure), and the amount by which the result is incorrect.” IEEE Standard 610.12-1990 [1990]
  • “A mismatch between the program and its specification is an error in the program if and only if the specification exists and is correct.” Kaner, Falk and Nguyen, Testing Computer Software [1993]
  • “Defect. Any state of unfitness for use, or nonconformance to specification.
    Error. (1) The difference between a computed, observed, or measured value and the true, specified, or theoretically correct value or condition. (2) An incorrect step, process, or data definition. Often called a bug. (3) An incorrect result. (4) A human action that produces an incorrect result. Note: One distinction assigns definition (1) to error, definition (2) to fault, definition (3) to failure, and definition (4) to mistake. Fault. An incorrect step, process, or data definition in a computer program. See also: error. Failure. Discrepancy between the external results of a program’s operation and the software product requirements. A software failure is evidence of the existence of a fault in the software.” Software Error Analysis, NIST [1993]
  • “A software defect is a “material breach” of the contract for sale or license of the software if it is so serious that the customer can justifiably demand a fix or can cancel the contract, return the software, and demand a refund.” Kaner, Software QA Magazine  [1996]
  • “The difference between the actual test result and the expected result can indicate a software product defect” Koomen and Pol, Test Process Improvement [1999]
  • “An observed difference between an expectation (prediction) and its actual outcome. A finding can be made on different objects, such as the test base (upon intake), while testing the system, on the test infrastructure, etc.” (translation) Pol, Teunissen, van Veenendaal, Testen volgens TMap 2e druk [2000]
  • “An error is clearly present if a program does not do what it is supposed to do, but errors are also present if a program does what it is not supposed to do.” Myers, The art of software testing (2nd edition) [2004]
  • “A generic term that can refer to either a fault (cause) or a failure (effect). (IEEE 982.1-2005 IEEE Standard Dictionary of Measures of the Software Aspects of Dependability, 2.1) Example: (1) omissions and imperfections found during early life cycle phases and (2) faults contained in software sufficiently mature for test or operation” [2005]
  • “A bug, also referred to as a software bug, is an error or flaw in a computer program that may prevent it from working correctly or produce an incorrect or unintended result.” The Linux Information Project [2005]
  • “A defect (fault) is the result of an error residing in the code or document.” Broekman, van der Aalst, Vroon and Koomen, TMap Next for result driven testing [2006]
  • “A bug is anything about the program that threatens its value.” Bach and Bolton, Rapid Software Testing [2007]
  • Anything that causes an unnecessary or unreasonable reduction of the quality of a software product. BBST Bug Advocacy [2008]
  • “Defects are observations that the test results deviate from expectations” Bouman, SmarTEST [2008]
  • “A finding is an observed difference between an expectation or prediction and the actual outcome” Van der Aalst, Baarda, Roodenrijs, Vink and Visser, TMap NEXT Business Driven Test Management [2008]
  • “something that went wrong,” Alan Page. Ken Johnston. Bj Rollison, How we test software at Microsoft [2009]
  • “Situation that may cause errors to occur in an object” (ISO/IEC 10746-2:2009 Information technology — Open Distributed Processing — Reference Model: Foundations, 13.6.3) [2009]
  • Imperfection or deficiency in a work product where that work product does not meet its requirements or specifications and needs to be either repaired or replaced (IEEE 1044-2009 IEEE Standard Classification for Software Anomalies, 2) [2009]
  • An attribute of a software product that reduces its value to a favored stakeholder or increases its value to a disfavored stakeholder without a sufficiently large countervailing benefit. BBST Foundations [2010]
  • A software error is the failure to comply with an assured characteristic (translation)
    Ein Softwarefehler ist die Nichterfuellung einer zugesicherten Eigenschaft (eigenschaftsbezogene Fehlerdefinition). Basiswissen Software Testen [Spillner 2010]
  • A product has a defect if it does not provide the safety which, taking into account all circumstances, in particular its presentation, the use that could reasonably be expected, and the time at which it was put into circulation, can legitimately be expected. (translation) Ein Produkt hat einen Fehler, wenn es nicht die Sicherheit bietet, die unter Berücksichtigung aller Umstände, insbesondere seiner Darbietung, des Gebrauchs, mit dem billigerweise gerechnet werden kann, des Zeitpunktes, in dem es in den Verkehr gebracht wurde, berechtigterweise erwartet werden kann [BGB01, ProdukthaftungsG. par. 3] (risikobezogene Fehlerdefinition). Basiswissen Software Testen [Spillner 2010]
  • Defect in a hardware device or component (ISO/IEC/IEEE 24765:2010 Systems and software engineering–Vocabulary) [2010]
  • Manifestation of an error in software (ISO/IEC/IEEE 24765:2010 Systems and software engineering–Vocabulary) [2010]
  • A Software Defect / Bug is a condition in a software product which does not meet a software requirement (as stated in the requirement specifications) or end-user expectations (which may not be specified but are reasonable). Software Testing Fundamentals [2010]
  • Incorrect step, process, or data definition in a computer program (ISO/IEC 25040:2011 Systems and software engineering–Systems and software Quality Requirements and Evaluation (SQuaRE)–Evaluation process, 4.27) [2011]
  • Defect in a system or a representation of a system that if executed/activated could potentially result in an error (ISO/IEC 15026-1:2013 Systems and software engineering–Systems and software assurance–Part 1: Concepts and vocabulary, 3.4.5) [2013]
  • Software bugs can originate from every step in translation from natural language, when:
    • the Syntax-rules were not followed
    • the translation falsified the Semantics, or
    • if the Pragmatics are unclear
      Software-Fehlertoleranz und Zuverlässigkeit [2013]
  • A software bug is an error, flaw, failure, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. Wikipedia [2015]
  • “the people who wrote the program told the computer to do something it couldn’t do”. Practical Programming [Gries, Campbell, Montojo 2015]
  • An imperfection or deficiency in a project component where that component does not meet its requirements or specifications and needs to be either repaired or replaced (A Guide to the Project Management Body of Knowledge (PMBOK(R) Guide) — Fifth Edition)
  • A software bug is a problem causing a program to crash or produce invalid output. Techopedia

What is testing?

Over time the definitions of testing, and the ideas behind them, have changed.
This post brings together an expanding collection of testing definitions in chronological order and thus provides an overview of those changing views. Views that, given the changes in computer and software development, one would expect to be outdated, yet are often still in use today.

  • “How would we know a program exhibits intelligence?”
    Alan Turing [1950]
  • “Make sure the program runs” and “Make sure the program solves the problem”
    Dan McCracken; Digital Computer Programming [1957]
  • “Errors plague us, but hidden errors make our job impossible. One of our recurring problems lies not in finding errors but in not finding errors. We must be alert for them; we can never be complacent about our results.”
    Herbert D. Leeds and Gerald M. Weinberg, [1961]
  • “Testing is the process of executing a program with the intent of finding errors.”
    Glenford J. Myers; The art of software testing [1979]
  • “Test means execution, or set of executions, of the program for the purpose of measuring its performance. That a program was executed with no evidence of error is no proof that it contains no errors; program errors are sensitive to the specifics of the data being processed.” “The goal of testing ought to be the uncovering of defects within the program.”
    Robert Dunn and Richard Ullman, Quality Assurance for Computer Software, [1982]
  • “Testing is any activity aimed at evaluating an attribute of a program or system. Testing is the measurement of software quality.”
    Bill Hetzel, The Complete Guide to Software Testing [1983]
  • “Testing is the act of executing tests. Tests are designed and then executed to demonstrate correspondence between an element and its specification. There can be no testing without specifications of intentions.”
    Boris Beizer, Software System Testing and Quality Assurance [1984]
  • “The Testing stage involves full-scale use of the program in a live environment. It is here that the software and hardware are shaken down, anomalies of behavior are eliminated, and the documentation is updated to reflect final behavior. The testing must be as thorough as possible. The use of adversary roles at this stage is an extremely valuable tool because it ensures that the system works in as many circumstances as possible.”
    Henry Ledgard, Software Engineering Concepts: Volume 1 [1987]
  • “The purpose of testing a program is to find problems in it”
    Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software [1988]
  • “The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.”
    The 1990 IEEE Standard Glossary of Software Engineering Terminology [1990]
  • “Testing is the execution of software to determine where it functions incorrectly.”
    Robert L. Glass, Building Quality Software [1992]
  • “Testing helps detect defects that have escaped detection in the preceding phases of development. Here again, the key byproduct of testing is useful information about the number of and types of defects found in testing. Armed with this data, teams can begin to identify the root causes of these defects and eliminate them from the earlier phases of the software life cycle.”
    Lowell Jay Arthur, Improving Software Quality: An Insider’s Guide to TQM [1993]
  • “[Unit] Testing is a hard activity for most developers to swallow for several reasons: Testing’s goal runs counter to the goals of other development activities. The goal is to find errors. A successful test is one that breaks the software. … Testing can never prove the absence of errors — only their presence. … Testing by itself does not improve software quality. … Testing requires you to assume that you’ll find errors in your code. …”
    Steve McConnell, Code Complete: A Practical Handbook of Software Construction [1993]
  • “Testing is an unnecessary and unproductive activity if its sole purpose is to validate that the specifications were implemented as written. … testing as performed in most organizations is a process designed to compensate for an ineffective software development process. It is unrealistic to develop software and not test it. The perfect development process does not exist …”
    William Perry, Effective Methods for Software Testing [1995]
  • “A tester is given a false statement (‘the system works’) and has the job of selecting, from an infinite number of possibilities, an input that contradicts the statement. … [You want to find] the right counterexample with a minimum of wasted effort.”
    Brian Marick, The Craft of Software Testing: Subsystem Testing [1995]
  • “Testing is obviously concerned with errors, faults, failures, and incidents. A test is the act of exercising software with test cases. There are two distinct goals of a test: either to find failures, or to demonstrate correct execution.”
    Paul C. Jorgensen, Software Testing: A Craftsman’s Approach [1995]
  • “The penultimate objective of testing is to gather management information.”
    Boris Beizer, Black Box Software Testing: Techniques for Functional Testing of Software and Systems [1995]
  • “The most common quality-assurance practice is undoubtedly execution testing, finding errors by executing a program and seeing what it does.”
    Steve McConnell, Rapid Development: Taming Wild Software Schedules [1996]
  • “Software testing is the action of carrying out one or more tests, where a test is a technical operation that determines one or more characteristics of a given software element or system, according to a specified procedure. The means of software testing is the hardware and/or software and the procedures for its use, including the executable test suite used to carry out the testing.”
    NIST [1997]
  • “Software must be tested to have confidence that it will work as it should in its intended environment. Software testing needs to be effective at finding any defects which are there, but it should also be efficient, performing the tests as quickly and cheaply as possible.”
    Mark Fewster and Dorothy Graham, Software Test Automation: Effective use of test execution tools [1999]
  • “Testing is the process by which we explore and understand the status of the benefits and the risk associated with release of a software system.”
    James Bach, James Bach on Risk-Based Testing, STQE Magazine [1999]
  • “Testing is a process of planning, preparing, executing and evaluating, with the purpose to determine the attributes of an information system and to show the differences between its actual and demanded state”
    Pol, Teunissen, van Veenendaal; Testen volgens TMap, 2nd edition [2000]
  • “Software testing is a difficult endeavor that requires education, skill, practice, and experience. Building good testing strategies requires merging many different disciplines and techniques.”
    James A. Whittaker, IEEE Software (Vol 17, No 1) [2000]
  • “Software testing is the process of applying metrics to determine product quality. Software testing is the dynamic execution of software and the comparison of the results of that execution against a set of pre-determined criteria.”
    NIST The Economic Impacts of Inadequate Infrastructure for Software Testing [2002]
  • “Testing is done to find information. Critical decisions about the project or the product are made on the basis of that information.”
    Cem Kaner, James Bach, Bret Pettichord,
    Lessons Learned In Software Testing: A Context-Driven Approach [2002]
  • “Testing is a concurrent lifecycle process of engineering, using and maintaining testware in order to measure and improve the quality of the software being tested.”
    Rick Craig and Stefan Jaskiel, Systematic Software Testing [2002]
  • “A software tester’s job is to test software, find bugs, and report them so that they can be fixed. An effective software tester focuses on the software product itself and gathers empirical information regarding what it does and doesn’t do. This is a big job all by itself. The challenge is to provide accurate, comprehensive, and timely information, so managers can make informed decisions.”
    Bret Pettichord, Don’t Become the Quality Police, StickyMinds.com [2002]
  • “Software testing is a process of analyzing or operating software for the purpose of finding bugs.”
    Robert Culbertson, Chris Brown, and Gary Cobb, Rapid Testing [2002]
  • “How do you test your software? Write an automated test.
    Test is a verb meaning ‘to evaluate’. No software engineers release even the tiniest change without testing, except the very confident and the very sloppy. … Although you may test your changes, testing changes is not the same as having tests. Test is also a noun, ‘a procedure leading to acceptance or rejection’. Why does test the noun, a procedure that runs automatically, feel different from test the verb, such as poking a few buttons and looking at answers on the screen?”
    Kent Beck, Test-Driven Development By Example [2003]
  • “My conclusion is that testing is an intellectual endeavor and not part of arts and crafts. So testing is not something anyone masters. Once you stop learning, your knowledge becomes obsolete very fast. Thus, to realize your testing potential, you must commit to continuous learning.”
    James A. Whittaker, How to Break Software: A Practical Guide to Testing [2003]
  • “All software has bugs. It’s a fact of life. So the goal of finding and removing all defects in a software product is a losing proposition and a dangerous objective for a test team, because such a goal can divert the test team’s attention from what is really important. … [The goal of a test team] is to ensure that among the defects found are all of those that will disrupt real production environments; in other words, to find the defects that matter.”
    Scott Loveland, Geoffrey Miller, Richard Prewitt, and Michael Shannon,
    Software Testing Techniques: Finding the Defects that Matter [2004]
  • “Software Testing [is] The act of confirming that software design specifications have been effectively fulfilled and attempting to find software faults during execution of the software.”
    Thomas H. Faris, Safe And Sound Software: Creating an Efficient and Effective Quality System [2006]
  • “Checking is the inspection of intermediate products and development processes, or verification. These are all activities directed at answering the question: is the building being done right?
    Testing is the inspection of end products, or validation. Is the product valid against its demands? Is the right thing being built?” (translation)
    Koomen, Pol; Test Process Improvement [2006]
  • “Software testing is a process where we check a behavior we observe against a specified behavior the business expects. … in software testing, the tester should know what behavior to expect as defined by the business requirements. We agree on this defined, expected behavior and any user can observe this behavior.”
    Andreas Golze, Charlie Li, Shel Prince, Optimize Quality for Business Outcomes: A Practical Approach to Software Testing [2006]
  • “Testing is a process of gathering information by making observations and comparing them to expectations.” “A test is an experiment designed to reveal information, or answer a specific question, about the software or system.”
    Dale Emery and Elisabeth Hendrickson [2007]
  • “software testing as a process of technical investigation, an empirical study of the product under test with the goal of exposing quality-relating information about it. (Validation and verification both fit within this definition.)”
    LAWST [2007]
  • “[Testing is] An empirical, technical investigation conducted to provide stakeholders with information about the quality of the product under test. A process focused on empirically questioning a product or service about its quality.”
    Cem Kaner [2007]
  • “An empirical, technical investigation conducted to provide stakeholders with information about the quality of the product OR SERVICE under test.”
    Scott Barber [2007]
  • “Testing is a strategic tool for risk control”
    Egbert Bouman, SmarTEST [2008]
  • “The test process has to produce information about errors found and the degree to which the quality has been improved by solving them. In turn, the tests have to indicate whether the modified system responds to the reason that led to the change, and if it contributes to the business requirements.”
    de Grood; TestGoal [2008]
  • “Testing is a process that provides insight into, and advice about, quality and its related risks”
    Van der Aalst, Baarda, Roodenrijs, Vink and Visser; TMap NEXT Business Driven Test Management [2008]
  • “The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.”
    Jeanne Hofmans and Erwin Pasmans, Quality Level Management [2012]
  • “interact with the software or system, observe its actual behavior, and compare that to your expectations”
    Elisabeth Hendrickson; Explore It! [2013]
  • “Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.”
    James Bach, Michael Bolton [2013] http://www.satisfice.com/blog/archives/856
  • “Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. The information is compared against specifications, business requirements, competitive products, past versions of this product, user expectations, industry standards, applicable laws and other criteria.”
    Undisclosed company policy [2015]
  • Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.
    Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices,” we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.
    Cem Kaner, James Bach
  • “Software testing is a process of executing a program or application with the intent of finding the software bugs” and “the process of validating and verifying that a software program or application or product: Meets the business and technical requirements that guided its design and development; Works as expected; Can be implemented with the same characteristics”
    ISTQB Foundation Exam
  • “Testing is finding out how well something works”
    Origin unknown
  • “Testing is the process of evaluating a system or its component(s) with the intent to find out whether it satisfies the specified requirements or not”
    Origin unknown
  • “Software testing is the process of evaluating a software item to detect differences between given input and expected output”
    Origin unknown
  • “Software testing is a process that should be done during the development process”
    Origin unknown

Reference test

Regression Testing

A while ago a new developer approached me to discuss some changes he wanted to deploy to the test environment. He had refactored parts of the application workflow management and redesigned a number of reports. We talked about it and he stressed that these were major changes and a lot of work to test. As we continued discussing I commented: “That’s okay, it will take some time, but we have a regression test set just for this that we can use, and that will help us…”

He looked at me with total bewilderment and uttered: “Didn’t you understand anything of what I said? These are major changes. No regression test will cover that! They will all fail!”

I looked at him and realized…

He was right

His understanding is based on a common perception of regression testing: running the same step-by-step tests as before and checking that they produce the same step-by-step results as before. If that were the case he would obviously be right. There was no way I could execute the tests the same way as before, simply because the workflow method and GUI functionality had changed dramatically. Nor could the results be exactly the same, given the design changes he had made.

But I was right too

His reaction made me realize that over the years my understanding of a regression test set has changed dramatically. I don’t see test cases as step-by-step descriptions. I see a test case as a doable, executable follow-up of a test idea. A test case to me consists of the following elements:

  • A test idea; what do you want to learn, verify/validate or investigate?
  • A feature; which part of the test object do you investigate?
  • Test activities; a way to exercise your test idea and see how the software behaves
  • Trigger data; data that should trigger the behavior you want to see
  • An oracle; something that tells you how to evaluate the behavior’s information/data

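The five elements above can be sketched as a small data structure. This is only an illustration: the class, field names and the toy workflow below are hypothetical, not part of any real test framework.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestCase:
    """A test case as a doable, executable follow-up of a test idea."""
    test_idea: str                         # what do we want to learn, verify or investigate?
    feature: str                           # which part of the test object do we investigate?
    activities: Callable[[Any, Any], Any]  # how we exercise the test idea against the system
    trigger_data: Any                      # data that should trigger the behavior we want to see
    oracle: Callable[[Any], bool]          # how we evaluate the observed behavior

    def run(self, system) -> bool:
        observed = self.activities(system, self.trigger_data)  # exercise the system...
        return self.oracle(observed)                           # ...and judge what we observed

# Hypothetical example: orders above 500 should be routed to manual approval.
case = TestCase(
    test_idea="Orders above 500 require manual approval",
    feature="order workflow routing",
    activities=lambda system, amount: system(amount),
    trigger_data=550,
    oracle=lambda route: route == "manual-approval",
)

# A toy stand-in for the real workflow engine under test.
def toy_workflow(amount):
    return "manual-approval" if amount > 500 else "auto-approve"

print(case.run(toy_workflow))  # → True
```

Note how only the `activities` and `trigger_data` need adjusting when a workflow changes; the test idea, feature and oracle can survive a refactoring intact.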
With that understanding I looked at the changes and concluded that the existing test cases were still useful. Even though the workflow management had been refactored, I could still apply the test ideas we had used before. In essence the same features were affected, and my oracles should still be valid. Adjusting the test activities and trigger data, in this case to match the workflow changes, is something I tend to do anyway. With these adjustments I try to make re-running test cases more effective, as variation adds new chances of finding new or different bugs.

With the reports the situation was slightly different. The test ideas, features, test activities and trigger data could stay more or less the same, but my oracles, the current template reports, had changed. The reports now had a new layout, but their content had changed only minimally.

Reference test

To avoid future confusion I came up with a new name for my tests which, as far as I am aware, has not been used in software testing before. I will call these tests Reference Tests.

A Reference Test, then, is a test where under similar circumstances similar input results in a similar outcome, evaluated against an identical oracle.

I am aware that ‘similar’ and ‘identical’ in this definition are relative concepts, and I have chosen them on purpose. They express that each time such a test is used, the tester needs to be aware of which information the test tries to show or uncover, how it does that, and for what purpose. This discourages mindless repetition and pass/fail blindness. It encourages thoughtful selection and execution of tests and deliberate evaluation of test results.
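The report example earlier illustrates what ‘similar outcome against an identical oracle’ can mean in practice: the layout changed but the content did not, so the oracle should compare content rather than exact output. Below is a minimal sketch of that idea; the function names and the sample reports are hypothetical.

```python
def extract_content(report: str) -> list[str]:
    """Reduce a report to the facts it states, ignoring layout details
    such as line ordering, blank lines and surrounding whitespace."""
    lines = (line.strip() for line in report.splitlines())
    return sorted(line for line in lines if line)

def reference_test(old_report: str, new_report: str) -> bool:
    """Similar input under similar circumstances: the outcome counts as
    a pass when both reports state the same facts, i.e. the identical
    oracle (the content comparison) is satisfied."""
    return extract_content(old_report) == extract_content(new_report)

old = "Total: 42\nOpen defects: 3\n"
new = "  Open defects: 3\n\n  Total: 42\n"  # new layout, same content

print(reference_test(old, new))  # → True
```

A step-by-step byte comparison of these two reports would fail, exactly as the developer predicted; a reference test that evaluates against the content oracle still passes and still tells us something useful.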