Test Levels! Really?!

Next in the series of software terminology lists is “Test Levels”. But there is something strange about test levels. Up until now almost every tester I have worked with has been familiar with the concept of software test levels. But I wonder if they really are. What some call a test level, say Unit Testing, I would call a test type. However, with a level like Component Testing I am not so sure. It seems only one level up from Unit Testing, but now I am inclined to see it more as a test level. In my experience I am not alone in this confusion.

Sogeti’s brand TMap was one of the main contributors in establishing the concept of test levels (or at least so in the Netherlands). But since last year Sogeti has acknowledged the confusion in their article “Test Levels? Test Types? Test Varieties!” and proposed renaming them Test Varieties. Even ISTQB and ISO do not explicitly mention test levels (or test phases, if you like).

But test levels are a term with some historic relevance, and as such they are part of my series of software testing lists, even though nowadays I never use them.

Acceptance Testing

  • Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • A formal test conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. (Cunningham & Cunningham, Inc.; http://c2.com/cgi/wiki?AcceptanceTest)
  • Acceptance testing is the process of comparing the program to its initial requirements and the current needs of its end users. (G. Myers, The art of software testing (2nd edition) [2004])

Chain Test

  • A chain test tests the interaction of the system with the interfacing systems. (Derk-Jan de Grood; Test Goal, 2008)

Claim Testing

  • The object of a claim test is to evaluate whether a product lives up to its advertising claims. (Derk-Jan de Grood; Test Goal, 2008)

Component Testing

  • The testing of individual software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Function Testing

  • Function testing is a process of attempting to find discrepancies between the program and the external specification. An external specification is a precise description of the program’s behavior from the point of view of the end user. (G. Myers, The art of software testing (2nd edition) [2004])

Functional Acceptance Test

  • The functional acceptance test is carried out by the accepter to demonstrate that the delivered system meets the required functionality. The functional acceptance test tests the functionality against the system requirements and the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • The functional acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the functional requirements. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

Hardware-software Integration Testing

  • Testing performed to expose defects in the interfaces and interaction between hardware and software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Integration Testing

  • Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Module Test

  • Module tests focus on the elementary building blocks in the code. They demonstrate that the modules meet the technical design. (Derk-Jan de Grood; Test Goal, 2008)
  • Module testing (or unit testing) is a process of testing the individual subprograms, subroutines, or procedures in a program. (G. Myers, The art of software testing (2nd edition) [2004])

Module Integration Test

  • Module integration tests focus on the integration of two or more modules. (Derk-Jan de Grood; Test Goal, 2008)

Pilot

  • The pilot simulates live operations in a safe environment so that the live environment is not disrupted if the pilot fails.

Production Acceptance Test

  • The system owner uses the PAT to determine that the system is ready to go live and can go into maintenance. (Derk-Jan de Grood; Test Goal, 2008)
  • The production acceptance test is a test carried out by the future administrator(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements set by system management. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Test / System Testing

  • Testing an integrated system to verify that it meets specified requirements. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • The system test demonstrates that the system works according to the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • System testing is not limited to systems. If the product is a program, system testing is the process of attempting to demonstrate how the program, as a whole, does not meet its objectives. (G. Myers, The art of software testing (2nd edition) [2004])
  • System testing, by definition, is impossible if there is no set of written, measurable objectives for the product. (G. Myers, The art of software testing (2nd edition) [2004])
  • A system test is a test carried out by the supplier in a (manageable) laboratory environment, with the aim of demonstrating that the developed system, or parts of it, meet with the functional and non-functional specifications and the technical design. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Integration Test

  • A system integration test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that (sub)system interface agreements have been met, correctly interpreted and correctly implemented. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006) 

Unit Test

  • A unit test is a test carried out in the development environment by the developer, with the aim of demonstrating that a unit meets the requirements defined in the technical specifications (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)
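
To make the definitions above a little more concrete, here is a minimal sketch of a unit test as a developer might write one. The `net_price` function and its VAT rate are purely illustrative, not taken from any of the quoted sources; the test demonstrates the unit against its technical specification, which is the aim TMap describes.

```python
import unittest

def net_price(gross, vat_rate=0.21):
    """Hypothetical unit under test: strips VAT from a gross price."""
    if gross < 0:
        raise ValueError("gross price cannot be negative")
    return round(gross / (1 + vat_rate), 2)

class NetPriceTest(unittest.TestCase):
    def test_strips_default_vat(self):
        # 121.00 gross at 21% VAT should yield 100.00 net
        self.assertEqual(net_price(121.0), 100.0)

    def test_rejects_negative_price(self):
        with self.assertRaises(ValueError):
            net_price(-1)
```

Such a test typically runs in the development environment, e.g. via `python -m unittest`, which matches the TMap notion of a test carried out by the developer.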

Unit Integration Test

  • A unit integration test is a test carried out by the developer in the development environment, with the aim of demonstrating that a logical group of units meets the requirements defined in the technical specifications (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

User Acceptance Test

  • The user acceptance test is primarily a validation test to ensure the system is “fit for purpose”. The test checks whether the users can use the system, how usable the system is and how the system integrates with the workflow and processes. (Derk-Jan de Grood; Test Goal, 2008)
  • The user acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements of the users. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

A collection of quality characteristics

Following the earlier posts listing software testing and bug definitions, this post also has a (very large) listing. This time it is a list of quality characteristics. Like the earlier posts, this list of common definitions reflects views on software testing. But unlike the earlier posts, every item is also a different way to look at your software and its use, and a way to divide up the way you test. Therein lies a challenge for you as a reader.

Would you be able to create a test idea for each and every one of them?
Or at least for those that matter for your software?

Probably not, but go ahead and try anyway,
and if you really can’t come up with a test idea, try to think why not.
Is it not applicable to your application?
Is it not applicable to your context?
Do you not know how?

Would that mean you might miss some valuable information about the software?

A lot of them may not apply directly to your current context, but it is good to browse through them and pick the ones that are useful now, then revisit them later to re-evaluate your choices and pick the (different) ones that apply then.

Accessibility

  • Usability of a product, service, environment or facility by people with the widest range of capabilities (ISO/IEC 25062:2006) (ISO/IEC 26514:2008)
  • Degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use (ISO/IEC 25010:2011)
  • Extent to which products, systems, services, environments and facilities can be used by people from a population with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use (ISO/IEC 25064:201) Note: [ISO 9241-171:2008] Although “accessibility” typically addresses users who have disabilities, the concept is not limited to disability issues. The range of capabilities includes disabilities associated with age. Accessibility for people with disabilities can be specified or measured either as the extent to which a product or system can be used by users with specified disabilities to achieve specified goals with effectiveness, efficiency, freedom from risk and satisfaction in a specified context of use, or by the presence of product properties that support accessibility [ISO 25063:2014] Context of use includes direct use or use supported by assistive technologies.
  • Capable of being reached, capable of being used or seen. (IAIDQ – Martin Eppler)
  • The characteristic of being able to access data when it is required. (IAIDQ – Larry P. English)

Accountability

  • Degree to which the actions of an entity can be traced uniquely to the entity (ISO/IEC 25010:2011)

Accuracy

  • Degree of conformity of a measure to a standard or a true value. Level of precision or detail. (IAIDQ – Martin Eppler)
  • The capability of the software product to provide the right or agreed results or effects with the needed degree of precision (ISTQB Glossary 2015)

Accuracy to reality

  • A characteristic of information quality measuring the degree to which a data value (or set of data values) correctly represents the attributes of the real-world object or event. (IAIDQ – Larry P. English)

Accuracy to surrogate source

  • A measure of the degree to which data agrees with an original, acknowledged authoritative source of data about a real world object or event, such as a form, document, or unaltered electronic data received from outside the organisation. See also Accuracy. (IAIDQ – Larry P. English)

Activation

  • A term that designates activities that make information more applicable and current, and its delivery and use more interactive and faster; a process that increases the usefulness of information by making it more vivid and organising it in a way that it can be used directly without further repackaging. (IAIDQ – Martin Eppler)

Adaptability

  • Degree to which a product or system can effectively and efficiently be adapted for different or evolving hardware, software or other operational or usage environments (ISO/IEC 25010:2011) Note: Adaptability includes the scalability of internal capacity, such as screen fields, tables, transaction volumes, and report formats. Adaptations include those carried out by specialized support staff, business or operational staff, or end users. If the system is to be adapted by the end user, adaptability corresponds to suitability for individualization as defined in ISO 9241-110. See also: flexibility
  • The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered (ISTQB Glossary 2015)

Analyzability

  • Degree of effectiveness and efficiency with which it is possible to assess the impact on a product or system of an intended change to one or more of its parts, or to diagnose a product for deficiencies or causes of failures, or to identify parts to be modified (ISO/IEC 25010:2011) Note: Implementation can include providing mechanisms for the product or system to analyze its own faults and provide reports before or after a failure or other event. Syn: analysability See also: modifiability
  • The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified (ISTQB Glossary 2015)

Applicability

  • The characteristic of information to be directly useful for a given context, information that is organised for action. (IAIDQ – Martin Eppler)

Appropriateness recognizability

  • Degree to which users can recognize whether a product or system is appropriate for their needs (ISO/IEC 25010:2011)

Attractiveness

  • The capability of the software product to be attractive to the user (ISTQB Glossary 2015)

Authenticity

  • Degree to which the identity of a subject or resource can be proved to be the one claimed (ISO/IEC 25010:2011)

Availability

  • Ability of a service or service component to perform its required function at an agreed instant or over an agreed period of time (ISO/IEC/IEEE 24765c:2014)
  • The degree to which a system or component is operational and accessible when required for use (ISO/IEC 25010:2011) Note: Availability is normally expressed as a ratio or percentage of the time that the service or service component is actually available for use by the customer to the agreed time that the service should be available. Availability is a combination of maturity (which reflects the frequency of failure), fault tolerance and recoverability (which reflect the length of downtime following each failure). See also: error tolerance, fault tolerance, reliability, robustness
  • A percentage measure of the reliability of a system indicating the percentage of time the system or data is accessible or usable, compared to the amount of time the system or data should be accessible or usable. (IAIDQ – Larry P. English)
  • The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. (ISTQB Glossary 2015)
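
The ratio described in the ISO/IEC 25010 note can be sketched in a few lines of code. The `availability` helper below is illustrative, not from any of the quoted sources; it simply expresses actual service time as a percentage of agreed service time.

```python
def availability(uptime_hours, agreed_service_hours):
    """Availability as the percentage of agreed service time actually delivered."""
    if agreed_service_hours <= 0:
        raise ValueError("agreed service time must be positive")
    return 100.0 * uptime_hours / agreed_service_hours

# e.g. 4 hours of downtime in a 720-hour service month
# gives an availability of roughly 99.44%:
monthly = availability(716, 720)
```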

Benchmark

  • Standard against which results can be measured or assessed (ISO/IEC 25010:2011)
  • Procedure, problem, or test that can be used to compare systems or components to each other or to a standard (ISO/IEC/IEEE 24765:2010 Systems and software engineering–Vocabulary)
  • Reference point against which comparisons can be made (ISO/IEC 29155-1:2011)

Capability

  • Can the product perform valuable functions?
    • Completeness: all important functions wanted by end users are available.
    • Accuracy: any output or calculation in the product is correct and presented with significant digits.
    • Efficiency: performs its actions in an efficient manner (without doing what it’s not supposed to do.)
    • Interoperability: different features interact with each other in the best way.
    • Concurrency: ability to perform multiple parallel tasks, and run at the same time as other processes.
    • Data agnosticism: supports all possible data formats, and handles noise.
    • Extensibility: ability for customers or 3rd parties to add features or change behavior.

Capacity

  • Degree to which the maximum limits of a product or system parameter meet requirements (ISO/IEC 25010:2011) Note: Parameters can include the number of items that can be stored, the number of concurrent users, the communication bandwidth, throughput of transactions, and size of database.

Changeability

  • The capability of the software product to enable specified modifications to be implemented. (ISTQB Glossary 2015)

Charisma

  • Does the product have “it”?
    • Uniqueness: the product is distinguishable and has something no one else has.
    • Satisfaction: how do you feel after using the product?
    • Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
    • Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
    • Curiosity: will users get interested and try out what they can do with the product?
    • Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
    • Hype: should the product use the latest and greatest technologies/ideas?
    • Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
    • Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
    • Directness: are (first) impressions impressive?
    • Story: are there compelling stories about the product’s inception, construction or usage? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Clarity

  • Void of obscure language or expression, ease of understanding, interpretability. (IAIDQ – Martin Eppler)

Co-existence

  • Degree to which a product can perform its required functions efficiently while sharing a common environment and resources with other products, without detrimental impact on any other product (ISO/IEC 25010:2011) Syn: coexistence

Comfort

  • Degree to which the user is satisfied with physical comfort (ISO/IEC 25010:2011)

Compatibility

  • Degree to which a product, system or component can exchange information with other products, systems or components, or perform its required functions, while sharing the same hardware or software environment (ISO/IEC 25010:2011)
  • The ability of two or more systems or components to exchange information (ISO/IEC/IEEE 24765:2010)
  • The capability of a functional unit to meet the requirements of a specified interface without appreciable modification (ISO/IEC 2382-1:1993)
  • How well does the product interact with software and environments?
    • Hardware Compatibility: the product can be used with applicable configurations of hardware components.
    • Operating System Compatibility: the product can run on intended operating system versions, and follows typical behavior.
    • Application Compatibility: the product, and its data, works with other applications customers are likely to use.
    • Configuration Compatibility: product’s ability to blend in with configurations of the environment.
    • Backward Compatibility: can the product do everything the last version could?
    • Forward Compatibility: will the product be able to use artifacts or interfaces of future versions?
    • Sustainability: effects on the environment, e.g. energy efficiency, switch-offs, power-saving modes, telecommuting.
    • Standards Conformance: the product conforms to applicable standards, regulations, laws or ethics. (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Completeness

  • A characteristic of information quality measuring the degree to which all required data is known.
    • Fact completeness is a measure of data definition quality expressed as a percentage of the attributes about an entity type that need to be known to assure that they are defined in the model and implemented in a database. For example, “80 percent of the attributes required to be known about customers have fields in a database to store the attribute values.”
    • Value completeness is a measure of data content quality expressed as a percentage of the columns or fields of a table or file that should have values in them, in fact do so. For example, “95 percent of the columns for the customer table have a value in them.” Also referred to as Coverage.
    • Occurrence completeness is a measure of the percent of records in an information collection that it should have to represent all occurrences of the real world objects it should know. For example, does a Department of Corrections have a record for each Offender it is responsible to know about? (IQ). (IAIDQ – Larry P. English)
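
The “value completeness” measure above is easy to express in code. This sketch is illustrative: the `value_completeness` helper and the customer records are invented for the example, not taken from English’s work.

```python
def value_completeness(rows, column):
    """Percentage of rows in which `column` actually holds a value
    (IAIDQ 'value completeness')."""
    if not rows:
        return 0.0
    filled = sum(1 for row in rows if row.get(column) not in (None, ""))
    return 100.0 * filled / len(rows)

customers = [
    {"name": "Ada", "phone": "555-0100"},
    {"name": "Grace", "phone": None},
    {"name": "Edsger", "phone": "555-0199"},
    {"name": "Barbara", "phone": ""},
]
# value_completeness(customers, "phone") → 50.0
```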

Complexity

  • The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. (ISTQB Glossary 2015)

Compliance

  • The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. (ISTQB Glossary 2015)

Component

  • An entity with discrete structure, such as an assembly or software module, within a system considered at a particular level of analysis (ISO/IEC 19770-2:2009)
  • One of the parts that make up a system (IEEE 1012-2012)(IEEE 829)
  • Object that encapsulates its own template, so that the template can be interrogated by interaction with the component (ISO/IEC 10746-2:2009)
  • Specific, named collection of features that can be described by an IDL component definition or a corresponding structure in an interface repository (ISO/IEC 19500-3:2012)
  • Functionally or logically distinct part of a system (ISO/IEC 19506:2012) Note: A component may be hardware or software and may be subdivided into other components. Component refers to a part of a whole, such as a component of a software product or a component of a software identification tag. The terms module, component, and unit are often used interchangeably or defined to be subelements of one another in different ways depending upon the context. The relationship of these terms is not yet standardized. A component may or may not be independently managed from the end-user or administrator’s point of view.

Comprehensiveness

  • The quality of information to cover a topic to a degree or scope that is satisfactory to the information user. (IAIDQ – Martin Eppler)

Conciseness

  • Marked by brevity of expression or statement, free from all elaboration and superfluous detail. (IAIDQ – Martin Eppler)

Concurrency

  • A characteristic of information quality measuring the degree to which the timing of equivalence of data is stored in redundant or distributed database files. The measure data concurrency may describe the minimum, maximum, and average information float time from when data is available in one data source and when it becomes available in another data source. Or it may consist of the relative percent of data from a data source that is propagated to the target within a specified time frame. (IAIDQ – Larry P. English)

Confidentiality

  • Degree to which a product or system ensures that data are accessible only to those authorized to have access (ISO/IEC 25010:2011)

Connectivity

  • The ease with which a link with a different information system or within the information system can be made and modified. (TMap Next)

Consistency

  • A measure of information quality expressed as the degree to which a set of data is equivalent in redundant or distributed databases. (IAIDQ – Larry P. English)
  • The condition of adhering together, the ability to be asserted together without contradiction. (IAIDQ – Martin Eppler)

Context completeness

  • Degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in all the specified contexts of use (ISO/IEC 25010:2011) Note: Context completeness can be specified or measured either as the degree to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, freedom from risk and satisfaction in all the intended contexts of use, or by the presence of product properties that support use in all the intended contexts of use.

Context coverage

  • Degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in both specified contexts of use and in contexts beyond those initially explicitly identified (ISO/IEC 25010:2011) Note: Context of use is relevant to both quality in use and some product quality (sub) characteristics.

Continuity

  • The certainty that data processing will continue uninterruptedly, which means that it can be resumed within a reasonable period of time even after serious interruptions. (TMap Next)

Controllability

  • The ease with which the correctness and completeness of the information (in the course of time) can be checked. (TMap Next)

Convenience

  • The ease-of-use or seamlessness by which information is acquired. (IAIDQ – Martin Eppler)

Correctness

  • The functionality matches the specification. (McCall, 1977)
  • Conforming to an approved or conventional standard, conforming to or agreeing with fact, logic, or known truth. (IAIDQ – Martin Eppler)

Currency

  • A characteristic of information quality measuring the degree to which data represents reality from the required point in time. For example, one information view may require data currency to be the most up-to-date point, such as stock prices for stock trades, while another may require data to be the last stock price of the day, for stock price running average. (IAIDQ – Larry P. English)
  • The quality or state of information of being up-to-date or not outdated. (IAIDQ – Martin Eppler)

Data deficiency

  • An unconformity between the view of the real-world system that can be inferred from a representing information system and the view that can be obtained by directly observing the real-world system. (IAIDQ – Martin Eppler)

Database integrity

  • The characteristic of data in a database in which the data conforms to the physical integrity constraints, such as referential integrity and primary key uniqueness, and is able to be secured and recovered in the event of an application, software, or hardware failure. Database integrity does not imply data accuracy or other information quality characteristics not able to be provided by the DBMS functions. (IAIDQ – Larry P. English)

Degradation possibilities

  • The ease with which the core of the information system can continue after a part has failed. (TMap Next)

Ease-of-use

  • The quality of an information environment to facilitate the access and manipulation of information in a way that is intuitive. (IAIDQ – Martin Eppler)

Economic risk mitigation

  • Degree to which a product or system mitigates the potential risk to financial status, efficient operation, commercial property, reputation, or other resources in the intended contexts of use (ISO/IEC 25010:2011)

Effectiveness

  • The capability of producing an intended result. (ISTQB Glossary 2015)

Efficiency

  • System resource (including CPU, disk, memory, network) usage. (McCall, 1977)
  • Optimum use of system resources during correct execution. (Boehm, 1978)
  • A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
    • Time behavior; response times for a given throughput, i.e. transaction rate.
    • Resource behavior; resources used, i.e. memory, CPU, disk and network usage. (ISO-9126)
  • The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions. (ISTQB Glossary 2015)

Entity integrity

  • The assurance that a primary key value will identify no more than one occurrence of an entity type, and that no attribute of the primary key may contain a null value. Based on this premise, the real-world entities are uniquely distinguishable from all other entities. (IAIDQ – Larry P. English)
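
The two conditions in this definition — primary key values identify at most one occurrence, and no key attribute is null — can be checked mechanically. The `entity_integrity_violations` helper and its record layout below are illustrative assumptions, not part of English’s definition.

```python
def entity_integrity_violations(rows, key_columns):
    """Return (null_keys, duplicate_keys): keys with a null component,
    and keys that identify more than one row."""
    seen = {}
    null_keys = []
    for row in rows:
        key = tuple(row.get(col) for col in key_columns)
        if any(part is None for part in key):
            null_keys.append(key)  # violates the no-null rule
            continue
        seen[key] = seen.get(key, 0) + 1
    duplicate_keys = [k for k, n in seen.items() if n > 1]  # violates uniqueness
    return null_keys, duplicate_keys
```

For example, in a table keyed on `id`, two rows with `id=1` and one row with `id=None` would surface as one duplicate key and one null key.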

Environmental risk mitigation

  • Degree to which a product or system mitigates the potential risk to property or the environment in the intended contexts of use (ISO/IEC 25010:2011)

External measure of software quality

  • Measure of the degree to which a software product enables the behavior of a system under specified conditions to satisfy stated and implied needs for the system (ISO/IEC 25010:2011) Note: Attributes of the behavior can be verified or validated by executing the software product during testing and operation. See also: external software quality, internal measure of software quality

Extensibility

  • The ability to dynamically augment a database (or data dictionary) schema with knowledge worker-defined data types. This includes addition of new data types and class definitions for representation and manipulation of unconventional data such as text data, audio data, image data, and data associated with artificial intelligence applications. (IAIDQ – Larry P. English)

Fault tolerance

  • The ability of a system or component to continue normal operation despite the presence of hardware or software faults (ISO/IEC 25010:2011)
  • Pertaining to the study of errors, faults, and failures, and of methods for enabling systems to continue normal operation in the presence of faults (ISO/IEC/IEEE 24765:2010) See also: error tolerance, fail safe, fail soft, fault secure, robustness

Flexibility

  • The ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed (ISO/IEC/IEEE 24765:2010)
  • Degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in contexts beyond those initially specified in the requirements (ISO/IEC 25010:2011 ) Note: Flexibility enables products to take account of circumstances, opportunities and individual preferences that had not been anticipated in advance. If a product is not designed for flexibility, it might not be safe to use the product in unintended contexts. Flexibility can be measured either as the extent to which a product can be used by additional types of users to achieve additional types of goals with effectiveness, efficiency, freedom from risk and satisfaction in additional types of contexts of use, or by a capability to be modified to support adaptation for new types of users, tasks and environments, and suitability for individualization. See also: adaptability, extendibility, maintainability
  • The ability to make changes required as dictated by the business. (McCall, 1977)
  • The ease of changing the software to meet revised requirements. (Boehm 1978)
  • A characteristic of information quality measuring the degree to which the information architecture or database is able to support organisational or process reengineering changes with minimal modification of the existing objects and relationships, only adding new objects and relationships. (IAIDQ – Larry P. English)
  • The degree to which the user may introduce extensions or modifications to the information system without changing the software itself. (TMap Next)

Freedom from risk

  • Degree to which a product or system mitigates the potential risk to economic status, human life, health, or the environment (ISO/IEC 25010:2011)

Functional appropriateness

  • Degree to which the functions facilitate the accomplishment of specified tasks and objectives (ISO/IEC 25010:2011) Note: Functional appropriateness corresponds to suitability for the task.

Functional completeness

  • Degree to which the set of functions covers all the specified tasks and user objectives (ISO/IEC 25010:2011)

Functional correctness

  • Degree to which a product or system provides the correct results with the needed degree of precision (ISO/IEC 25010:2011)

Functional suitability

  • Degree to which a product or system provides functions that meet stated and implied needs when used under specified conditions (ISO/IEC 25010:2011) Note: Functional Suitability is only concerned with whether the functions meet stated and implied needs, not the functional specification.

Functionality

  • A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
    • Suitability; the appropriateness (to specification) of the functions of the software.
    • Accuracy; the correctness of the functions.
    • Interoperability; the ability of a software component to interact with other components or systems.
    • Compliance; the degree to which the software adheres to application-related standards, conventions or regulations.
    • Security; the ability to prevent unauthorized access to the software functions. (ISO-9126)
  • Functionality
    • The value-added purpose of the product. Also…
    • Connectivity – protocols (e.g. Bluetooth), or re-sync of offline clients
    • Interoperability – inter-app platform and language independence
    • Extensibility, Expandability – plugins, late binding
    • Composability – service or message oriented considerations, governance
    • Manageability – administration of fielded product
    • Licensing (FURPS+)

Health and safety risk mitigation

  • Degree to which a product or system mitigates the potential risk to people in the intended contexts of use (ISO/IEC 25010:2011)

Information quality

  • Consistently meeting all knowledge worker and end-customer expectations in all quality characteristics of the information products and services required to accomplish the enterprise mission (internal knowledge worker) or personal objectives (end customer). (IAIDQ – Larry P. English)
  • The degree to which information consistently meets the requirements and expectations of all knowledge workers who require it to perform their processes. (IAIDQ – Larry P. English)
  • The fitness for use of information; information that meets the requirements of its authors, users, and administrators. (IAIDQ – Martin Eppler)

Immunity

  • Degree to which a product or system is resistant to attack (ISO/IEC 25010:2011) See also: integrity

Indirect user

  • Person who receives output from a system, but does not interact with the system (ISO/IEC 25010:2011) See also: direct user, secondary user

Installability

  • Degree of effectiveness and efficiency with which a product or system can be successfully installed or uninstalled in a specified environment (ISO/IEC 25010:2011)
  • The capability of the software product to be installed in a specified environment. (ISTQB Glossary 2015)

Integrity

  • Degree to which a system or component prevents unauthorized access to, or modification of, computer programs or data (ISO/IEC 25010:2011) See also: immunity
  • Protection from unauthorized access. (McCall, 1977)

Interactivity

  • The capacity of an information system to react to the inputs of information consumers, to generate instant, tailored responses to a user’s actions or inquiries. Interpretation: the process of assigning meaning to a constructed representation of an object or event. (IAIDQ – Martin Eppler)

Internal measure of software quality

  • Measure of the degree to which a set of static attributes of a software product satisfies stated and implied needs for the software product to be used under specified conditions (ISO/IEC 25000:2014) (ISO/IEC 25010:2011) Note: Static attributes include those that relate to the software architecture, structure and its components. Static attributes can be verified by review, inspection, simulation, or automated tools. See also: external measure of software quality

Interoperability

  • Degree to which two or more systems, products or components can exchange information and use the information that has been exchanged (ISO/IEC 25010:2011)
  • The ability for two or more ORBs to cooperate to deliver requests to the proper object (ISO/IEC 19500-2:2012)
  • The capability to communicate, execute programs, and transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units. (ISO/IEC 2382-1:1993)
  • Capability of objects to collaborate, that is, the capability mutually to communicate information in order to exchange events, proposals, requests, results, commitments and flows (ISO/IEC 10746-2:2009) Note: Interoperability is used in place of compatibility in order to avoid possible ambiguity with replaceability. See also: compatibility
  • The extent, or ease, to which software components work together. (McCall, 1977)
  • The capability of the software product to interact with one or more specified components or systems. (ISTQB Glossary 2015)

IT-bility

  • Is the product easy to install, maintain, and support?
    • System requirements: ability to run on supported configurations, and handle different environments or missing components.
    • Installability: product can be installed on intended platforms with appropriate footprint.
    • Upgrades: ease of upgrading to a newer version without loss of configuration and settings.
    • Uninstallation: are all files (except user’s or system files) and other resources removed when uninstalling?
    • Configuration: can the installation be configured in various ways or places to support customer’s usage?
    • Deployability: product can be rolled out by IT department to different types of (restricted) users and environments.
    • Maintainability: are the product and its artifacts easy to maintain and support for customers?
    • Testability: how effectively can the deployed product be tested by the customer? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Learnability

  • Degree to which a product or system can be used by specified users to achieve specified goals of learning to use the product or system with effectiveness, efficiency, freedom from risk and satisfaction in a specified context of use (ISO/IEC 25010:2011) Note: Can be specified or measured either as the extent to which a product or system can be used by specified users to achieve specified goals of learning to use the product or system with effectiveness, efficiency, freedom from risk and satisfaction in a specified context of use, or by product properties corresponding to suitability for learning as defined in ISO 9241-110.
  • The quality of information to be easily transformed into knowledge. (IAIDQ – Martin Eppler)
  • The capability of the software product to enable the user to learn its application. (ISTQB Glossary 2015)

Maintainability

  • Ease with which a software system or component can be modified to change or add capabilities, correct faults or defects, improve performance or other attributes, or adapt to a changed environment (ISO/IEC/IEEE 24765:2010)
  • Ease with which a hardware system or component can be retained in, or restored to, a state in which it can perform its required functions (ISO/IEC/IEEE 24765:2010)
  • Capability of the software product to be modified (IEEE 14764-2006)
  • Average effort required to locate and fix a software failure (ISO/IEC/IEEE 24765:2010)
  • Speed and ease with which a program can be corrected or changed (IEEE 982.1-2005)
  • Degree of effectiveness and efficiency with which a product or system can be modified by the intended maintainers (ISO/IEC 25010:2011) Note: Maintainability includes installation of updates and upgrades. Modifications may include corrections, improvements or adaptation of the software to changes in environment, and in requirements and functional specifications. Modifications include those carried out by specialized support staff, and those carried out by business or operational staff, or end users. See also: extendability, flexibility
  • Can the product be maintained and extended at low cost?
    • Flexibility: the ability to change the product as required by customers.
    • Extensibility: will it be easy to add features in the future?
    • Simplicity: the code is not more complex than needed, and does not obscure test design, execution and evaluation.
    • Readability: the code is adequately documented and easy to read and understand.
    • Transparency: is it easy to understand the underlying structures?
    • Modularity: the code is split into manageable pieces.
    • Refactorability: are you satisfied with the unit tests?
    • Analyzability: ability to find causes for defects or other code of interest.
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The ability to find and fix a defect (McCall, 1977)
  • The characteristic of an information environment to be manageable at reasonable costs in terms of content volume, frequency, quality, and infrastructure. If a system is maintainable, information can be added, deleted, or changed efficiently. (IAIDQ – Martin Eppler)
  • A set of attributes that bear on the effort needed to make specified modifications.
    • Analyzability; the ability to identify the root cause of a failure within the software.
    • Changeability; the sensitivity to change of a given system that is the negative impact that may be caused by system changes.
    • Testability; the effort needed to verify (test) a system change. (ISO-9126)
  • The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. (ISTQB Glossary 2015)
  • The ease of adapting the information system to new demands from the user, to changing external environments, or in order to correct defects. (TMap Next)

Manageability

  • The ease with which to get and keep the information system in its operational state. (TMap Next)

Maturity

  • Degree to which a system, product or component meets needs for reliability under normal operation (ISO/IEC 25010:2011) Note: The concept of maturity can be applied to quality characteristics to indicate the degree to which they meet required needs under normal operation.
  • The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. (ISTQB Glossary 2015)
  • The capability of the software product to avoid failure as a result of defects in the software. (ISTQB Glossary 2015)

Modifiability

  • Ease with which a system can be changed without introducing defects (ISO/IEC/IEEE 24765:2010)
  • Degree to which a product or system can be effectively and efficiently modified without introducing defects or degrading existing product quality (ISO/IEC 25010:2011) see also: analyzability, maintainability, and modularity

Modularity

  • Degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components (ISO/IEC 25010:2011)
  • Software attributes that provide a structure of highly independent components (ISO/IEC/IEEE 24765:2010) See also: cohesion, coupling, and modifiability

Non-repudiation

  • Degree to which actions or events can be proven to have taken place, so that the events or actions cannot be repudiated later (ISO/IEC 25010:2011)
  • The ability to provide proof of transmission and receipt of electronic communication. (IAIDQ – Larry P. English)

Operability

  • Degree to which a product or system has attributes that make it easy to operate and control (ISO/IEC 25010:2011) Note: Operability corresponds to controllability, (operator) error tolerance, and conformity with user expectations as defined in ISO 9241-110.

Operational reliability

  • The degree to which the information system remains free from interruptions. (TMap Next)

Performance

  • Is the product fast enough?
    • Capacity: the many limits of the product, for different circumstances (e.g. slow network.)
    • Resource Utilization: appropriate usage of memory, storage and other resources.
    • Responsiveness: the speed of which an action is (perceived as) performed.
    • Availability: the system is available for use when it should be.
    • Throughput: the product’s ability to process many, many things.
    • Endurance: can the product handle load for a long time?
    • Feedback: is the feedback from the system on user actions appropriate?
    • Scalability: how well does the product scale up, out or down? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. (ISTQB Glossary 2015)

Performance efficiency

  • Performance relative to the amount of resources used under stated conditions
    (ISO/IEC 25010:2011) Note: Resources can include other software products, the software and hardware configuration of the system, and materials (e.g. print paper, storage media).

Pleasure

  • Degree to which a user obtains pleasure from fulfilling personal needs (ISO/IEC 25010:2011) Note: Personal needs can include needs to acquire new knowledge and skills, to communicate personal identity and to provoke pleasant memories.

Portability

  • Ease with which a system or component can be transferred from one hardware or software environment to another (ISO/IEC/IEEE 24765:2010)
  • Capability of a program to be executed on various types of data processing systems without converting the program to a different language and with little or no modification (ISO/IEC 2382-1:1993)
  • Degree of effectiveness and efficiency with which a system, product, or component can be transferred from one hardware, software or other operational or usage environment to another (ISO/IEC 25010:2011)
  • Property that the reference points of an object allow it to be adapted to a variety of configurations (ISO/IEC 10746-2:2009) Syn: transportability See also: machine-independent
  • Is transferring of the product to different environments enabled?
    • Reusability: can parts of the product be re-used elsewhere?
    • Adaptability: is it easy to change the product to support a different environment?
    • Compatibility: does the product comply with common interfaces or official standards?
    • Internationalization: is it easy to translate the product?
    • Localization: are all parts of the product adjusted to meet the needs of the targeted culture/country?
    • User Interface-robustness: will the product look equally good when translated?
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The ability to transfer the software from one environment to another. (McCall, 1977)
  • The extent to which the software will work under different computer configurations (i.e. operating systems, databases etc.). (Boehm, 1978)
  • A set of attributes that bear on the ability of software to be transferred from one environment to another.
    • Adaptability; the ability of the system to change to new specifications or operating environments.
    • Installability; the effort required to install the software.
    • Conformance; adherence to standards or conventions relating to portability.
    • Replaceability; how easy it is to exchange a given software component within a specified environment. (ISO-9126)
  • The ease with which the software product can be transferred from one hardware or software environment to another. (ISTQB Glossary 2015)
  • The diversity of the hardware and software platforms on which the information system can run, and how easy it is to transfer the system from one environment to another. (TMap Next)

Possibility of diversion

  • The ease with which (part of) the information system can continue elsewhere. (TMap Next)

Quality

  • Degree to which a system, component, or process meets specified requirements (IEEE 829-2008)
  • Ability of a product, service, system, component, or process to meet customer or user needs, expectations, or requirements (ISO/IEC/IEEE 24765:2010)
  • Degree to which the system satisfies the stated and implied needs of its various stakeholders, and thus provides value (ISO/IEC 25010:2011)
  • Degree to which a system, component, or process meets customer or user needs or expectations (IEEE 829-2008)
  • The degree to which a set of inherent characteristics fulfills requirements (A Guide to the Project Management Body of Knowledge (PMBOK(R) Guide) — Fifth Edition)

Quality in use (measure)

  • Extent to which a product used by specific users meets their needs to achieve specific goals with effectiveness, productivity, safety and satisfaction in specific contexts of use (ISO/IEC 25000:2014)
  • Degree to which a product or system can be used by specific users to meet their needs to achieve specific goals with effectiveness, efficiency, freedom from risk and satisfaction in specific contexts of use (ISO/IEC 25000:2014) (ISO/IEC 25010:2011) Note: This definition of quality in use is similar to the definition of usability in ISO 9241-11. Before the product is released, quality in use can be specified and measured in a test environment designed and used exclusively by the intended users for their goals and contexts of use, e.g. User Acceptance Testing Environment. See also: usability

Quality measure

  • Measure that is defined as a measurement function of two or more values of quality measure elements (ISO/IEC 25010:2011)
  • Derived measure that is defined as a measurement function of two or more values of quality measure elements (ISO/IEC 25021:2012) Syn: QM See also: software quality measure
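
As an illustration of a derived measure built as a function of two quality measure elements, consider defect density. The element names and numbers below are my own illustrative example, not taken from ISO/IEC 25021:

```python
# Illustrative sketch only: a derived quality measure computed from two
# hypothetical quality measure elements (defect count and code size).
def defect_density(defects_found, lines_of_code):
    """Derived measure: defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

print(defect_density(12, 48_000))  # 0.25 defects per KLOC
```

The two inputs are the quality measure elements; the division is the measurement function that combines them into a single quality measure.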

Quality measure element (QME)

  • Measure defined in terms of a property and the measurement method for quantifying it, including optionally the transformation by a mathematical function (ISO/IEC 25000:2014) (ISO/IEC 25021:2012)
  • Measure defined in terms of an attribute and the measurement method for quantifying it, including optionally the transformation by a mathematical function (ISO/IEC 25010:2011) Note: The software quality characteristics or sub characteristics of the entity are derived afterwards by calculating a software quality measure.

Quality property

  • Measurable component of quality (ISO/IEC 25010:2011)

Recoverability

  • Degree to which, in the event of an interruption or a failure, a product or system can recover the data directly affected and re-establish the desired state of the system (ISO/IEC 25010:2011) See also: survivability
  • The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. (ISTQB Glossary 2015)
  • The ease and speed with which the information system can be restored after an interruption. (TMap Next)

Reliability

  • The ability of a system or component to perform its required functions under stated conditions for a specified period of time (ISO/IEC/IEEE 24765:2010)
  • Degree to which a system, product or component performs specified functions under specified conditions for a specified period of time (ISO/IEC 25010:2011) Note: Dependability characteristics include availability and its inherent or external influencing factors, such as availability, reliability (including fault tolerance and recoverability), security (including confidentiality and integrity), maintainability, durability, and maintenance support. Wear or aging does not occur in software. Limitations in reliability are due to faults in requirements, design, and implementation, or due to contextual changes. See also: availability, MTBF
  • Can you trust the product in many and difficult situations?
    • Stability: the product shouldn’t cause crashes, unhandled exceptions or script errors.
    • Robustness: the product handles foreseen and unforeseen errors gracefully.
    • Stress handling: how does the system cope when exceeding various limits?
    • Recoverability: it is possible to recover and continue using the product after a fatal error.
    • Data Integrity: all types of data remain intact throughout the product.
    • Safety: the product will not be part of damaging people or possessions.
    • Disaster Recovery: what if something really, really bad happens?
    • Trustworthiness: is the product’s behavior consistent, predictable, and trustworthy? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The extent to which the system fails. (McCall, 1977)
  • The extent to which the software performs as required, i.e. the absence of defects. (Boehm, 1978)
  • A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
    • Maturity; the frequency of failure of the software.
    • Fault tolerance; the ability of software to withstand (and recover) from component, or environmental, failure.
    • Recoverability; ability to bring back a failed system to full operation, including data and network connections. (ISO-9126)
  • Reliability
    • Accuracy – the correctness of output
    • Availability – mean time between failures
    • Recoverability – from partial system failures
    • Verifiability – (contractual) runtime reporting on system health
    • Survivability – continuous operations through disasters (earthquake, war, etc.) (FURPS+)
  • The characteristic of an information infrastructure to store and retrieve information in an accessible, secure, maintainable, and fast manner. (IAIDQ – Martin Eppler)
  • The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. (ISTQB Glossary 2015)

Replaceability

  • Degree to which a product can replace another specified software product for the same purpose in the same environment (ISO/IEC 25010:2011) Note: Replaceability of a new version of a software product is important to the user when upgrading. Replaceability will reduce lock-in risk, so that other software products can be used in place of the present one. See also: adaptability, installability
  • The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. (ISTQB Glossary 2015)

Resource utilization

  • Degree to which the amounts and types of resources used by a product or system, when performing its functions, meet requirements (ISO/IEC 25010:2011) Note: Human resources are included as part of efficiency. See also: efficiency

Reusability

  • Degree to which an asset can be used in more than one system, or in building other assets (IEEE 1517-2010)
  • In a reuse library, the characteristics of an asset that make it easy to use in different contexts, software systems, or in building different assets (IEEE 1517-2010) See also: generality
  • The ease of using existing software components in a different context. (McCall, 1977)
  • The degree to which parts of the information system, or the design, can be reused for the development of different applications. (TMap Next)

Risk

  • An uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives (A Guide to the Project Management Body of Knowledge (PMBOK(R) Guide) — Fifth Edition)
  • Combination of the probability of an abnormal event or failure and the consequence(s) of that event or failure to a system’s components, operators, users, or environment. (IEEE 1012-2012)
  • Combination of the probability of an event and its consequence (ISO/IEC 16085:2006)
  • Measure that combines both the likelihood that a system hazard will cause an accident and the severity of that accident. (IEEE 1228-1994 (R2002))
  • Function of the probability of occurrence of a given threat and the potential adverse consequences of that threat’s occurrence (ISO/IEC 25010:2011)
  • Combination of the probability of occurrence and the consequences of a given future undesirable event (IEEE 1012-2012) Note: See ISO/IEC Guide 51 for issues related to safety.

Robustness

  • The degree to which the information system proceeds as usual even after an interruption. (TMap Next)

Satisfaction

  • Freedom from discomfort and positive attitudes towards the use of the product (ISO/IEC 25062:2006)
  • User’s subjective response when using the product (ISO/IEC 26513:2009)
  • Degree to which user needs are satisfied when a product or system is used in a specified context of use (ISO/IEC 25010:2011)

Scalability

  • The capability of the software product to be upgraded to accommodate increased loads. (ISTQB Glossary 2015)

Security

  • Protection of information and data so that unauthorized persons or systems cannot read or modify them and authorized persons or systems are not denied access to them (ISO/IEC 12207:2008)
  • The protection of computer hardware or software from accidental or malicious access, use, modification, destruction, or disclosure. Security also pertains to personnel, data, communications, and the physical protection of computer installations. (IEEE 1012-2012)
  • All aspects related to defining, achieving, and maintaining confidentiality, integrity, availability, non-repudiation, accountability, authenticity, and reliability of a system (ISO/IEC 15288:2008)
  • Degree to which a product or system protects information and data so that persons or other products or systems have the degree of data access appropriate to their types and levels of authorization (ISO/IEC 25010:2011) Note: Security also pertains to personnel, data, communications, and the physical protection of computer installations.
  • Does the product protect against unwanted usage?
    • Authentication: the product’s identifications of the users.
    • Authorization: the product’s handling of what an authenticated user can see and do.
    • Privacy: ability to not disclose data that is protected to unauthorized users.
    • Security holes: product should not invite social engineering vulnerabilities.
    • Secrecy: the product should under no circumstances disclose information about the underlying systems.
    • Invulnerability: ability to withstand penetration attempts.
    • Virus-free: product will not transport virus, or appear as one.
    • Piracy Resistance: no possibility to illegally copy and distribute the software or code.
    • Compliance: security standards the product adheres to. (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • Security: Confidentiality Preservation, Access Control, Non-repudiation (Integrity Verification, Authenticity Verification – PKI), Identity Verification (logon paradigm), Availability of Service, Auditing Evidence. (FURPS+)
  • Testing to determine the security of the software product. (ISTQB Glossary 2015)
  • The certainty that data can be viewed and changed only by those who are authorized to do so. (TMap Next)

Software quality

  • Capability of a software product to satisfy stated and implied needs when used under specified conditions (ISO/IEC 25000:2014)
  • Degree to which a software product satisfies stated and implied needs when used under specified conditions (ISO/IEC 25010:2011)
  • Degree to which a software product meets established requirements (IEEE 730-2014) Note: Quality depends upon the degree to which the established requirements accurately represent stakeholder needs, wants, and expectations. This definition differs from the ISO 9000:2000 quality definition mainly because the software quality definition refers to the satisfaction of stated and implied needs, while the ISO 9000 quality definition refers to the satisfaction of requirements. In SQuaRE standards software quality has the same meaning as software product quality.

Software quality requirement

  • Requirement that a software quality attribute be present in software (ISO/IEC 25010:2011)

Stability

  • The capability of the software product to avoid unexpected effects from modifications in the software. (ISTQB Glossary 2015)

Suitability

  • The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. (ISTQB Glossary 2015)

Suitability of infrastructure

  • The suitability of hardware, network, systems software and DBMS for the application concerned and the degree to which the elements of this infrastructure interrelate. (TMap Next)

Supportability

  • Supportability
    • Maintainability (i.e. “build-time” issues)
      • Testability – at unit, integration, and system levels
      • Buildability – fast build times, versioning robustness
      • Portability – minimal vendor or platform dependency
      • Reusability – of components
      • Brandability – OEM and partner support
      • Internationalization – prep for localization
    • Serviceability (i.e. “run-time” issues)
      • Continuity – administrative downtime constraints
      • Configurability/Modifiability – of fielded product
      • Installability, Updateability – ensuring application integrity
      • Deployability – mode of distributing updates
      • Restorability – from archives
      • Logging – of event or debug data (FURPS+)
  • Can customers’ usage and problems be supported?
    • Identifiers: is it easy to identify parts of the product and their versions, or specific errors?
    • Diagnostics: is it possible to find out details regarding customer situations?
    • Troubleshootable: is it easy to pinpoint errors (e.g. log files) and get help?
    • Debugging: can you observe the internal states of the software when needed?
    • Versatility: ability to use the product in more ways than it was originally designed for. (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Survivability

  • Degree to which a product or system continues to fulfill its mission by providing essential services in a timely manner in spite of the presence of attacks (ISO/IEC 25010:2011) See also: recoverability

Testability

  • Extent to which an objective and feasible test can be designed to determine whether a requirement is met (ISO/IEC 12207:2008)
  • Degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met (IEEE 1233-1998 (R2002))
  • Degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met (ISO/IEC/IEEE 24765:2010)
  • Degree of effectiveness and efficiency with which test criteria can be established for a system, product, or component and tests can be performed to determine whether those criteria have been met (ISO/IEC 25010:2011)
  • Is it easy to check and test the product?
    • Traceability: the product logs actions at appropriate levels and in usable format.
    • Controllability: ability to independently set states, objects or variables.
    • Observability: ability to observe things that should be tested.
    • Monitorability: can the product give hints on what/how it is doing?
    • Isolateability: ability to test a part by itself.
    • Stability: changes to the software are controlled, and not too frequent.
    • Automation: are there public or hidden programmatic interfaces that can be used?
    • Information: ability for testers to learn what needs to be learned…
    • Auditability: can the product and its creation be validated?
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The ability to validate the software requirements. (McCall, 1977)
  • Ease of validation, that the software meets the requirements. (Boehm, 1978)
  • The ease with which the functionality and performance level of the system (after each modification) can be tested and how fast this can be done. (TMap Next)

Time behavior

  • Degree to which the response and processing times and throughput rates of a product or system, when performing its functions, meet requirements (ISO/IEC 25010:2011)

Timeliness

  • A characteristic of information quality measuring the degree to which data is available when knowledge workers or processes require it. (IAIDQ – Larry P. English)
  • Coming early or at the right time; appropriate or adapted to the times or the occasion. (IAIDQ – Martin Eppler)

Traceability

  • The ability to identify related items in documentation and software, such as requirements with associated tests. (ISTQB Glossary 2015)

Trust

  • Degree to which a user or other stakeholder has confidence that a product or system will behave as intended (ISO/IEC 25010:2011)

Understandability

  • The extent to which the software is easily comprehended with regard to purpose and structure. (Boehm, 1978)
  • The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. (ISTQB Glossary 2015)

Usability

  • Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use (ISO/IEC 25064:2013)
  • Degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use (ISO/IEC 25010:2011) Note: Usability can either be specified or measured as a product quality characteristic in terms of its sub characteristics, or specified or measured directly by measures that are a subset of quality in use. See also: reusability
  • Usability. Is the product easy to use?
    • Affordance: product invites to discover possibilities of the product.
    • Intuitiveness: it is easy to understand and explain what the product can do.
    • Minimalism: there is nothing redundant about the product’s content or appearance.
    • Learnability: it is fast and easy to learn how to use the product.
    • Memorability: once you have learnt how to do something you don’t forget it.
    • Discoverability: the product’s information and capabilities can be discovered by exploration of the user interface.
    • Operability: an experienced user can perform common actions very fast.
    • Interactivity: the product has easy-to-understand states and possibilities of interacting with the application (via GUI or API).
    • Control: the user should feel in control over the proceedings of the software.
    • Clarity: is everything stated explicitly and in detail, with a language that can be understood, leaving no room for doubt?
    • Errors: there are informative error messages, difficult to make mistakes and easy to repair after making them.
    • Consistency: behavior is the same throughout the product, and there is one look & feel.
    • Tailorability: default settings and behavior can be specified for flexibility.
    • Accessibility: the product is possible to use for as many people as possible, and meets applicable accessibility standards.
    • Documentation: there is a Help that helps, and matches the functionality.
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • Ease of use. (McCall, 1977) (Boehm, 1978)
  • Usability
    • Ergonomics – human factors engineering
    • Look and Feel – along with branding instancing
    • Accessibility – special needs accommodation
    • Localization – adding language resources
    • Documentation (FURPS+)
  • The characteristic of an information environment to be user-friendly in all its aspects (easy to learn, use, and remember). (IAIDQ – Martin Eppler)
  • A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
    • Understandability; the ease with which the system’s functions can be understood
    • Learnability; learning effort for different users, i.e. novice, expert, casual etc.
    • Operability; ability of the software to be easily operated by a given user in a given environment.
    • Attractiveness; (ISO-9126)
  • The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. (ISTQB Glossary 2015)

Usefulness

  • Degree to which a user is satisfied with perceived achievement of pragmatic goals, including the results of use and the consequences of use (ISO/IEC 25010:2011)
  • The quality of having utility and especially practical worth or applicability. (IAIDQ – Martin Eppler)

User

  • Individual or organization that uses the system or software to perform a specific function (ISO/IEC 25000:2014)
  • Person who interacts with a system, product or service (ISO/IEC 25064:2013)
  • Individual or organization who uses a software-intensive system in daily work activities or recreational pursuits (IEEE 1362-1998 (R2007))
  • The person (or persons) who operates or interacts directly with a software intensive system
  • Individual or group that benefits from a system during its utilization (ISO/IEC 15288:2008) (ISO/IEC 15939:2007)
  • Any person or thing that communicates or interacts with the software at any time (ISO/IEC 19761:2011) (ISO/IEC 20926:2009) (ISO/IEC 14143-1:2007)
  • Person (or instance) who uses the functions of a CBSS via a terminal (or an equivalent machine-user-interface) by submitting tasks and receiving the computed results (ISO/IEC 14756:1999)
  • Person who derives engineering value through interaction with a CASE tool (IEEE 1175.2-2006)
  • Individual or group that interacts with a system or benefits from a system during its utilization (ISO/IEC 25010:2011)
  • Individual or group that benefits from a ready to use software product during its utilization (ISO/IEC 25051:2014)
  • Person who performs one or more tasks with software; a member of a specific audience (ISO/IEC 26514:2008) Note: The user may perform other roles such as acquirer or maintainer. The role of user and the role of operator may be vested, simultaneously or sequentially, in the same individual or organization.
  • A person who uses the output or service provided by a system. For example, a bank customer who visits a branch, receives a paper statement, or carries out telephone banking using a call centre can be considered a user. (ISO 25063:2014) See also: developer, end user, functional user, indirect user, operator, secondary user

User error protection

  • Degree to which a system protects users against making errors (ISO/IEC 25010:2011)

User-friendliness

  • The ease with which end-users use the system. (TMap Next)

User interface aesthetics

  • Degree to which a user interface enables pleasing and satisfying interaction for the user (ISO/IEC 25010:2011) Note: refers to properties of the product or system that increase the pleasure and satisfaction of the user, such as the use of color and the nature of the graphical design

Utility

  • The usefulness of information to its intended consumers, including the public. (OMB 515) (IAIDQ – Larry P. English)

Validity

  • A characteristic of information quality measuring the degree to which the data conforms to defined business rules. Validity is not synonymous with accuracy, which means the values are the correct values. A value may be a valid value, but still be incorrect. For example, a customer date of first service can be a valid date (within the correct range) and yet not be an accurate date. (IAIDQ – Larry P. English)

Original copyright messages included that are applicable to all definitions with a ISO/IEC 25010:yyyy reference:

IEEE Computer Society – Software and Systems Engineering Vocabulary

This definition is copyrighted, 2012, by the IEEE. The reader is granted permission to copy the definition as long as the statement “Copyright, 2012, IEEE. Used by permission.” remains with the definition. All other rights are reserved. Copyright 2012 ISO/IEC.

In accordance with ISO/IEC JTC 1/SC 7 N2882 and N2930, this definition is made publicly available. Permission is granted to copy the definition providing that its source is cited.

Material reprinted with permission from Project Management Institute, A Guide to the Project Management Body of Knowledge (PMBOK) Guide – Fourth Edition, 2008. Copyright and all rights reserved. PMI is a service and trademark of the Project Management Institute, Inc. which is registered in the United States and other nations. PMBOK is a trademark of the Project Management Institute, Inc. which is registered in the United States and other nations.

YAGNI

Context

At the time of this blog post my family and I are on holiday in Iceland. Since we are not in Iceland that often, we use the time, besides visiting relatives and friends, to take care of administrative and regulatory matters that are easier to do in Iceland than from abroad.

Syslumen

One of the things necessary was to renew my wife’s passport, for which you actually need to go to the Civil Registry, or in Icelandic ‘Syslumen’, in person. The process of renewal is (boringly) straightforward. At the office you get a number, wait, identify yourself and pay for your renewal, get a form, wait, identify yourself again, hand over the form and update your data (including a new digital photo and fingerprint), sign, and wait a couple of weeks to pick up your passport at the Civil Registry.

Except

Since we’re only on holiday in Iceland, a couple of weeks of waiting is not a real option. To amend this my wife investigated and proposed a solution: send the new passport to the consulate in our country. An option that, once validated by the team lead, was acceptable to the civil clerk. And thus the proper check box was looked for and found.

Into the process

After filling in the personal details, the address of the consulate was needed instead of the office’s address. The page itself did not offer any listing. The help page wasn’t really helpful either, as it only pointed towards a government listing at another department. After some searching the consulate in Amsterdam and its address were found and the data could be entered. So everything was entered and OK could be clicked. Nothing happened. Looking over the page the clerk found:

“Færðu inn lögboðnar reit” (Please enter mandatory data) next to a field asking for “Póstnúmer” (zip code) that had been left empty, as it had also been empty in the government listing. So what to do? My wife and the clerk’s colleague suggested googling it. And so she did. The zip code was entered and again OK was clicked. The intranet page jumped back to the entry page and everything looked okay. But the clerk rightfully noted that the usual confirmation message was not shown, and checked my wife’s file. To her, and my wife’s, surprise no data had been added, meaning the whole 20 minutes’ worth of data entry was lost.

The process repeated itself a few times until eventually another colleague noticed that the zip code contained letters, something not used in Iceland itself. Why not leave those out of the field and move them somewhere else, say in front of “Amsterdam”? When OK was then clicked the confirmation appeared, and a check showed that the file now contained all the data.

YAGNI

Even with only a couple of thousand Icelanders living abroad, the chance that some of them live in one of the eight countries (e.g. Canada, Great Britain) using alphanumeric zip codes is realistic, especially since many more countries put the country abbreviation in front of their zip code. So when my wife returned to tell about her plight she commented: “Clearly neither the developer nor the tester thought this field was important. But it really bugged me today.” Furthermore she noted: “The same software company maintained the government listing and had all the zip codes removed, leaving empty spaces in the listing. That’s even more stupid.”

Clearly someone must have convinced the developers and testers “You Ain’t Gonna Need It” (YAGNI).
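The failure mode is easy to reproduce. As a minimal sketch (the function names and patterns below are mine, not the actual intranet code), a validator that assumes digit-only zip codes rejects most foreign formats, while a slightly more permissive one accepts them:

```python
import re

def is_valid_zip_strict(zip_code: str) -> bool:
    """Icelandic-style check: exactly three digits (e.g. '101' for Reykjavik).
    This is the assumption that broke the form for foreign addresses."""
    return re.fullmatch(r"\d{3}", zip_code) is not None

def is_valid_zip_international(zip_code: str) -> bool:
    """Permissive check: letters, digits, spaces and hyphens, 3-10 characters,
    covering codes like 'SW1A 1AA' (UK), 'K1A 0B1' (Canada) or '1012 AB' (NL)."""
    return re.fullmatch(r"[A-Za-z0-9][A-Za-z0-9 \-]{1,8}[A-Za-z0-9]", zip_code) is not None

print(is_valid_zip_strict("1012 AB"))         # False: the Dutch code from the story is rejected
print(is_valid_zip_international("1012 AB"))  # True: the permissive check accepts it
```

A whitelist regex per country would be stricter still, but even this loose pattern would have let the clerk save twenty minutes of data entry.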

Seven Questions – Why do I test?

Reasons for testing

The question why something is tested has kind of a schizophrenic nature to it. Its answer is either so obvious that the question itself is ignored, or so cumbersome that testers would rather avoid answering it. The latter is mostly the case if testers have to defend why testing is done in the first place. I cannot provide you with ready-made answers for this, because the answer depends too much on the circumstances and context of your situation. What I can tell you is that it is worthwhile to figure out why your specific subject under test needs testing. If you can answer this for your specific situation, then the contextually acceptable general answer can be derived from it.

What to take into consideration?

One of the more common ideas on why software testing is conducted is that it is done to find bugs. The idea is that finding, or not finding, more than a certain amount of bugs in time is a measure of release readiness. Obviously such an amount of bugs doesn’t say anything about how serious the remaining bugs are, nor does it say anything about the bugs you did not find. Already more than 40 years ago Edsger W. Dijkstra (1969, p. 16 and 1970) observed that software testing can show the presence of bugs, but never their absence.

Another reason to do software testing is that some internal or external obligation exists. A standard, law or regulation either states that you have to test, or its interpretation makes management believe that you should test, often in a certain predefined way, to meet the rules. As an external reason for testing this is not a bad thing.

A better, more internal, reason for testing is that the software is tested to provide information. Better still: information that is meaningful with regard to the product itself, its intended use, its real use and its potential (mis)use, and related to the value this has to which stakeholders.

Your Challenge

It is your challenge as a tester to find out what information the stakeholders value. Then extend this with the information they should value, even if for reasons thus far unknown to them. And finally to find a way to provide that information so that its relevance and value are delivered to them in a meaningful way.

With some well-directed extra effort the value of testing can grow. Both to the tester and to the stakeholders.

Finding the right reason

Way back in 1998, with the introduction of the Euro to the financial markets, I came into contact with software testing for the first time. As a business acceptance tester I was responsible for judging whether the new programs actually had the desired functionality and whether we could work with them. Especially that latter part had the focus of my attention. Being one of the users myself, and being involved in the requirements design of the product, I found it easy to understand why this had to be tested and what value to look for.

More often than not software testers are not so familiar with the everyday practical needs and demands of the product they are working on. In this case there are two approaches that I prefer to use and that have served me well in the past. The first is to approach testing heuristically, for instance with the Heuristic Test Strategy Model, and explore the product with helpful mnemonics like FEW HICCUPPS. The second approach is to converse, and keep on conversing, with the stakeholders and ask them what they need to know.

Who else would know better what matters to them than the people who matter (to the product and/or the project) themselves?

Why do I test, summarized.

Seven Questions

This is the opening post of a small series in which I elaborate on a test approach heuristic using 7 questions that I have developed over the years.

Thinking about testing

As a tester I have seen many approaches to software testing pass me by. A few of them, like TMap (Next) and ISTQB, were picked up by the Rabobank and I have had the mixed pleasure of working with them. But regardless of how different the approaches claim to be from each other, they all seem to have a number of things in common:

  • They are mostly oriented on management (of testing)
  • They focus on processes and deliverables
  • They do not teach you how to actually test something in practice
  • They hardly make any connection to software development in general
  • They are supposedly mastered after certification

I admit that both TMap and ISTQB (initially) helped to give testing a positive foothold in many organisations and have underlined that testing should get its place in software development. Even so, the five elements I have described above should also show that there are fundamental flaws in how these approaches apply testing. Following them does not guarantee that you get fully involved in software development, nor does it teach you how to test in practice. Usually, as compensation for these flaws, testers go to boot-camp-like courses to teach them more practical testing skills, like determining test coverage and applying test design techniques.

Even so, for many testers the start of their professional life is focused on getting the certification and maybe some introduction to actual testing. And then… well… for most of them it ends there, and they go out to work, follow the processes and deliver a bunch of documents. If you’re new to software testing this will probably keep you busy for a while, but eventually you (should) start to ask yourself questions like: Is there no better way to test this? Do I really have to write these elaborate test plans / test scripts / test cases that nobody seems to really care about? Why don’t the developers agree with me on my defects? Why is my work not valued?

I have asked myself these and similar questions, and over the years I have come up with a set of alternative questions whose answers guide me through a development / test cycle. These questions demand creativity, knowledge and skilled experience to answer. And any answer you come up with this time will differ the next time you ask yourself that same question.

The seven questions I use are:

I have done a talk on these seven questions at EuroSTAR 2012 and will do the same at Belgium Testing Days 2013. Contact me there if you want to meet and talk, or send me a tweet @arborosa.

Here are some twitter reactions that I got from talking at EuroSTAR 2012

250 hours of practice – January

As said in my post a couple of weeks ago, this year I will try to spend 250 hours on practicing and enhancing my testing skills. This post is a report on how I fared in January 2012. (Leaving my personal favourite until the end…)

I started enthusiastically on January 2nd by following up on a post about the “Follow the link exercise” by Jeff Lucas. In short, the exercise is to choose a blog post of your liking, start reading it critically and then follow every link mentioned in the post. You then pursue this with every post that you read, in a one-hour session.

In my session, which actually lasted two hours, I followed up among others on a link to Alan Page’s blog “Tooth of the Weasel”. The post contained an overview of posts Alan wrote in 2011, so there were enough links in there to follow up:
My job as a Tester
What is Testing?
Test Design for Automation
Numberz Challenge
Beyond Regression Tests
R-E-S-P-E-C-T
Judgment in Testing
Lost in the weeds

Although I had heard about Alan Page I was not yet familiar with his work. It pleasantly surprised me with some useful ideas and even some advice for my personal goals for this year. Let me give you some quotes I found interesting:

“What you do or don’t define as testing may differ per context.”

Automated testing “starts the same as always. Design your test first then automate where eligible. Coded tests do not replace, but enhance human tests.”

“Do not only use automated testing for regression. Vary the data, the sequence, randomize, to find new information” data driven testing

“Are testers’ second class citizens? NO. Are they whiners? Yes; Figure out how to get and earn respect!”

My second (larger) series of practice session(s) started with watching the 2011 GTAC keynote by Alberto Savoia with the ominous title “Test is dead”. You can read more about this on the blog post I wrote “Is testing dead?”

My third endeavour entailed reading the hardcopy of the book “Essential Software Test Design” by Torbjörn Ryber. The e-book is free to download, but I liked the content enough to want to own it. Some warning is in order, however: even the hardcopy has a somewhat annoying number of typos, illogical sentences and even faults. Nevertheless the concepts Ryber discusses are helpful for many a tester.

Early in January the DEWTs met up again, this time to discuss and prepare the TestNet event about context-driven testing. On January 18, some 150 testers visited the event to watch James M. Bach and Michael Bolton do a one-hour introduction on context-driven testing using GoToMeeting (which btw. worked brilliantly). After the break the DEWTs Zeger van Hese, Ruud Cox, Ray Oei and myself gave a number of lightning talks followed by Q&A. Themes of the talks were “On being context-driven”, “Spin-Off”, “Context-Driven expert” and “Test Plan”.

All in all these activities got me some 20 hours of practice, bringing me well on track for the 250 hours of testing practice. But to be honest I am even more of a test nerd: I have spent another 10-15 hours following Twitter feeds, with a peak while participating in a #Testchat led by Lisa Crispin asking the following questions:
Q1: Have you worked on a “test automation project” that succeeded? What helped it succeed
Q2: What do you think upper management should know about testing? (not limited to automation)
Q3: related some to Q2: How do you keep your testing transparent to others on your team and in the organization?
Q4: Are testers on your team treated with the same respect as programmers?
Q5: sometimes the tester is undone by the process. Documentation outdated leading to looking like lack of knowledge

The last practice activity however was, for me personally, the most engaging, emotional and gratifying one.

In December I contacted Markus Gärtner to ask him for a challenge, to see if I was worthy enough to enter the realms of the Miagi-Do school of software testing. Finding a member is actually the first step of the challenge. Markus offered me “The light saber” challenge. Several times during the challenge I sent Markus my test investigation results, and as many times Markus answered. I used several heuristic approaches, tried to inform the customer based on his needs and eventually offered a solution using personas. Somewhat to my despair Markus’s answers were getting shorter and repetitive, and I asked Markus to debrief me.

We organized a one-hour Skype session and went on with the challenge, discussing results, progress and feelings during the challenge. Eventually we came to the point where Markus would reveal if I was allowed to enter Miagi-Do. The result left me stunned, silent and humbled for a moment… Not only was I a new member, I was one of the members to, fully endorsed by other instructors, become a black belt.

I can only say again. Thanks guys, I am honoured.

What do you put in your test plan?

Theme event

This evening, January 18, I will be on stage during the TestNet theme event about Context-Driven Testing. The evening will start with a duo presentation by James Bach and Michael Bolton, followed by a series of short presentations, lightning talks and discussion. In one of those lightning talks I will share a personal experience report. It describes how I changed the way I make test plans. Since most of you will not be able to be there (or might never have heard about TestNet*) I am sharing my experience with you in this post.

Twitter

Even if a lightning talk lasts only five minutes, it still requires some preparation. So to prepare myself further I put my experience into perspective and posted the following message on Twitter:

” @Arborosa: Question to my tweeps: What items do you put in a test plan?  I’ll put the results on my Blog. (please retweet)”

This resulted in the following responses:

Rob van Steenbergen (@rvansteenbergen)
Scope of testproject, context of product (with mindmap), product risks and qlty attributes and risk approach, planning, who tests, stakeholders, testing tools, explanations abt testing for orgs that are still learning. TP is also promotion material for testing.

Stephan Kämper (@S_2K)
Well, what to put in a plan? A (current) goal of what you’re planning. The major way you’ll follow to reach said goal. A ‘Plan B’. (Known) risks – What’s the risk of following the plan? …the risk of *not* following it? Tools & Techniques? Not sure about these.

Nitin Hatekar (@nhatekar)
Entry and exit criteria for each test phase and specific test approach for each phase. Scope of testing and the estimates for completion of in-scope test efforts. A section for assumptions, risks & blockers as well.

Rik Marselis (@rikmarselis)
For your testplans take IEEE829 (1998) as a starting point. And see tmap.net for templates ;-) (And after a reprimand by Huib Schoots to be more serious) Don’t start with making the testplan. First make the outline of the testreport. That’s your deliverable! The testreport outline must be discussed with stakeholders. Then you have startingpoint for your testplan.

Jesper L. Ottosen (@jlottoosen)
Generally answers and descriptions to “how” – to the level required of the context. ie #itdepends ;-)

Jan Jaap Cannegieter (@jjcannegieter)
Write in your testplan the info your stakeholders need. So ask your stakeholders what kind of info they need. Write it for them!

Generally I see two trains of thought here. On the one side there is the idea of having more or less fixed items in a test plan: things like scope, approach and (product) risks. On the other side the idea to not start with fixed items or a template, but to ask the stakeholders what information they need to have in the test plan. As you will see this kind of follows the change I made.

From old to new

Historically my organization has approached software testing by following a standardized test approach based on TMap. Similarly, test planning is, or rather was, based on an extended TMap style “Master Test Plan” template. The raw template itself counts 24 pages when empty, but includes some examples and explanation. The idea is to fully fill in all items in the template, see list below, and get it signed off by the principal stakeholders.

In short the template was as follows (Chapters, Paragraphs and Sub-paragraphs):
 
  • Colophon
  • Management summary
  • Goals
  • Preconditions
  • Budget and milestones
  • Assignment
  • Introduction
  • Assignment
  • Client
  • Assignee
  • Scope of the assignment
  • Test basis
  • Objective
  • Preconditions
  • Starting points
  • Release from assignment
  • Test strategy
  • Product Risk Analysis
  • Test goals
  • Component per characteristic
  • Test goal vs component matrix
  • Strategy
  • Test levels
  • Entry – exit criteria
  • Test objects
  • Scope
  • Dependencies
  • Project risks
  • Communication & Procedures
  • Reporting
  • Meetings
  • Procedures
  • Test product / Deliverables
  • Project documentation
  • Testware
  • Test Infrastructure
  • Workplace
  • Test environment
  • Budget, planning & organization
  • Budget
  • Planning
  • Team composition

I have to admit that all items are in themselves in some way relevant to testing software. But one can argue the usefulness of some of these items, and more so of having all these items together in one document.

The latter is best illustrated by a remark my mentor made when, after three months of being a professional tester, I was writing my first Master Test Plan. He said: “Don’t waste time. Take one of my plans. Ctrl-H the project name, change the stakeholders and check if there is mention of specifics not relevant to your project and change them. All else you can leave the same.” So, even if I resisted the idea, like my colleagues I learned to do the drill: fill in the template in a copy-and-paste style. Only occasionally did a stakeholder question what was in it and distract me from actual testing.

Some five years ago I changed departments and found myself in a place where I not only was free to use only those elements that I felt were useful, but could start changing the template, and the use of it, entirely. But there was resistance, both from the testers and from the stakeholders. The testers, I think, because some of them now had to think and communicate more, and the stakeholders because this broke with the standard process and they too would have to get involved and think more. To break the deadlock I started with an experiment. I filled in the template not only completely but to the letter of the “law”. I ended up with a 36-page document, which I immediately sent out to all stakeholders with an invitation to meet the next week, thoroughly check it in the meantime, and be ready to sign off the document during the meeting.

At the meeting the stakeholders were sitting silently, sighing at the thought of having to go through all 36 pages. I didn’t do that. Instead I asked how many of them had read the document. With 6 out of 8 I was actually impressed. I then asked how many of them had reviewed it: still 4 out of the 6. I then asked who found the document a pleasure to read, who fully understood its content and thought it was of value to the project. As I had hoped, all the attendees broke out in comments on the document: its length, its irrelevance, the difficulty of the content, etc.

I decided to then pull the rabbit out of the hat and said: “I agree with you all. I too think it’s basically of no use. There is no point in reviewing it. But we still need to write a test plan. So why don’t you tell me what you actually do want to know about testing your product.”

We spent an hour or so discussing what they wanted to know about testing, agreed that, since we are a financial institution, we still have to follow certain rules, regulations and guidelines, and that I would deliver a new document the same week.

I ended up writing a document that was still 24 pages long. But now it not only adhered to the documentation standards; of those 24 pages, 11 were purely related to testing as a way to mitigate risks and provide information about the product for this project, and another 4 to testing and test heuristics in general. The original document had no explanation and only 8 pages related to actual testing.

Conclusion

Approach writing a test plan as you would approach any test activity. Figure out what information your stakeholders need and whether there are other things to consider, like rules, regulations or standards. Use your personal experience and other references you think useful, and then write a plan that suits your model of the context. Verify and confirm it with your stakeholders.