Test Types – A

This sixth post in the series of software testing overviews introduces the first of some 80+ test types. That number is somewhat arbitrary: while investigating test types I found over a hundred definitions, but I have chosen to leave a number of those ‘test types’ out. My reason for doing so is that, on my interpretation, they describe a test level or a test technique rather than a test type, and I could not see how to make a useful test type out of them.

So what is a test type?

To me a test type is a particular way of testing whose approach, goal and/or use of oracle(s) provides information that is typical of that test type.

While going through my overview you might find that some of the test types I mention do not fit the above description of a test type. The reason they are mentioned despite this is that they are so often referred to as test types that I felt they deserved a mention in this post for that reason alone.

This post differs somewhat from the earlier posts in that the definitions used are mostly my own, and for some of them I have added comments as additional information. Also, since a post with over 80 descriptions would be too long, I will split the overview into alphabetical sections, beginning with the A.

A/B testing
A/B testing originates from marketing research, where it is used to investigate which of two possible solutions is the more effective by presenting them to two different groups and measuring the ‘profits’. In software testing it is mostly used to compare two versions of a program or website, of which often only one contains changes to a single or a few controllable criteria.
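
As a rough sketch of how the ‘profits’ of the two groups might be compared, a two-proportion z-test can be used; the conversion counts below are invented for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts 120/1000 visitors, variant A 90/1000
z = two_proportion_z(90, 1000, 120, 1000)
print(round(z, 2))  # |z| > 1.96 suggests a significant difference at the 5% level
```

Whether such a statistical oracle is appropriate depends on the size of the groups and on how the ‘profit’ is measured; this is only one common choice.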

Acceptance testing
During acceptance testing, requirements, variables, parts or behaviour of a program are compared against measurable aspects of predetermined acceptance criteria for those requirements, variables, parts or behaviour. This requires at least four things. First, identification of the requirements, variables, parts or behaviour (coverage). Second, expressing these in measurable aspects. Third, those aspects need to represent the defining elements of the acceptance criteria. Finally, the acceptance criteria themselves should represent the needs and wants of the stakeholder.
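
A minimal sketch of the second and third requirements, expressing acceptance criteria as measurable checks; the criteria names and thresholds here are entirely hypothetical:

```python
# Hypothetical acceptance criteria: each entry maps a measured aspect to a
# predicate plus the human-readable criterion agreed with the stakeholder.
CRITERIA = {
    "p95_response_ms": (lambda v: v <= 500, "p95 response time <= 500 ms"),
    "error_rate":      (lambda v: v < 0.01, "error rate < 1%"),
    "signup_complete": (lambda v: v is True, "signup flow completes"),
}

def evaluate(measurements):
    """Return the list of criteria the measured behaviour fails to meet."""
    return [desc for key, (check, desc) in CRITERIA.items()
            if not check(measurements.get(key))]

failures = evaluate({"p95_response_ms": 430, "error_rate": 0.02,
                     "signup_complete": True})
print(failures)  # → ['error rate < 1%']
```

The hard part, as the paragraph above notes, is not the checking itself but making sure the measured aspects really represent the stakeholder's criteria.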

Active testing
Testing the program by triggering actions and events in the program and studying the results. To be honest I do not consider this a test type as in my opinion this describes nearly all types of testing.

Ad-hoc testing
Ad-hoc testing is software testing performed without prior planning or documentation of the direction of the test, of how to test, or of the oracles used. Some definitions see it as informal, unstructured and not reproducible. Informal and unstructured (if read as unprepared) is certainly true, as that is the point of doing it ad hoc. The not-reproducible part depends on whether you care to record the test progress and test results, something that in my opinion is not inherently attached to doing something ad hoc.

Age testing
Age testing is a technique that evaluates a system’s ability to keep performing in the future, usually carried out by test teams: as the system gets older, how significantly might its performance drop? To be honest I found only one reference to this test type, but I find the idea interesting.

Agile testing
Agile testing is mentioned often as a test type or test approach but I have added no definition or description here. In my opinion agile testing is not a software test type. Agile testing rather is a particular context in which testing is performed that may have its particular challenges on test execution, on how tests are approached and on choices of test tooling but not a specific test type. 

Alpha testing
Alpha testing is an in-house (full) integration test of the near-complete product that is executed by others than the development team, but still in a development environment. Alpha testing simulates the product’s intended use and helps catch design flaws and operational bugs. One could argue that this is more of a test level than a test type. I prefer to view it as a test type because it is more about the type of use, and its potential to discover new information, than about being a part of software development itself. I specifically disagree with the idea that this is an intermediary step towards, or part of, handing software over to a Test/QA group. In my opinion testing is integrated right from the start of development until it stops, because the product ends its lifecycle or due to some other stopping heuristic.

API testing
API testing involves testing individual or combined inputs, outputs and business rules of the API under investigation. Essentially an API is a device-independent, or component-independent, access provider that receives, interprets/transforms and sends messages so that different parts of a computer or different programs can use each other’s operations and/or information. Testing an API is similar to testing in general, albeit that an API has a smaller scope and has, or should have, specific contracts and definitions that describe the API’s variables, value ranges and (business) rules. Testing an API should however not be limited to the API alone. Sources, destinations (end-points), web services (e.g. REST, SOAP), message types (e.g. JSON, XML), message formats (e.g. SWIFT, FIX, EDI, CSV), transport (e.g. HTTP(S), JMS, MQ) and communication protocols (e.g. TCP/IP, SMTP, MQTT, TIBCO Rendezvous) all influence the overall possibilities and functionality of the API in relation to the system(s) that use it. Typically API testing is semi- or fully automated and requires sufficient knowledge of tools, message types, and transport and communication protocols to be executed well.
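
One way to sketch the contract part of API testing is to validate a response payload against the API’s stated field definitions. The contract below is entirely hypothetical; a real test would obtain the payload from an actual call and the contract from the API’s published definition:

```python
# Hypothetical contract: field name -> (expected type, value-range predicate)
CONTRACT = {
    "order_id": (str,   lambda v: len(v) > 0),
    "amount":   (float, lambda v: 0 < v <= 10_000),
    "currency": (str,   lambda v: v in {"EUR", "USD", "GBP"}),
}

def violations(payload):
    """Collect contract violations: missing fields, wrong types, bad values."""
    problems = []
    for field, (ftype, valid) in CONTRACT.items():
        if field not in payload:
            problems.append(f"missing: {field}")
        elif not isinstance(payload[field], ftype):
            problems.append(f"wrong type: {field}")
        elif not valid(payload[field]):
            problems.append(f"out of range: {field}")
    return problems

print(violations({"order_id": "A-1", "amount": 250.0, "currency": "XYZ"}))
# → ['out of range: currency']
```

A check like this covers only the message contract; the transports and protocols listed above still need their own tests.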

Regression Testing

As a follow-up in the testing definition series it was my intention to continue by covering test types. Initial investigation showed what I had already feared: such a post would become a Herculean task and probably my longest post ever. So I will continue that particular endeavour sometime later, tackling it one step at a time. For starters, this post covers one of the most common but also one of the most peculiar types of testing:

“Regression Testing”

Regression testing is so common a test type that the majority of books about software testing (and agile, for that matter) that I know mention it. Almost as common, however, is that most of them do not tell what regression testing is, do not tell how one should actually go about doing it, or both. To be fair, an exception to the latter is that quite a few, particularly the ones with an agile demeanor, tell you that regression testing is done by having automated tests; but that is hardly any more informative, is it?

Before I go further into what makes regression testing peculiar, first, in line with the previous posts, a list of regression testing definitions:

  • Checking that what has been corrected still works. (Bertrand Meyer; Seven Principles of Software Testing 2008)
  • Regression testing involves reuse of the same tests, so you can retest (with these) after change. (Cem Kaner, James Bach, Bret Pettichord; Lessons learned in Software Testing 2002)
  • Regression testing is done to make sure that a fix does what it’s supposed to do (Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software 2006)
  • Regression testing is the probably selective retesting of an application or system that has been modified to insure that no previously working components, functions, or features fail as a result of the repairs. (John E. Bentley; Software Testing Fundamentals Concepts, Roles, and Terminology 2005)
  • Retesting to detect faults introduced by modification (ISO/IEC/IEEE 24765:2010)
  • Saving test cases and running them again after changes to other components of the program (Glenford J. Myers; The art of software testing 2nd Edition 2004)
  • Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements (ISO/IEC/IEEE 24765:2010)
  • Testing following modifications to a test item or to its operational environment, to identify whether regression failures occur (ISO/IEC/IEEE 29119-1:2013)
  • Testing if what was tested before still works (Egbert Bouman; SmarTEST 2008)
  • Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed. (Standard glossary of terms used in Software Testing Version 2.2, 2012)
  • Testing required to determine that a change to a system component has not adversely affected functionality, reliability or performance and has not introduced additional defects (ISO/IEC 90003:2014)
  • Tests to make sure that the change didn’t disturb anything else. Test the overall integrity of the program. (Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software 2006)

Looking at the above definitions the general idea about regression testing seems to be:

“To ensure that except for the parts of the areas* that were intentionally changed no other parts of these areas or other areas of the software are impacted by those changes and that these still function and behave as before”.
(*Area is used here as a general expression for function, feature, component, or any other dimensional divisions of the subject under test that is used)

The peculiar thing is that, however useful and logical such a definition is, it only provides the intention of this type, or should I say activity, of testing. In practice regression testing could still encompass any other testing type.

To know what to do you first need to establish which areas are knowingly affected by the changes, and then which areas are most likely to be unknowingly affected. Next to that, there probably are areas in your software where you do not want to take the risk of them being affected by the changes. In his presentation at EuroSTAR 2005, Peter Zimmerer addresses the consequences of this in his test design poster by pointing out that the wider you throw out your net for regression effects, the larger the effort will be:

  • Parts which have been changed – 1
  • Parts which are influenced by the change – 2
  • Risky, high priority, critical parts – 3
  • Parts which are often used – 4
  • All – 5
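
Zimmerer’s widening net can be sketched as a simple selection over a tagged test inventory; the test names and category tags below are made up for illustration:

```python
# Each test is tagged with the narrowest Zimmerer category it belongs to;
# a scope level selects every test at or below that net width.
TESTS = {
    "test_price_rounding": 1,  # exercises the changed code itself
    "test_invoice_total":  2,  # influenced by the change
    "test_payment_flow":   3,  # risky, critical part
    "test_login":          4,  # often used
    "test_report_export":  5,  # everything else
}

def select(scope):
    """Return the regression tests to run for a given net width (1..5)."""
    return sorted(t for t, level in TESTS.items() if level <= scope)

print(select(2))       # only changed and directly influenced parts
print(len(select(5)))  # the full suite
```

The hard, human part is of course the tagging itself: deciding which areas fall into categories 2 and 3 is exactly the impact analysis described above.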

Once you have identified the areas you want to regression test, you still need to figure out how to test those areas for the potential impact of the change. The general idea, in theory at least, seems to be to rerun previous tests that cover these areas. As this might mean running numerous tests for lengthy periods of time, many books and articles propose running automated tests. This will, however, only work if there are automated tests covering these areas to begin with. And even if there are, you still need to evaluate the results of any failed test, and there is no clear indication of how long that may take.

How do you know that these existing tests test for the impact of the change? After all, they were not designed to do so. For all you know they might or might not fail due to changes to the area they cover, so either result could be right or wrong in light of the changes. The test itself could also be influenced by an impact of the change, positive or negative, that has not been considered or identified yet.

All in all, regression testing is easily considered necessary, not so easy to scope, difficult to evaluate for success, and considerably more work than many people think. Even so, next to writing new tests it probably is the best solution for checking whether changes bring about unwanted functionality or behaviour in your software. My suggestion is to at least change the test data, so that these existing tests have a better chance of finding new bugs.
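
That last suggestion can be sketched as property-style reuse of an existing test: regenerate the test data per run and check properties instead of hard-coded expected values. The functions and value ranges here are hypothetical:

```python
import random

def fresh_amounts(seed, n=5):
    """Regenerate numeric test data per run so a reused regression test
    probes slightly different inputs each time; pass a fixed seed when a
    failure needs to be reproduced."""
    rng = random.Random(seed)
    return [round(rng.uniform(0.01, 999.99), 2) for _ in range(n)]

def apply_discount(amount, pct):
    # Stand-in for the function under regression test.
    return round(amount * (1 - pct / 100), 2)

# The oracle is a property (never negative, never above the original), so it
# survives data changes where hard-coded expected values would not.
for amount in fresh_amounts(seed=42):
    result = apply_discount(amount, 10)
    assert 0 <= result <= amount
```

Property oracles are weaker than exact expected values, so this trades precision for a better chance of stumbling onto inputs the original test never tried.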

Test Levels! Really?!

Next in the series of software terminology lists is “Test Levels”. But there is something strange about test levels. Up until now almost every tester I have worked with is familiar with the concept of software test levels. But I wonder if they really are. What some call a test level, say Unit Testing, I would call a test type. With a level like Component Testing, however, I am not so sure: it seems only one level up from Unit Testing, yet now I am inclined to see it more as a test level. In my experience I am not alone in this confusion.

Sogeti’s brand TMap was one of the main contributors to establishing the concept of test levels (at least in the Netherlands). But since last year Sogeti acknowledges the confusion: in their article “Test Levels? Test Types? Test Varieties!” they propose renaming them test varieties. Even ISTQB and ISO do not mention test levels (or test phases, if you like) explicitly.

But test levels are a term with some historic relevance, and as such they are part of my series of software testing lists, even though nowadays I never use them.

Acceptance Testing

  • Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • A formal test conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. (Cunningham & Cunningham, Inc.; http://c2.com/cgi/wiki?AcceptanceTest)
  • Acceptance testing is the process of comparing the program to its initial requirements and the current needs of its end users. (G. Meyers, The art of software testing (2nd edition) [2004])

Chain Test

  • A chain test tests the interaction of the system with the interfacing systems. (Derk-Jan de Grood; Test Goal, 2008)

Claim Testing

  • The object of a claim test is to evaluate whether a product lives up to its advertising claims. (Derk-Jan de Grood; Test Goal, 2008)

Component Testing

  • The testing of individual software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Function Testing

  • Function testing is a process of attempting to find discrepancies between the program and the external specification. An external specification is a precise description of the program’s behavior from the point of view of the end user. (G. Meyers, The art of software testing (2nd edition) [2004])

Functional Acceptance Test

  • The functional acceptance test is carried out by the accepter to demonstrate that the delivered system meets the required functionality. The functional acceptance test tests the functionality against the system requirements and the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • The functional acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the functional requirements. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

Hardware-software Integration Testing

  • Testing performed to expose defects in the interfaces and interaction between hardware and software components. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Integration Testing

  • Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)

Module Test

  • Module tests focus on the elementary building blocks in the code. They demonstrate that the modules meet the technical design. (Derk-Jan de Grood; Test Goal, 2008)
  • Module testing (or unit testing) is a process of testing the individual subprograms, subroutines, or procedures in a program. (G. Meyers, The art of software testing (2nd edition) [2004])

Module Integration Test

  • Module integration tests focus on the integration of two or more modules. (Derk-Jan de Grood; Test Goal, 2008)

Pilot

  • The pilot simulates live operations in a safe environment so that the live environment is not disrupted if the pilot fails.

Production Acceptance Test

  • The system owner uses the PAT to determine that the system is ready to go live and can go into maintenance. (Derk-Jan de Grood; Test Goal, 2008)
  • The production acceptance test is a test carried out by the future administrator(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements set by system management. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Test / System Testing

  • Testing an integrated system to verify that it meets specified requirements. (ISTQB – Standard Glossary of Terms Used in Software Testing Version 3.01)
  • The system test demonstrates that the system works according to the functional design. (Derk-Jan de Grood; Test Goal, 2008)
  • System testing is not limited to systems. If the product is a program, system testing is the process of attempting to demonstrate how the program, as a whole, does not meet its objectives. (G. Meyers, The art of software testing (2nd edition) [2004])
  • System testing, by definition, is impossible if there is no set of written, measurable objectives for the product. (G. Meyers, The art of software testing (2nd edition) [2004])
  • A system test is a test carried out by the supplier in a (manageable) laboratory environment, with the aim of demonstrating that the developed system, or parts of it, meet with the functional and non-functional specifications and the technical design. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

System Integration Test

  • A system integration test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that (sub)system interface agreements have been met, correctly interpreted and correctly implemented. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006) 

Unit Test

  • A unit test is a test carried out in the development environment by the developer, with the aim of demonstrating that a unit meets the requirements defined in the technical specifications (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

Unit Integration Test

  • A unit integration test is a test carried out by the developer in the development environment, with the aim of demonstrating that a logical group of units meets the requirements defined in the technical specifications (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

User Acceptance Test

  • The user acceptance test is primarily a validation test to ensure the system is “fit for purpose”. The test checks whether the users can use the system, how usable the system is and how the system integrates with the workflow and processes. (Derk-Jan de Grood; Test Goal, 2008)
  • The user acceptance test is a test carried out by the future user(s) in an optimally simulated production environment, with the aim of demonstrating that the developed system meets the requirements of the users. (TMap NEXT; Michiel Vroon, Tim Koomen, Leo van der Aalst, Bart Broekman, 2006)

A collection of quality characteristics

Following the earlier posts listing software testing and bug definitions, this post also has a (very large) listing. This time it is a list of quality characteristics. Like the earlier posts, this list of common definitions reflects views on software testing. But unlike the earlier posts, every item is in itself also a different way to look at your software and its use, and a way to structure the way you test. Therein lies a challenge for you as a reader.

Would you be able to create a test idea for each and every one of them?
Or at least for those that matter for your software?

Probably not, but go ahead and try anyway,
and if you really can’t come up with a test idea, try to think why not.
Is it not applicable to your application?
Is it not applicable to your context?
Do you not know how?

Would that mean you might miss some valuable information about the software?

A lot of them may not apply directly to your current context, but it is good to browse through them and pick the ones that are useful now, then revisit them later to re-evaluate your choices and pick the (different) ones that apply then.

Accessibility

  • Usability of a product, service, environment or facility by people with the widest range of capabilities (ISO/IEC 25062:2006) (ISO/IEC 26514:2008)
  • Degree to which a product or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use (ISO/IEC 25010:2011)
  • Extent to which products, systems, services, environments and facilities can be used by people from a population with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use (ISO/IEC 25064:201) Note: [ISO 9241-171:2008] Although “accessibility” typically addresses users who have disabilities, the concept is not limited to disability issues. The range of capabilities includes disabilities associated with age. Accessibility for people with disabilities can be specified or measured either as the extent to which a product or system can be used by users with specified disabilities to achieve specified goals with effectiveness, efficiency, freedom from risk and satisfaction in a specified context of use, or by the presence of product properties that support accessibility [ISO 25063:2014] Context of use includes direct use or use supported by assistive technologies.
  • Capable of being reached, capable of being used or seen. (IAIDQ – Martin Eppler)
  • The characteristic of being able to access data when it is required. (IAIDQ – Larry P. English)

Accountability

  • Degree to which the actions of an entity can be traced uniquely to the entity (ISO/IEC 25010:2011)

Accuracy

  • Degree of conformity of a measure to a standard or a true value. Level of precision or detail. (IAIDQ – Martin Eppler)
  • The capability of the software product to provide the right or agreed results or effects with the needed degree of precision (ISTQB Glossary 2015)

Accuracy to reality

  • A characteristic of information quality measuring the degree to which a data value (or set of data values) correctly represents the attributes of the real-world object or event. (IAIDQ – Larry P. English)

Accuracy to surrogate source

  • A measure of the degree to which data agrees with an original, acknowledged authoritative source of data about a real world object or event, such as a form, document, or unaltered electronic data received from outside the organisation. See also Accuracy. (IAIDQ – Larry P. English)

Activation

  • A term that designates activities that make information more applicable and current, and its delivery and use more interactive and faster; a process that increases the usefulness of information by making it more vivid and organising it in a way that it can be used directly without further repackaging. (IAIDQ – Martin Eppler)

Adaptability

  • Degree to which a product or system can effectively and efficiently be adapted for different or evolving hardware, software or other operational or usage environments (ISO/IEC 25010:2011) Note: Adaptability includes the scalability of internal capacity, such as screen fields, tables, transaction volumes, and report formats. Adaptations include those carried out by specialized support staff, business or operational staff, or end users. If the system is to be adapted by the end user, adaptability corresponds to suitability for individualization as defined in ISO 9241-110. See also: flexibility
  • The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered (ISTQB Glossary 2015)

Analyzability

  • Degree of effectiveness and efficiency with which it is possible to assess the impact on a product or system of an intended change to one or more of its parts, or to diagnose a product for deficiencies or causes of failures, or to identify parts to be modified (ISO/IEC 25010:2011) Note: Implementation can include providing mechanisms for the product or system to analyze its own faults and provide reports before or after a failure or other event. Syn: analysability See also: modifiability
  • The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified (ISTQB Glossary 2015)

Applicability

  • The characteristic of information to be directly useful for a given context, information that is organised for action. (IAIDQ – Martin Eppler)

Appropriateness recognizability

  • Degree to which users can recognize whether a product or system is appropriate for their needs (ISO/IEC 25010:2011)

Attractiveness

  • The capability of the software product to be attractive to the user (ISTQB Glossary 2015)

Authenticity

  • Degree to which the identity of a subject or resource can be proved to be the one claimed (ISO/IEC 25010:2011)

Availability

  • Ability of a service or service component to perform its required function at an agreed instant or over an agreed period of time (ISO/IEC/IEEE 24765c:2014)
  • The degree to which a system or component is operational and accessible when required for use (ISO/IEC 25010:2011) Note: Availability is normally expressed as a ratio or percentage of the time that the service or service component is actually available for use by the customer to the agreed time that the service should be available. Availability is a combination of maturity (which reflects the frequency of failure), fault tolerance and recoverability (which reflect the length of downtime following each failure). See also: error tolerance, fault tolerance, reliability, robustness
  • A percentage measure of the reliability of a system indicating the percentage of time the system or data is accessible or usable, compared to the amount of time the system or data should be accessible or usable. (IAIDQ – Larry P. English)
  • The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. (ISTQB Glossary 2015)
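
The ratio described in the ISO note above can be computed directly; the figures below are illustrative:

```python
def availability(agreed_minutes, downtime_minutes):
    """Availability as the percentage of agreed service time actually delivered."""
    return round(100 * (agreed_minutes - downtime_minutes) / agreed_minutes, 3)

# 43 minutes of downtime in a 30-day month of agreed 24/7 service:
print(availability(30 * 24 * 60, 43))  # ≈ 99.9 ("three nines")
```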

Benchmark

  • Standard against which results can be measured or assessed (ISO/IEC 25010:2011)
  • Procedure, problem, or test that can be used to compare systems or components to each other or to a standard (ISO/IEC/IEEE 24765:2010 Systems and software engineering–Vocabulary)
  • Reference point against which comparisons can be made (ISO/IEC 29155-1:2011)

Capability

  • Can the product perform valuable functions?
    • Completeness: all important functions wanted by end users are available.
    • Accuracy: any output or calculation in the product is correct and presented with significant digits.
    • Efficiency: performs its actions in an efficient manner (without doing what it’s not supposed to do.)
    • Interoperability: different features interact with each other in the best way.
    • Concurrency: ability to perform multiple parallel tasks, and run at the same time as other processes.
    • Data agnosticism: supports all possible data formats, and handles noise.
    • Extensibility: ability for customers or 3rd parties to add features or change behavior.

Capacity

  • Degree to which the maximum limits of a product or system parameter meet requirements (ISO/IEC 25010:2011) Note: Parameters can include the number of items that can be stored, the number of concurrent users, the communication bandwidth, throughput of transactions, and size of database.

Changeability

  • The capability of the software product to enable specified modifications to be implemented. (ISTQB Glossary 2015)

Charisma

  • Does the product have “it”?
    • Uniqueness: the product is distinguishable and has something no one else has.
    • Satisfaction: how do you feel after using the product?
    • Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
    • Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
    • Curiosity: will users get interested and try out what they can do with the product?
    • Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
    • Hype: should the product use the latest and greatest technologies/ideas?
    • Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
    • Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
    • Directness: are (first) impressions impressive?
    • Story: are there compelling stories about the product’s inception, construction or usage? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Clarity

  • Void of obscure language or expression, ease of understanding, interpretability. (IAIDQ – Martin Eppler)

Co-existence

  • Degree to which a product can perform its required functions efficiently while sharing a common environment and resources with other products, without detrimental impact on any other product (ISO/IEC 25010:2011) Syn: coexistence

Comfort

  • Degree to which the user is satisfied with physical comfort (ISO/IEC 25010:2011)

Compatibility

  • Degree to which a product, system or component can exchange information with other products, systems or components, or perform its required functions, while sharing the same hardware or software environment (ISO/IEC 25010:2011)
  • The ability of two or more systems or components to exchange information (ISO/IEC/IEEE 24765:2010)
  • The capability of a functional unit to meet the requirements of a specified interface without appreciable modification (ISO/IEC 2382-1:1993)
  • How well does the product interact with software and environments?
    • Hardware Compatibility: the product can be used with applicable configurations of hardware components.
    • Operating System Compatibility: the product can run on intended operating system versions, and follows typical behavior.
    • Application Compatibility: the product, and its data, works with other applications customers are likely to use.
    • Configuration Compatibility: product’s ability to blend in with configurations of the environment.
    • Backward Compatibility: can the product do everything the last version could?
    • Forward Compatibility: will the product be able to use artifacts or interfaces of future versions?
    • Sustainability: effects on the environment, e.g. energy efficiency, switch-offs, power-saving modes, telecommuting.
    • Standards Conformance: the product conforms to applicable standards, regulations, laws or ethics. (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Completeness

  • A characteristic of information quality measuring the degree to which all required data is known.
    • Fact completeness is a measure of data definition quality expressed as a percentage of the attributes about an entity type that need to be known to assure that they are defined in the model and implemented in a database. For example, “80 percent of the attributes required to be known about customers have fields in a database to store the attribute values.”
    • Value completeness is a measure of data content quality expressed as the percentage of the columns or fields of a table or file that should have values in them and in fact do so. For example, “95 percent of the columns for the customer table have a value in them.” Also referred to as Coverage.
    • Occurrence completeness is a measure of the percent of records in an information collection that it should have to represent all occurrences of the real world objects it should know. For example, does a Department of Corrections have a record for each Offender it is responsible to know about? (IQ). (IAIDQ – Larry P. English)
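
English’s value-completeness percentage is simple enough to sketch in code. The snippet below is my own illustration, not from any standard; the customer records and the `phone` field are made up.

```python
def value_completeness(rows, column):
    """Percentage of rows whose given column actually holds a value --
    a sketch of Larry P. English's 'value completeness' measure."""
    if not rows:
        return 0.0
    filled = sum(1 for row in rows if row.get(column) not in (None, ""))
    return 100.0 * filled / len(rows)

# Illustrative customer records; two of the four lack a phone number.
customers = [
    {"name": "A", "phone": "555-0100"},
    {"name": "B", "phone": ""},
    {"name": "C", "phone": "555-0102"},
    {"name": "D", "phone": None},
]
print(value_completeness(customers, "phone"))  # 50.0
```

A real measurement would of course run against the database itself, but the calculation is the same: fields that should have values and in fact do, over all fields that should.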

Complexity

  • The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. (ISTQB Glossary 2015)

Compliance

  • The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. (ISTQB Glossary 2015)

Component

  • An entity with discrete structure, such as an assembly or software module, within a system considered at a particular level of analysis (ISO/IEC 19770-2:2009)
  • One of the parts that make up a system (IEEE 1012-2012)(IEEE 829)
  • Object that encapsulates its own template, so that the template can be interrogated by interaction with the component (ISO/IEC 10746-2:2009)
  • Specific, named collection of features that can be described by an IDL component definition or a corresponding structure in an interface repository (ISO/IEC 19500-3:2012)
  • Functionally or logically distinct part of a system (ISO/IEC 19506:2012) Note: A component may be hardware or software and may be subdivided into other components. Component refers to a part of a whole, such as a component of a software product or a component of a software identification tag. The terms module, component, and unit are often used interchangeably or defined to be subelements of one another in different ways depending upon the context. The relationship of these terms is not yet standardized. A component may or may not be independently managed from the end-user or administrator’s point of view.

Comprehensiveness

  • The quality of information to cover a topic to a degree or scope that is satisfactory to the information user. (IAIDQ – Martin Eppler)

Conciseness

  • Marked by brevity of expression or statement, free from all elaboration and superfluous detail. (IAIDQ – Martin Eppler)

Concurrency

  • A characteristic of information quality measuring the degree to which the timing of equivalence of data is stored in redundant or distributed database files. The measure data concurrency may describe the minimum, maximum, and average information float time from when data is available in one data source and when it becomes available in another data source. Or it may consist of the relative percent of data from a data source that is propagated to the target within a specified time frame. (IAIDQ – Larry P. English)
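
The “information float time” English mentions can be probed with a few timestamps. This is an illustrative sketch only: the record keys and the two in-memory “stores” stand in for a real source and target database.

```python
from datetime import datetime, timedelta

def float_times(source_times, target_times):
    """Information float per record: the delay between a record becoming
    available in the source and in the replicated target.
    Returns (min, max, average) as timedeltas."""
    deltas = [target_times[k] - source_times[k] for k in source_times]
    avg = sum(deltas, timedelta()) / len(deltas)
    return min(deltas), max(deltas), avg

# Illustrative availability timestamps for two records.
src = {"r1": datetime(2024, 1, 1, 12, 0), "r2": datetime(2024, 1, 1, 12, 5)}
tgt = {"r1": datetime(2024, 1, 1, 12, 2), "r2": datetime(2024, 1, 1, 12, 11)}
lo, hi, avg = float_times(src, tgt)
print(lo, hi, avg)  # 0:02:00 0:06:00 0:04:00
```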

Confidentiality

  • Degree to which a product or system ensures that data are accessible only to those authorized to have access (ISO/IEC 25010:2011)

Connectivity

  • The ease with which a link with a different information system or within the information system can be made and modified. (TMap Next)

Consistency

  • A measure of information quality expressed as the degree to which a set of data is equivalent in redundant or distributed databases. (IAIDQ – Larry P. English)
  • The condition of adhering together, the ability to be asserted together without contradiction. (IAIDQ – Martin Eppler)
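
English’s consistency measure (“the degree to which a set of data is equivalent in redundant or distributed databases”) can be sketched as a percentage of agreeing values. The dictionaries below are stand-ins for a master store and a redundant copy.

```python
def consistency(master, replica):
    """Percentage of shared keys whose values agree between a master
    store and a redundant copy -- a sketch of English's measure."""
    shared = set(master) & set(replica)
    if not shared:
        return 100.0
    equal = sum(1 for k in shared if master[k] == replica[k])
    return 100.0 * equal / len(shared)

master = {"c1": "Amsterdam", "c2": "Utrecht", "c3": "Delft"}
replica = {"c1": "Amsterdam", "c2": "Rotterdam", "c3": "Delft"}
print(consistency(master, replica))  # two of three values agree
```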

Context completeness

  • Degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in all the specified contexts of use (ISO/IEC 25010:2011) Note: Context completeness can be specified or measured either as the degree to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, freedom from risk and satisfaction in all the intended contexts of use, or by the presence of product properties that support use in all the intended contexts of use.

Context coverage

  • Degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in both specified contexts of use and in contexts beyond those initially explicitly identified (ISO/IEC 25010:2011) Note: Context of use is relevant to both quality in use and some product quality (sub) characteristics.

Continuity

  • The certainty that data processing will continue uninterruptedly, which means that it can be resumed within a reasonable period of time even after serious interruptions. (TMap Next)

Controllability

  • The ease with which the correctness and completeness of the information (in the course of time) can be checked. (TMap Next)

Convenience

  • The ease-of-use or seamlessness by which information is acquired. (IAIDQ – Martin Eppler)

Correctness

  • The functionality matches the specification. (McCall, 1977)
  • Conforming to an approved or conventional standard, conforming to or agreeing with fact, logic, or known truth. (IAIDQ – Martin Eppler)

Currency

  • A characteristic of information quality measuring the degree to which data represents reality from the required point in time. For example, one information view may require data currency to be the most up-to-date point, such as stock prices for stock trades, while another may require data to be the last stock price of the day, for stock price running average. (IAIDQ – Larry P. English)
  • The quality or state of information of being up-to-date or not outdated. (IAIDQ – Martin Eppler)

Data deficiency

  • An unconformity between the view of the real-world system that can be inferred from a representing information system and the view that can be obtained by directly observing the real-world system. (IAIDQ – Martin Eppler)

Database integrity

  • The characteristic of data in a database in which the data conforms to the physical integrity constraints, such as referential integrity and primary key uniqueness, and is able to be secured and recovered in the event of an application, software, or hardware failure. Database integrity does not imply data accuracy or other information quality characteristics not able to be provided by the DBMS functions. (IAIDQ – Larry P. English)
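
Referential integrity, one of the physical constraints mentioned above, is easy to illustrate outside a DBMS. The table and column names below are made up; a real database would enforce this with a foreign-key constraint rather than application code.

```python
def referential_violations(orders, customers):
    """Order rows whose customer_id has no matching customer -- a minimal
    referential-integrity check of the kind a DBMS normally enforces."""
    known = {c["id"] for c in customers}
    return [o for o in orders if o["customer_id"] not in known]

customers = [{"id": 1}, {"id": 2}]
orders = [{"order_id": 10, "customer_id": 1},
          {"order_id": 11, "customer_id": 3}]  # dangling reference
print(referential_violations(orders, customers))
```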

Degradation possibilities

  • The ease with which the core of the information system can continue after a part has failed. (TMap Next)

Ease-of-use

  • The quality of an information environment to facilitate the access and manipulation of information in a way that is intuitive. (IAIDQ – Martin Eppler)

Economic risk mitigation

  • Degree to which a product or system mitigates the potential risk to financial status, efficient operation, commercial property, reputation, or other resources in the intended contexts of use (ISO/IEC 25010:2011)

Effectiveness

  • The capability of producing an intended result. (ISTQB Glossary 2015)

Efficiency

  • System resource (including cpu, disk, memory, network) usage. (McCall, 1977)
  • Optimum use of system resources during correct execution. (Boehm, 1978)
  • A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
    • Time behavior; response times for a given throughput, i.e. transaction rate.
    • Resource behavior; resources used, i.e. memory, cpu, disk and network usage. (ISO-9126)
  • The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions. (ISTQB Glossary 2015)

Entity integrity

  • The assurance that a primary key value will identify no more than one occurrence of an entity type, and that no attribute of the primary key may contain a null value. Based on this premise, the real-world entities are uniquely distinguishable from all other entities. (IAIDQ – Larry P. English)
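
Entity integrity as defined above amounts to two checks: no duplicate primary-key values, and no nulls in any key attribute. A sketch, with made-up rows and column names:

```python
def entity_integrity_ok(rows, key_columns):
    """True if every primary-key value is unique and contains no nulls,
    per English's definition of entity integrity."""
    seen = set()
    for row in rows:
        key = tuple(row.get(c) for c in key_columns)
        if any(part is None for part in key) or key in seen:
            return False
        seen.add(key)
    return True

rows = [{"id": 1, "name": "A"}, {"id": 2, "name": "B"}]
print(entity_integrity_ok(rows, ["id"]))                      # True
print(entity_integrity_ok(rows + [{"id": 2}], ["id"]))        # False: duplicate key
print(entity_integrity_ok([{"id": None}], ["id"]))            # False: null in key
```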

Environmental risk mitigation

  • Degree to which a product or system mitigates the potential risk to property or the environment in the intended contexts of use (ISO/IEC 25010:2011)

External measure of software quality

  • Measure of the degree to which a software product enables the behavior of a system under specified conditions to satisfy stated and implied needs for the system (ISO/IEC 25010:2011) Note: Attributes of the behavior can be verified or validated by executing the software product during testing and operation. See also: external software quality, internal measure of software quality

Extensibility

  • The ability to dynamically augment a database (or data dictionary) schema with knowledge worker-defined data types. This includes addition of new data types and class definitions for representation and manipulation of unconventional data such as text data, audio data, image data, and data associated with artificial intelligence applications. (IAIDQ – Larry P. English)

Fault tolerance

  • The ability of a system or component to continue normal operation despite the presence of hardware or software faults (ISO/IEC 25010:2011)
  • Pertaining to the study of errors, faults, and failures, and of methods for enabling systems to continue normal operation in the presence of faults (ISO/IEC/IEEE 24765:2010) See also: error tolerance, fail safe, fail soft, fault secure, robustness
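
One common shape of fault tolerance in code is retry-then-degrade: attempt the normal operation a few times and fall back to a degraded result rather than failing outright. This is a generic sketch of mine, not taken from any of the standards quoted above.

```python
def call_with_fallback(operation, fallback, retries=3):
    """Continue normal operation despite faults: retry a failing call a
    few times, then degrade gracefully to a fallback result."""
    for _ in range(retries):
        try:
            return operation()
        except Exception:
            continue
    return fallback()

attempts = {"n": 0}
def flaky():
    """Stand-in for a real operation that fails twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient fault")
    return "live data"

result = call_with_fallback(flaky, lambda: "cached data")
print(result)  # 'live data' after two transient faults
```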

Flexibility

  • The ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed (ISO/IEC/IEEE 24765:2010)
  • Degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in contexts beyond those initially specified in the requirements (ISO/IEC 25010:2011 ) Note: Flexibility enables products to take account of circumstances, opportunities and individual preferences that had not been anticipated in advance. If a product is not designed for flexibility, it might not be safe to use the product in unintended contexts. Flexibility can be measured either as the extent to which a product can be used by additional types of users to achieve additional types of goals with effectiveness, efficiency, freedom from risk and satisfaction in additional types of contexts of use, or by a capability to be modified to support adaptation for new types of users, tasks and environments, and suitability for individualization. See also: adaptability, extendibility, maintainability
  • The ability to make changes required as dictated by the business. (McCall, 1977)
  • The ease of changing the software to meet revised requirements. (Boehm 1978)
  • A characteristic of information quality measuring the degree to which the information architecture or database is able to support organisational or process reengineering changes with minimal modification of the existing objects and relationships, only adding new objects and relationships. (IAIDQ – Larry P. English)
  • The degree to which the user may introduce extensions or modifications to the information system without changing the software itself. (TMap Next)

Freedom from risk

  • Degree to which a product or system mitigates the potential risk to economic status, human life, health, or the environment (ISO/IEC 25010:2011)

Functional appropriateness

  • Degree to which the functions facilitate the accomplishment of specified tasks and objectives (ISO/IEC 25010:2011) Note: Functional appropriateness corresponds to suitability for the task.

Functional completeness

  • Degree to which the set of functions covers all the specified tasks and user objectives (ISO/IEC 25010:2011)

Functional correctness

  • Degree to which a product or system provides the correct results with the needed degree of precision (ISO/IEC 25010:2011)

Functional suitability

  • Degree to which a product or system provides functions that meet stated and implied needs when used under specified conditions (ISO/IEC 25010:2011) Note: Functional Suitability is only concerned with whether the functions meet stated and implied needs, not the functional specification.

Functionality

  • A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
    • Suitability; the appropriateness (to specification) of the functions of the software.
    • Accuracy; the correctness of the functions.
    • Interoperability; the ability of a software component to interact with other components or systems.
    • Compliance; the compliant capability of software.
    • Security; prevention of unauthorized access to the software functions. (ISO-9126)
  • Functionality
    • The value added purpose of the product. Also…
    • Connectivity – protocols (e.g. Bluetooth), or re-sync of offline clients
    • Interoperability – inter-app platform and language independence
    • Extensibility, Expandability – plugins, late binding
    • Composability – service or message oriented considerations, governance
    • Manageability – administration of fielded product
    • Licensing (FURPS+)

Health and safety risk mitigation

  • Degree to which a product or system mitigates the potential risk to people in the intended contexts of use (ISO/IEC 25010:2011)

Immunity

  • Degree to which a product or system is resistant to attack (ISO/IEC 25010:2011) See also: integrity

Indirect user

  • Person who receives output from a system, but does not interact with the system (ISO/IEC 25010:2011) See also: direct user, secondary user

Information quality

  • Consistently meeting all knowledge worker and end-customer expectations in all quality characteristics of the information products and services required to accomplish the enterprise mission (internal knowledge worker) or personal objectives (end customer). (IAIDQ – Larry P. English)
  • The degree to which information consistently meets the requirements and expectations of all knowledge workers who require it to perform their processes. (IAIDQ – Larry P. English)
  • The fitness for use of information; information that meets the requirements of its authors, users, and administrators. (IAIDQ – Martin Eppler)

Installability

  • Degree of effectiveness and efficiency with which a product or system can be successfully installed or uninstalled in a specified environment (ISO/IEC 25010:2011)
  • The capability of the software product to be installed in a specified environment. (ISTQB Glossary 2015)

Integrity

  • Degree to which a system or component prevents unauthorized access to, or modification of, computer programs or data (ISO/IEC 25010:2011) See also: immunity
  • Protection from unauthorized access. (McCall, 1977)

Interactivity

  • The capacity of an information system to react to the inputs of information consumers, to generate instant, tailored responses to a user’s actions or inquiries. (IAIDQ – Martin Eppler)

Interpretation

  • The process of assigning meaning to a constructed representation of an object or event. (IAIDQ – Martin Eppler)

Internal measure of software quality

  • Measure of the degree to which a set of static attributes of a software product satisfies stated and implied needs for the software product to be used under specified conditions (ISO/IEC 25000:2014) (ISO/IEC 25010:2011) Note: Static attributes include those that relate to the software architecture, structure and its components. Static attributes can be verified by review, inspection, simulation, or automated tools. See also: external measure of software quality

Interoperability

  • Degree to which two or more systems, products or components can exchange information and use the information that has been exchanged (ISO/IEC 25010:2011)
  • The ability for two or more ORBs to cooperate to deliver requests to the proper object (ISO/IEC 19500-2:2012)
  • The capability to communicate, execute programs, and transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units. (ISO/IEC 2382-1:1993)
  • Capability of objects to collaborate, that is, the capability mutually to communicate information in order to exchange events, proposals, requests, results, commitments and flows (ISO/IEC 10746-2:2009) Note: Interoperability is used in place of compatibility in order to avoid possible ambiguity with replaceability. See also: compatibility
  • The extent, or ease, to which software components work together. (McCall, 1977)
  • The capability of the software product to interact with one or more specified components or systems. (ISTQB Glossary 2015)

IT-bility

  • Is the product easy to install, maintain and support?
    • System requirements: ability to run on supported configurations, and handle different environments or missing components.
    • Installability: product can be installed on intended platforms with appropriate footprint.
    • Upgrades: ease of upgrading to a newer version without loss of configuration and settings.
    • Uninstallation: are all files (except user’s or system files) and other resources removed when uninstalling?
    • Configuration: can the installation be configured in various ways or places to support customer’s usage?
    • Deployability: product can be rolled-out by IT department to different types of (restricted) users and environments.
    • Maintainability: are the product and its artifacts easy to maintain and support for customers?
    • Testability: how effectively can the deployed product be tested by the customer? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Learnability

  • Degree to which a product or system can be used by specified users to achieve specified goals of learning to use the product or system with effectiveness, efficiency, freedom from risk and satisfaction in a specified context of use (ISO/IEC 25010:2011) Note: Can be specified or measured either as the extent to which a product or system can be used by specified users to achieve specified goals of learning to use the product or system with effectiveness, efficiency, freedom from risk and satisfaction in a specified context of use, or by product properties corresponding to suitability for learning as defined in ISO 9241-110.
  • The quality of information to be easily transformed into knowledge. (IAIDQ – Martin Eppler)
  • The capability of the software product to enable the user to learn its application. (ISTQB Glossary 2015)

Maintainability

  • Ease with which a software system or component can be modified to change or add capabilities, correct faults or defects, improve performance or other attributes, or adapt to a changed environment (ISO/IEC/IEEE 24765:2010)
  • Ease with which a hardware system or component can be retained in, or restored to, a state in which it can perform its required functions (ISO/IEC/IEEE 24765:2010)
  • Capability of the software product to be modified (IEEE 14764-2006)
  • Average effort required to locate and fix a software failure (ISO/IEC/IEEE 24765:2010)
  • Speed and ease with which a program can be corrected or changed (IEEE 982.1-2005)
  • Degree of effectiveness and efficiency with which a product or system can be modified by the intended maintainers (ISO/IEC 25010:2011) Note: Maintainability includes installation of updates and upgrades. Modifications may include corrections, improvements or adaptation of the software to changes in environment, and in requirements and functional specifications. Modifications include those carried out by specialized support staff, and those carried out by business or operational staff, or end users. See also: extendability, flexibility
  • Can the product be maintained and extended at low cost?
    • Flexibility: the ability to change the product as required by customers.
    • Extensibility: will it be easy to add features in the future?
    • Simplicity: the code is not more complex than needed, and does not obscure test design, execution and evaluation.
    • Readability: the code is adequately documented and easy to read and understand.
    • Transparency: Is it easy to understand the underlying structures?
    • Modularity: the code is split into manageable pieces.
    • Refactorability: are you satisfied with the unit tests?
    • Analyzability: ability to find causes for defects or other code of interest.
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The ability to find and fix a defect (McCall, 1977)
  • The characteristic of an information environment to be manageable at reasonable costs in terms of content volume, frequency, quality, and infrastructure. If a system is maintainable, information can be added, deleted, or changed efficiently. (IAIDQ – Martin Eppler)
  • A set of attributes that bear on the effort needed to make specified modifications.
    • Analyzability; the ability to identify the root cause of a failure within the software.
    • Changeability; the sensitivity to change of a given system that is the negative impact that may be caused by system changes.
    • Testability; the effort needed to verify (test) a system change. (ISO-9126)
  • The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. (ISTQB Glossary 2015)
  • The ease of adapting the information system to new demands from the user, to changing external environments, or in order to correct defects. (TMap Next)

Manageability

  • The ease with which to get and keep the information system in its operational state. (TMap Next)

Maturity

  • Degree to which a system, product or component meets needs for reliability under normal operation (ISO/IEC 25010:2011) Note: The concept of maturity can be applied to quality characteristics to indicate the degree to which they meet required needs under normal operation.
  • The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. (ISTQB Glossary 2015)
  • The capability of the software product to avoid failure as a result of defects in the software. (ISTQB Glossary 2015)

Modifiability

  • Ease with which a system can be changed without introducing defects (ISO/IEC/IEEE 24765:2010)
  • Degree to which a product or system can be effectively and efficiently modified without introducing defects or degrading existing product quality (ISO/IEC 25010:2011) See also: analyzability, maintainability, and modularity

Modularity

  • Degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components (ISO/IEC 25010:2011)
  • Software attributes that provide a structure of highly independent components (ISO/IEC/IEEE 24765:2010) See also: cohesion, coupling, and modifiability

Non-repudiation

  • Degree to which actions or events can be proven to have taken place, so that the events or actions cannot be repudiated later (ISO/IEC 25010:2011)
  • The ability to provide proof of transmission and receipt of electronic communication. (IAIDQ – Larry P. English)

Operability

  • Degree to which a product or system has attributes that make it easy to operate and control (ISO/IEC 25010:2011) Note: Operability corresponds to controllability, (operator) error tolerance, and conformity with user expectations as defined in ISO 9241-110.

Operational reliability

  • The degree to which the information system remains free from interruptions. (TMap Next)

Performance

  • Is the product fast enough?
    • Capacity: the many limits of the product, for different circumstances (e.g. slow network.)
    • Resource Utilization: appropriate usage of memory, storage and other resources.
    • Responsiveness: the speed of which an action is (perceived as) performed.
    • Availability: the system is available for use when it should be.
    • Throughput: the product’s ability to process many, many things.
    • Endurance: can the product handle load for a long time?
    • Feedback: is the feedback from the system on user actions appropriate?
    • Scalability: how well does the product scale up, out or down? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. (ISTQB Glossary 2015)
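
The “Responsiveness” item above is directly measurable with a wall-clock probe. The sketch below is illustrative only: `fake_action` stands in for a real product operation, and a serious measurement would use many more samples and report percentiles, not just the median.

```python
import time

def responsiveness_ms(action, samples=5):
    """Median wall-clock time for an action, in milliseconds --
    a rough probe for perceived responsiveness."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        action()
        timings.append((time.perf_counter() - start) * 1000.0)
    return sorted(timings)[len(timings) // 2]

def fake_action():
    time.sleep(0.01)  # simulate an operation that takes about 10 ms

print(f"median: {responsiveness_ms(fake_action):.1f} ms")
```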

Performance efficiency

  • Performance relative to the amount of resources used under stated conditions
    (ISO/IEC 25010:2011) Note: Resources can include other software products, the software and hardware configuration of the system, and materials (e.g. print paper, storage media).

Pleasure

  • Degree to which a user obtains pleasure from fulfilling personal needs (ISO/IEC 25010:2011) Note: Personal needs can include needs to acquire new knowledge and skills, to communicate personal identity and to provoke pleasant memories.

Portability

  • Ease with which a system or component can be transferred from one hardware or software environment to another (ISO/IEC/IEEE 24765:2010)
  • Capability of a program to be executed on various types of data processing systems without converting the program to a different language and with little or no modification (ISO/IEC 2382-1:1993)
  • Degree of effectiveness and efficiency with which a system, product, or component can be transferred from one hardware, software or other operational or usage environment to another (ISO/IEC 25010:2011)
  • Property that the reference points of an object allow it to be adapted to a variety of configurations (ISO/IEC 10746-2:2009) Syn: transportability See also: machine-independent
  • Is transferring of the product to different environments enabled?
    • Reusability: can parts of the product be re-used elsewhere?
    • Adaptability: is it easy to change the product to support a different environment?
    • Compatibility: does the product comply with common interfaces or official standards?
    • Internationalization: it is easy to translate the product.
    • Localization: are all parts of the product adjusted to meet the needs of the targeted culture/country?
    • User Interface-robustness: will the product look equally good when translated?
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The ability to transfer the software from one environment to another. (McCall, 1977)
  • The extent to which the software will work under different computer configurations (i.e. operating systems, databases etc.). (Boehm, 1978)
  • A set of attributes that bear on the ability of software to be transferred from one environment to another.
    • Adaptability; the ability of the system to change to new specifications or operating environments.
    • Installability; the effort required to install the software.
    • Conformance
    • Replaceability; how easy is it to exchange a given software component within a specified environment. (ISO-9126)
  • The ease with which the software product can be transferred from one hardware or software environment to another. (ISTQB Glossary 2015)
  • The diversity of the hardware and software platforms on which the information system can run, and how easy it is to transfer the system from one environment to another. (TMap Next)

Possibility of diversion

  • The ease with which (part of) the information system can continue elsewhere. (TMap Next)

Quality

  • Degree to which a system, component, or process meets specified requirements (IEEE 829-2008)
  • Ability of a product, service, system, component, or process to meet customer or user needs, expectations, or requirements (ISO/IEC/IEEE 24765:2010)
  • Degree to which the system satisfies the stated and implied needs of its various stakeholders, and thus provides value (ISO/IEC 25010:2011)
  • Degree to which a system, component, or process meets customer or user needs or expectations (IEEE 829-2008)
  • The degree to which a set of inherent characteristics fulfills requirements (A Guide to the Project Management Body of Knowledge (PMBOK(R) Guide) — Fifth Edition)

Quality in use (measure)

  • Extent to which a product used by specific users meets their needs to achieve specific goals with effectiveness, productivity, safety and satisfaction in specific contexts of use (ISO/IEC 25000:2014)
  • Degree to which a product or system can be used by specific users to meet their needs to achieve specific goals with effectiveness, efficiency, freedom from risk and satisfaction in specific contexts of use (ISO/IEC 25000:2014) (ISO/IEC 25010:2011) Note: This definition of quality in use is similar to the definition of usability in ISO 9241-11. Before the product is released, quality in use can be specified and measured in a test environment designed and used exclusively by the intended users for their goals and contexts of use, e.g. User Acceptance Testing Environment. See also: usability

Quality measure

  • Measure that is defined as a measurement function of two or more values of quality measure elements (ISO/IEC 25010:2011)
  • Derived measure that is defined as a measurement function of two or more values of quality measure elements (ISO/IEC 25021:2012) Syn: QM See also: software quality measure

Quality measure element (QME)

  • Measure defined in terms of a property and the measurement method for quantifying it, including optionally the transformation by a mathematical function (ISO/IEC 25000:2014) (ISO/IEC 25021:2012)
  • Measure defined in terms of an attribute and the measurement method for quantifying it, including optionally the transformation by a mathematical function (ISO/IEC 25010:2011) Note: The software quality characteristics or sub characteristics of the entity are derived afterwards by calculating a software quality measure.

Quality property

  • Measurable component of quality (ISO/IEC 25010:2011)

Recoverability

  • Degree to which, in the event of an interruption or a failure, a product or system can recover the data directly affected and re-establish the desired state of the system (ISO/IEC 25010:2011) See also: survivability
  • The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. (ISTQB Glossary 2015)
  • The ease and speed with which the information system can be restored after an interruption. (TMap Next)

Reliability

  • The ability of a system or component to perform its required functions under stated conditions for a specified period of time (ISO/IEC/IEEE 24765:2010)
  • Degree to which a system, product or component performs specified functions under specified conditions for a specified period of time (ISO/IEC 25010:2011) Note: Dependability characteristics include availability and its inherent or external influencing factors, such as availability, reliability (including fault tolerance and recoverability), security (including confidentiality and integrity), maintainability, durability, and maintenance support. Wear or aging does not occur in software. Limitations in reliability are due to faults in requirements, design, and implementation, or due to contextual changes. See also: availability, MTBF
  • Can you trust the product in many and difficult situations?
    • Stability: the product shouldn’t cause crashes, unhandled exceptions or script errors.
    • Robustness: the product handles foreseen and unforeseen errors gracefully.
    • Stress handling: how does the system cope when exceeding various limits?
    • Recoverability: it is possible to recover and continue using the product after a fatal error.
    • Data Integrity: all types of data remain intact throughout the product.
    • Safety: the product will not be part of damaging people or possessions.
    • Disaster Recovery: what if something really, really bad happens?
    • Trustworthiness: is the product’s behavior consistent, predictable, and trustworthy? (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The extent to which the system fails. (McCall, 1977)
  • The extent to which the software performs as required, i.e. the absence of defects. (Boehm, 1978)
  • A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
    • Maturity; the frequency of failure of the software.
    • Fault tolerance; the ability of software to withstand (and recover) from component, or environmental, failure.
    • Recoverability; ability to bring back a failed system to full operation, including data and network connections. (ISO-9126)
  • Reliability
    • Accuracy – the correctness of output
    • Availability – mean time between failures
    • Recoverability – from partial system failures
    • Verifiability – (contractual) runtime reporting on system health
    • Survivability – continuous operations through disasters (earthquake, war, etc.) (FURPS+)
  • The characteristic of an information infrastructure to store and retrieve information in an accessible, secure, maintainable, and fast manner. (IAIDQ – Martin Eppler)
  • The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. (ISTQB Glossary 2015)

Replaceability

  • Degree to which a product can replace another specified software product for the same purpose in the same environment (ISO/IEC 25010:2011) Note: Replaceability of a new version of a software product is important to the user when upgrading. Replaceability will reduce lock-in risk, so that other software products can be used in place of the present one. See also: adaptability, installability
  • The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. (ISTQB Glossary 2015)

Resource utilization

  • Degree to which the amounts and types of resources used by a product or system, when performing its functions, meet requirements (ISO/IEC 25010:2011) Note: Human resources are included as part of efficiency. See also: efficiency

Reusability

  • Degree to which an asset can be used in more than one system, or in building other assets (IEEE 1517-2010)
  • In a reuse library, the characteristics of an asset that make it easy to use in different contexts, software systems, or in building different assets (IEEE 1517-2010) See also: generality
  • The ease of using existing software components in a different context. (McCall, 1977)
  • The degree to which parts of the information system, or the design, can be reused for the development of different applications. (TMap Next)

Risk

  • An uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives (A Guide to the Project Management Body of Knowledge (PMBOK(R) Guide) — Fifth Edition)
  • Combination of the probability of an abnormal event or failure and the consequence(s) of that event or failure to a system’s components, operators, users, or environment. (IEEE 1012-2012)
  • Combination of the probability of an event and its consequence (ISO/IEC 16085:2006)
  • Measure that combines both the likelihood that a system hazard will cause an accident and the severity of that accident. (IEEE 1228-1994 (R2002))
  • Function of the probability of occurrence of a given threat and the potential adverse consequences of that threat’s occurrence (ISO/IEC 25010:2011)
  • Combination of the probability of occurrence and the consequences of a given future undesirable event (IEEE 1012-2012) Note: See ISO/IEC Guide 51 for issues related to safety.

Robustness

  • The degree to which the information system proceeds as usual even after an interruption. (TMap Next)

Satisfaction

  • Freedom from discomfort and positive attitudes towards the use of the product (ISO/IEC 25062:2006)
  • User’s subjective response when using the product (ISO/IEC 26513:2009)
  • Degree to which user needs are satisfied when a product or system is used in a specified context of use (ISO/IEC 25010:2011)

Scalability

  • The capability of the software product to be upgraded to accommodate increased loads. (ISTQB Glossary 2015)

Security

  • Protection of information and data so that unauthorized persons or systems cannot read or modify them and authorized persons or systems are not denied access to them (ISO/IEC 12207:2008)
  • The protection of computer hardware or software from accidental or malicious access, use, modification, destruction, or disclosure. Security also pertains to personnel, data, communications, and the physical protection of computer installations. (IEEE 1012-2012)
  • All aspects related to defining, achieving, and maintaining confidentiality, integrity, availability, non-repudiation, accountability, authenticity, and reliability of a system (ISO/IEC 15288:2008)
  • Degree to which a product or system protects information and data so that persons or other products or systems have the degree of data access appropriate to their types and levels of authorization (ISO/IEC 25010:2011) Note: Security also pertains to personnel, data, communications, and the physical protection of computer installations.
  • Does the product protect against unwanted usage?
    • Authentication: the product’s identifications of the users.
    • Authorization: the product’s handling of what an authenticated user can see and do.
    • Privacy: ability to not disclose data that is protected to unauthorized users.
    • Security holes: product should not invite to social engineering vulnerabilities.
    • Secrecy: the product should under no circumstances disclose information about the underlying systems.
    • Invulnerability: ability to withstand penetration attempts.
    • Virus-free: product will not transport virus, or appear as one.
    • Piracy Resistance: no possibility to illegally copy and distribute the software or code.
    • Compliance: security standards the product adheres to. (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • Security: Confidentiality Preservation, Access Control, Non-repudiation (Integrity Verification, Authenticity Verification – PKI), Identity Verification (logon paradigm), Availability of Service, Auditing Evidence. (FURPS+)
  • Testing to determine the security of the software product. (ISTQB Glossary 2015)
  • The certainty that data can be viewed and changed only by those who are authorized to do so. (TMap Next)

Software quality

  • Capability of a software product to satisfy stated and implied needs when used under specified conditions (ISO/IEC 25000:2014)
  • Degree to which a software product satisfies stated and implied needs when used under specified conditions (ISO/IEC 25010:2011)
  • Degree to which a software product meets established requirements (IEEE 730-2014) Note: Quality depends upon the degree to which the established requirements accurately represent stakeholder needs, wants, and expectations. This definition differs from the ISO 9000:2000 quality definition mainly because the software quality definition refers to the satisfaction of stated and implied needs, while the ISO 9000 quality definition refers to the satisfaction of requirements. In SQuaRE standards software quality has the same meaning as software product quality.

Software quality requirement

  • Requirement that a software quality attribute be present in software (ISO/IEC 25010:2011)

Stability

  • The capability of the software product to avoid unexpected effects from modifications in the software. (ISTQB Glossary 2015)

Suitability

  • The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. (ISTQB Glossary 2015)

Suitability of infrastructure

  • The suitability of hardware, network, systems software and DBMS for the application concerned and the degree to which the elements of this infrastructure interrelate. (TMap Next)

Supportability

  • Supportability
    • Maintainability (i.e. “build-time” issues)
      • Testability – at unit, integration, and system levels
      • Buildability – fast build times, versioning robustness
      • Portability – minimal vendor or platform dependency
      • Reusability – of components
      • Brandability – OEM and partner support
      • Internationalization – prep for localization
    • Serviceability (i.e. “run-time” issues)
      • Continuity – administrative downtime constraints
      • Configurability/Modifiability – of fielded product
      • Installability, Updateability – ensuring application integrity
      • Deployability – mode of distributing updates
      • Restorability – from archives
      • Logging – of event or debug data (FURPS+)
  • Can customers’ usage and problems be supported?
    • Identifiers: is it easy to identify parts of the product and their versions, or specific errors?
    • Diagnostics: is it possible to find out details regarding customer situations?
    • Troubleshootable: is it easy to pinpoint errors (e.g. log files) and get help?
    • Debugging: can you observe the internal states of the software when needed?
    • Versatility: ability to use the product in more ways than it was originally designed for. (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)

Survivability

  • Degree to which a product or system continues to fulfill its mission by providing essential services in a timely manner in spite of the presence of attacks (ISO/IEC 25010:2011) See also: recoverability

Testability

  • Extent to which an objective and feasible test can be designed to determine whether a requirement is met (ISO/IEC 12207:2008)
  • Degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met (IEEE 1233-1998 (R2002))
  • Degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met (ISO/IEC/IEEE 24765:2010)
  • Degree of effectiveness and efficiency with which test criteria can be established for a system, product, or component and tests can be performed to determine whether those criteria have been met (ISO/IEC 25010:2011)
  • Is it easy to check and test the product?
    • Traceability: the product logs actions at appropriate levels and in usable format.
    • Controllability: ability to independently set states, objects or variables.
    • Observability: ability to observe things that should be tested.
    • Monitorability: can the product give hints on what/how it is doing?
    • Isolateability: ability to test a part by itself.
    • Stability: changes to the software are controlled, and not too frequent.
    • Automation: are there public or hidden programmatic interface that can be used?
    • Information: ability for testers to learn what needs to be learned…
    • Auditability: can the product and its creation be validated?
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • The ability to validate the software requirements. (McCall, 1977)
  • Ease of validation, that the software meets the requirements. (Boehm, 1978)
  • The ease with which the functionality and performance level of the system (after each modification) can be tested and how fast this can be done. (TMap Next)

Time behavior

  • Degree to which the response and processing times and throughput rates of a product or system, when performing its functions, meet requirements (ISO/IEC 25010:2011)

Timeliness

  • A characteristic of information quality measuring the degree to which data is available when knowledge workers or processes require it. (IAIDQ – Larry P. English)
  • Coming early or at the right, appropriate or adapted to the times or the occasion. (IAIDQ – Martin Eppler)

Traceability

  • The ability to identify related items in documentation and software, such as requirements with associated tests. (ISTQB Glossary 2015)

Trust

  • Degree to which a user or other stakeholder has confidence that a product or system will behave as intended (ISO/IEC 25010:2011)

Understandability

  • The extent to which the software is easily comprehended with regard to purpose and structure. (Boehm, 1978)
  • The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. (ISTQB Glossary 2015)

Usability

  • Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use (ISO/IEC 25064:2013)
  • Degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use (ISO/IEC 25010:2011) Note: Usability can either be specified or measured as a product quality characteristic in terms of its sub characteristics, or specified or measured directly by measures that are a subset of quality in use. See also: reusability
  • Usability. Is the product easy to use?
    • Affordance: product invites to discover possibilities of the product.
    • Intuitiveness: it is easy to understand and explain what the product can do.
    • Minimalism: there is nothing redundant about the product’s content or appearance.
    • Learnability: it is fast and easy to learn how to use the product.
    • Memorability: once you have learnt how to do something you don’t forget it.
    • Discoverability: the product’s information and capabilities can be discovered by exploration of the user interface.
    • Operability: an experienced user can perform common actions very fast.
    • Interactivity: the product has easy-to-understand states and possibilities of interacting with the application (via GUI or API).
    • Control: the user should feel in control over the proceedings of the software.
    • Clarity: is everything stated explicitly and in detail, with a language that can be understood, leaving no room for doubt?
    • Errors: there are informative error messages, difficult to make mistakes and easy to repair after making them.
    • Consistency: behavior is the same throughout the product, and there is one look & feel.
    • Tailorability: default settings and behavior can be specified for flexibility.
    • Accessibility: the product is possible to use for as many people as possible, and meets applicable accessibility standards.
    • Documentation: there is a Help that helps, and matches the functionality.
      (Rikard Edgren, Henrik Emilsson and Martin Jansson – thetesteye.com v1.1)
  • Ease of use. (McCall, 1977) (Boehm, 1978)
  • Usability
    • Ergonomics – human factors engineering
    • Look and Feel – along with branding instancing
    • Accessibility – special needs accommodation
    • Localization – adding language resources
    • Documentation (FURPS+)
  • The characteristic of an information environment to be user-friendly in all its aspects (easy to learn, use, and remember). (IAIDQ – Martin Eppler)
  • A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
    • Understandability; the ease of which the systems functions can be understood
    • Learnability; learning effort for different users, i.e. novice, expert, casual etc.
    • Operability; ability of the software to be easily operated by a given user in a given environment.
    • Attractiveness; (ISO-9126)
  • The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. (ISTQB Glossary 2015)

Usefulness

  • Degree to which a user is satisfied with perceived achievement of pragmatic goals, including the results of use and the consequences of use (ISO/IEC 25010:2011)
  • The quality of having utility and especially practical worth or applicability. (IAIDQ – Martin Eppler)

User

  • Individual or organization that uses the system or software to perform a specific function (ISO/IEC 25000:2014)
  • Person who interacts with a system, product or service (ISO/IEC 25064:2013)
  • Individual or organization who uses a software-intensive system in daily work activities or recreational pursuits (IEEE 1362-1998 (R2007))
  • The person (or persons) who operates or interacts directly with a software intensive system
  • Individual or group that benefits from a system during its utilization (ISO/IEC 15288:2008) (ISO/IEC 15939:2007)
  • Any person or thing that communicates or interacts with the software at any time (ISO/IEC 19761:2011) (ISO/IEC 20926:2009) (ISO/IEC 14143-1:2007)
  • Person (or instance) who uses the functions of a CBSS via a terminal (or an equivalent machine-user-interface) by submitting tasks and receiving the computed results (ISO/IEC 14756:1999)
  • Person who derives engineering value through interaction with a CASE tool (IEEE 1175.2-2006)
  • Individual or group that interacts with a system or benefits from a system during its utilization (ISO/IEC 25010:2011)
  • Individual or group that benefits from a ready to use software product during its utilization (ISO/IEC 25051:2014)
  • Person who performs one or more tasks with software; a member of a specific audience (ISO/IEC 26514:2008) Note: The user may perform other roles such as acquirer or maintainer. The role of user and the role of operator may be vested, simultaneously or sequentially, in the same individual or organization. [ISO 25063:2014] A person who uses the output or service provided by a system. For example, a bank customer who visits a branch, receives a paper statement, or carries out telephone banking using a call centre can be considered a user. See also: developer, end user, functional user, indirect user, operator, secondary user

User error protection

  • Degree to which a system protects users against making errors (ISO/IEC 25010:2011)

User-friendliness

  • The ease with which end-users use the system. (TMap Next)

User interface aesthetics

  • Degree to which a user interface enables pleasing and satisfying interaction for the user (ISO/IEC 25010:2011) Note: refers to properties of the product or system that increase the pleasure and satisfaction of the user, such as the use of color and the nature of the graphical design

Utility

  • The usefulness of information to its intended consumers, including the public. (OMB 515) (IAIDQ – Larry P. English)

Validity

  • A characteristic of information quality measuring the degree to which the data conforms to defined business rules. Validity is not synonymous with accuracy, which means the values are the correct values. A value may be a valid value, but still be incorrect. For example, a customer date of first service can be a valid date (within the correct range) and yet not be an accurate date. (IAIDQ – Larry P. English)

Original copyright messages are included below; they apply to all definitions with an ISO/IEC 25010:yyyy reference:

IEEE Computer Society, Software and Systems Engineering Vocabulary

This definition is copyrighted, 2012, by the IEEE. The reader is granted permission to copy the definition as long as the statement
“Copyright, 2012, IEEE. Used by permission.” remains with the definition. All other rights are reserved. Copyright 2012 ISO/IEC.

In accordance with ISO/IEC JTC 1/SC 7 N2882 and N2930, this definition is made publicly available. Permission is granted to copy the definition providing that its source is cited.

Material reprinted with permission from Project Management Institute, A Guide to the Project Management Body of Knowledge (PMBOK) Guide – Fourth Edition, 2008. Copyright and all rights reserved. PMI is a service and trademark of the Project Management Institute, Inc. which is registered in the United States and other nations. PMBOK is a trademark of the Project Management Institute, Inc. which is registered in the United States and other nations.

Seven questions – What questions do I have?

The previous two questions helped you find why testing is necessary, what information you need to answer the first question (business value) and which test ideas help you deliver meaningful and relevant information. This post extends this to areas that help you identify the circumstances in which you will have to do your work. It ends with a little advice: do not take things for granted, especially if you do not understand them.

DID-A-TEST

Originally called Jean-Paul’s test, this mnemonic represents a set of surveying questions that helps you identify your working conditions. Once you have the answers to these questions you should check whether, and if so how, they influence your ability to test and to give more or less rich information to your stakeholders. You can use these questions to identify boundaries and constraints on your testing possibilities and address them, or at least be aware of them yourself and make others aware of them. These questions are by no means exhaustive, but in my opinion they form a good starting point in exploring your test context.

Are the Developers available?

Developers are physically close to or far from you. They are more or less available in time and more or less organizationally accessible to testers. The ability or inability to work together with development can influence your risk assessments, your insight into risk areas, your knowledge about development solutions and what is or is not covered by development testing activities. Additionally, when addressing developers it is good to know the preferences and willingness of each developer with regard to working with testers.

How soon do you have access to Information?

Of course you can use the FEW HICCUPPS mnemonic (James Bach, Michael Bolton) to improve and expand your test ideas, but gathering information about the intended product or solution is a main starting point and an important reference to work with. So getting access to the sources of information, or even better being involved in the information gathering, should start as soon as possible.

Do you control the test Data?

My interpretation of test data here is wide, in the sense that I do not only mean the ability to enter different types of inputs, in different variations and quantities. I also mean the ability to set up and load data sets that create test scenarios, and the ability to set or remove states in the software. Being able to control the data speeds up test execution, helps you create typical test situations and lets you quickly repeat a test case if necessary.

Having control of the test data is only one side of the story. The other side is that you need to find the right ‘Trigger Data’ to use. Trigger Data is any data item, set of data or data state specifically created and used to invoke, enable or execute your test case (scenario).
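To make the idea concrete, here is a minimal sketch of trigger data in code. Everything in it (the invoice table, the 30-day reminder rule, the function names) is an illustrative assumption, not something from the post; the point is only that one specific data state is created to invoke one specific scenario.

```python
# Hypothetical sketch of 'trigger data': a data state created specifically
# to invoke one test scenario. Table, rule and names are illustrative.
import sqlite3

def load_trigger_data(conn):
    """Create exactly the data state (an invoice 31 days overdue)
    needed to trigger the 'send reminder' scenario."""
    conn.execute("CREATE TABLE invoices (id INTEGER, days_overdue INTEGER)")
    conn.execute("INSERT INTO invoices VALUES (1, 31)")  # the trigger datum
    conn.execute("INSERT INTO invoices VALUES (2, 5)")   # background data
    conn.commit()

def invoices_needing_reminder(conn, threshold=30):
    """Illustrative function under test: invoices past the overdue threshold."""
    rows = conn.execute(
        "SELECT id FROM invoices WHERE days_overdue > ?", (threshold,)
    ).fetchall()
    return [row[0] for row in rows]

conn = sqlite3.connect(":memory:")
load_trigger_data(conn)
print(invoices_needing_reminder(conn))  # only the trigger datum: [1]
```

Because the fixture loads the trigger data itself, the scenario can be set up and repeated in seconds, which is exactly the benefit of controlling your test data.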

Are the Analysts available?

Like the developers, the availability, both physical and in time, of the (business) analysts has an impact on the way you can interact with them. And like developers, analysts have preferences and are more or less willing to work with testers. The impact of this might however be larger, as analysts are often the first source of information about the product’s intended functionality and its means of satisfying the stakeholders’ needs and wants. They are often also a sort of gatekeeper in communicating with business stakeholders. In that sense they can make a tester’s life more or less easy, especially if testers are not expected to go outside of the project’s boundaries.

Are the (other) Testers available?

In my experience working as the only tester on a project has an impact both on the way you work and, to some extent, on the quality of your work. Being able to pair, share thoughts or just have a chat with another tester can help you reconsider your work and develop new or different test ideas. The tester doesn’t necessarily have to be in your team to have this effect. Having other testers in your team brings the benefit (and sometimes burden) of being able to divide work, get fast feedback on test ideas or test results, and the possibility to play to, or away from, your strengths and weaknesses as a tester.

Do you have a quiet work Environment?

This question addresses two different aspects. The first aspect is the infrastructure. Do you know what its components are? Do you have a separate test environment? And if so, are you its only user? Do you know how to get access to it? Are you allowed to change it yourself or do you need others to do it for you? Is your test environment similar to the real production environment?

Secondly it addresses the circumstances of your workplace. Do you work in isolation, in cubicles, or in a large open-plan office? Is your work uninterrupted or are you (in)voluntarily involved in other work processes and activities? Does that influence your performance and well-being? What the influences are obviously depends on you as a person and the real circumstances, but it is wise to take note and consider possible consequences. There are many studies in this field. Here are a few articles that might trigger your interest: “Designing the work environment for worker health and productivity” by Jacqueline C. Vischer; “Interrupt Mood” by Brian Tarbox; or “Where does all that time go” by Michael Bolton.

Are the Stakeholders (that matter) available?

Stakeholders come in many forms and shapes, but they have one thing in common: they are in some way involved in the creation and/or use of the software solution. That not only means they need to be informed about the product; it also means that they have expectations and opinions about the product itself, what it is used for, and what the product needs to be able to do to make it valuable to them. As a tester you should identify these expectations and opinions and tailor your information about the product so that it is meaningful to them.

In theory the effort you put into gathering, tailoring and presenting that information is based on how much the stakeholder matters to the product, the project and, to some extent, to you the tester. I say in theory because to do so in practice the stakeholders need to be available and accessible. If they are not, or if reaching them is difficult, you should factor the extra time and effort into your testing and test reporting.

Is there (mandatory) Tooling?

There are many types of tools available in the market to capture requirements, store test cases, log test execution or manage bugs. And likewise there are many tools available to use during testing. As a tester you need to find out which tools there are, which tools you are allowed to use, and which tools are mandatory. You might not know all the tools you are faced with, or you may be unable to use a tool that you already know and like. In that case you will have to get used to the ‘new’ tooling and learn to use it. Additionally many tools have built-in workflows and processes that take time away from actual testing. As a tester you should be aware of this and take it into account when testing.

Poutsma principle

Whenever I start on a new test assignment or pick up a new work item I need to search for and find its purpose and its meaning, and I need to understand how the chosen requirements offer a solution to the problem being solved. Sometimes that is really easy.

Say you visit the 36th International Carrot Conference before going to CAST 2013. You come home and decide to sell carrots for hungry rabbits online, and you want to vary the amount of carrots or differentiate the type of carrot for different breeds of rabbits. You will need something like a drop-down list or input field to identify the different rabbit breeds. And except for the sudden urge to sell carrots, this is fairly easy to understand and test.

If however you are asked to test the software implementation of calculating results for a new Credit Risk Model used by an international bank you will have a lot more to understand. If so I remind myself of the Poutsma Principle:

If something is too complex to understand, it must be wrong.

I use this principle to remind myself to keep asking questions until I either understand it or accept the argumentation of it as proof. In either case it helps me to break down requirements to a level that makes me confident enough to start testing and daring enough so that I can also use my personal addition to the principle:

And it is your job (as a tester) to prove it wrong.

If you want to know more about the Poutsma Principle you can follow this link.

No user would do that

Still in Iceland

Being in a foreign country gives you a chance to visit shops that you haven’t been to before, and doing so has heightened my attention to curious software behavior. Today we went out for some groceries at the Bonus supermarket. Until we got to the checkout nothing exciting happened. While waiting in line I noticed some commotion as a customer argued with the cashier. Even if my Icelandic is not that good, I could make out that the man had bought groceries for 30.213 ISK (approx. 200 USD) and had tried to pay with his credit card. Unfortunately for him the cash register signalled that his credit was insufficient to match the amount to pay. He however disagreed and demanded that the cashier try again.

This system had been tested

Probably against her better judgement she tried again, so she could convince the client. As she tried again I noticed a change in the layout of the cash register’s touch screen. I couldn’t help myself and tried to see what had changed. I noted that a red button had moved to the side of the screen (later I looked up its meaning: ‘Cancel Payment’) and a new grey button had appeared on the screen. The text on the button caught my eye as it was not in Icelandic but in English and said:

*TEST* Use another card *TEST*

To my surprise, and probably to hers as well, she pushed where the red button had been and hit the new button. As far as I could tell the screen seemed to have returned to its former state. However the cashier caught the difference. The line displaying the amount paid now showed 15.107 ISK, with the line below it saying the amount to pay was 15.106 ISK. The cash register had accepted half of the amount to pay from the previously overdrawn credit card, but still had an open amount. The cashier was puzzled. The customer less so, and he readily offered his girlfriend’s credit card to pay the rest. To no avail. Nothing happened, the cash register’s screen locked, and the cashier, her colleague and eventually her manager could not unlock the screen, let alone solve the problem. The checkout line closed, we paid in another line, and as we were leaving the shop I could just hear the manager calling a service desk…
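The story above reads like test-only payment logic that was shipped to production. As a purely hypothetical sketch (nothing here comes from the actual cash register software; the flag, the half-payment rule and all names are invented for illustration), leftover debug behaviour behind a flag could look like this:

```python
# Purely hypothetical sketch: test-only behaviour left enabled in a shipped
# payment flow. All names and rules are illustrative, not from the real system.

TEST_MODE = True  # debug flag that was never switched off for production

def charge(card_balance, amount):
    """Return (amount_charged, amount_still_due)."""
    if card_balance >= amount:
        return amount, 0.0  # normal full payment
    if TEST_MODE:
        # Debug shortcut: accept half the amount and leave the rest open --
        # handy on a test rig, disastrous at a real checkout.
        partial = amount / 2
        return partial, amount - partial
    return 0.0, amount  # intended production rule: decline an insufficient card

print(charge(card_balance=16000, amount=30213))  # half accepted, half still due
```

A build like this passes happy-path testing (sufficient balance behaves normally) while hiding a test-mode branch that only shows itself in the exact situation the cashier hit.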

YAGNI

Context

At the time of this blog post my family and I are on holiday in Iceland. Since we are not in Iceland that often we use the time, amongst visiting relatives and friends, to look into administrative and regulatory matters that are easier to handle in Iceland than from abroad.

Syslumen

One of the things necessary is to renew my wife's passport, for which you actually need to physically go to the Civil Registry, or in Icelandic 'Syslumen'. The process of renewal is (boringly) straightforward. At the office you get a number, wait, identify yourself and pay for the renewal, get a form, wait, identify yourself again, hand over the form and update your data (including a new digital photo and fingerprint), sign, and wait a couple of weeks to pick up your passport at the Civil Registry.

Except

Since we're only on holiday in Iceland, a couple of weeks of waiting is not a real option. To amend this my wife investigated and proposed having the new passport sent to the consulate in our country. An option that, once validated by the team lead, was acceptable to the civil clerk. And thus the proper check box was looked for and found.

Into the process

After filling in the personal details, the address of the consulate was needed instead of the office's address. The page itself did not offer any listing. The help page wasn't really helpful either, as it only pointed towards a government listing at another department. After some searching the consulate in Amsterdam was found and its address could be entered. So everything was entered and OK could be clicked. Nothing happened. Looking over the page the clerk found:

Færðu inn lögboðnar reit (Please enter mandatory data) next to a field asking for Póstnúmer (zip code), which had been left empty as it was also empty in the government listing. So what to do? My wife and the clerk's colleague suggested googling it. And so she did. The zip code was entered and again OK was clicked. The intranet page jumped back to the entry page and everything looked okay. But the clerk rightfully noted that the usual confirmation message was not shown, and checked my wife's file. To her, and my wife's, surprise no data had been added, meaning the whole 20 minutes' worth of data entry was lost.

The process repeated itself a few times until another colleague noted that the zip code contained letters, something not used in Iceland itself. Why not leave those out of the field and move them somewhere else, say in front of 'Amsterdam'? Now, when OK was clicked, the confirmation appeared and a check showed that the file contained all the data.
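The root cause sounds like a zip code field validated as digits only, silently discarding the whole form on failure. A minimal sketch of the difference (the function names and exact rules below are my assumption, not the actual intranet code): a strict Icelandic-style check versus one that also accepts alphanumeric foreign postal codes such as the Dutch '1043 AB' or Canadian 'K1A 0B1':

```python
import re

def is_valid_zip_strict(zip_code: str) -> bool:
    # Icelandic-style postal code: exactly three digits, e.g. "101"
    return re.fullmatch(r"\d{3}", zip_code) is not None

def is_valid_zip_lenient(zip_code: str) -> bool:
    # Also allow foreign alphanumeric codes, e.g. "1043 AB" (NL),
    # "K1A 0B1" (CA) or "EC1A 1BB" (UK): 2-10 letters/digits/spaces/hyphens
    return re.fullmatch(r"[A-Za-z0-9][A-Za-z0-9 -]{1,9}", zip_code) is not None

print(is_valid_zip_strict("1043 AB"))   # False – rejected, as at the Syslumen
print(is_valid_zip_lenient("1043 AB"))  # True
```

Equally important: a failed validation should surface an error message, not drop back to the entry page as if nothing happened.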

YAGNI

Even with only a couple of thousand Icelanders living abroad, the chances that they live in one of the eight countries (e.g. Canada, Great Britain) using alphanumeric postal codes are realistic, especially since many more countries put the country abbreviation in front of their zip code. So when my wife returned to tell about her plight she commented: "Clearly neither the developer nor the tester thought this field was important. But it really bugged me today." Furthermore she noted: "The same software company maintains the government listing and had all the zip codes removed, leaving empty spaces in the listing. That's even more stupid."

Clearly someone must have convinced the developers and testers: "You Ain't Gonna Need It" (YAGNI).