It’s all about bugs, right? So what are they?

The previous blog post contained a number of definitions of software testing that focus on bug finding as the purpose of testing. That is not so strange, since many people see finding, or rather calling out, bugs as the most visible part of software testing. Whether you agree with this or not, one thing seems certain to me: bugs are an important subject in software testing. This post, then, is all about bugs, or rather about the many definitions of (software) bugs.

  • “This thing gives out and then that. ‘Bugs’—as such little faults and difficulties are called—show themselves, and months of anxious watching, study, and labor are requisite before commercial success—or failure—is certainly reached.” Edison, letter to Puskas [1878]
  • “A software error is present when the program does not do what its end user reasonably expects it to do” Myers, 1976, p. 6 The art of software testing [1976]
  • “There can never be an absolute definition for bugs, nor an absolute determination of their existence. The extent to which a program has bugs is measured by the extent to which it fails to be useful. This is a fundamentally human measure” Beizer, Software System Testing and Quality Assurance, p. 12 [1984]
  • Fault: “An incorrect step, process, or data definition in a computer program. Note: This definition is used primarily by the fault tolerance discipline. In common usage, the terms ‘error’ and ‘bug’ are used to express this meaning.”
    Failure: “The inability of a system or component to perform its required functions within specified performance requirements. Note: The fault tolerance discipline distinguishes between a human action (a mistake), its manifestation (a hardware or software fault), the result of the fault (a failure), and the amount by which the result is incorrect.” IEEE Standard 610.12-1990 [1990]
  • “A mismatch between the program and its specification is an error in the program if and only if the specification exists and is correct.” Kaner, Falk and Nguyen, Testing Computer Software [1993]
  • “Defect. Any state of unfitness for use, or nonconformance to specification.
    Error. (1) The difference between a computed, observed, or measured value and the true, specified, or theoretically correct value or condition. (2) An incorrect step, process, or data definition. Often called a bug. (3) An incorrect result. (4) A human action that produces an incorrect result. Note: One distinction assigns definition (1) to error, definition (2) to fault, definition (3) to failure, and definition (4) to mistake.
    Fault. An incorrect step, process, or data definition in a computer program. See also: error.
    Failure. Discrepancy between the external results of a program’s operation and the software product requirements. A software failure is evidence of the existence of a fault in the software.” Software Error Analysis, NIST [1993]
  • “A software defect is a “material breach” of the contract for sale or license of the software if it is so serious that the customer can justifiably demand a fix or can cancel the contract, return the software, and demand a refund.” Kaner, Software QA Magazine [1996]
  • “The difference between the actual test result and the expected result can indicate a software product defect” Koomen and Pol, Test Process Improvement [1999]
  • “An observed difference between an expectation (prediction) and its actual outcome. A finding can be made on different objects, such as the test base (upon intake), while testing the system, on the test infrastructure, etc.” (translation) Pol, Teunissen, van Veenendaal, Testen volgens TMap 2e druk [2000]
  • “An error is clearly present if a program does not do what it is supposed to do, but errors are also present if a program does what it is not supposed to do.” Myers, The art of software testing (2nd edition) [2004]
  • “A generic term that can refer to either a fault (cause) or a failure (effect). (IEEE 982.1-2005 IEEE Standard Dictionary of Measures of the Software Aspects of Dependability, 2.1) Example: (1) omissions and imperfections found during early life cycle phases and (2) faults contained in software sufficiently mature for test or operation” [2005]
  • “A bug, also referred to as a software bug, is an error or flaw in a computer program that may prevent it from working correctly or produce an incorrect or unintended result.” The Linux Information Project [2005]
  • “A defect (fault) is the result of an error residing in the code or document.” Broekman, van der Aalst, Vroon and Koomen, TMap Next for result driven testing [2006]
  • “A bug is anything about the program that threatens its value.” Bach and Bolton, Rapid Software Testing [2007]
  • Anything that causes an unnecessary or unreasonable reduction of the quality of a software product. BBST Bug Advocacy [2008]
  • “Defects are observations that the test results deviate from expectations” Bouman, SmarTEST [2008]
  • “A finding is an observed difference between an expectation or prediction and the actual outcome” Van der Aalst, Baarda, Roodenrijs, Vink and Visser, TMap NEXT Business Driven Test Management [2008]
  • “something that went wrong,” Alan Page, Ken Johnston, Bj Rollison, How we test software at Microsoft [2009]
  • “Situation that may cause errors to occur in an object” (ISO/IEC 10746-2:2009 Information technology — Open Distributed Processing — Reference Model: Foundations, 13.6.3) [2009]
  • Imperfection or deficiency in a work product where that work product does not meet its requirements or specifications and needs to be either repaired or replaced (IEEE 1044-2009 IEEE Standard Classification for Software Anomalies, 2) [2009]
  • An attribute of a software product that reduces its value to a favored stakeholder or increases its value to a disfavored stakeholder without a sufficiently large countervailing benefit. BBST Foundations [2010]
  • A software error is the failure to comply with an assured characteristic (translation)
    Ein Softwarefehler ist die Nichterfüllung einer zugesicherten Eigenschaft (eigenschaftsbezogene Fehlerdefinition). Basiswissen Software Testen [Spillner 2010]
  • A product is defective if it does not provide the safety which, taking all circumstances into account (in particular its presentation, the use that can reasonably be expected, and the time at which it was placed on the market), one is legitimately entitled to expect. (translation) Ein Produkt hat einen Fehler, wenn es nicht die Sicherheit bietet, die unter Berücksichtigung aller Umstände, insbesondere seiner Darbietung, des Gebrauchs, mit dem billigerweise gerechnet werden kann, des Zeitpunktes, in dem es in den Verkehr gebracht wurde, berechtigterweise erwartet werden kann [BGB01, ProdukthaftungsG. par. 3] (risikobezogene Fehlerdefinition). Basiswissen Software Testen [Spillner 2010]
  • Defect in a hardware device or component (ISO/IEC/IEEE 24765:2010 Systems and software engineering–Vocabulary) [2010]
  • Manifestation of an error in software (ISO/IEC/IEEE 24765:2010 Systems and software engineering–Vocabulary) [2010]
  • A Software Defect / Bug is a condition in a software product which does not meet a software requirement (as stated in the requirement specifications) or end-user expectations (which may not be specified but are reasonable). Software Testing Fundamentals [2010]
  • Incorrect step, process, or data definition in a computer program (ISO/IEC 25040:2011 Systems and software engineering–Systems and software Quality Requirements and Evaluation (SQuaRE)–Evaluation process, 4.27) [2011]
  • Defect in a system or a representation of a system that if executed/activated could potentially result in an error (ISO/IEC 15026-1:2013 Systems and software engineering–Systems and software assurance–Part 1: Concepts and vocabulary, 3.4.5) [2013]
  • Software bugs can originate from every step in translation from natural language, when:
    • the Syntax-rules were not followed
    • the translation falsified the Semantics, or
    • if the Pragmatics are unclear
      Software-Fehlertoleranz und Zuverlässigkeit [2013]
  • A software bug is an error, flaw, failure, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. Wikipedia [2015]
  • “the people who wrote the program told the computer to do something it couldn’t do”. Practical Programming [Gries, Campbell, Montojo 2015]
  • An imperfection or deficiency in a project component where that component does not meet its requirements or specifications and needs to be either repaired or replaced (A Guide to the Project Management Body of Knowledge (PMBOK(R) Guide) — Fifth Edition)
  • A software bug is a problem causing a program to crash or produce invalid output. Techopedia

What is testing?

Over time the definitions of testing and the ideas behind it have changed.
This post brings together an expanding collection of testing definitions in chronological order and thus provides an overview of changing views. One would expect views rooted in earlier stages of computer and software development to be outdated, yet many are still in use today.

  • “How would we know a program exhibits intelligence?”
    Alan Turing [1950]
  • “Make sure the program runs” and “Make sure the program solves the problem”
    Dan McCracken; Digital Computer Programming [1957]
  • “Errors plague us, but hidden errors make our job impossible. One of our recurring problems lies not in finding errors but in not finding errors. We must be alert for them; we can never be complacent about our results.”
    Herbert D. Leeds and Gerald M. Weinberg, [1961]
  • “Testing is the process of executing a program with the intent of finding errors.”
    Glenford J. Myers, The art of software testing [1979]
  • “Test means execution, or set of executions, of the program for the purpose of measuring its performance. That a program was executed with no evidence of error is no proof that it contains no errors; program errors are sensitive to the specifics of the data being processed.” “The goal of testing ought to be the uncovering of defects within the program.”
    Robert Dunn and Richard Ullman, Quality Assurance for Computer Software, [1982]
  • “Testing is any activity aimed at evaluating an attribute of a program or system. Testing is the measurement of software quality.”
    Bill Hetzel, The Complete Guide to Software Testing [1983]
  • “Testing is the act of executing tests. Tests are designed and then executed to demonstrate correspondence between an element and its specification. There can be no testing without specifications of intentions.”
    Boris Beizer, Software System Testing and Quality Assurance [1984]
  • “The Testing stage involves full-scale use of the program in a live environment. It is here that the software and hardware are shaken down, anomalies of behavior are eliminated, and the documentation is updated to reflect final behavior. The testing must be as thorough as possible. The use of adversary roles at this stage is an extremely valuable tool because it ensures that the system works in as many circumstances as possible.”
    Henry Ledgard, Software Engineering Concepts: Volume 1 [1987]
  • “The purpose of testing a program is to find problems in it”
    Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software [1988]
  • “The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.”
    The 1990 IEEE Standard Glossary of Software Engineering Terminology [1990]
  • “Testing is the execution of software to determine where it functions incorrectly.”
    Robert L. Glass, Building Quality Software [1992]
  • “Testing helps detect defects that have escaped detection in the preceding phases of development. Here again, the key byproduct of testing is useful information about the number of and types of defects found in testing. Armed with this data, teams can begin to identify the root causes of these defects and eliminate them from the earlier phases of the software life cycle.”
    Lowell Jay Arthur, Improving Software Quality: An Insider’s Guide to TQM [1993]
  • “[Unit] Testing is a hard activity for most developers to swallow for several reasons: Testing’s goal runs counter to the goals of other development activities. The goal is to find errors. A successful test is one that breaks the software. … Testing can never prove the absence of errors — only their presence. … Testing by itself does not improve software quality. … Testing requires you to assume that you’ll find errors in your code. …”
    Steve McConnell, Code Complete: A Practical Handbook of Software Construction [1993]
  • “Testing is an unnecessary and unproductive activity if its sole purpose is to validate that the specifications were implemented as written. … testing as performed in most organizations is a process designed to compensate for an ineffective software development process. It is unrealistic to develop software and not test it. The perfect development process does not exist …”
    William Perry, Effective Methods for Software Testing [1995]
  • “A tester is given a false statement (‘the system works’) and has the job of selecting, from an infinite number of possibilities, an input that contradicts the statement. … [You want to find] the right counterexample with a minimum of wasted effort.”
    Brian Marick, The Craft of Software Testing: Subsystem Testing [1995]
  • “Testing is obviously concerned with errors, faults, failures, and incidents. A test is the act of exercising software with test cases. There are two distinct goals of a test: either to find failures, or to demonstrate correct execution.”
    Paul C. Jorgensen, Software Testing: A Craftsman’s Approach [1995]
  • “The penultimate objective of testing is to gather management information.”
    Boris Beizer, Black Box Software Testing: Techniques for Functional Testing of Software and Systems [1995]
  • “The most common quality-assurance practice is undoubtedly execution testing, finding errors by executing a program and seeing what it does.”
    Steve McConnell, Rapid Development: Taming Wild Software Schedules [1996]
  • “Software testing is the action of carrying out one or more tests, where a test is a technical operation that determines one or more characteristics of a given software element or system, according to a specified procedure. The means of software testing is the hardware and/or software and the procedures for its use, including the executable test suite used to carry out the testing.”
    NIST [1997]
  • “Software must be tested to have confidence that it will work as it should in its intended environment. Software testing needs to be effective at finding any defects which are there, but it should also be efficient, performing the tests as quickly and cheaply as possible.”
    Mark Fewster and Dorothy Graham, Software Test Automation: Effective use of test execution tools [1999]
  • “Testing is the process by which we explore and understand the status of the benefits and the risk associated with release of a software system.”
    James Bach, James Bach on Risk-Based Testing, STQE Magazine [1999]
  • “Testing is a process of planning, preparing, executing and evaluating, with the purpose to determine the attributes of an information system and to show the differences between its actual and demanded state” Pol, Teunissen, van Veenendaal, Testen volgens TMap 2e druk [2000]
  • “Software testing is a difficult endeavor that requires education, skill, practice, and experience. Building good testing strategies requires merging many different disciplines and techniques.”
    James A. Whittaker, IEEE Software (Vol 17, No 1) [2000]
  • “Software testing is the process of applying metrics to determine product quality. Software testing is the dynamic execution of software and the comparison of the results of that execution against a set of pre-determined criteria.”
    NIST The Economic Impacts of Inadequate Infrastructure for Software Testing [2002]
  • “Testing is done to find information. Critical decisions about the project or the product are made on the basis of that information.”
    Cem Kaner, James Bach, Bret Pettichord,
    Lessons Learned In Software Testing: A Context-Driven Approach [2002]
  • “Testing is a concurrent lifecycle process of engineering, using and maintaining testware in order to measure and improve the quality of the software being tested.”
    – Rick Craig and Stefan Jaskiel, Systematic Software Testing [2002]
  • “A software tester’s job is to test software, find bugs, and report them so that they can be fixed. An effective software tester focuses on the software product itself and gathers empirical information regarding what it does and doesn’t do. This is a big job all by itself. The challenge is to provide accurate, comprehensive, and timely information, so managers can make informed decisions.”
    Bret Pettichord, Don’t Become the Quality Police [2002]
  • “Software testing is a process of analyzing or operating software for the purpose of finding bugs.”
    – Robert Culbertson, Chris Brown, and Gary Cobb, Rapid Testing [2002]
  • “How do you test your software? Write an automated test.
    Test is a verb meaning ‘to evaluate’. No software engineers release even the tiniest change without testing, except the very confident and the very sloppy. … Although you may test your changes, testing changes is not the same as having tests. Test is also a noun, ‘a procedure leading to acceptance or rejection’. Why does test the noun, a procedure that runs automatically, feel different from test the verb, such as poking a few buttons and looking at answers on the screen?”
    – Kent Beck, Test-Driven Development By Example [2003]
  • “My conclusion is that testing is an intellectual endeavor and not part of arts and crafts. So testing is not something anyone masters. Once you stop learning, your knowledge becomes obsolete very fast. Thus, to realize your testing potential, you must commit to continuous learning.”
    James A. Whittaker, How to Break Software: A Practical Guide to Testing [2003]
  • “All software has bugs. It’s a fact of life. So the goal of finding and removing all defects in a software product is a losing proposition and a dangerous objective for a test team, because such a goal can divert the test team’s attention from what is really important. … [The goal of a test team] is to ensure that among the defects found are all of those that will disrupt real production environments; in other words, to find the defects that matter.”
    Scott Loveland, Geoffrey Miller, Richard Prewitt, and Michael Shannon,
    Software Testing Techniques: Finding the Defects that Matter [2004]
  • “Software Testing [is] The act of confirming that software design specifications have been effectively fulfilled and attempting to find software faults during execution of the software.”
    Thomas H. Faris, Safe And Sound Software: Creating an Efficient and Effective Quality System [2006]
  • “Checking is the inspection of intermediate products and development processes, or verification. These are all activities directed at answering the question: is the building being done right?
    Testing is the inspection of end products, or validation. Is the product valid against its demands? Is the right thing built?” (translation) Koomen, Pol, Test Process Improvement [2006]
  • “Software testing is a process where we check a behavior we observe against a specified behavior the business expects. … in software testing, the tester should know what behavior to expect as defined by the business requirements. We agree on this defined, expected behavior and any user can observe this behavior.”
    Andreas Golze, Charlie Li, Shel Prince, Optimize Quality for Business Outcomes: A Practical Approach to Software Testing [2006]
  • “Testing is a process of gathering information by making observations and comparing them to expectations.” “A test is an experiment designed to reveal information, or answer a specific question, about the software or system.”
    Dale Emery and Elizabeth Hendrickson [2007]
  • “software testing as a process of technical investigation, an empirical study of the product under test with the goal of exposing quality-relating information about it. (Validation and verification both fit within this definition.)”
    LAWST [2007]
  • “[Testing is]  An empirical, technical investigation conducted to provide stakeholders with information about the quality of the product under test.  A process focused on empirically questioning a product or service about its quality.”
    Cem Kaner [2007]
  • “An empirical, technical investigation conducted to provide stakeholders with information about the quality of the product OR SERVICE under test.”
    Scott Barber [2007]
  • “Testing is a strategic tool for risk control”
    Egbert Bouman, SmarTEST [2008]
  • “The test process has to produce information about errors found and the degree to which the quality has been improved by solving them. In turn, the tests have to indicate whether the modified system responds to the reason that led to the change, and if it contributes to the business requirements.” de Grood, TestGoal [2008]
  • “Testing is a process that provides insight into, and advices about, quality and its related risks” Van der Aalst, Baarda, Roodenrijs, Vink and Visser, TMap NEXT Business Driven Test Management [2008]
  • “The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.”
    Jeanne Hofmans and Erwin Pasmans, Quality Level Management [2012]
  • “interact with the software or system, observe its actual behavior, and compare that to your expectations”
    Elisabeth Hendrickson, Explore It! [2013]
  • “Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.”
    James Bach, Michael Bolton [2013]
  • “Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. The information is compared against specifications, business requirements, competitive products, past versions of this product, user expectations, industry standards, applicable laws and other criteria.”
    Undisclosed company policy [2015]
  • Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.
    Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices,” we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.
    Cem Kaner, James Bach
  • “Software testing is a process of executing a program or application with the intent of finding the software bugs” and “the process of validating and verifying that a software program or application or product: Meets the business and technical requirements that guided its design and development; Works as expected; Can be implemented with the same characteristics”
    ISTQB Foundation Exam
  • “Testing is finding out how well something works”
    Origin unknown
  • “Testing is the process of evaluating a system or its component(s) with the intent to find out that whether it satisfies the specified requirements or not”
    Origin unknown
  • “Software testing is the process of evaluating a software item to detect differences between given input and expected output”
    Origin unknown
  • “Software testing is a process that should be done during the development process”
    Origin unknown

Reference test

Regression Testing

A while ago a new developer approached me to discuss some changes he wanted to deploy to the test environment. He had refactored parts of the application workflow management and redesigned a number of reports. We talked about it and he stressed that these were major changes and a lot of work to test. As we continued discussing I commented: “That’s okay, it will take some time, but we have a regression test set just for this that we can use to work with and that will help us…”

He looked at me with total bewilderment and uttered: “Didn’t you get anything of what I said? These are major changes. No regression test will cover that! They will all fail!!”

I looked at him and realized…

He was right

His understanding is based on a common perception in which a regression test means something like running the same step-by-step tests as before to see that they produce the same step-by-step results as before. And if that were the case he would obviously be right. There was no way I could execute the tests the same way as before, simply because the workflow method and GUI functionality had changed dramatically. Nor could the results be exactly the same, given the design changes he had made.

But I was right too

His reaction made me realize that over the years my understanding of a regression test set has changed dramatically. I don’t see a test case as something that provides a step-by-step description. I see a test case as a doable, executable follow-up of a test idea. A test case to me consists of the following elements:

  • A test idea: what do you want to learn, verify/validate or investigate?
  • A feature: which part of the test object do you investigate?
  • Test activities: a way to exercise your test idea and see how the software behaves
  • Trigger data: data that should trigger the behavior you want to see
  • An oracle: something that tells you how to value the observed behavior and the information/data it produces
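The elements above can be sketched as a simple data structure. This is only an illustration of the idea, not a reference to any real tool; all names and the example scenario are my own invention:

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class TestCase:
    """One doable, executable follow-up of a test idea."""
    test_idea: str                  # what do you want to learn, verify/validate or investigate?
    feature: str                    # which part of the test object do you investigate?
    activities: list[str]           # a way to exercise the test idea
    trigger_data: dict[str, Any]    # data that should trigger the behavior you want to see
    oracle: Callable[[Any], bool]   # tells you how to value the observed behavior


# A hypothetical example: the oracle is kept separate from the activities
# and trigger data, so those can be varied between runs while the oracle stays stable.
tc = TestCase(
    test_idea="Does an order above the credit limit require approval?",
    feature="order workflow",
    activities=["create order", "submit order", "inspect status"],
    trigger_data={"order_total": 10_000, "credit_limit": 5_000},
    oracle=lambda status: status == "awaiting approval",
)
```

Structuring a test case this way makes the later point concrete: after a refactoring, the test idea, feature and oracle may survive unchanged while only the activities and trigger data need adjusting.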

With that understanding I looked at the changes and concluded that the existing test cases were still useful. Even though the workflow management was refactored, I could still apply the test ideas we had used before. In essence the same features were affected and my oracles should still be valid. Adjusting the test activities and trigger data, in this case to match the workflow changes, is something I tend to do anyway. With these adjustments I try to make re-running test cases more effective, as variation adds new chances of finding new or different bugs.

With the reports the situation was slightly different. Test ideas, features, test activities and trigger data could stay more or less the same, but my oracles, the current template reports, had changed. The reports now had a new layout, but their content had changed only minimally.

Reference test

To avoid future confusion I came up with a new name for my tests which, as far as I am aware, has not been used in software testing before. I will call these tests Reference Tests.

A Reference Test, then, is a test where, under similar circumstances, similar input results in a similar outcome when evaluated against an identical oracle.

I am aware that ‘similar’ and ‘identical’ in this definition are relative concepts, and I have chosen them on purpose. They express that each time such a test is used, the tester needs to be aware of which information it tries to show or uncover, how it does that, and to what purpose. This discourages mindless repetition and pass/fail blindness. It encourages thoughtful selection and execution of tests and deliberate evaluation of test results.
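The report example can illustrate what ‘similar outcome, identical oracle’ might mean in practice. The sketch below is my own illustration (the function names and report format are invented, not taken from any tool): the oracle cares only about report content, so a reference test passes even when the layout has changed:

```python
def report_content(report: str) -> dict[str, str]:
    """Reduce a report to the content the oracle cares about,
    ignoring layout details such as ordering and whitespace."""
    pairs = (line.split(":", 1) for line in report.splitlines() if ":" in line)
    return {key.strip(): value.strip() for key, value in pairs}


def reference_test(actual: str, expected: str) -> bool:
    """Similar input, similar outcome, identical oracle:
    pass when the content matches, regardless of presentation."""
    return report_content(actual) == report_content(expected)


old_report = "Total: 42\nCustomer: Acme"
new_report = "Customer : Acme\nTotal : 42\n"   # new layout, same content

# Passes: the layout changed, but the oracle (content equality) did not.
assert reference_test(new_report, old_report)
```

A step-by-step regression test comparing the raw report strings would fail here; the reference test still delivers its information because the oracle was deliberately separated from the presentation.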





Exploratory Testing is not the antithesis of structured testing

I am getting tired of fellow software testers telling, stating, writing that exploratory testers do not like structured testing, or that exploratory testing is the opposite of structured testing, or that exploratory testing can only be done based on experience or domain knowledge, and on and on….

Exploratory testing, like ‘traditional testing’, is also based on risk analysis, requirements, quality attributes and test design techniques. It does not ignore or oppose these approaches. Exploratory testing simply reaches beyond them: it is also based on modeling, using oracles, using heuristics, time management, providing stakeholders with relevant information, and much more.

Exploratory testing doesn’t spend unnecessary time on writing specific stepwise test cases in advance. It rather works with test ideas, which it critically investigates while keeping an open mind about what is observed during execution. Exploratory testing then uses that information to create new and additional test ideas, change direction or raise bugs. But it always aims to use the results to provide relevant information to stakeholders that enables them to take decisions or meet their targets. That can be a verbal account or a brief note, but is more likely to be a stakeholder-specific test execution account, showing test results related to achieving the stakeholder’s acceptance criteria, (business) goals and mission. It accounts for how much has been done, what could not be done, and how much still should be done, both in terms of progress and coverage.

Exploratory testing is no free pass to just click around in the software. Exploratory testing is both highly structured and flexible: flexible enough to change along the way so it can provide the most valuable information possible to the stakeholders.



To do exploratory testing well you have to work structured, disciplined and flexible at the same time. That’s what makes exploratory testing hard to do but lots of fun at the same time.

You don’t have to take just my word for it. Many have written about it before, see some examples below, but the best way to get convinced is to learn and experience it. So I challenge you to go out and do it, seriously and with engagement. If you don’t know how, many colleagues, or I myself, are more than happy to show you.

Further reading

Best book:

And just to prevent wrong ideas:
Test ideas can, depending on context, be more or less detailed and almost look like scripts, even while their execution is not scripted.
Also, testers who prefer exploratory testing can use checklists or scripts if that better serves the stakeholders’ need for information, although I think information transfer is better served by putting relevant detail in the reports and not in the test cases.


Why is software development (not) like building a house?

While this blog mostly addresses software testing, that is not the only thing I do.
I am also a Scrum Master and provide agile coaching. This post is about a topic I have run across particularly in agile environments, but that also comes up in discussions about other development practices.

Why isn’t software development like building a house!?

Building a house starts at the foundation not at the roof!

Houses, bridges and buildings in general are predictable, so why isn’t building software?!

These remarks and similar ones express the discontent some have with software development in general and agile software development in particular. A discontent that I believe arises as a side effect of two basic elements of agile development: “transparency” and “prioritization”. Transparency has the effect that a lot of things that usually take place behind the scenes, or that are handled implicitly, become visible to a wider audience. Prioritization in agile regularly breaks with the seeming simplicity and logical sequentiality of traditional software development, as it focuses on delivering valuable but partial solutions first rather than complete solutions later. And it does so visibly.

The simplicity of building…

So is building a house really that simple?
Popular notions dictate that building follows these, or similar, steps:

  • Pouring a foundation
  • Erecting the walls
  • Flooring
  • Adding a roof
  • Closing up with windows and doors

Occasionally an architect, electricity and plumbing are added, but not much more. In any case, building is considered fairly simple and straightforward, and essentially the question is raised why software development isn’t.

The answer to that question is actually simple.
If software development were only compiling the final build and deploying it to production, it would be just as simple. But software development is not only creating a build and deploying it, and neither is building a house, if you look further.

A more realistic view on building a house

Building a house doesn’t start with pouring a foundation; it doesn’t even start with the blueprint the architect makes. It starts earlier, with the design and production of all the materials the architect and the builder chose for construction of the house. Concrete, bricks, wooden beams, pipes, cables, lighting, etc. did not appear out of thin air, ready for use. All of these materials have undergone their own design, prototyping, testing, and fabrication process before they could be picked by an architect and used by a builder on site. Let’s take a closer look at bricks to illustrate this.

In the Netherlands and Belgium alone there are over 45 different sizes of bricks, and there are many mixtures of ingredients (clay, sand, minerals, water, etc.), colors, and textures. While some of the differences are merely decorative, many of them are functional. And that is just the brick itself. Consider also all the possible ways of stacking them, and all the kinds of cement that can be used to keep them together, which took years and years of practical (live) tests.

So before the first brick is laid, not only have the architect and builder chosen which brick to use and how to use it. The manufacturer of the brick has also implemented design specifications, chosen a mixture of ingredients, chosen a way to manufacture it, and had it tested for durability, fireproofing, insulation and structural integrity.

This is so for most, if not all, materials used in building. Some of them may be more or less generic, but all of them eventually have a specific usage in building a specific house.

Comparing building a house to software development

So how does building a house compare to software development? Well, building a house can basically be seen as the sum of all individual material developments, integrated into the construction of a house. Looked at that way, it is not unlike the software development of an application. In software development the individual components of the application are developed and eventually integrated into a full application.

Yet in practice building a house is not conceived as similar to software development. And that is not surprising. In software development, components are often developed as part of the same project in which the application is delivered, while in building, the individual materials are most likely developed and manufactured separately, in both place and time, from the actual construction of the building.

Another similarity between building a house and developing software is that, contrary to the simplistic view, both are hardly ever right the first time. In fact, errors are so common in building that professions and consumer societies exist that are largely based on its shortcomings. Like software development, with its software testers, auditors, etc., building also has its inspectors, overseers, etc.

A difference between software development and building a house is the visible duration. Building (and deploying) software can take from a few minutes up to a few days, while building a house takes from a few days up to several weeks. On the other side of the coin, the preparation for building a house is largely unseen, which is quite the opposite of developing software: getting to a deployable build can take from a few days to several months, and sometimes up to several years.


I think the continued use of technical references and terminology in software testing spurs the comparison between software development and building. It seems to me that the remarks with which this post started find their basis in an incomplete vision, a vision based on contextual experiences. Working in an IT environment lets you see what a struggle software development is. And when your vision of house building comes from an owner’s perspective, you are likely to take only the final construction as a reference, with all its supposed simplicity.

Even when the development processes leading up to construction are taken into account, there still are significant differences in time, space and execution that outweigh the similarities.

Non-functional list of ideas

A while ago the lead business analyst in my project asked me if I could supply him with a list of non-functional tests.
While aware that he probably meant the non-functional tests defined, executed and/or logged for our project, I did not have a lot of time to search our testware for them, so I gave him the list of definitions below to digest as a starter.

I shared that list so the items on it can be used to draw test ideas from. Since the initial publication I have decided that this should be a living document. My first addition is based on the list of Software Quality Characteristics by Rikard Edgren, Henrik Emilsson and Martin Jansson (the Test Eye). I would like to continue to add definitions, and possibly the supporting items that the Test Eye added to each of theirs.

accessibility: usability of a product, service, environment or facility by people with the widest range of capabilities

accountability: degree to which the actions of an entity can be traced uniquely to the entity

adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered.

analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified

availability: The degree to which a component or system is operational and accessible when required for use.

capacity: degree to which the maximum limits of a product or system parameter meet requirements

capability: Can the product perform valuable functions?

Completeness: all important functions wanted by end users are available.
Accuracy: any output or calculation in the product is correct and presented with significant digits.
Efficiency: performs its actions in an efficient manner (without doing what it’s not supposed to do.)
Interoperability: different features interact with each other in the best way.
Concurrency: ability to perform multiple parallel tasks, and run at the same time as other processes.
Data agnosticism: supports all possible data formats, and handles noise
Extensibility: ability for customers or 3rd parties to add features or change behavior.

changeability: The capability of the software product to enable specified modifications to be implemented.

charisma: Does the product have “it”?

Uniqueness: the product is distinguishable and has something no one else has.
Satisfaction: how do you feel after using the product?
Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
Curiosity: will users get interested and try out what they can do with the product?
Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
Hype: does the product use too much or too little of the latest and greatest technologies/ideas?
Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
Directness: are (first) impressions impressive?
Story: are there compelling stories about the product’s inception, construction or usage?

compatibility: degree to which a product, system or component can exchange information with other products, systems or components, or perform its required functions, while sharing the same hardware or software environment

compatibility: How well does the product interact with software and environments?

Hardware Compatibility: the product can be used with applicable configurations of hardware components.
Operating System Compatibility: the product can run on intended operating system versions, and follows typical behavior.
Application Compatibility: the product, and its data, works with other applications customers are likely to use.
Configuration Compatibility: product’s ability to blend in with configurations of the environment.
Backward Compatibility: can the product do everything the last version could?
Forward Compatibility: will the product be able to use artifacts or interfaces of future versions?
Sustainability: effects on the environment, e.g. energy efficiency, switch-offs, power-saving modes, telecommuting.
Standards Conformance: the product conforms to applicable standards, regulations, laws or ethics.

compliance: The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions.

confidentiality: degree to which a product or system ensures that data are accessible only to those authorized to have access 

flexibility: the ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed

installability: The capability of the software product to be installed in a specified environment

integrity: degree to which a system or component prevents unauthorized access to, or modification of, computer programs or data

interoperability: The capability of the software product to interact with one or more specified components or systems

IT-ability: Is the product easy to install, maintain and support?

System requirements: ability to run on supported configurations, and handle different environments or missing components.
Installability: product can be installed on intended platforms with appropriate footprint.
Upgrades: ease of upgrading to a newer version without loss of configuration and settings.
Uninstallation: are all files (except user’s or system files) and other resources removed when uninstalling?
Configuration: can the installation be configured in various ways or places to support customer’s usage?
Deployability: product can be rolled-out by IT department to different types of (restricted) users and environments.
Maintainability: are the product and its artifacts easy to maintain and support for customers?
Testability: how effectively can the deployed product be tested by the customer?

learnability: The capability of the software product to enable the user to learn its application

localizability: How economical will it be to adapt the product for other places?

maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment

maintainability: Can the product be maintained and extended at low cost?

Flexibility: the ability to change the product as required by customers.
Extensibility: will it be easy to add features in the future?
Simplicity: the code is not more complex than needed, and does not obscure test design, execution and evaluation.
Readability: the code is adequately documented and easy to read and understand.
Transparency: is it easy to understand the underlying structures?
Modularity: the code is split into manageable pieces.
Refactorability: are you satisfied with the unit tests?
Analyzability: ability to find causes for defects or other code of interest.

modifiability: ease with which a system can be changed without introducing defects

modularity: degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components

non-functional requirement: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability

operability: The capability of the software product to enable the user to operate and control it

performance: Is the product fast enough?

Capacity: the many limits of the product, for different circumstances (e.g. slow network.)
Stress handling: how does the system cope when exceeding various limits?
Responsiveness: the speed with which an action is (perceived as) performed.
Availability: the system is available for use when it should be.
Throughput: the product’s ability to process many, many things.
Endurance: can the product handle load for a long time?
Feedback: is the feedback from the system on user actions appropriate?
Scalability: how well does the product scale up, out or down?

pleasure: degree to which a user obtains pleasure from fulfilling personal needs

portability: The ease with which the software product can be transferred from one hardware or software environment to another

portability: Is transferring of the product to different environments and languages enabled?

Reusability: can parts of the product be re-used elsewhere?
Adaptability: is it easy to change the product to support a different environment?
Compatibility: does the product comply with common interfaces or official standards?
Internationalization: it is easy to translate the product.
Localization: are all parts of the product adjusted to meet the needs of the targeted culture/country?
User Interface-robustness: will the product look equally good when translated?

recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure

reliability: The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations

reliability: Can you trust the product in many and difficult situations?

Stability: the product shouldn’t cause crashes, unhandled exceptions or script errors.
Robustness: the product handles foreseen and unforeseen errors gracefully.
Recoverability: it is possible to recover and continue using the product after a fatal error.
Resource Usage: appropriate usage of memory, storage and other resources.
Data Integrity: all types of data remain intact throughout the product.
Safety: the product will not be part of damaging people or possessions.
Disaster Recovery: what if something really, really bad happens?
Trustworthiness: is the product’s behavior consistent, predictable, and trustworthy?

replaceability: The capability of the software product to be used in place of another specified software product for the same purpose in the same environment

reusability: degree to which an asset can be used in more than one system, or in building other assets

robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions

safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use

satisfaction: freedom from discomfort and positive attitudes towards the use of the product

scalability: The capability of the software product to be upgraded to accommodate increased loads

security: Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data

security: Does the product protect against unwanted usage?

Authentication: the product’s identifications of the users.
Authorization: the product’s handling of what an authenticated user can see and do.
Privacy: ability to not disclose data that is protected to unauthorized users.
Security holes: the product should not invite social engineering vulnerabilities.
Secrecy: the product should under no circumstances disclose information about the underlying systems.
Invulnerability: ability to withstand penetration attempts.
Virus-free: product will not transport virus, or appear as one.
Piracy Resistance: no possibility to illegally copy and distribute the software or code.
Compliance: security standards the product adheres to.

stability: The capability of the software product to avoid unexpected effects from modifications in the software

suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives

supportability: Can customers’ usage and problems be supported?

Identifiers: is it easy to identify parts of the product and their versions, or specific errors?
Diagnostics: is it possible to find out details regarding customer situations?
Troubleshootable: is it easy to pinpoint errors (e.g. log files) and get help?
Debugging: can you observe the internal states of the software when needed?
Versatility: ability to use the product in more ways than it was originally designed for.

survivability: degree to which a product or system continues to fulfill its mission by providing essential services in a timely manner in spite of the presence of attacks

testability: The capability of the software product to enable modified software to be tested

testability: Is it easy to check and test the product?

Traceability: the product logs actions at appropriate levels and in usable format.
Controllability: ability to independently set states, objects or variables.
Isolateability: ability to test a part by itself.
Observability: ability to observe things that should be tested.
Monitorability: can the product give hints on what/how it is doing?
Stability: changes to the software are controlled, and not too frequent.
Automation: are there public or hidden programmatic interfaces that can be used?
Information: ability for testers to learn what needs to be learned…
Auditability: can the product and its creation be validated?

traceability: The ability to identify related items in documentation and software, such as requirements with associated tests

usability: extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use

usability: Is the product easy to use?

Affordance: product invites to discover possibilities of the product.
Intuitiveness: it is easy to understand and explain what the product can do.
Minimalism: there is nothing redundant about the product’s content or appearance.
Learnability: it is fast and easy to learn how to use the product.
Memorability: once you have learnt how to do something you don’t forget it.
Discoverability: the product’s information and capabilities can be discovered by exploration of the user interface.
Operability: an experienced user can perform common actions very fast.
Interactivity: the product has easy-to-understand states and possibilities of interacting with the application (via GUI or API).
Control: the user should feel in control over the proceedings of the software.
Clarity: is everything stated explicitly and in detail, with a language that can be understood, leaving no room for doubt?
Errors: there are informative error messages, difficult to make mistakes and easy to repair after making them.
Consistency: behavior is the same throughout the product, and there is one look & feel.
Tailorability: default settings and behavior can be specified for flexibility.
Accessibility: the product is possible to use for as many people as possible, and meets applicable accessibility standards.
Documentation: there is a Help that helps, and matches the functionality.

Any additions to this list or comments on the definitions used are welcome.
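Since the list above is meant as a pool to draw test ideas from, it also lends itself to a simple programmatic treatment. Below is a minimal sketch in Python: the characteristic names are excerpted from the list above, but the grouping, the `draw_test_ideas` function and its behavior are my own illustration, not part of any standard or published tool.

```python
import random

# A small excerpt of the quality characteristics above, grouped as
# parent characteristic -> sub-characteristics (test idea triggers).
CHARACTERISTICS = {
    "reliability": ["stability", "robustness", "recoverability",
                    "resource usage", "data integrity"],
    "usability": ["learnability", "memorability", "operability",
                  "consistency", "accessibility"],
    "performance": ["capacity", "stress handling", "responsiveness",
                    "throughput", "endurance", "scalability"],
}

def draw_test_ideas(n, seed=None):
    """Draw up to n (characteristic, sub-characteristic) pairs to use
    as prompts for exploratory test charters."""
    rng = random.Random(seed)
    pairs = [(c, s) for c, subs in CHARACTERISTICS.items() for s in subs]
    return rng.sample(pairs, min(n, len(pairs)))

# Print a few randomly drawn charter prompts.
for characteristic, idea in draw_test_ideas(3, seed=1):
    print(f"Explore the product's {characteristic}: focus on {idea}")
```

The point of the sketch is only that such a list, kept in one place, can feed charters or checklists without prescribing scripted test cases.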


  • ISO/IEC 29119 and ISO/IEC 25010
  • Software Quality Characteristics and Internal Software Quality Characteristics by Rikard Edgren, Henrik Emilsson and Martin Jansson in The Little Black Book on Test Design