About Arborosa

Arborosa is a software tester, walking fanatic, father of two and husband living in Utrecht in the Netherlands. This blog is intended to share my thoughts and opinions on software testing, books, blogs, experiences and anything else that I find interesting enough to write and publish about.

Exploratory Testing is not the antithesis of structured testing

I am getting tired of fellow software testers saying, stating and writing that exploratory testers do not like structured testing, or that exploratory testing is the opposite of structured testing, or that exploratory testing can only be done based on experience or domain knowledge, and so on….

Exploratory testing is, just like ‘traditional testing’, based on risk analysis, requirements, quality attributes and test design techniques. It does not ignore or oppose these approaches. Exploratory testing simply reaches beyond them: it is also based on modeling, using oracles, applying heuristics, managing time, providing stakeholder-relevant information, and much more.

Exploratory testing doesn’t spend unnecessary time on writing specific stepwise test cases in advance. It rather works with test ideas, which it critically investigates while keeping an open mind on what is observed during execution. Exploratory testing then uses that information to create new and additional test ideas, change direction or raise bugs. But it always aims to use the results to provide relevant information to stakeholders that enables them to make decisions or meet their targets. That information can be a verbal account or a brief note, but it is more likely to be a stakeholder-specific test execution report, showing test results related to the stakeholder’s acceptance criteria, (business) goals and mission. Such a report describes what has been done, what could not be done and what still should or needs to be done, both in terms of progress and coverage.

Exploratory testing is no free pass to just click around in the software. Exploratory testing is both highly structured and flexible: flexible enough to change course along the way so that it can provide the most valuable information possible to the stakeholders.
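To make that structure tangible, here is a minimal, hypothetical sketch (in Python) of how a single test idea can combine a model, an oracle and note-taking for a session report. The apply_discount function, the discount rule used as oracle and the probed values are invented for illustration and are not taken from any project mentioned on this blog.

# One exploratory test idea: "probe discounts around interesting boundaries".
# The values below are a starting point; an explorer extends or changes them
# based on what is observed during execution.

def apply_discount(price, percentage):
    # Stand-in for the production code being explored (hypothetical).
    return round(price * (1 - percentage / 100), 2)

def oracle(price, percentage):
    # A simple model of the expected behaviour, used as an oracle.
    return round(price - price * percentage / 100, 2)

session_notes = []  # observations that feed the stakeholder-facing report

for price, pct in [(100.0, 0), (100.0, 100), (0.01, 50), (99.99, 33)]:
    actual = apply_discount(price, pct)
    expected = oracle(price, pct)
    status = "OK" if actual == expected else "INVESTIGATE"
    session_notes.append(
        f"{status}: price={price}, discount={pct}% -> got {actual}, expected {expected}"
    )

print("\n".join(session_notes))

The point is not the code itself but the structure it makes visible: an explicit test idea, an oracle to judge results against, and notes that end up in the report to stakeholders.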

So…

Exploratory Testing is not the antithesis of structured testing

To do exploratory testing well you have to work in a structured, disciplined and flexible way, all at the same time. That’s what makes exploratory testing hard to do, but also lots of fun.

You don’t just have to take my word for it. Many have written about it before, see some examples below, but the best way to get convinced is to learn and experience it. So I challenge you to go out and do it seriously and with engagement. If you don’t know how, many colleagues or I are more than happy to show you.

Further reading

http://www.satisfice.com/articles/what_is_et.shtml
http://www.kaner.com/pdfs/QAIExploring.pdf
http://university.utest.com/exploratory-testing-the-basics/
http://www.developsense.com/blog/category/exploratory-testing/?submit=Search
http://www.slideshare.net/codecentric/exploratory-testing-inagileoverviewmeettheexpertselisabethhendrickson
http://lets-test.com/?p=2335

Best book:
https://pragprog.com/book/ehxta/explore-it

And just to prevent wrong ideas:
Test ideas can, depending on context, be more or less detailed and almost look like scripts, even while their execution is not scripted.
Also, testers who prefer exploratory testing can use checklists or scripts if that better serves the stakeholders’ need for information, although I think information transfer is better served by putting the relevant detail in the reports rather than in the test cases.

 

Why is software development (not) like building a house?

While this blog mostly addresses software testing, that is not the only thing I do.
I am also a Scrum Master and provide agile coaching. This post is about a topic I have run across particularly in agile environments, but one that also comes up in discussions about other development practices.

Why isn’t software development like building a house!?

Building a house starts at the foundation not at the roof!

Houses, bridges and building in general are predictable, so why isn’t building software?!

These remarks and similar ones express the discontent some have with software development in general and agile software development in particular. A discontent that I believe arises as a side effect of two basic elements of agile development: “transparency” and “prioritization”. Transparency has the effect that a lot of things that usually take place behind the scenes, or that are handled implicitly, become visible to a wider audience. Prioritization in agile regularly breaks with the seeming simplicity and logical sequentiality of traditional software development, as it focuses on delivering valuable but partial solutions first rather than complete solutions later. And it does so visibly.

The simplicity of building…

So is building a house really that simple?
Popular notions dictate that building follows these, or similar, steps:

  • Pouring a foundation
  • Erecting the walls
  • Flooring
  • Adding a roof
  • Closing up with windows and doors

Occasionally an architect, electricity and plumbing are added, but not much more. In any case building is considered fairly simple and straightforward, and essentially the question raised is why software development isn’t.

The answer to that question is actually simple.
If software development consisted only of compiling the final build and deploying it to production, it would be just as simple. But software development is not only creating a build and deploying it, and neither is building a house if you look a little further.

A more realistic view on building a house

Building a house doesn’t start with pouring a foundation; it doesn’t even start with the blueprint the architect makes. It starts earlier, with the design and production of all the materials the architect and the builder chose for the construction of the house. Concrete, bricks, wooden beams, pipes, cables, lighting, etc. did not appear out of thin air, ready for use. All of these materials have undergone their own design, prototyping, testing and fabrication process before they could be picked by an architect and used by a builder on site. Let’s take a closer look at bricks to illustrate this.

In the Netherlands and Belgium alone there are over 45 different sizes of bricks, and there are many mixtures of ingredients (clay, sand, minerals, water, etc.), colors and textures. While some of these differences are merely decorative, many of them are functional. And that is just the brick itself. Consider also all the possible ways of stacking bricks and all the kinds of cement that can be used to keep them together, which took years and years of practical (live) testing.

So before the first brick is laid, not only have the architect and builder chosen which brick to use and how to use it; the manufacturer of the brick has also drawn up design specifications, chosen a mixture of ingredients, chosen a way to manufacture it and had it tested for durability, fireproofing, insulation and structural integrity.

This is so for most, if not all, materials used in building. Some of them may be more or less generic, but all of them eventually have a specific usage in building a specific house.

Comparing building a house to software development

So how does building a house then compare to software development? Building a house can basically be seen as the sum of all the individual material developments, integrated into the construction of a house. Looked at in that way it is not very unlike the software development of an application. In software development the individual components of the application are developed and eventually integrated into a full application.

Yet in practice building a house is not perceived as similar to software development. And that is not surprising. In software development the components are often developed as part of the same project in which the application is delivered, while in building the individual materials are most likely developed and manufactured separately, both in place and in time, from the actual construction of the building.

Another similarity between building a house and developing software is that, contrary to the simplistic view, both are hardly ever right the first time. In fact errors are so common in building that professions and consumer societies exist that are largely based on its shortcomings. Just as software development has its software testers, auditors, etc., building has its inspectors, overseers, etc.

A difference between software development and building a house is the visible duration. Building (and deploying) software can take anywhere from a few minutes to a few days, while building a house takes a few days to several weeks. On the other side of the coin, the preparation for building a house is largely unseen, which is quite the opposite of developing software: getting to a deployable build can take a few days to several months, and sometimes up to several years.

Conclusion

I think the continued use of technical references and terminology in software testing spurs the comparison between software development and building. It seems to me that the remarks with which this post started find their basis in an incomplete vision, a vision that is based on contextual experiences. Working in an IT environment lets you see what a struggle software development is. And when your vision of house building comes from an owner’s perspective, you are likely to take only the final construction as a reference, with all its supposed simplicity.

Even when the development processes leading up to construction are taken into account, there still are significant differences in time, space and execution that outweigh the similarities.


Non functional list of ideas

A while ago the lead business analyst in my project asked me if I could supply him with a list of non-functional tests.
While aware that he probably meant whether I could supply him with the non-functional tests defined, executed and/or logged for our project, I did not have a lot of time to search our testware for them, so I gave him the list of definitions below to digest as a starter.

I shared that list so the items on it can be used to draw test ideas from. Since the initial publication I have decided that this should be a living document. My first addition is based on the list of Software Quality Characteristics by Rikard Edgren, Henrik Emilsson and Martin Jansson (thetesteye). I would like to continue to add definitions, and possibly the supporting items that thetesteye added to theirs, for each of them.

accessibility: usability of a product, service, environment or facility by people with the widest range of capabilities

accountability: degree to which the actions of an entity can be traced uniquely to the entity

adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered.

analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified

availability: The degree to which a component or system is operational and accessible when required for use.

capacity: degree to which the maximum limits of a product or system parameter meet requirements

capability: Can the product perform valuable functions?

Completeness: all important functions wanted by end users are available.
Accuracy: any output or calculation in the product is correct and presented with significant digits.
Efficiency: performs its actions in an efficient manner (without doing what it’s not supposed to do.)
Interoperability: different features interact with each other in the best way.
Concurrency: ability to perform multiple parallel tasks, and run at the same time as other processes.
Data agnosticism: supports all possible data formats, and handles noise
Extensibility: ability for customers or 3rd parties to add features or change behavior.

changeability: The capability of the software product to enable specified modifications to be implemented.

charisma: Does the product have “it”?

Uniqueness: the product is distinguishable and has something no one else has.
Satisfaction: how do you feel after using the product?
Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
Curiosity: will users get interested and try out what they can do with the product?
Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
Hype: does the product use too much or too little of the latest and greatest technologies/ideas?
Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
Directness: are (first) impressions impressive?
Story: are there compelling stories about the product’s inception, construction or usage?

compatibility: degree to which a product, system or component can exchange information with other products, systems or components, or perform its required functions, while sharing the same hardware or software environment

compatibility: How well does the product interact with software and environments?

Hardware Compatibility: the product can be used with applicable configurations of hardware components.
Operating System Compatibility: the product can run on intended operating system versions, and follows typical behavior.
Application Compatibility: the product, and its data, works with other applications customers are likely to use.
Configuration Compatibility: product’s ability to blend in with configurations of the environment.
Backward Compatibility: can the product do everything the last version could?
Forward Compatibility: will the product be able to use artifacts or interfaces of future versions?
Sustainability: effects on the environment, e.g. energy efficiency, switch-offs, power-saving modes, telecommuting.
Standards Conformance: the product conforms to applicable standards, regulations, laws or ethics.

compliance: The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions.

confidentiality: degree to which a product or system ensures that data are accessible only to those authorized to have access 

flexibility: the ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed

installability: The capability of the software product to be installed in a specified environment

integrity: degree to which a system or component prevents unauthorized access to, or modification of, computer programs or data

interoperability: The capability of the software product to interact with one or more specified components or systems

IT-ability: Is the product easy to install, maintain and support?

System requirements: ability to run on supported configurations, and handle different environments or missing components.
Installability: product can be installed on intended platforms with appropriate footprint.
Upgrades: ease of upgrading to a newer version without loss of configuration and settings.
Uninstallation: are all files (except the user’s or system files) and other resources removed when uninstalling?
Configuration: can the installation be configured in various ways or places to support customer’s usage?
Deployability: product can be rolled-out by IT department to different types of (restricted) users and environments.
Maintainability: are the product and its artifacts easy to maintain and support for customers.
Testability: how effectively can the deployed product be tested by the customer?

learnability: The capability of the software product to enable the user to learn its application

localizability: How economical will it be to adapt the product for other places 

maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment

maintainability: Can the product be maintained and extended at low cost?

Flexibility: the ability to change the product as required by customers.
Extensibility: will it be easy to add features in the future?
Simplicity: the code is not more complex than needed, and does not obscure test design, execution and evaluation.
Readability: the code is adequately documented and easy to read and understand.
Transparency: Is it easy to understand the underlying structures?
Modularity: the code is split into manageable pieces.
Refactorability: are you satisfied with the unit tests?
Analyzability: ability to find causes for defects or other code of interest.

modifiability: ease with which a system can be changed without introducing defects

modularity: degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components

non-functional requirement: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability

operability: The capability of the software product to enable the user to operate and control it

performance: Is the product fast enough?

Capacity: the many limits of the product, for different circumstances (e.g. slow network.)
Stress handling: how does the system cope when exceeding various limits?
Responsiveness: the speed with which an action is (perceived as) performed.
Availability: the system is available for use when it should be.
Throughput: the product’s ability to process many, many things.
Endurance: can the product handle load for a long time?
Feedback: is the feedback from the system on user actions appropriate?
Scalability: how well does the product scale up, out or down?

pleasure: degree to which a user obtains pleasure from fulfilling personal needs

portability: The ease with which the software product can be transferred from one hardware or software environment to another

portability: Is transferring of the product to different environments and languages enabled?

Reusability: can parts of the product be re-used elsewhere?
Adaptability: is it easy to change the product to support a different environment?
Compatibility: does the product comply with common interfaces or official standards?
Internationalization: it is easy to translate the product.
Localization: are all parts of the product adjusted to meet the needs of the targeted culture/country?
User Interface-robustness: will the product look equally good when translated?

recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure

reliability: The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations

reliability: Can you trust the product in many and difficult situations?

Stability: the product shouldn’t cause crashes, unhandled exceptions or script errors.
Robustness: the product handles foreseen and unforeseen errors gracefully.
Recoverability: it is possible to recover and continue using the product after a fatal error.
Resource Usage: appropriate usage of memory, storage and other resources.
Data Integrity: all types of data remain intact throughout the product.
Safety: the product will not be part of damaging people or possessions.
Disaster Recovery: what if something really, really bad happens?
Trustworthiness: is the product’s behavior consistent, predictable, and trustworthy?

replaceability: The capability of the software product to be used in place of another specified software product for the same purpose in the same environment

reusability: degree to which an asset can be used in more than one system, or in building other assets

robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions

safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use

satisfaction: freedom from discomfort and positive attitudes towards the use of the product

scalability: The capability of the software product to be upgraded to accommodate increased loads

security: Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data

security: Does the product protect against unwanted usage?

Authentication: the product’s identifications of the users.
Authorization: the product’s handling of what an authenticated user can see and do.
Privacy: ability to not disclose data that is protected to unauthorized users.
Security holes: product should not invite to social engineering vulnerabilities.
Secrecy: the product should under no circumstances disclose information about the underlying systems.
Invulnerability: ability to withstand penetration attempts.
Virus-free: product will not transport virus, or appear as one.
Piracy Resistance: no possibility to illegally copy and distribute the software or code.
Compliance: security standards the product adheres to.

stability: The capability of the software product to avoid unexpected effects from modifications in the software

suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives

supportability: Can customers’ usage and problems be supported?

Identifiers: is it easy to identify parts of the product and their versions, or specific errors?
Diagnostics: is it possible to find out details regarding customer situations?
Troubleshootable: is it easy to pinpoint errors (e.g. log files) and get help?
Debugging: can you observe the internal states of the software when needed?
Versatility: ability to use the product in more ways than it was originally designed for.

survivability: degree to which a product or system continues to fulfill its mission by providing essential services in a timely manner in spite of the presence of attacks

testability: The capability of the software product to enable modified software to be tested

testability: Is it easy to check and test the product?

Traceability: the product logs actions at appropriate levels and in usable format.
Controllability: ability to independently set states, objects or variables.
Isolateability: ability to test a part by itself.
Observability: ability to observe things that should be tested.
Monitorability: can the product give hints on what/how it is doing?
Stability: changes to the software are controlled, and not too frequent.
Automation: are there public or hidden programmatic interfaces that can be used?
Information: ability for testers to learn what needs to be learned…
Auditability: can the product and its creation be validated?

traceability: The ability to identify related items in documentation and software, such as requirements with associated tests

usability: extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use

usability: Is the product easy to use?

Affordance: product invites to discover possibilities of the product.
Intuitiveness: it is easy to understand and explain what the product can do.
Minimalism: there is nothing redundant about the product’s content or appearance.
Learnability: it is fast and easy to learn how to use the product.
Memorability: once you have learnt how to do something you don’t forget it.
Discoverability: the product’s information and capabilities can be discovered by exploration of the user interface.
Operability: an experienced user can perform common actions very fast.
Interactivity: the product has easy-to-understand states and possibilities of interacting with the application (via GUI or API).
Control: the user should feel in control over the proceedings of the software.
Clarity: is everything stated explicitly and in detail, with a language that can be understood, leaving no room for doubt?
Errors: there are informative error messages, difficult to make mistakes and easy to repair after making them.
Consistency: behavior is the same throughout the product, and there is one look & feel.
Tailorability: default settings and behavior can be specified for flexibility.
Accessibility: the product is possible to use for as many people as possible, and meets applicable accessibility standards.
Documentation: there is a Help that helps, and matches the functionality.

Any additions to this list or comments on the definitions used are welcome.
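As a hedged illustration of how an item from the list above can be turned into a concrete, executable test idea, here is a minimal Python sketch for “performance – responsiveness”. The operation and the 200 millisecond budget are invented examples, not part of the definitions or their sources.

import time

def operation_under_test():
    # Stand-in for the real action whose responsiveness matters (hypothetical).
    time.sleep(0.05)

RESPONSE_BUDGET_SECONDS = 0.2  # assumed stakeholder expectation, for illustration only

start = time.perf_counter()
operation_under_test()
elapsed = time.perf_counter() - start

verdict = "within" if elapsed <= RESPONSE_BUDGET_SECONDS else "over"
print(f"operation took {elapsed:.3f}s ({verdict} the assumed budget)")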

Sources

  • http://www.computer.org/sevocab (search items ISO 29119 and ISO 25010)
  • ISTQB Glossary
  • CRUSSPIC STMPL (RST courseware)
  • Software Quality Characteristics and Internal Software Quality Characteristics by Rikard Edgren, Henrik Emilsson and Martin Jansson in The Little Black Book on Test Design

 

TMap Day 2014

On October 28, 2014 I visited Sogeti’s TMap Day.
This year’s focus was on their new book “Neil’s Quest for Quality”,
subtitled “A TMap© HD Story”.

This blog post describes my first impressions of that day. When I have read the book I will either return to this post and adjust it or write a separate one on the book’s content.

Note: I have read the book and have adjusted the post.

The book is written as a novel. It contains “the TMap Human Driven story, consisting of a business novel, building blocks, Mr. Mikkel’s musings and contributions from the innovations board in testing”.

A quality-driven approach

The new TMap presents itself as a quality-driven approach that is captured in the TMap© Suite which consists of the following three parts:

  1. TMap Next
  2. TMap© HD
  3. Building Blocks described in the TMap HD book and gathered, maintained and extended on http://www.tmap.net

Inspired by Lean, Agile and DevOps

Both authors explained their contribution to the book and said that the elements (more detail later) are mostly inspired by the market move towards Agile and DevOps, and that the whole is based on a Lean approach to software development.

Aldert Boersma, one of the authors, positioned quality-driven as follows.

[Figure: positioning of TMap HD]

The approach described in TMap HD© distinguishes five basic elements:

[Figure: the five TMap HD elements]

The (short) TMap HD descriptions for these concepts are:
People
Only people can realize the move from “Testing according to TMap” to “Testing with TMap”. People with a broad knowledge of quality and testing, that is.

Simplify
Make things as simple as possible – but not more simple than that.
Start small and work from there. A complicated process will only lose focus on the result.

Integrate
Integration with respect to testing denotes a shared way of working, with a shared responsibility for quality. Testing is not a stand-alone process.

Industrialize
Industrialization is important in improving testing and optimizing quality. Test tools are used to test more, more often, and faster.

Confidence
The goal of TMap HD© is providing confidence in IT solutions through a quality-driven approach. Confidence is the fifth element over and above all others.

Building Blocks

The first four elements should help you choose and apply building blocks that give (build) confidence. There is no prescribed set of blocks to use, and the main blocks themselves are seen as larger blocks from which smaller parts can be chosen.

The building blocks currently available are:

  • Test manager
    In the Test manager presentation the remark was made that fewer people will be test managers, but the activities will remain
  • Test manager in traditional
  • Assignment
  • Test organization
  • Test plan
  • Product risk analysis
    Oddly, during the test management presentation and in the book this was called
    Product Risk & Benefit Analysis, but that is not part of the website (yet)
  • Test strategy
  • Performance testing
  • Test approaches
  • Crowd-testing
  • Test varieties
    This replaces the current Test Levels and Test Types.
    They are divided into Experience-based and Coverage-based
  • Test manager in agile
  • Permanent test organisation
  • Model-based testing
  • Quality policy
  • Using test tools
  • Quality-driven characteristics
  • Integrated test organization
  • Reviewing requirements

As a kind of closing motto for the morning, the following phrase was offered as a summary for test managers: “Do not report trouble but offer choices for the client”.

So…

The TMap Day and the book have left me with rather distinct, and slightly contradictory, impressions.

A move in the right direction

Sogeti has embraced the fact that software development, and with it software testing, has changed over the last decades. The rise of Agile and Lean on the one side and the decline of Waterfall on the other haven’t gone unnoticed, and the new brand certainly addresses these developments. There is also an influence that Sogeti has carefully avoided mentioning, but that I believe is clearly present. That influence is Context-Driven Testing. In spite of naming it environment, circumstance or situation, TMap HD shares the principle that software testing is and should be different depending on the context, and should use what best suits that context.

Criticism

Ever since TMap and TMap Next appeared, and particularly since their training and certification program appeared, there has been a lot of criticism. This criticism especially focuses on the rigid factory-school view on software testing and the limited value of TMap certification. While Sogeti itself did not react much to this, many of the authors, most of them no longer working for Sogeti, did. The common denominator in their responses was that the content was misunderstood and was never meant to be followed to the letter.

Next to a wider interest in, and influence of, new software development approaches, the emergence of building blocks shows that part of the criticism has been taken to heart and that TMap can and should now be used more flexibly.

Superficial

In the Netherlands we have a saying, “Oude wijn in nieuwe zakken” (literally “old wine in new wineskins”), expressing that although something looks new it is still the same old stuff. I believe this also applies to TMap HD. Even with the influence of Agile and Lean and the introduction of Building Blocks in the book, I am still left with the feeling that beneath the surface the nature of the solutions is the same as before. This feeling is reinforced by the fact that TMap Next is still declared to be the core of the testing approach and that all existing training courses and certificates remain. Especially that last part has led to the rigid and limited testing approach that many Dutch testers employ.

So while in theory there is hope for positive change I fear that in reality nothing much will change for the better.

The CDT Brigade

On August 26, 2014 @perftestman posted the following tweet:
“I think a lot of the CDT brigade just like the sound of they’re  own voice –
everything’s CONTEXT driven! ”

It was her reaction to my earlier tweet, which got some attention:
“Dear @pt_wire I use effective, documented systematic testing processes and methods. But not generic one size fits all. #CDT vs #ISO29119”

For a while both James Bach and I reacted to it, but the limitation of 140 characters, twittering on a mobile phone and, not least, work made me stop engaging in the Twitter feed. It wasn’t the first time that people resorted to this kind of fallacy, thus avoiding discussing the actual content. This made me think about it, and in this post I will share my little thought exercise.

Addressing the first part of her tweet: what is a brigade?

  • A group of people who have the same beliefs
    I wouldn’t compare CDT to a belief system, but one cannot deny that the CDT community shares values, talks about them, sometimes feels strongly about them and expresses them out in the open. This doesn’t apply specifically to CDT, as e.g. the ISTQB brigade shows similar behavior.
    I also do not think that the CDT community is out to convert people to be context-driven testers. To convince by pointing out alternatives and different approaches – yes; answer questions – yes; share knowledge – yes; share experiences – yes; but testers are allowed to decide for themselves how and if they want to use it.
  • A group of people organized to act together
    The CDT community certainly regularly confers, meets each other (live or online) and challenges each other. They are however not organized as a single group or organization, nor do I think they want to be. They are mostly independent and critical minds who have discovered that taking into account and using the context helps them to deliver more value, and that building skill and acquiring and sharing knowledge and experience with others helps them to get better at doing that.
  • A large group of soldiers that is part of an army
    I fear that @perftestman intended to make use of this definition. This comparison, however, lacks credibility. The CDT community might figuratively seem to go to battle over some subject and, when feeling strongly, engage in fierce discussion. And unfortunately, occasionally a few of them even lash out at individuals who oppose CDT values or cannot handle the challenging style of discussion. But even if we form a community, we are not so organized that we form a specific group of sorts, nor do we intend to destroy or conquer. The community is essentially a collection of like-minded professionals who rather aim to convince with facts, ideas and experience, with the intent to advance the craft.

In a follow-up post I will address an earlier tweet, “Too many of these so called test guru are impostors – detached from the realities of software development and #lovethesoundofyourownvoice”, compare the contributors to ISO 29119 to the CDT brigade, and thus attempt to discover who are meant to be the potential impostors in this tweet and on the other occasions that spawned remarks like this.

 

Following the news – Code inspection is 80 percent faster than testing

During the CAST 2014 conference in New York I participated in a workshop by Laurent Bossavit and Michael Bolton called “Thinking critically about numbers – Defense against the dark arts”. Inspired by this workshop I took a look at one of the Dutch sites addressing news about software testing, www.testnieuws.nl. This is the second post to come out of my curiosity.

On May 23, 2014 Testnieuws hosted an article “Code inspectie is 80 procent sneller dan testen” (translated: “Code inspection is 80 percent faster than testing”). The article itself provides little more substantiation for the claim than a reference to research by IfSQ. Both the claim and its use as a header seem to serve only to grab the reader’s attention. The article ends with an invitation to read more about it, and this leads to what I think is the actual article, “Status ICT-projecten vaak compleet onduidelijk” (translated: “Status of ICT projects often completely unclear”). This article describes that projects, especially government projects, need to have more objective information and need to get it earlier; this way it is possible to determine the status of a project. Andres Ramirez, Managing Partner of the OSQR Group, states that “better software leads to better projects”, “better source code leads to better software” and “the quality of source code can be objectively assessed by the guidelines from the Institute for Software Quality”. The last quote explains the IfSQ abbreviation used earlier.

A little further on in the article the claim is used and even extended: “Code inspection is 80 percent faster than testing, and finding and repairing code is much cheaper than testing”. Ramirez also adds: “Research by IfSQ shows that regular code inspection during the production process ensures that software can be changed more easily. Inspected software is 90 percent cheaper to maintain.” I am choosing to ignore these last claims for now and proceed to IfSQ to look into their research.

The IfSQ page “Research Findings Relevant to the IfSQ Standards” hosts about 50 or so references to articles and research results, divided into the sections “Why should you inspect software?”, “When should you inspect software?” and “What should you look for?”. Noticeably the focus is strongly on code quality and, in my opinion, therefore not really on software quality as such. Also, there seems to be a need to position code inspection opposite to testing, as suggested by titles like:

The second title points to a page with the title “Inspection is 80% faster than testing”, which indicates I am on the right track. The page however only repeats “Code reading detected about 80% more faults per hour than testing.” and provides two non-IfSQ sources for it, without further argumentation. The sources are:

So, at least in this case, the so-called research findings by IfSQ do not point to research executed by IfSQ themselves, nor were they involved, as both sources are quite old and IfSQ was established much later, in 2005. The next step is to identify which of the two sources holds the quote.

The first article was easily found. In summary, the article describes a scientific study that applies an experimentation methodology to compare three (then) state-of-the-practice testing techniques: a) code reading by stepwise abstraction, b) functional testing using equivalence partitioning and boundary value analysis, and c) structural testing using 100 percent statement coverage. It compares three aspects of software testing: fault detection effectiveness, fault detection costs and classes of faults detected. It focused on unit testing code, using a limited set of specific programs, known errors and a mix of academics and professional developers.
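To make the difference between techniques b) and c) concrete, here is a small, made-up Python illustration that is not taken from the study. The grade function and its 55-point pass mark are invented; the point is only that a boundary value analysis case set and a 100 percent statement coverage case set are not the same thing.

def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    if score >= 55:
        return "pass"
    return "fail"

# Boundary value analysis: cases derived from the specification's boundaries.
boundary_cases = [-1, 0, 54, 55, 100, 101]

# Statement coverage: these three inputs already execute every statement once
# (100% statement coverage), yet they miss the 54/55 boundary pair entirely.
coverage_cases = [-1, 60, 10]

for name, cases in [("boundary value analysis", boundary_cases),
                    ("statement coverage", coverage_cases)]:
    print(name)
    for score in cases:
        try:
            print(f"  grade({score}) -> {grade(score)}")
        except ValueError as exc:
            print(f"  grade({score}) -> ValueError: {exc}")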

Although the study found differences between the three test techniques, with in some instances an identifiable hierarchy of code reading, functional testing and structural testing, none of the results came anywhere near the claim of being 80% faster. So my conclusion is that this article cannot be a valid source for this claim.

I could only find the second article at IEEE, and as a result it could only be read by buying it. Setting aside my initial dislike of paying for information, especially if it is so old, I tried to buy it. Unfortunately the payment module did not like my Dutch credit card. As a result I stuck to a number of (four) abstracts and a summary of a course in which the article was used.

The second article came closer to the IfSQ description of code inspection in describing how it is done, what is needed for it and what it can measure. Still, none of the abstracts said anything about inspection being faster than testing. They did mention percentages of around 80% for defects found by software inspections, which to me is a different claim. It sounds to me as if a ‘leaky’ inference was made and, worse, another attempt to gain credibility by bringing testing into disrepute.