Test Types – D, E, F

Test Type
A particular type of testing that has an approach, goal and/or use of oracle(s) that provides information that is typical to that test type.

This is the third post in a (sub) series on Test Types. This post covers test types beginning with D, E and F. Please add any additions or remarks in the comment section.

Data integrity testing
Data integrity testing focusses on the verification and validation that the data within an application and its databases remains accurate, consistent and retained during its lifecycle while it is processed (CRUD), retrieved and stored.
I fear this is an area of testing that is often overlooked. While it gets some attention when functionality is first tested, attention to the behavior of data drops over time.
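As a minimal sketch of what such a round-trip check could look like (purely illustrative: the table, values and the use of SQLite are my own assumptions, not tied to any particular application):

  # Data integrity sketch: write, read back, update and delete a record and check each step.
  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

  # Create, then Read: the stored value should match what was written
  conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Alice')")
  assert conn.execute("SELECT name FROM customer WHERE id = 1").fetchone()[0] == "Alice"

  # Update: the change should be reflected
  conn.execute("UPDATE customer SET name = 'Alice B.' WHERE id = 1")
  assert conn.execute("SELECT name FROM customer WHERE id = 1").fetchone()[0] == "Alice B."

  # Delete: the record should be gone and no other records affected
  conn.execute("DELETE FROM customer WHERE id = 1")
  assert conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0] == 0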

Dependency testing
Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Destructive testing
Testing to determine the point of failure of the software, or of its individual components or features, when put under stress.
This seems very similar to Load Testing but I like the emphasis on individual stress points.

Development testing
It is an existing term with its own Wikipedia page but it doesn’t bring anything useful to software testing as such.

Documentation testing
Testing of documented information, definitions, requirements, procedures, results, logging, test cases, etc.

Dynamic testing
Testing the dynamic behavior of the software.
Almost all testing falls under this definition. So in practice a more specific identification of a test type should be chosen.

End-to-End testing
Testing the workflow of a single system or a chain of systems with regard to its inputs, outputs and the processing of these, covering aspects such as availability, capacity, compatibility, connectivity, continuity, interoperability, modularity, performance, reliability, robustness, scalability, security, supportability and traceability.
In theory end-to-end testing seems simple: enter some data and check that it is processed and handed over throughout all systems until the end of the chain. In practice end-to-end testing is very difficult. The long list of quality characteristics mentioned above serves as an indication of what could go wrong along the way.

Endurance testing
Endurance testing is testing the ability to handle continuous load, under normal conditions and under difficult/unpleasant conditions, over a longer period of time.

Error handling testing
Use specific input or behavior to generate known, and possibly unknown, errors.
Documented error and exception handling is a great source for test investigations. It often reveals undocumented requirements and business logic. It is also interesting to see whether the exceptions and errors actually occur in the described situations.
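To make this concrete, a small sketch in Python of checking that a documented error actually occurs in the described situation (the withdraw() function and its rule are hypothetical, and pytest is assumed):

  # Error handling sketch: verify the documented error occurs where described, and not elsewhere.
  import pytest

  def withdraw(balance, amount):
      # Hypothetical documented rule: withdrawing more than the balance raises ValueError.
      if amount > balance:
          raise ValueError("insufficient funds")
      return balance - amount

  def test_documented_error_occurs_in_described_situation():
      with pytest.raises(ValueError):
          withdraw(balance=100, amount=150)

  def test_no_error_on_the_happy_path():
      assert withdraw(balance=100, amount=40) == 60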

Error guessing
Error guessing is based on the idea that experienced, intuitive or skillful testers are able to find bugs based on their abilities, and that this can be used alongside more formal techniques.
As such one could argue that it is not as much a test type as it is an approach to testing. 

Exploratory testing
Exploratory testing is a way of learning about and investigating a system through concurrent design, execution, evaluation, re-design and reporting of tests, with the aim of finding answers to currently known and as yet unknown questions whose answers enable individual stakeholders to take decisions about the system.
Exploratory testing is an inherently structured approach to testing that seeks to go beyond the obvious requirements and uses heuristics and oracles to determine test coverage areas and test ideas and to determine the (relative) value of test results. Exploratory testing is often executed on the basis of Session Based Test Management, using charter-based and time-limited sessions.
It is noteworthy that, in theory at least, all of the test types mentioned in this series could be part of exploratory testing if deemed appropriate to use. 

Failover testing
Failover testing investigates the system's ability to successfully fail over, recover or re-allocate resources after a hardware, software or network malfunction, such that no data is lost, data integrity remains intact and no ongoing transactions fail.

Fault injection testing
Fault injection testing is a method in which hardware faults, compile time faults or runtime faults are ‘injected’ into the system to validate its robustness.
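A minimal sketch of injecting a runtime fault, assuming Python and a hypothetical quote-fetching dependency; the point is only to show the pattern of forcing a failure and checking that the system degrades gracefully:

  # Fault injection sketch: force a dependency to fail and verify robust behaviour.
  from unittest import mock

  def fetch_quote(symbol):
      return 123.45  # imagine a real network call here

  def display_price(symbol):
      try:
          return f"{symbol}: {fetch_quote(symbol)}"
      except ConnectionError:
          return f"{symbol}: temporarily unavailable"

  # Inject the fault: the dependency now raises, and the system should not crash.
  with mock.patch(f"{__name__}.fetch_quote", side_effect=ConnectionError("injected fault")):
      assert display_price("ACME") == "ACME: temporarily unavailable"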

Functional testing
Functional testing is testing aimed to verify that the system functions according to its requirements.
There are many definitions of functional testing and the one above seems to capture most. Interestingly some definitions hint that testing should also aim at covering boundaries and failure paths even if these are not specifically mentioned in the requirements. Others mention design specifications, or written specifications. For me functional testing initially means testing conformance to the published requirements, and then investigating in which ways this conformity could be broken.

Fuzz testing
Fuzz testing or fuzzing is a software testing technique, often automated or semi-automated, that involves providing invalid, unexpected, or random data to the inputs of a computer program. The program is then monitored for exceptions such as crashes, or failing built-in code assertions or for finding potential memory leaks.
Yes this is more or less the Wikipedia definition. 
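As an illustration, a minimal fuzzing loop in Python against the standard library JSON parser (my own example, chosen only because it needs no setup):

  # Fuzzing sketch: feed random bytes to a parser and watch for unexpected crashes.
  import json
  import random

  random.seed(1234)  # a fixed seed makes any failure easy to replay

  for _ in range(10000):
      blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
      try:
          json.loads(blob)
      except (ValueError, UnicodeDecodeError):
          pass  # documented, expected rejections of invalid input
      except Exception as exc:  # anything else is a potential bug worth investigating
          print(f"unexpected {type(exc).__name__} for input {blob!r}")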

Test Types – A

This sixth post in the series of software testing overviews introduces the first of some 80+ different test types. That number itself is completely arbitrary. While searching for and investigating testing definitions I found over a hundred definitions of test types and I have chosen to leave out a number of ‘test types’. My choice to do so is based on the interpretation that some of them rather describe a test level or a test technique, and I could not see how to make a useful test type out of them.

So what then do I call a test type?

To me a test type is a particular subject of testing that has an approach, goal and/or use of oracle(s) that provides information that is typical to that test type.

While going through my overview you might find that some of the test types I mention do not entirely fit the narrow description of a test type as provided above. The reason that they are mentioned in spite of this is that I felt that they were so often mentioned as a test type that they should have a mention in this post just for that reason.

This post differs somewhat from the earlier posts as the definitions used are often rewritten by me into a single aggregate definition, since many different ones for the same term exist. Where useful I have added comments as additional information. Also, since a post with over 80 descriptions would be too long, I have split up the overview into alphabetical sections. To begin with the letter A.

Just as a reminder these are not necessarily my definitions but a collection of definitions I encountered. Finally there are no sources or attributions for the individual test types as this would make this a totally different exercise.

A/B testing

A/B testing originates from marketing research, where it is used to find the more effective of two possible solutions by presenting them to two different groups and measuring the ‘profits’. In software testing it is mostly used to compare two versions of a program or website, of which typically only one contains changes, limited to a single or a few controllable criteria.
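A small worked example of how the measurement side could be evaluated, with made-up visitor and conversion numbers and a standard two-proportion z-test (my own sketch, not tied to any particular tool):

  # A/B evaluation sketch: compare conversion rates of two versions with a two-proportion z-test.
  from math import sqrt
  from statistics import NormalDist

  conversions_a, visitors_a = 120, 2400   # current version (made-up numbers)
  conversions_b, visitors_b = 150, 2380   # changed version (made-up numbers)

  p_a = conversions_a / visitors_a
  p_b = conversions_b / visitors_b
  p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
  se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
  z = (p_b - p_a) / se
  p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

  print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")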

Acceptance testing
During acceptance testing requirements, variables, parts of a program or specific behaviour of a program are compared against measurable aspects of predetermined acceptance criteria. This requires at least four things.
First, identification of the requirements, variables and parts or behaviour (coverage). Second, expressing these in measurable aspects.
Third, the aspects need to represent the defining elements of the acceptance criteria. Finally, the acceptance criteria themselves should represent the needs and wants of the stakeholders. The goal of this interpretation of acceptance testing is to give stakeholders the possibility to accept the software, and to provide the possibility of sign-off.

A more pragmatic way to look at acceptance testing is to allow the stakeholders, often end users, to evaluate the software and see if it meets their expectations and (operational) needs. In practice this is often done by proxy by stakeholder representatives. Sometimes the testers are selected as the representatives. I do not think that is a good idea, as testers then step outside of their boundary of information provider, are often committed to the solution and their role in creating it and, more essentially, are not the end users.

Active testing
Testing the program by triggering actions and events in the program and studying the results. To be honest I do not consider this a test type as in my opinion this describes nearly all types of testing.

Ad-hoc testing

Ad-hoc testing is software testing performed without explicit prior planning or documentation on the direction of the test, on how to test or on which oracles to use.
Some definitions see this as informal, unstructured and not reproducible. Informality and being unstructured (if seen as unprepared) are certainly true, as this is the point of doing it ad-hoc. The not reproducible part depends on whether you care to record the test progress and test results. That is, in my opinion, not inherently attached to doing something ad-hoc but highly advisable. Why else would you be testing if not to be able to tell the testing story?

Age testing
It is a testing technique that evaluates a system’s ability to perform in the future. As the system gets older, how significantly the performance might drop is what is being measured in Age Testing.
To be honest I found only one reference to this test type but I find the idea interesting. 

Agile testing
Agile testing is mentioned often as a test type or test approach but I have added no definition or description here. In my opinion agile testing is not a software test type. Agile testing rather is a particular context in which testing is performed that may have its particular challenges on test execution, on how tests are approached and on choices of test tooling but not a specific test type. 

Alpha testing
Alpha testing is an in-house (full) integration test of the near complete product that is executed by others than the development team, but still in a development environment. Alpha testing simulates the product's intended use and helps catch design flaws and operational bugs.
One could argue that this is more of a test level than a test type. I care to view it as a test type because it is more about the type of use and its potential to discover new information than about being part of the software development itself. I specifically disagree with the idea that this is an intermediary step towards, or is part of, handing over software to a Test/QA group as some definitions propose. In my opinion testing is integrated right from the start of development up until it stops because the product ends its life-cycle or due to some other stopping heuristic.

API testing
API testing involves testing individual or combined inputs, outputs and business rules of the API under investigation.
Essentially an API is a device-independent, or component-independent, access provider that receives, interprets/transforms and sends messages so that different parts of a computer or programs can use each other’s operations and/or information. Testing an API is similar to testing in general, albeit that an API has a smaller scope and has, or should have, specific contracts and definitions that describe the API’s specific variables, value ranges and (business) rules. Testing an API should however not be limited to the API alone. Sources, destinations (end-points), web services (e.g. REST, SOAP), message types (e.g. JSON, XML), message formats (e.g. SWIFT, FIX, EDI, CSV), transport protocols (e.g. HTTP(S), JMS, MQ) and communication protocols (e.g. TCP/IP, SMTP, MQTT, TIBCO Rendezvous) all influence the overall possibilities and functionality of the API in relation to the system(s) that use(s) the API. Typically API testing is semi- or fully automated and requires sufficient tool, message type, and transport and communication protocol knowledge to be executed well.

An alternative meaning of API testing is testing by using an API. In this case the API is a means to an end in gaining access to the subject under test and feeding it with data or instructions and gathering responses.
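As a sketch of the first meaning, a contract-style check in Python with the requests library; the endpoint, fields and business rule below are entirely hypothetical:

  # API testing sketch: check status code, contract fields and one business rule.
  import requests

  BASE_URL = "https://api.example.com"  # placeholder endpoint, not a real service

  def test_get_order_contract():
      response = requests.get(f"{BASE_URL}/orders/42", timeout=5)
      assert response.status_code == 200
      body = response.json()
      # Contract: the agreed fields are present and of the agreed types
      assert isinstance(body["orderId"], int)
      assert isinstance(body["lines"], list)
      # Hypothetical business rule: the order total equals the sum of its lines
      assert body["total"] == sum(line["price"] * line["quantity"] for line in body["lines"])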

Regression Testing

As a follow-up in the testing definition series it was my intention to continue with covering Test Types. Initial investigation showed what I had already feared. Such a post would become a Herculean task and probably my longest post ever. So I will continue with that particular endeavor sometime later, tackling it one step at a time. This post for starters covers one of the most common but also one of the most peculiar types of testing:

“Regression Testing”

Regression Testing is so common as a testing type that the majority of books about software testing, and agile for that matter, that I know mention regression testing. Almost as common, however, is that most of them do not tell what regression testing is, do not tell how one should actually go about doing regression testing, or both. To be fair, an exception to the latter is that quite a few, particularly the ones with an agile demeanor, tell that regression testing is done by having automated tests, but that is hardly any more informative, is it?

Before I go further into regression testing as being peculiar, first, in line with the previous posts, a list of regression testing definitions:

  • Checking that what has been corrected still works. (Bertrand Meyer; Seven Principles of Software Testing 2008)
  • Regression testing involves reuse of the same tests, so you can retest (with these) after change. (Cem Kaner, James Bach, Bret Pettichord; Lessons learned in Software Testing 2002)
  • Regression testing is done to make sure that a fix does what it’s supposed to do (Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software 2006)
  • Regression testing is the probably selective retesting of an application or system that has been modified to insure that no previously working components, functions, or features fail as a result of the repairs. (John E. Bentley; Software Testing Fundamentals Concepts, Roles, and Terminology 2005)
  • Retesting to detect faults introduced by modification (ISO/IEC/IEEE 24765:2010)
  • Saving test cases and running them again after changes to other components of the program (Glenford J. Myers; The art of software testing 2nd Edition 2004)
  • Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements (ISO/IEC/IEEE 24765:2010)
  • Testing following modifications to a test item or to its operational environment, to identify whether regression failures occur (ISO/IEC/IEEE 29119-1:2013)
  • Testing if what was tested before still works (Egbert Bouman; SmarTEST 2008)
  • Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed. (Standard glossary of terms used in Software Testing Version 2.2, 2012)
  • Testing required to determine that a change to a system component has not adversely affected functionality, reliability or performance and has not introduced additional defects (ISO/IEC 90003:2014)
  • Tests to make sure that the change didn’t disturb anything else. Test the overall integrity of the program. (Cem Kaner, Jack Falk, Hung Quoc Nguyen; Testing Computer Software 2006)

Looking at the above definitions the general idea about regression testing seems to be:

“To ensure that except for the parts of the areas* that were intentionally changed no other parts of these areas or other areas of the software are impacted by those changes and that these still function and behave as before”.
(*Area is used here as a general expression for function, feature, component, or any other dimensional divisions of the subject under test that is used)

The peculiar thing now is that, however useful and logical such a definition is, it only provides the intention of this type, or should I say activity, of testing. Regression testing could still encompass any other testing type in practice.

To know what to do you first need to establish which areas are knowingly affected by the changes and then which areas have the most likelihood of being unknowingly affected by the change. Next to that there probably are areas in your software where you do not want to take the risk of them being affected by the changes. In his presentation at EuroSTAR in 2005 Peter Zimmerer addresses the consequences of this in his test design poster by pointing out that the wider you throw out your net for regression effects the larger the effort will be:

  • Parts which have been changed – 1
  • Parts which are influenced by the change – 2
  • Risky, high priority, critical parts – 3
  • Parts which are often used – 4
  • All – 5

Once you have identified the areas you want to regression test you still need to figure out how to test those areas for the potential impact of the change. The general idea to solve this, in theory at least, seems to be to rerun previous tests that cover these areas. As this might mean running numerous tests for lengthy periods of time many books and articles propose to run automated tests. This will however only work if there are automated tests to use for testing these areas to begin with. And even if there are you still need to evaluate the results of any failed test and there is no clear indication of how long that may take.

How do you know that these existing tests do test for the impact of the change? After all they were not designed to do so. For all you know they might or might not fail due to changes to the area that is tested by them. Either result could therefore be right or wrong in light of the changes. The test itself could be influenced by an impact of the change on the test (positive or negative) that was not considered or identified yet.

All in all regression testing is easily considered to be necessary, not so easy to determine, difficult to evaluate on success and considerably more work than many people think. Even so, next to writing new tests it probably is the best solution to check if changes bring about unwanted functionality or behavior in your software. My suggestion to you is to at least change the test data so that these existing tests have a better chance of finding new bugs.
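To illustrate that suggestion, a small pytest sketch where an existing regression check is kept but its data is varied (the apply_discount() function and the values are hypothetical):

  # Regression sketch: reuse the test, vary the data so it has a chance to find new bugs.
  import pytest

  def apply_discount(price, percent):
      # stand-in for the area under regression test
      return round(price * (1 - percent / 100), 2)

  @pytest.mark.parametrize("price, percent, expected", [
      (100.00, 10, 90.00),   # the original regression case
      (19.99, 0, 19.99),     # fresh data: boundary, no discount at all
      (19.99, 25, 14.99),    # fresh data: a price/percentage combination that exercises rounding
  ])
  def test_apply_discount_regression(price, percent, expected):
      assert apply_discount(price, percent) == expected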

250 hours of practice – February/March

It has been a while since my last post. It was not that I was procrastinating. No, I actually was too busy to write a post. There were many reasons for this and most of them are covered in this post. So let's start with the biggest time consumer.

BBST Foundation

For those who do not know what this is I can almost hear you thinking.

“A foundation course? Doesn’t this guy have enough experience already, and now he does a Black Box Software Testing Foundation course?”

Well let me put it this way. There are foundation courses and foundation courses. The Black Box Software Testing (BBST) Foundation course is, as it says, a foundation for further education and it is a good place to start your testing career. It's just that it has a distinctive approach to it. First of all it is a four-week online course, which is longer than almost any other course on software testing. Second, it focusses on getting the participants to become critical and thinking software testers. It does this not by handing you any predefined recipe. It gives you the ingredients and the possibility to work with them. Let me, as an example, give you a short overview of the content:

  • Basic definitions: what is a computer program, types of testing
  • Strategy: what is testing, testing missions, testing strategy
  • Oracles: system under test, heuristics, consistency oracles, more types of oracles
  • Programming fundamentals and coverage: numbering, storage, representation, data and control structures, coverage
  • The impossibility of complete testing: the basic combination rule, paths and sub paths, data flows, sequences
  • Measurement

All captured in:

  • 20+ articles and several books to read
  • 306 slides
  • Over 3 hours of video lecture
  • 5 quizzes
  • multiple individual and group assignments
  • Exam and exam grading
  • Over 70 hours of thinking, preparing and answering either individually or with team members from multiple time zones

Given that I also worked these four weeks and maintained a family life, I put in long days in February. But it was all worth it. Even if you have, like me, some experience in software testing, it is a splendid (re)sharpener of your critical thinking skills with regard to software testing. Not only through the content of the course itself, but also through the interaction you have with the twenty or so other participants. Especially the group assignments and the assignments that require reviewing each other's work give you a good insight into the cultural and thinking differences people have. And from this and from the comments you can gain a lot more wisdom.

So some seventy or so hours spent on BBST Foundation. Well on my way towards the end goal of two hundred and fifty. But you might have noticed that this covers only February, and this post is written halfway through March. So what more have I done?

Proposals

The end of February and beginning of March was also the time in which several deadlines for calls for papers came into view. I do not know if this really is considered practice, but it gets you to think and write about software testing anyway. Within one week I entered five proposals for a track and one for a tutorial for EuroSTAR, the TestNet spring and autumn events and the Agile Testing Days. And I am more than happy to tell you that the tutorial on the use of mind maps in testing, which I thought of together with testing buddy Huib Schoots, has already been accepted.

But not only did I spend time on my own proposals. Zeger van Hese, this year's program chair, invited me to help review some of the many proposals that EuroSTAR has gotten this year. And even if the amount of anonymized text is not that much, I did want to do a serious review and evaluation of the proposals. (Like I would like others to do with mine.) Some of them were good, some were bad and for some I was undecided.

TestNet book

And there was more time spent reviewing. A couple of months ago I had committed to reviewing the TestNet jubilee book on the future of software testing. Obviously at the time I had not imagined I would be this busy. But at the cost of some sleep I managed to finish the review prior to my next challenge.

The book itself is a great reference for thinking about where testing is going and what choices you as a tester need to make to keep yourself both comfortable and future-proof over the coming five years.

Conferences

Although I have given talks and workshops before, I had never been a speaker at a major international test conference. On March 14 I made my debut on the international stage at the Belgium Testing Days. As I will be spending a separate post on the content of my talk, I will limit myself for now to the remark that it was great fun to do and that it was great to meet both familiar and new faces.

Meanwhile, during this period, Markus Gärtner published an interview with me on his blog as a prequel to Europe’s first context-driven testing conference “Let’s Test“. The only downside for me seems to be that I am actually not able to attend, due to a lack of funds.

To complete the conference experience for the coming period, I have been invited to be a test expert at the Dutch Testing Conference (agile, context-driven testing and exploratory testing), and I invite participants to ask me questions during the conference.

#AgileTD (2)

Agile Testing Days

From 14 to 17 November 2011 The Agile Testing Days took place in Potsdam (D). Here is my second impression of my visit there.

Preparation

I am an advocate of being well prepared when going to a conference. This enables me to make informed choices about which tracks I really want to follow or not. This time I added two additional things to my preparation. First I did a poll on Twitter and LinkedIn. I wanted to know who else would be going to the conference and at what time they would arrive in Potsdam. I myself would be arriving on Sunday and wanted to meet other conference attendees. It always surprises me how well this works. On Sunday I met with Lisa Crispin and went out to have dinner with her and eight other testers, of which four were conference speakers. The second part of the preparation was more pragmatic. I wrote a number of blog posts on agile basics, gave a tutorial and a workshop at TestNet.

Surprise & Tutorial

Monday started with a quick breakfast and registration for the conference, at which I got a bag with conference information, some tourist information (incl. a bottle of local beer) and also the program for Testing & Finance. A few months earlier I had entered a proposal for that conference and since I hadn’t gotten a reaction I was curious to see who, if not me, was on the program. To my surprise I found myself having a track at the end of day 1. A brief check revealed that they had tried to reach me the day before at my home e-mail address, which I had not checked since I left home.

The rest of the Monday was filled with a one-day version of Jurgen Appelo’s Management 3.0 course. This course I can only wholeheartedly recommend to both managers and testers alike. Besides the necessary theory, the course is also punctuated with reading tips and practical exercises. My personal take-away from this tutorial is the use of complexity theory and the notion that, despite the fact that all models are fallible, several weaker models can be as effective as a single strong model, and there certainly is no single best model. I will dig into that at some time soon and combine that with Jerry Weinberg’s books on Systems Thinking.

Some of the highlights of the track days

Keynote by Johanna Rothman – “Agile Testers and test managers”
In the keynote, the changing roles of testers and test managers were discussed. For example, testers will need to cooperate more intensively with developers. Test managers should be leaders in the organization and pursue the following key activities: monitoring the project portfolio, removing organizational obstructions, creating relationships of trust, leading the hiring process, increasing the capacity of the organization and, finally, starting up communities of practice. I liked the keynote enough to decide to follow her tutorial at the Belgium Testing Days in March 2012.

Track by David Evans – “What testers and developers can learn from each other”
This track showed that testers and developers, while working on the same product, see it from different perspectives. Testers often seem more capable of changing perspective. By being able to do so, testers can teach developers that there are different kinds of tests. A good model for showing this is the “Agile Testing Quadrants” as defined by Crispin and Gregory in the book Agile Testing. But I will keep further description short as you can watch the whole presentation at http://skillsmatter.com/podcast/agile-scrum/what-testers-and-developers-can-learn-from-each-other

Track by Rob Lambert – “Do agile testers have wider awareness fields”
This track went into the need to be aware of your context and to use this awareness to your benefit as a tester. Perhaps it is the process, or maybe it is the people? Or is the awareness field of agile testers not wider at all? Agile testers however seem to display a higher ability to feel, to perceive, to know and to be aware of themselves and the world around them. Traditional testers often seem to have a more limited consciousness in terms of testing and development roles. Even if it is wider, it is often less versatile than in an agile environment. There is however a distinction between social (situational) awareness and personal awareness. One reason for the difference in perspective, among other things, is that the focus of testing in a traditional development environment is narrower than the focus of testing in an agile environment. A greater awareness and a broader focus often lead to an increase in choices. This allows you to choose one of the possible paths instead of choosing a prescribed path. A greater awareness is also a first step on the path of change. However you cannot follow all paths. It is necessary to have sufficient self-knowledge and to know your limits. More on this in this prezo.

Track by Huib Schoots – “So you think you can test”
Huib actually wasn’t on the initial program of the Agile Testing Days. But a few of the presenters were ill and, since Huib happened to have his laptop with this presentation on it at the venue, he offered to pitch in. The organization graciously accepted his offer and Huib made his first appearance at an international conference. As the title suggests, his talk was about what makes a good tester and how to become one. I really enjoyed his talk but, rather than describe it here, I am going to point out a series of columns Huib is writing on this.

Keynote by Liz Keogh – “Haiku, Hypnosis and Discovery: How the mind makes models”
Liz put an extraordinary exercise into her keynote. She let the audience pair up to create haikus. Together with Johanna Rothman I came up with the following sentences, which we combined into the following haiku:

Foggy breath
An agile journey
Bright blue burst over the rocks

Or a more famous one from Matsuo Basho:
Furuike ya                  Old pond
Kawazu Tobikomu     Frog jumps in
Mizu no oto                The sound of water

Liz continued with a hypnosis session to explain that concentrated and focussed attention on positive experiences can bring a state of mind that widens perception and activates the ability to see patterns and models. Huib Schoots volunteered to go on stage and be hypnotized. It was impressive to see how more than half of the audience participated and was elevated by the experience.

Keynote by Gojko Adzic – “5 key challenges for agile testers tomorrow”
Gojko concluded the track days with an inspiring keynote talk on five challenges agile testers are facing:

#1 Shorter delivery phases
#2 Agile is now main stream
#3 Faster feedback
#4 Large “enterprise” projects
#5 Validating business, not software

His final message was to adopt principles, adapt practices, teach each other how to test, help business to define and validate actionable metrics, visualize risk value areas and to draw up contexts to inform testing. This definitely struck a chord with me, as I am working at a large enterprise transitioning to agile. I can fully understand that the energetic and omnipresent Gojko won the MIATPP Award 2011 as “The most influential Agile Testing Professional Person 2011”.

Final day

The final day was a series of parallel sessions with Open Space, Coding Dojo, Testing Dojo and, last but certainly not least, TestLab. To be honest I was both too actively involved and too tired after the previous days to take sufficient notes. But what I can share with you is that these opportunities to actively use what you have learned, to spar with your peers and to be coached by the organizers and speakers that are there make this part of the conference potentially the most valuable part.

A journey to #agiletd (1)

Agile Testing Days 2011 – Potsdam

In October I started a series of posts on agile. For me there were three reasons to start writing those posts. First, I worked in an agile environment, second, I felt there had to be more to agile than its most commonly mentioned method SCRUM and third it was a way of preparing myself to go to the Agile Testing Days. Now that I have returned from the conference I would like to share my experiences with you in several posts. I am going to use the discussion  with Huib Schoots about going to conferences as a starting point to describe the social aspect of going to a conference. Other posts will go deeper into the content when I have digested the information bombardment.

Why should testers attend conferences?
My argument at the time  was: “Conferences typically are the place where you can learn the latest developments and opinions, submerge yourself into the testing mindset, confer with your peers, refresh your ideas and expand your network”.

Well at the Agile Testing Days this was absolutely true. But, and this is something I will have to be adamant about, this does not happen automatically. There are a few conditions to consider.

Preparation. You need to prepare yourself; for instance by knowing who the speakers are and what their subjects are. And not only to determine which talks you want to go to, but also to ask yourself whether it would be interesting to talk and discuss with them about it.

Being approachable. Most of the speakers and delegates, as I have experienced, are very approachable and like to talk to you about almost anything. A conference can be so much better if you are open to this yourself and are courageous enough to step up to others and start a conversation.

Look beyond the program. Conferences, typically those that host different nationalities of speakers and delegates, do not stop when the talks are finished. Get together with the people you meet. Go out and have dinner with them, or get a drink at the bar. Why would you lock yourself up in your hotel room? A conference is not like a classroom where you enter at a scheduled time and leave once class is over.

Enjoy. Go and talk about what you have on your mind. It does not even have to be about anything from the conference, or testing even. There is great stuff to learn, great people to meet and lots of fun to have. And even if you think you have nothing to talk about, there is a lot to gain by listening and watching the interaction. But I am pretty sure once you are there conversations will happen.

So what did I do?

Having said all of the above you might question how I fared myself. Well I started by inquiring who else, other than my colleagues (Frank Pellens, Huib Schoots, George Stevens and Robert Copoolse), was going to go to the Agile Testing Days, by sending out a few tweets on the matter. As it turned out my followers were divided between the Agile Testing Days and EuroSTAR. After some conversation Lisa Crispin and I agreed to meet on the Sunday evening before the conference. Now, having set a date, others would be able to join in. We ended up having a very enjoyable and entertaining dinner at Petite Pauline with Tamara Taxis, Liz Keogh (picture: Liz folding origami animals from Euro bills), David Evans, Stephan Kämper, Huib Schoots, Bob and Lisa Crispin and myself. Back at the hotel we went for another drink at the bar and found that several people that we as a group knew, like Michael Bolton, were to be found there also. So even before the conference had started I was meeting new people, talking to them and starting a rolling snowball that would keep on growing during the rest of the conference.

Now that I had made contact and kept an open spirit, I found myself getting to know lots of new and interesting people during the conference. Additionally I reconnected with people I had met before, and all of them added to my story of these Agile Testing Days. A story that enriched me and gave me much more content, context, depth and interactivity during the conference than if I had only gone there to listen.

PaTS

One of the other highlights was something Huib Schoots and I organized. Having heard about lightning talks and rebel alliances at other conferences, we kind of felt the Agile Testing Days should have something similar. And if it were to happen we wanted to be part of it. So what better way to ensure that than to organise one ourselves. We contacted the guys from Diaz-Hilterscheid and after some explanation we were allowed to rent a room at the venue. Shortly after, we made an initial selection of people we would like to meet and that we knew were coming to the Agile Testing Days. In that mail we called our gathering the Potsdam agile Testers Session, or PaTS. We planned to start with the people who reacted positively to our mail and would see who else would like to join us whilst in Potsdam. On the third day of the conference we (Rob Lambert; Rob van Steenbergen; Daniel Lang; Janet Gregory; Simon Morley; Brett L. Schuchert; James Lyndsay; Stevan Zivanovic; Jim Holmes; Bart Knaack; Lisa Crispin; Olaf Lewitz; Mike Scot; Jurgen Appelo; Thomas Ponnet; Cecile Davis; Michael Bolton; Huib Schoots and myself) got together in the TestLab, ordered some beer and pizza and started talking.

We started by making a prioritized list of subjects, of which we covered the following:

    • What makes a good tester (Nice post on this by Olaf Lewitz); Quote by Michael Bolton: “To see complexity in apparent simple things And to see simplicity in apparent complex things.”
    • Manage / lead testers to become great; Quote by Michael Bolton: “Learning does not stick if it does not sting a little bit.”
    • DEWT / Peer groups (DEWT = Dutch Exploratory Workshop on Test)
    • Acceptable level of risk

My following posts will go deeper into the content of the conference and PaTS, but for now there is the following post by Jean Claude Grosjean: “Agile Testing Days 2011: Day 1 – What a fabulous day”

Exercising agility

Sprint 0 – The idea

While getting in the mood for the Agile Testing Days 2011 in Potsdam I remembered seeing an XP exercise some time before. The essence of the exercise was to take the agile manifesto and its principles and to link them to development practices. This really felt like a great practical extension to my previous two posts on agile. The only trouble was that I had forgotten who wrote it, so the exact steps were unknown to me. I also remembered that it originally targeted an XP development audience and that I felt it needed some adjustment to be useful for (non-coding) testers.

At the same time I was getting ready to participate in the TestNet workgroup theme night with the Agile Testing workgroup. While e-mailing with the workgroup chair, Cecile Davis, I offered to recreate the exercise in a form suitable for testers and to translate the manifesto and principles into Dutch along the way.

Sprint 1 – Proof of Concept

At the following meeting the other workgroup members liked my idea and the draft version of the exercise. So at the TestNet workgroup night on November 8 I presented and executed the exercise in front of some 30 people.

All in all the execution of the exercise was a success and I received positive feedback. Several participants were interested in doing the exercise with their team. Personally, to be honest, I quickly saw after the introduction that, in spite of the good idea, the six exercise steps I had written were not going to work with such a large group. So the exercise was saved by improvising, getting some goodwill and offering additional explanation during and after the execution. I also discovered that trying to keep it within an hour was a real challenge.

In good agile tradition I concluded the evening with a retrospective. Combining feedback from the participants with my own feedback I put new items on the (virtual) backlog. The first was to write this post, the second to rewrite the exercise in general and the third to adapt the exercise steps in order to make them suitable for both smaller and larger groups. Both the content and the ‘new’ steps of the exercise are part of the following sections. The Dutch version, containing I believe the first complete translation of both the agile manifesto and the principles, will be placed on the TestNet ‘Agile Testen’ workgroup page [TBD].

Sprint 2 – The exercise (published version)

Preparation:

  • Large separately printed sheets of the agile manifesto and the twelve principles
  • As many  A4 / Letter print outs with the practices on them as you have participants
  • Post-its or similar
  • Markers
  • Introductory presentation about the agile manifesto, the principles and the practices
  • Know what you’re talking about
  • A room with possibility to form groups, walk around, and hang the print outs on to the wall.

Step 1:
Give a (short) presentation on the Agile Manifesto and its principles.
Depending on the agile experience of your audience this can be shorter or longer. Check my previous two posts for additional information or inspiration.

Step 1B (small groups):
Group members discuss what the principles mean to them and how they map to the Agile Manifesto.
Depending on the time you have and on the familiarity of your audience with the Agile Manifesto you can spend longer or shorter amounts of time on this. In larger groups this tends to take a lot of time and is probably best added to the step 1 presentation or left out altogether.

Step 2:
Present the practices and hand out a hardcopy version of them to all participants.
Depending on the experience of your audience (and the time you have) you can provide information about all of the practices (which takes a lot of time), let the audience indicate which ones need to be elaborated, or skip the explanation altogether.

Step 3:
Have the group go up to each of the principles and have them discuss and map the practices that they think fit the principle. Mapping should be visualised by writing the practice on a post-it and sticking it to the top of the sheet if it is a good fit, or to the bottom of the sheet if it only fits partially.
The group should form a consensus on which practices fit and to what extent. The aim here is to learn by actively linking practices to agile principles through discussion and argumentation. When necessary the facilitator should provide additional information and elaboration about the practices.

Step 3B (Large groups):
Divide the audience into smaller groups of no more than 6-8 people. Then execute step 3 by letting each group go past all principles, each from a different starting point.
To aid the visual result you can hang multiple versions of each principle (as many as you create groups) next to each other and differentiate the groups' choices in this way.

Step 4 (Large groups only):
Have each group pick one or two of the principles and let them explain why they chose those practices and why they placed them higher or lower on the sheet. The other groups should be able to ask questions.
This step is more about sharing information and reasoning than about argumentation and justification. A single group will have done this automatically during the exercise.

Final step:
Have all members evaluate what they have learned and taken away from this exercise. One of the takeaways should be to pick one or more practices that they are going to actively work on during the next period. Then execute a (quick) retrospective on the execution of the exercise.
Depending on the group size and the time you have left, this can be done by letting each member express this to the group or by letting them do this individually later.

The agile manifesto, the principles and the practices

To conclude, this is what it is all about. Starting off with the practices:

Practices

A practice is something that has proven to be valuable in a certain context and offers insight into solutions that may or may not work in your situation.

You can do this with the list below for a start, but preferably create your own shorter, longer or more suitable list. Basically you can do this with any kind of practices, not just test-related practices.

  • Manage risk
  • Execute your project in iterations
  • Embrace and manage change
  • Measure progress objectively and understandably
  • Test your own test cases
  • Leverage test automation
  • Team change management
  • Everyone can test (and owns quality)
  • Understand the domain
  • Describe test cases from the user perspective
  • Manage versions
  • Co-locate
  • Leverage patterns
  • Actively promote re-use
  • Rightsize your process
  • Continuously reevaluate what you do
  • Test Driven Development
  • Concurrent Testing
  • Pair Testing
  • Specification by example
  • Acceptance Test Driven Development
  • Plan sustainably
  • Stand-up meeting
  • Plan in relative units
  • Configuration management
  • Learn by doing
  • Whole team approach
  • Shared Vision
  • Use Case Driven Development
  • Risk Based Testing
  • Evolutionary (Test) Design
  • …..

The agile manifesto

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

The principles:

  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  • Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Business people and developers must work together daily throughout the project.
  • Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Working software is the primary measure of progress.
  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Simplicity–the art of maximizing the amount of work not done–is essential.
  • Continuous attention to technical excellence and good design enhances agility.
  • The best architectures, requirements, and designs emerge from self-organizing teams.
  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

I remembered who the original exercise was from: David A. Koontz.