Updated version after finishing the book
A while ago the lead business analyst in my project asked me if I could supply him with a list of non-functional tests.
While aware that he probably meant whether I could supply him with the non-functional tests defined, executed and/or logged for our project, I did not have a lot of time to search our testware for them, so I gave him the list of definitions below to digest as a starter.
I shared that list so the items on it can be used to draw test ideas from (a small sketch after the list shows one way to do that). Since the initial publication I have decided that this should be a living document. My first addition is based on the list of Software Quality Characteristics by Rikard Edgren, Henrik Emilsson and Martin Jansson (thetesteye). I would like to continue adding definitions and, possibly, for each of them the supporting items that thetesteye added to theirs.
accessibility: usability of a product, service, environment or facility by people with the widest range of capabilities
accountability: degree to which the actions of an entity can be traced uniquely to the entity
adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered.
analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified
availability: The degree to which a component or system is operational and accessible when required for use.
capacity: degree to which the maximum limits of a product or system parameter meet requirements
capability: Can the product perform valuable functions?
Completeness: all important functions wanted by end users are available.
Accuracy: any output or calculation in the product is correct and presented with significant digits.
Efficiency: performs its actions in an efficient manner (without doing what it’s not supposed to do.)
Interoperability: different features interact with each other in the best way.
Concurrency: ability to perform multiple parallel tasks, and run at the same time as other processes.
Data agnosticism: supports all possible data formats, and handles noise
Extensibility: ability for customers or 3rd parties to add features or change behavior.
changeability: The capability of the software product to enable specified modifications to be implemented.
charisma: Does the product have “it”?
Uniqueness: the product is distinguishable and has something no one else has.
Satisfaction: how do you feel after using the product?
Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
Curiosity: will users get interested and try out what they can do with the product?
Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
Hype: does the product use too much or too little of the latest and greatest technologies/ideas?
Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
Directness: are (first) impressions impressive?
Story: are there compelling stories about the product’s inception, construction or usage?
compatibility: degree to which a product, system or component can exchange information with other products, systems or components, or perform its required functions, while sharing the same hardware or software environment
compatibility: How well does the product interact with software and environments?
Hardware Compatibility: the product can be used with applicable configurations of hardware components.
Operating System Compatibility: the product can run on intended operating system versions, and follows typical behavior.
Application Compatibility: the product, and its data, works with other applications customers are likely to use.
Configuration Compatibility: product’s ability to blend in with configurations of the environment.
Backward Compatibility: can the product do everything the last version could?
Forward Compatibility: will the product be able to use artifacts or interfaces of future versions?
Sustainability: effects on the environment, e.g. energy efficiency, switch-offs, power-saving modes, telecommuting.
Standards Conformance: the product conforms to applicable standards, regulations, laws or ethics.
compliance: The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions.
confidentiality: degree to which a product or system ensures that data are accessible only to those authorized to have access
flexibility: the ease with which a system or component can be modified for use in applications or environments other than those for which it was specifically designed
installability: The capability of the software product to be installed in a specified environment
integrity: degree to which a system or component prevents unauthorized access to, or modification of, computer programs or data
interoperability: The capability of the software product to interact with one or more specified components or systems
IT-ability: Is the product easy to install, maintain and support?
System requirements: ability to run on supported configurations, and handle different environments or missing components.
Installability: product can be installed on intended platforms with appropriate footprint.
Upgrades: ease of upgrading to a newer version without loss of configuration and settings.
Uninstallation: are all files (except the user’s or system files) and other resources removed when uninstalling?
Configuration: can the installation be configured in various ways or places to support customer’s usage?
Deployability: product can be rolled-out by IT department to different types of (restricted) users and environments.
Maintainability: are the product and its artifacts easy to maintain and support for customers?
Testability: how effectively can the deployed product be tested by the customer?
learnability: The capability of the software product to enable the user to learn its application
localizability: How economical will it be to adapt the product for other places?
maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment
maintainability: Can the product be maintained and extended at low cost?
Flexibility: the ability to change the product as required by customers.
Extensibility: will it be easy to add features in the future?
Simplicity: the code is not more complex than needed, and does not obscure test design, execution and evaluation.
Readability: the code is adequately documented and easy to read and understand.
Transparency: is it easy to understand the underlying structures?
Modularity: the code is split into manageable pieces.
Refactorability: are you satisfied with the unit tests?
Analyzability: ability to find causes for defects or other code of interest.
modifiability: ease with which a system can be changed without introducing defects
modularity: degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components
non-functional requirement: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability
operability: The capability of the software product to enable the user to operate and control it
performance: Is the product fast enough?
Capacity: the many limits of the product, for different circumstances (e.g. slow network.)
Stress handling: how does the system cope when exceeding various limits?
Responsiveness: the speed with which an action is (perceived as) performed.
Availability: the system is available for use when it should be.
Throughput: the product’s ability to process many, many things.
Endurance: can the product handle load for a long time?
Feedback: is the feedback from the system on user actions appropriate?
Scalability: how well does the product scale up, out or down?
pleasure: degree to which a user obtains pleasure from fulfilling personal needs
portability: The ease with which the software product can be transferred from one hardware or software environment to another
portability: Is transferring of the product to different environments and languages enabled?
Reusability: can parts of the product be re-used elsewhere?
Adaptability: is it easy to change the product to support a different environment?
Compatibility: does the product comply with common interfaces or official standards?
Internationalization: it is easy to translate the product.
Localization: are all parts of the product adjusted to meet the needs of the targeted culture/country?
User Interface-robustness: will the product look equally good when translated?
recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure
reliability: The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations
reliability: Can you trust the product in many and difficult situations?
Stability: the product shouldn’t cause crashes, unhandled exceptions or script errors.
Robustness: the product handles foreseen and unforeseen errors gracefully.
Recoverability: it is possible to recover and continue using the product after a fatal error.
Resource Usage: appropriate usage of memory, storage and other resources.
Data Integrity: all types of data remain intact throughout the product.
Safety: the product will not be part of damaging people or possessions.
Disaster Recovery: what if something really, really bad happens?
Trustworthiness: is the product’s behavior consistent, predictable, and trustworthy?
replaceability: The capability of the software product to be used in place of another specified software product for the same purpose in the same environment
reusability: degree to which an asset can be used in more than one system, or in building other assets
robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions
safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use
satisfaction: freedom from discomfort and positive attitudes towards the use of the product
scalability: The capability of the software product to be upgraded to accommodate increased loads
security: Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data
security: Does the product protect against unwanted usage?
Authentication: the product’s identifications of the users.
Authorization: the product’s handling of what an authenticated user can see and do.
Privacy: ability to not disclose data that is protected to unauthorized users.
Security holes: the product should not invite social engineering vulnerabilities.
Secrecy: the product should under no circumstances disclose information about the underlying systems.
Invulnerability: ability to withstand penetration attempts.
Virus-free: product will not transport virus, or appear as one.
Piracy Resistance: no possibility to illegally copy and distribute the software or code.
Compliance: security standards the product adheres to.
stability: The capability of the software product to avoid unexpected effects from modifications in the software
suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives
supportability: Can customers’ usage and problems be supported?
Identifiers: is it easy to identify parts of the product and their versions, or specific errors?
Diagnostics: is it possible to find out details regarding customer situations?
Troubleshootable: is it easy to pinpoint errors (e.g. log files) and get help?
Debugging: can you observe the internal states of the software when needed?
Versatility: ability to use the product in more ways than it was originally designed for.
survivability: degree to which a product or system continues to fulfill its mission by providing essential services in a timely manner in spite of the presence of attacks
testability: The capability of the software product to enable modified software to be tested
testability: Is it easy to check and test the product?
Traceability: the product logs actions at appropriate levels and in usable format.
Controllability: ability to independently set states, objects or variables.
Isolateability: ability to test a part by itself.
Observability: ability to observe things that should be tested.
Monitorability: can the product give hints on what/how it is doing?
Stability: changes to the software are controlled, and not too frequent.
Automation: are there public or hidden programmatic interfaces that can be used?
Information: ability for testers to learn what needs to be learned…
Auditability: can the product and its creation be validated?
traceability: The ability to identify related items in documentation and software, such as requirements with associated tests
usability: extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use
usability: Is the product easy to use?
Affordance: the product invites you to discover its possibilities.
Intuitiveness: it is easy to understand and explain what the product can do.
Minimalism: there is nothing redundant about the product’s content or appearance.
Learnability: it is fast and easy to learn how to use the product.
Memorability: once you have learnt how to do something you don’t forget it.
Discoverability: the product’s information and capabilities can be discovered by exploration of the user interface.
Operability: an experienced user can perform common actions very fast.
Interactivity: the product has easy-to-understand states and possibilities of interacting with the application (via GUI or API).
Control: the user should feel in control over the proceedings of the software.
Clarity: is everything stated explicitly and in detail, with a language that can be understood, leaving no room for doubt?
Errors: there are informative error messages, it is difficult to make mistakes and easy to recover after making them.
Consistency: behavior is the same throughout the product, and there is one look & feel.
Tailorability: default settings and behavior can be specified for flexibility.
Accessibility: the product is possible to use for as many people as possible, and meets applicable accessibility standards.
Documentation: there is a Help that helps, and matches the functionality.
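Since the list is meant to draw test ideas from, here is a minimal sketch of how it could double as a test-idea generator. Only a handful of entries from the list above are copied in, and the script itself is merely a convenience, not part of any of the cited definitions:

```python
# A minimal sketch of drawing test ideas from the list above.
# Only a few characteristics are included for brevity; extending the
# dictionary with the rest of the list is straightforward.
import random

CHARACTERISTICS = {
    "availability": "is the system operational and accessible when required?",
    "installability": "can the product be installed on the intended platforms?",
    "recoverability": "can data and performance be recovered after a failure?",
    "learnability": "is it fast and easy to learn how to use the product?",
    "scalability": "how well does the product scale up, out or down?",
}

def draw_test_ideas(n=3):
    """Pick n characteristics at random as prompts for a test session."""
    for name in random.sample(sorted(CHARACTERISTICS), n):
        print(f"{name}: {CHARACTERISTICS[name]}")

draw_test_ideas()
```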
Any additions to this list or comments on the definitions used are welcome.
On October 28, 2014 I visited Sogeti’s TMap Day.
This year’s focus was on their new book “Neil’s Quest for Quality”, subtitled “A TMap© HD Story”. This blog post describes my first impressions of that day. When I have read the book I will either return to this post and adjust it or write a separate one on the book’s content.
Note: I have read the book and have adjusted the post.
The book is written as a novel. It contains “the TMap Human Driven story, consisting of a business novel, building blocks, Mr. Mikkel’s musings and contributions from the innovations board in testing”.
A quality-driven approach
The new TMap presents itself as a quality-driven approach that is captured in the TMap© Suite, which consists of the following three parts:
- TMap Next
- TMap© HD
- Building Blocks described in the TMap HD book and gathered, maintained and extended on http://www.tmap.net
Inspired by Lean, Agile and DevOps
Both authors explained their contribution to the book and expressed that the elements (more detail later) are mostly inspired by the market move towards Agile and DevOps, and that the whole is based on a Lean approach to software development.
Aldert Boersma, one of the authors, positioned quality-driven as follows.
The approach described in TMap HD© distinguishes five basic elements: people, simplicity, integration, industrialization and confidence. The (short) TMap HD descriptions for these concepts are:
Only people can realize the move from “Testing according to TMap” to “Testing with TMap”. People with a broad knowledge of quality and testing, that is.
Make things as simple as possible – but not simpler than that.
Start small and work from there. A complicated process will only lose focus on the result.
Integration with respect to testing denotes a shared way of working, with a shared responsibility for quality. Testing is not a stand-alone process.
Industrialization is important in improving testing and optimizing quality. Test tools are used to test more, more often, and faster.
The goal of TMap HD© is providing confidence in IT solutions through a quality-driven approach. Confidence is the fifth element over and above all others.
The first four elements should help you choose and apply building blocks that give (build) confidence. There is no prescribed set of blocks to use, and the main blocks themselves are seen as larger blocks from which smaller parts can be chosen.
The currently available building blocks are:
- Test manager
In the Test manager presentation the remark was made that fewer people will be test managers, but the activities will remain.
- Test manager in traditional
- Test organization
- Test plan
- Product risk analysis
Oddly, during the test management presentation and in the book this was called Product Risk & Benefit Analysis, but that is not part of the website (yet).
- Test strategy
- Performance testing
- Test approaches
- Test varieties
This replaces the current Test Levels and Test Types.
They are divided into Experience-based and Coverage-based.
- Test manager in agile
- Permanent test organisation
- Model-based testing
- Quality policy
- Using test tools
- Quality-driven characteristics
- Integrated test organization
- Reviewing requirements
As a kind of closing motto for that morning, the following phrase was offered as a summary for test managers: “Do not report trouble but offer choices for the client”.
The TMap Day and the book have left me with rather distinct, and slightly contradictory, impressions.
A move in the right direction
Sogeti has embraced the fact that software development, and with it software testing, has changed over the last decades. The rise of Agile and Lean on the one side and the decline of Waterfall on the other haven’t gone unnoticed, and the new brand certainly addresses these developments. There is also an influence that Sogeti has carefully tried to avoid mentioning but that I believe is clearly present: Context-Driven Testing. Despite naming it environment, circumstance or situation, TMap HD shares the principle that, depending on the context, software testing is and should be different, using what best suits that context.
Ever since TMap and TMap Next appeared, and particularly since their training and certification program did, there has been a lot of criticism. This criticism especially focuses on the rigid factory-school view on software and the limited value of TMap certification. While Sogeti itself did not react to this much, many of the authors, most of them no longer working for Sogeti, did. The common denominator in their responses was that the content was misunderstood and was never meant to be followed to the letter.
Next to a wider interest in and influence of new software development approaches, the emergence of building blocks shows that part of the criticism has been taken to heart and that TMap should and can now be used more flexibly.
In the Netherlands we have a saying, “Oude wijn in nieuwe zakken” (literally: old wine in new wineskins), expressing that although it looks new it’s still the same old stuff. I believe this also applies to TMap HD. Even with the influence of Agile and Lean and the introduction of Building Blocks in the book, I am still left with the feeling that beneath the surface the nature of the solutions is the same as before. This feeling is strengthened by the fact that TMap Next is still declared to be the core of the testing approach and that all existing training courses and certificates remain. Especially that last part has led to the rigid and limited testing approach that many Dutch testers employ.
So while in theory there is hope for positive change I fear that in reality nothing much will change for the better.
On August 26, 2014 @perftestman posted the following tweet:
“I think a lot of the CDT brigade just like the sound of they’re own voice –
everything’s CONTEXT driven! ”
It was her reaction to my earlier tweet, which got some attention:
Dear @pt_wire I use effective, documented systematic testing processes and methods. But not generic one size fits all. #CDT vs #ISO29119
For a while both James Bach and I reacted to it, but the limitation of 140 characters, tweeting on a mobile phone and, not least, work made me stop engaging in the Twitter thread. It wasn’t the first time that people resorted to this kind of fallacy to avoid discussing the actual content. This made me think about it, and in this post I will share my little thought exercise.
Addressing the first part of her tweet: what is a brigade?
- A group of people who have the same beliefs
I wouldn’t compare CDT to a belief system, but one cannot deny that the CDT community shares values, talks about them, sometimes feels strongly about them and expresses them out in the open. This doesn’t apply specifically to CDT, as e.g. the ISTQB brigade shows similar behavior.
I also do not think that the CDT community is out to convert people into context-driven testers. Convince by pointing out alternatives and different approaches – yes; answer questions – yes; share knowledge – yes; share experiences – yes. But testers are allowed to decide for themselves how and if they want to use any of it.
- A group of people organized to act together
The CDT community certainly confers regularly, meets each other (live or online) and challenges each other. They are however not organized as a single group or organization, nor do I think they want to be. They are mostly independent and critical minds who have discovered that taking into account and using the context helps them deliver more value, and that building skill and acquiring and sharing knowledge and experience with others helps them get better at doing that.
- A large group of soldiers that is part of an army
I fear that @perftestman intended to make use of this definition. This comparison however lacks credibility. The CDT community might figuratively seem to go to battle over some subject and, when feeling strongly about it, engage in fierce discussion. And unfortunately, occasionally a few of them even lash out at individuals who oppose CDT values or cannot handle the challenging style of discussion. But even if we form a community, we are not so organized that we form a specific group of sorts, nor do we intend to destroy or conquer. The community is essentially a collection of like-minded professionals who aim to convince with facts, ideas and experience, with the intent to advance the craft.
In a follow-up post I will address an earlier tweet: “Too many of these so called test guru are impostors – detached from the realities of software development and #lovethesoundofyourownvoice”, compare the contributors to ISO 29119 to the CDT brigade, and thus attempt to discover who the potential impostors might be in this tweet and on the other occasions that spawned remarks like it.
During the CAST 2014 conference in New York I participated in a workshop by Laurent Bossavit and Michael Bolton called “Thinking critically about numbers – Defense against the dark arts”. Inspired by this workshop I took a look at one of the Dutch sites covering news about software testing, www.testnieuws.nl. This is the second post to come out of my curiosity.
On May 23, 2014 Testnieuws hosted an article “Code inspectie is 80 procent sneller dan testen” (translated: code inspection is 80 percent faster than testing). The article itself provides little more substantiation for the claim than a reference to research by IfSQ. Both the claim and its usage as a header seem to serve only to grab the reader’s attention. The article ends with an invitation to read more about it, which leads to what I think is the actual article, “Status ICT-projecten vaak compleet onduidelijk” (translated: status of ICT projects often completely unclear). This article describes that projects, especially government projects, need more objective information and need to get it earlier. This way it is possible to determine the status of a project. Andres Ramirez, Managing Partner of the OSQR Group, states that “better software leads to better projects”, “better source code leads to better software” and “the quality of source code can be objectively assessed by the guidelines from the Institute for Software Quality”. The last quote explains the IfSQ abbreviation used earlier.
A little further in the article the claim is used and even extended: “Code inspection is 80 percent faster than testing, and finding and repairing code is much cheaper than testing”. Ramirez also adds: “Research by IfSQ shows that regular code inspection during the production process ensures that software can be changed more easily. Inspected software is 90 percent cheaper to maintain.” I am choosing to ignore these last claims for now and proceed to IfSQ to look into their research.
The IfSQ page Research Findings Relevant to the IfSQ Standards hosts some 50 references to articles and research results, divided into the sections “Why should you inspect software?”, “When should you inspect software?” and “What should you look for?”. Noticeably, the focus is strongly on code quality and therefore, in my opinion, not really on software quality as such. Also, there seems to be a need to position code inspection opposite testing, as suggested by titles like:
- “Code inspection is up to 20 times more efficient than testing“
- “Code reading detected about 80% more faults per hour than testing“
The second title points to a page with the title “Inspection is 80% faster than testing”, which indicates I am on the right track. The page however only repeats “Code reading detected about 80% more faults per hour than testing.” and provides two non-IfSQ sources for it without further argumentation. The sources are:
- Comparing The Effectiveness of Software Testing Strategies, Victor R. Basili and Richard W. Selby, IEEE Transactions on Software Engineering, SE-13, no. 12, December 1987, pages 1278-1296;
- Software Inspections: An Effective Verification Process, A. Frank Ackerman, Lynne S. Buchwald and Frank H. Lewski, IEEE Software, 6, no. 3, May 1989, pages 31-36.
So, at least in this case, the so-called research findings by IfSQ do not point to research executed by IfSQ themselves, nor were they involved, as both sources are quite old and IfSQ was established much later, in 2005. The next step is to identify which of the two sources holds the quote.
The first article was easily found. In summary, it describes a scientific study that applies an experimentation methodology to compare three (then) state-of-the-practice testing techniques: a) code reading by stepwise abstraction, b) functional testing using equivalence partitioning and boundary value analysis, and c) structural testing using 100 percent statement coverage. It compares three aspects of software testing: fault detection effectiveness, fault detection cost and the classes of faults detected. It focused on unit testing code, using a limited set of specific programs, known errors and a mix of academics and professional developers.
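As an aside for readers unfamiliar with the second of these techniques, a minimal sketch of functional testing with equivalence partitioning and boundary value analysis could look as follows; the function under test and its limits are invented for illustration and do not come from the study:

```python
# Invented example: a function that accepts order quantities from 1 to 100.
def accept_quantity(quantity):
    return 1 <= quantity <= 100

# Equivalence partitioning: one representative value per partition
# (below the valid range, inside it, above it).
assert accept_quantity(-5) is False
assert accept_quantity(50) is True
assert accept_quantity(250) is False

# Boundary value analysis: values at and just beyond each boundary.
for value, expected in [(0, False), (1, True), (100, True), (101, False)]:
    assert accept_quantity(value) is expected
```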
Although it found differences between the three techniques, with in some instances an identifiable hierarchy of code reading, functional testing and structural testing, none of the results came anywhere near the claim of being 80% faster. So my conclusion is that this article cannot be a valid source for the claim.
I could only find the second article at IEEE, which meant it could only be read by buying it. Setting aside my initial dislike of paying for information, especially when it is this old, I tried to buy it. Unfortunately the payment module did not like my Dutch credit card. As a result I had to make do with four abstracts and the summary of a course in which the article was used.
The second article came closer to the IfSQ description of code inspection, describing how it is done, what is needed for it and what it can measure. Still, none of the abstracts said anything about being faster than testing. They did mention percentages around 80% for defects found by software inspections. That, to me, is a different claim. It sounds like a ‘leaky’ inference was made and, worse, like another attempt to gain credibility by bringing testing into disrepute.
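To see why these are different claims, here is a small back-of-the-envelope calculation; all numbers are invented purely for illustration:

```python
# Finding 80% more faults per hour is not the same as being 80% faster.
# All numbers below are invented for illustration only.
testing_rate = 10.0                # faults found per hour by testing (assumed)
reading_rate = testing_rate * 1.8  # "about 80% more faults per hour"

faults_to_find = 90                # an arbitrary workload

testing_hours = faults_to_find / testing_rate  # 9.0 hours
reading_hours = faults_to_find / reading_rate  # 5.0 hours

time_saved = 1 - reading_hours / testing_hours
print(f"time saved: {time_saved:.0%}")  # 44%, nowhere near 80%
```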
During the CAST 2014 conference in New York I participated in a workshop by Laurent Bossavit and Michael Bolton called “Thinking critically about numbers – Defense against the dark arts”. This workshop addressed the usage of numbers and measurements, and the ability to influence their perception by choosing a certain representation, telling only part of the information or leaving out context. Inspired by this workshop I took a look at one of the Dutch sites covering news about software testing, www.testnieuws.nl.
On Testnieuws I found an article about software errors invalidating school exams, “Eindexamen ongeldig door softwarefout” (translated: final exam invalid due to software error). The story describes that the school inspection declared 1372 school exams invalid due to software errors, especially software errors in the VMBO (a Dutch school type) Math exam. Being a tester I was curious whether I could find out what had happened. The article, written by Marco van der Spek, referenced a newspaper article in “De Limburger”. Both articles were exactly the same, so my first conclusion was that this was more a case of posting an article than of writing one. This is in line with the general practice of the site, as it is more a collector of news than an actual writer of news stories. (To my knowledge the site is only manned by a few part-timers.)
Since the newspaper’s only indirect reference was a mention of the school inspection, my search now focused on finding the source document there. On their site I found the original press release. The press release added much more detail with regard to all the exams, both written and digital, and broke the 1372 number down as follows:
- 127 cases of technical problems (film not running, computer not working correctly)
- 112 cases of non-technical problems (running out of time, fire alarm, usage of wrong materials)
That brought the number of potential Math exam problems down to 1133. The press release also mentioned what the problem with the Math exam was. It was not that the digital exam had produced obvious software errors or that functionality had not been available. The exam had offered a built-in calculator that handled negative numbers differently from many of the calculators the VMBO students had used during the school year. The school inspection ruled that this had put the students at a disadvantage and offered them the possibility to redo the exam if they so wished. 1133 students made use of this offer.
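The press release does not specify in what way the built-in calculator differed, but a classic way in which calculators diverge on negative numbers is the precedence of the minus sign, as this hypothetical sketch shows:

```python
# Hypothetical illustration of two calculators disagreeing on negative
# numbers; the actual difference in the exam calculator is not known.

# Calculator A: exponentiation binds tighter than unary minus
# (this is also Python's own rule).
calc_a = -3 ** 2      # evaluated as -(3 ** 2) = -9

# Calculator B: the minus sign is treated as part of the number.
calc_b = (-3) ** 2    # evaluated as 9

print(calc_a, calc_b)  # -9 9: the same keystrokes, different answers
```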
So what does this mean?
First, the number of 1372 school exams is arbitrary and for the most part based on the number of students who used the offer to redo the Math exam. This could just as well have been half or double that number, so mentioning that specific number does not add value to the story.
Secondly, I see an oracle problem with regard to the conclusion that the software was in error. Based on the information I cannot tell whether the built-in calculator actually produced wrong answers when using negative numbers, or whether the calculators the students used during the school year did. (Assuming there is another oracle, in the form of a scientifically established rule for handling negative numbers, we could determine which of the two produced incorrect answers.)
Finally, I see a case of shallow agreement on what a software error is. For some, a software error is something that occurs when the software runs into a situation in which it cannot handle or produce the data, and responds by showing an error message. Others see a software error when the software, in this case the calculator, does not function according to its specifications. The built-in calculator may or may not have been functioning according to its specifications; we cannot tell based on the available information.
I do like the school inspection’s response to the situation. They did not call any of it a software error but only mentioned that 1372 digital exams were declared invalid. They did however see the potential disadvantage for the VMBO students, which I think is an excellent oracle, and offered them a solution: redoing the exam.
The previous two questions helped you to find out why testing is necessary, what information you need to answer the first question (business value) and which test ideas help you deliver meaningful and relevant information. This post extends this with areas that help you identify the circumstances in which you will have to do your work. It ends with a little advice: do not take things for granted, especially if you do not understand them.
Originally called Jean-Paul’s test, this mnemonic represents a set of surveying questions that helps you identify your working conditions. Once you have the answers to these questions you should check if, and if so how, they influence your ability to test and to give more or less rich information to your stakeholders. You can use these questions to identify boundaries of and constraints on your testing possibilities and address them, or at least be aware of them and make others aware of them. These questions are by no means exhaustive, but in my opinion they form a good starting point in exploring your test context.
Are the Developers available?
Developers are physically close to or far from you. They are more or less available in time and more or less organizationally accessible to testers. The ability or inability to work together with development can influence your risk assessments, your insight into risk areas, your knowledge about development solutions and what is or is not covered by development testing activities. Additionally, when approaching developers it is good to know the preferences and willingness of each developer with regard to working with testers.
How soon do you have access to Information?
Of course you can use the FEW HICCUPPS mnemonic (James Bach, Michael Bolton) to improve and expand your test ideas, but gathering information about the intended product or solution is a main starting point and an important reference to work with. So getting access to the sources of information, or even better being involved in the information gathering, should start as soon as possible.
Do you control the test Data?
My interpretation of test data here is wide, in the sense that I do not only mean the ability to enter different types of inputs, in different variations and quantities. I also mean the ability to set up and load data sets that create test scenarios, and the ability to set or remove states in the software. Being able to control the data is beneficial in speeding up test execution and creating typical test situations, and it helps to quickly repeat a test case if necessary.
Having control of the test data is only one side of the story. The other side is that you need to find the right ‘Trigger Data’ to use. Trigger Data is any data item, set of data or data state specifically created and used to invoke, enable or execute your test case (scenario).
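As a minimal sketch of the idea, with an invented invoice domain, trigger data could look like this in an automated check:

```python
# Hypothetical sketch of trigger data: a data state created specifically
# to invoke one test scenario. The invoice domain is invented.
import datetime

class Invoice:
    def __init__(self, amount, due_date, paid=False):
        self.amount, self.due_date, self.paid = amount, due_date, paid
        self.reminded = False

    def is_overdue(self, today):
        return not self.paid and self.due_date < today

def run_daily_reminders(invoices, today):
    for invoice in invoices:
        if invoice.is_overdue(today):
            invoice.reminded = True

# The trigger data: an unpaid invoice 30 days past due, created for no
# other reason than to put the system in the state this scenario needs.
today = datetime.date.today()
trigger = Invoice(100.00, today - datetime.timedelta(days=30))

run_daily_reminders([trigger], today)
assert trigger.reminded, "an overdue invoice should get a reminder"
```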
Are the Analysts available?
As with the developers, the availability, both physical and in time, of the (business) analysts has an impact on the way you can interact with them. And like the developers, analysts have preferences and are more or less willing to work with testers. The impact of this may however be larger, as analysts are often the first source of information about the product’s intended functionality and its means of satisfying the stakeholders’ needs and wants. They are often also a sort of gate(keeper) in communicating with business stakeholders. In that sense they can make a tester’s life more or less easy, especially if testers are not expected to go outside of the project’s boundaries.
Are the (other) Testers available?
In my experience, working as the only tester on a project has an impact both on the way you work and, to some extent, on the quality of your work. Being able to pair, share thoughts or just have a chat with another tester can help you reconsider your work and develop new or different test ideas. The tester doesn’t necessarily have to be in your team to have this effect. Having other testers in your team brings the benefit (and sometimes burden) of being able to divide work, get fast feedback on test ideas or test results, and the possibility to focus on or divert away from your strengths and weaknesses as a tester.
Do you have a quiet work Environment?
This question addresses two different aspects. The first aspect is the infrastructure. Do you know what its components are? Do you have a separate test environment? And if so, are you its only user? Do you know how to get access to it? Are you allowed to change it yourself or do you need others to do that for you? Is your test environment similar to the real production environment?
Secondly it addresses the circumstances of your workplace. Do you work in isolation, in cubicles, or in a large open-plan office? Is your work uninterrupted or are you (in)voluntarily involved in other work processes and activities? Does that influence your performance and well-being? What the influences are obviously depends on you as a person and the actual circumstances, but it is wise to take note and consider possible consequences. There are many studies in this field. Here are a few articles that might trigger your interest: “Designing the work environment for worker health and productivity” by Jacqueline C. Vischer; “Interrupt Mood” by Brian Tarbox; or “Where does all that time go” by Michael Bolton.
Are the Stakeholders (that matter) available?
Stakeholders come in many forms and shapes, but they have one thing in common: they are in some way involved in the creation and/or use of the software solution. That not only means they need to be informed about the product; it also means that they have expectations and opinions about the product itself, what it is used for, and what the product needs to be able to do to make it valuable to them. As a tester you should identify these expectations and opinions and tailor your information about the product so that it is meaningful to them.
In theory the effort you put into gathering, tailoring and presenting that information is based on how much the stakeholder matters to the product, the project and, to some extent, to you the tester. I say in theory because to do so in practice the stakeholders need to be available and accessible. If they are not, or if reaching them is difficult, you should take the extra time and effort into account in your testing and test reporting.
Is there (mandatory) Tooling?
There are many types of tools available on the market to capture requirements, store test cases, log test execution or manage bugs. And likewise there are many tools available to use during testing. As a tester you need to find out which tools there are, which tools you are allowed to use, and which tools are mandatory. You might not know all the tools you are faced with, or you may be unable to use a tool that you already know and like. In that case you will have to get used to the ‘new’ tooling and learn to use it. Additionally, many tools have built-in workflows and processes that take time away from actual testing. As a tester you should be aware of this and take it into account when testing.
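To keep these surveying questions at hand during an assignment, they could be captured in something as simple as the sketch below; the wording is taken from the headings above, while the checklist structure is merely my own convenience:

```python
# A small checklist built from the surveying questions above; any
# question without an answer is flagged to raise with the team.
SURVEY_QUESTIONS = [
    "Are the Developers available?",
    "How soon do you have access to Information?",
    "Do you control the test Data?",
    "Are the Analysts available?",
    "Are the (other) Testers available?",
    "Do you have a quiet work Environment?",
    "Are the Stakeholders (that matter) available?",
    "Is there (mandatory) Tooling?",
]

def survey(answers):
    """Print each question with its answer, or flag it as still open."""
    for question in SURVEY_QUESTIONS:
        print(f"{question:48} {answers.get(question, 'OPEN - to discuss')}")

survey({"Do you control the test Data?": "partially, no control of states"})
```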
Whenever I start on a new test assignment or pick up a new work item I need to search for and find its purpose and its meaning, and I need to understand how the chosen requirements offer a solution to the problem being solved. Sometimes that is really easy.
Say you visit the 36th International Carrot Conference before going to CAST 2013. You come home and decide to sell carrots for hungry rabbits online, and you want to vary the amount of carrots or differentiate the type of carrot for different breeds of rabbit. You will need something like a drop-down list or an input field to identify the different rabbit breeds. And except for the sudden urge to sell carrots, this is fairly easy to understand and test.
If however you are asked to test the software implementation of calculating results for a new Credit Risk Model used by an international bank, you will have a lot more to understand. In such cases I remind myself of the Poutsma Principle:
If something is too complex to understand, it must be wrong.
I use this principle to remind myself to keep asking questions until I either understand it or accept the argumentation for it as proof. In either case it helps me break down requirements to a level that makes me confident enough to start testing, and daring enough that I can also use my personal addition to the principle:
And it is your job (as a tester) to prove it wrong.
If you want to know more about the Poutsma Principle you can follow this link.