Belgium Testing Conference 2012

QA vs. Testing: Antagonism or Symbiosis?

The header above was the theme of the last Belgium Testing Days in Brussels, March 12-14. During the call for papers for this conference, several months earlier, I was in the middle of having SOx compliance established for one of my projects. The theme caught my attention since it captured my own feelings about the compliance process at the time.

I work at an internationally operating bank, which has a few consequences for the context in which I work. The most obvious consequence is that a bank uses either financial software or software that enables financial processes. As a result, the (in-house) developed software gets extra special attention with regard to accessibility (or perhaps rather inaccessibility), integrity and confidentiality (AIC). Every piece of software that is built or bought by the bank gets a so-called AIC assessment, and depending on the result of that assessment a certain set of checks, controls and measures is mandatory.

The AIC assessment itself is essentially internal to the bank. But being a bank, and especially an international bank, means that on top of the internal regulations all kinds of external government and financial market regulations are imposed on it. The bank's QA department translates these regulations and standards into internal processes and rules. For most of the high-level business processes such a translation seems fairly straightforward: these processes are often described and measured in the same way as the regulations. It gets more difficult once you drill down into the organization and start taking all the contributing activities and tasks into account, such as, in my case, the development of software, or more particularly the testing of software.

Organizational response

A common response of financial organizations is to design and apply standards and procedures, together with any number of deliverables.

These standards are prescriptive by nature. They tell you in general terms what the idea is, and, depending on your QA department, they get more specific: they tell you what you must do, how you should do it, in what order you should do it and what you should call it.

The procedures and processes designed this way describe the steps you are supposed to take in a given standard situation. And since seeing is believing, they also state how you should prove that you followed the process. In many cases such proof takes the form of a number of deliverables. Deliverables can be a lot of things, but typically they are documents, testware libraries or reports. Given a certain standard they follow a fixed format, both in terms of content, that is what should be described, and in terms of layout, that is how it should be described.

Quality Assurance

In my experience, for the most part those defining the standards to be used, the procedures to be followed and the deliverables to be created are not the developers or testers, nor the customer, but a typical staff department:

The Quality Assurance department

I see quality assurance not as a singular activity but as a group of activities, each with a different focus. One part of quality assurance is focused on making the chosen framework usable and applicable within the organization: QA as the designer. The second part of quality assurance is closely related to this, as it acts as the controller of what was previously designed. To this end it translates the designs into points of measurement and puts values to the measurement results: QA as the controller. These two might go by different names in some organizations, like quality management, but in my opinion they are still part of quality assurance. The third part of quality assurance is the part in which the actual software development related activity takes place. It is the part that also executes the previously designed steps and reports back on them: QA as the executor.

At this point QA starts to be seen as testing, which is captured in the following definition that is often used for both:

The process of validating and verifying that a software program/application/product meets the requirements that guided its design and development, works as expected, and can be implemented with the same characteristics.

This definition has a certain appeal: it is understandable, it is similar to other process-oriented methodologies, and it aligns with the QA concept. But in my opinion it limits testing to checking.

Testing

The previous definition is not the definition I would use for testing. In my opinion the kind of information testing provides depends on what the stakeholder values as important for the software's quality. Therefore my definition is:

Software testing is a means to provide information to someone who matters about the product or service under test in a certain context at a certain time.

If I apply this definition to my AIC, SOx-compliant context, I find that the current solution QA has offered me does not meet it. Do not get me wrong: I understand and agree that the change process and software development should be both traceable and accountable. I do not, however, believe that the solution is to enforce processes, procedures and deliverables that are judged by their presence and their adherence to layout. What matters is the content they should be presenting.

Not the process story but the testing story should be told

An example

This example summarizes the measures the SOx team and I agreed on that software development projects, and testing in particular, must take to comply with the Sarbanes-Oxley Act (SOx).

A SOx regulation review targets the state of software development and testing at the moment of release to production. The intermediate states and progress steps in establishing traceability and documentation compliance are out of scope for the investigation.

Manual Testing

Testware management for manual testing follows the guidelines summarized in the following steps (a minimal sketch of such a structure follows the list):

  • A test plan or a similar structure identifies functional test objects that, minimally, cover the functionality as specified in the requirement or change documents.
  • For each of these test objects specific test activities are logged.
  • The test structure identifies the release and the individual RFCs and relates them to the test activities.
  • A full and complete overview of test objects, test activities and test results should be established prior to release of the changes to production. This can be either in a testware management system, preferably HP Quality Center (HP QC), or in another reviewable form.
  • All tests should either have passed or, if not passed, have a logged defect and/or stakeholder decision attached that indicates that this is acceptable to go into production at this point in time.
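
To make this concrete, here is a minimal sketch, in Python, of what such a traceability structure could look like. It is an illustration under my own assumptions rather than a prescribed format; all class, field and function names are hypothetical.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TestActivity:
        """A single logged test activity and its outcome."""
        description: str
        result: str                      # "passed" or "failed"
        defect_id: Optional[str] = None  # logged defect for a failed test
        stakeholder_ok: bool = False     # explicit acceptance of a failure

    @dataclass
    class TestObject:
        """A functional test object traced to a release and its RFCs."""
        name: str
        release: str
        rfcs: List[str]
        activities: List[TestActivity] = field(default_factory=list)

    def ready_for_release(test_objects: List[TestObject]) -> bool:
        """Check the last guideline above: every test has passed, or
        carries a defect and/or a stakeholder decision accepting it."""
        return all(
            activity.result == "passed"
            or activity.defect_id is not None
            or activity.stakeholder_ok
            for obj in test_objects
            for activity in obj.activities
        )

Whether such an overview lives in HP QC, a spreadsheet or code is beside the point; what matters to the review is that it is complete, reviewable and established before release.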

Automatic Testing

In essence, testware management for automated testing follows the same guidelines as for manual testing. The main difference is that the documentation is adapted to the structure of automated testing (a hypothetical example follows the list):

  • A test automation tool, or an input data sheet used by the test automation tool, shows the automated tests with a reference to the test object, functionality or RFC that they test.
  • A log file (preferred), or a checklist, shows whether the tests have been executed and whether they have passed or failed.
  • A full and complete overview of test objects, automated tests and the final execution of tests with test results should be established prior to release of the changes to production.
  • All tests should either have passed or, if not passed, have a logged defect and/or stakeholder decision attached that indicates that this is acceptable to go into production.
  • If a self-designed tool or framework is used for automatic testing, the functionality of the tool and the execution of the test cases should be validated by peer review. Results of the review should additionally be captured in the test report.
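
As an illustration, and assuming pytest as the automation tool, the input data sheet could simply be a parametrization table in which every automated check carries a reference to the RFC and test object it covers. The RFC numbers and the interest calculation below are made up for the example.

    import pytest

    # Hypothetical input data sheet: each row ties an automated check
    # to the RFC and the test object it covers.
    INTEREST_CASES = [
        # (rfc, test_object, principal, rate, expected_interest)
        ("RFC-1021", "interest-calculation", 1000.00, 0.05, 50.00),
        ("RFC-1021", "interest-calculation", 0.00, 0.05, 0.00),
    ]

    @pytest.mark.parametrize("rfc,test_object,principal,rate,expected",
                             INTEREST_CASES)
    def test_interest(rfc, test_object, principal, rate, expected):
        # The inline calculation stands in for the real system under test;
        # rfc and test_object appear in the test id, giving traceability.
        assert round(principal * rate, 2) == expected

Running the suite with, for example, pytest -v --junitxml=results.xml then produces the preferred log file: a per-case record of pass or fail that remains traceable to the RFC through the parameters.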

The above steps should result in the following deliverables (a sketch of deriving the release advice follows the list):

  • Basic testware, describing the test design, test execution, test results and defects
  • Test report, containing an advice for release
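
Reusing the TestObject and TestActivity classes from the manual testing sketch above, the advice for release in the test report could, for instance, be derived as follows. The wording of the advice is my own invention, not a mandated format.

    def release_advice(test_objects):
        """Derive the test report's advice for release from the testware."""
        unresolved = [
            (obj.name, activity.description)
            for obj in test_objects
            for activity in obj.activities
            if activity.result != "passed"
            and activity.defect_id is None
            and not activity.stakeholder_ok
        ]
        if not unresolved:
            return "Advice: release. All tests passed or deviations were accepted."
        details = "\n".join(f"  - {name}: {desc}" for name, desc in unresolved)
        return "Advice: do not release. Unresolved failures:\n" + details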

Test report

You might have noted that there is no mention of describing tests in advance, of following test scripts, nor of following standards or using templates. The SOx compliance team's focus is not on how the testing is executed during the project. Its focus is to see whether the implemented changes are tested and whether the executed tests and test results can be matched to them. Their aim is to establish that the changes do not have an unexpected or undesirable effect on the company's annual balance or regulatory capital. To that end they check whether the changes are implemented as intended.

SOx compliance essentially asks for what James Bach describes as the three stories that make up the testing story:

  1. A story about the status of the product
  2. A story about how you tested it
  3. A story about the value of the testing

The only thing left for me now is to convince the QA department that, even though I have not followed their standards and procedures and have not used their templates, I have thought about the reasons behind their questions and am still able to supply the information they need, only a bit faster and more naturally.

Testing is of value to someone who matters

Concern

I have a concern. We online testers have one thing in common: we care enough about our craft to take the time to read these blogs. That is all very fine. However, most testers, and this is based on my perception rather than research, do not read blogs, articles or books, and do not go to conferences or workshops or follow a course. Well, some of those testers do, but only when they think their (future) employer wants them to. And when they do, they go out for a certificate that proves they did so.

Becoming a tester

Regardless of where we start our career, be it in software engineering, on the business side or somewhere else, most testers start out with some kind of introductory test training. In the Netherlands, where I live, that usually means you get a TMap, or sometimes an ISTQB, training. And my presumption is that everywhere you get a similar message on what testing is:

  • Establishing or updating a test plan
  • Writing test cases (design; procedures; scripts; situation-action-expected result)
  • Defining test conditions (functions; features; quality attributes; elements)
  • Defining exit criteria (generic and specific conditions, agreed with stakeholders, to know when to stop testing)
  • Executing tests (running a test to produce an actual result; test log; defect administration)

But there are likely to be exceptions, for instance the Florida Institute of Technology, where Cem Kaner teaches testing.

Granted, neither TMap nor ISTQB limits testing solely to this. For instance, TMap starts off by defining testing as: “Activities to determine one or more attributes of a product, process or service”, and up to here all goes well, but then they add “according to a specified procedure”. And that is where things start to go wrong. In essence the TMaps of this world hold the key to getting you started testing software seriously. But instead of handing down the knowledge and guiding you to gather your own experiences, they supply you with fixed roadmaps, procedures, process steps and artifacts, all of which are obviously easy to reproduce in a certification exam. And even this could still be, as these methods so often promote, a good starting point from which to move on and develop your skills. Unfortunately, for most newbies all support and coaching stops once they have passed the exam. Sometimes they even face discouragement from looking beyond what they have learned.

Non-testers

To make matters worse, the continuing effort to make testing a serious profession has given line managers, project managers, other team roles and customers the message that these procedures, processes and artifacts not only matter, but are what testing is about. These items (supposedly) show the progress, result and quality of the testing being undertaken. Line and process managers find it easy to accept these procedures, processes and artifacts as measurement items, as they are similar to what they use themselves according to their own standards. So if the measurements show that progress is being made and that all artifacts have been delivered, they are pleased with the knowledge that testing is completed. Customers and end users go along with this, but their belief in these measurements has its limits, as they actually experience the end product. Like testers, they are more aware that testing never really ends and that it is about actual information about the product, not about testing artifacts.

So?!

New methods and approaches such as agile testing have brought the development roles closer together and have created a better understanding of the need for and content of testing among both the team and the stakeholders. Other approaches, like context-driven testing, focus more on enhancing the intellectual craftsmanship of testing, emphasizing the need for effective and efficient communication, the influence of context on all of this, and the fact that software development is aimed at solving something. And thus the aim of testing is shifting from checking and finding defects to supplying information about the product. Regardless of this, however, and in spite of how much I adhere to these approaches, I think they have a similar flaw to the traditional ones. Like TMap or ISTQB, neither of them goes outside its testing container enough to change the management and customer perception. They still let management measure testing by the same old standards of counting artifacts and looking at the calendar.

Challenge

I think we as a profession should seek ways of changing the value of testing in the eyes of our stakeholders: to make them understand that testing is not about the process or procedure by which it is executed, nor about its artifacts, but about the information it supplies and the value of that information in achieving business goals.

I cannot give you a prêt-à-porter solution myself, so I challenge you, my peers, to discuss with me whether you agree with this vision and, if you do, to form a new approach together. I will gladly facilitate such a discussion and deliver its (intermediate) results to a workshop or conference at some point.

What’s in a name – Part 2, I am a quality assurer

This second post on the use of titles for software testing focuses on software quality assurance. Quality assurance, or QA, as such is not exclusive to software development; QA is practised in almost every other industry, such as car manufacturing or medicine.

Quality Assurance

Quality Assurance in essence involves monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with according to those same standards and procedures. Quality assurance is often seen as an activity to be executed independently of its subject.

Principles

The following two principles are included in the aim of Quality Assurance:

  • Fit for purpose; the product should be suitable for its intended purpose
  • Right the first time; faults in the product should be eliminated

The guiding principle behind Quality Assurance is that quality is malleable. This is achieved by working hierarchically according to detailed plans, using accurate planning and control, rationalising processes, and standardising and making things uniform. In this sense quality assurance is a continuation of Frederick W. Taylor's ideas of Scientific Management. In line with this, all skill and knowledge is transferred into standard processes, procedures, rules and templates, and thus removed from the individual workers.

Software quality assurance

By and large, software quality assurance follows the same lines of thought as quality assurance in general.


As in quality assurance, a tester in software quality assurance seeks the use of rationalized processes, standards and uniformity. She prefers the use of specific standards and methodologies; IEEE, ISTQB and TMap are well-known examples. These methods provide the tester with (seemingly) clear guidelines to follow and artifacts to use, so that, if used properly, software testing itself should provide adequate information about the state of the product and its readiness for use.

There is, however, a distinct difference between software quality assurance and quality assurance. In contrast with quality assurance in the traditional sense, software does not have something tangible to measure. This is partly overcome with the use of software-related quality attributes. Quality attributes are used to help guide the focus of testing and consist of characteristics that are considered typical of software development.

SQA – testing

Software quality assurance has the advantage of being an understandable and recognizable proposition. To testers the appeal is that it provides a uniform solution for “all” situations and that the handbook approach can be followed from the get-go. To managers the appeal is that the procedures are recognizable in relation to mainstream management ideas and that, in combination with a phased interpretation of software development and testing, e.g. the V-model, it is highly useful for project planning.

Even software quality assurance admits that following the process and filling in templates alone does not produce tests. So software quality assurance has teamed up with software engineering in that it uses test design techniques to identify and describe tests. In turn, software quality assurance has developed means to limit the scope of testing, using a combination of quality attributes and risk assessment to order and, if necessary, limit the number of tests to be designed and/or executed.

Improvement

In practice the use of software quality assurance, together with test engineering, has admittedly helped to establish software testing as a craft of its own. It has put software testing on the map, and in general software testing is now seen as part of the development process. But the use of software quality assurance as such has been no guarantee against the development of ‘bad’ software. By bad software I mean software that does not provide a solution for the problem it was intended to solve.

In response to this limited success, several lines of improvement have evolved. One of them is to quality-assess software quality assurance itself with the use of models like TMMi or Test Process Improvement. These might help to forge a smoother adherence to the standards, procedures and templates, but the emergence of better software seems to be more of a side effect than a result of these new models themselves.

A second response is to stress even more that quality assurance of software is an independent activity, performed in separate (QA) departments that more or less police software development. Although this helps with adherence to external regulations and audit rules, it has little effect on the pursuit of building good software that solves, or is fit to be used for, the problem it is intended for.

These models, and others like them, have widened the gap between quality assurance through process and the act of testing software through skill. So much so that quality assurance and testing are increasingly considered as different activities that oppose each other. In my opinion we need both aspects of software development to do a good job. How we mix them, however, should depend on the context in which we apply them. But in whichever context, the testing of software should have the purpose of delivering useful information about the product. To do this, in my opinion, testing skills come first; knowledge and use of standards or procedures then follow, to aid or enhance the act of delivering useful information.