Thursday, February 21, 2013

A Testing Process

Testing Process Example

Following the set of articles related to Testing and TMMi implementation, today I will write about an example testing process.


The testing process is the set of activities that controls and supports the testing model, organizing all the roles, inputs, outputs and techniques that support it.

The testing process is the global process that any delivery follows in the testing factory. It is summarized in the following diagram:

The main roles identified are:
  • Project Manager: the person responsible for managing the project; among other functions, this person indicates the acceptable risk tolerance, the services to include in testing, the acceptance level, the quality rules to apply, the schedule and the costs.
  • Testing Team: the technical team that handles all test cases, defines the test scope (in agreement with the project manager), reports defects (blocking defects and normal defects) and generates the testing reports.
  • Testing Manager: collaborates on the test plan definition together with the testing team and project management. Governs the testing process, the testing techniques, the testing report templates and content, capacity management, availability management...
  • Development Team: the group of people building the delivery (software and/or documentation). They have to update the configuration management system with the alpha/beta versions and provide support for blocking defects.
What are blocking and normal defects?
  • A defect is any deviation between what is expected or documented and what is delivered.
  • A blocking defect is a defect that blocks the execution of the testing services and needs to be solved as soon as possible in order to continue with the testing activities. A response time has to be agreed upon to make this flexible model work.
  • Normal defects are all other defects; they should be fixed in the next releases and are reported at the moment they are detected.
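To make the distinction concrete, here is a minimal sketch of how a defect record could carry the blocking/normal classification and the agreed response time. All names and fields are illustrative, not part of any specific tool:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    BLOCKING = "blocking"  # stops test execution, needs immediate attention
    NORMAL = "normal"      # reported when detected, fixed in a later release


@dataclass
class Defect:
    identifier: str
    description: str
    severity: Severity
    response_time_hours: int  # agreed response time; only enforced for blocking defects

    def needs_immediate_action(self) -> bool:
        return self.severity is Severity.BLOCKING


# Example: a broken build blocks all the other testing services.
build_failure = Defect("D-001", "Build fails: missing library", Severity.BLOCKING, 4)
print(build_failure.needs_immediate_action())  # True
```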

Description of the activities:
  • 1. Request: the project manager asks the testing team or provider to test a delivery, giving the necessary input, for example: delivery scope, delivery dates or detected risks.
  • 2. Delivery Study: the testing team studies the request together with the context information about the project (e.g. previous deliveries, previous testing reports, test plan...).
  • 4. Test Plan: with the summary produced by the testing team, testing management analyses the request, formalizes a proposal of services, acceptance level, quality evidence, risk mitigation..., and plans and budgets the testing services for this delivery.
  • 5. Approval: the project manager has to validate the test plan scope, budget and schedule, in accordance with their responsibilities.
  • 6. Waiting for Delivery: once the test plan is accepted, the development team announces when the delivery is ready for testing. In parallel, the testing team and testing management work to ensure platform availability, test case design and updates, and all the preparatory tasks that can be performed before the delivery arrives.
  • 7. Testing Execution: execution of all the testing services included in the scope of this delivery. If the testing process is blocked by a defect (e.g. required libraries are not available for the build, the build process fails for any reason, the installation process fails), a blocking defect is registered to establish a collaboration channel between the testing team and the development team.
  • 8. Blocking Defect Resolution: the development team has to solve this kind of defect as soon as possible. New instructions or new updates in the configuration management system may be needed.
  • 9. Report Generation: once the services are finished, a report is generated to summarize the results and give recommendations to the project manager and/or the development team.
  • 10. Revision: testing management audits and reviews the test execution to ensure alignment with the quality policy.
  • 11. Report Submission: the reports are distributed to the stakeholders for evaluation, in order to decide what to do with the delivery (ask for a new delivery, accept the delivery, or accept it and promote it to production).
  • 12. Invoicing: the testing manager invoices all finished testing deliveries at the end of the agreed invoicing period.
This is just an outline; to complete the process, new activities, inputs and outputs should be included, but I think it is a good starting point.
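As an illustration only, the flow of these activities can be sketched as a simple ordered state machine. The activity names come from the list above; the function and class names are hypothetical:

```python
from enum import Enum, auto


class Activity(Enum):
    REQUEST = auto()
    DELIVERY_STUDY = auto()
    TEST_PLAN = auto()
    APPROVAL = auto()
    WAITING_FOR_DELIVERY = auto()
    TESTING_EXECUTION = auto()
    BLOCK_DEFECT_RESOLUTION = auto()  # only entered when a blocking defect appears
    REPORT_GENERATION = auto()
    REVISION = auto()
    REPORT_SUBMISSION = auto()
    INVOICING = auto()


def next_activity(current: Activity, blocking_defect: bool = False) -> Activity:
    """Return the next activity in the delivery flow (simplified, linear view)."""
    if current is Activity.TESTING_EXECUTION:
        return (Activity.BLOCK_DEFECT_RESOLUTION if blocking_defect
                else Activity.REPORT_GENERATION)
    if current is Activity.BLOCK_DEFECT_RESOLUTION:
        return Activity.TESTING_EXECUTION  # resume testing once unblocked
    order = [a for a in Activity if a is not Activity.BLOCK_DEFECT_RESOLUTION]
    return order[order.index(current) + 1]


# Example: a blocking defect detours execution to resolution and back.
print(next_activity(Activity.TESTING_EXECUTION, blocking_defect=True))
```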

The relation between this testing process and the TMMi process areas could be summarized in this matrix (with many caveats: it is just an initial approximation):









Thursday, February 14, 2013

Testing Services, overview

Testing Services


I would like to summarize and describe a set of basic IT testing services that should be offered to any customer. In the context of a TMMi implementation, there are several process areas whose goals (specific goals and specific practices) focus on defining a testing policy and increasing testing capabilities.

In this context, a testing service catalogue should be the base for any testing organization.

The main TMMi process areas that should be involved are:

  • Test Policy and Strategy
  • Test Design and Execution
  • Test Organization
  • Non-functional Testing
The list is not exhaustive, but from my point of view, it is a good starting point.

Early Testing: a set of services focused on the software engineering process that aims at early defect detection. Early Testing is based on increasing system quality by detecting defects early in the application lifecycle, decreasing both defect impact and correction cost. The earlier a defect is detected, the cheaper the solution. A quality assurance plan starts at the beginning of the project lifecycle and is applied throughout the project life.




The mission of Early Testing services is to ensure that the technical and functional documentation is complete and in accordance with end user requirements. A basic catalogue of services should be:

  • Requirement Testing: related to testing the requirement specification, it aims at reviewing the requirements, which should comply with the following criteria (a small checklist sketch follows this list):
    • Concision: simple wording, clear and understandable for all stakeholders
    • Completeness: requirements are written with enough information to be fully defined
    • Consistency: there are no contradictions between different requirements
    • Concreteness: requirements are not ambiguous and have only one interpretation
    • Verifiability: its fulfilment can be checked in the final product
  • Analysis Testing: activities applied to the analysis phase, aimed at reviewing how the analysis techniques have been applied and at verifying the traceability between requirements and the analysis specification.
  • Design Testing: activities applied to the design phase, aimed at reviewing how the design techniques have been applied and at verifying the traceability between the requirements, analysis and design specifications.
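As a small sketch, the requirement review criteria above can be turned into a recorded checklist. The review itself is done by people; this only stores the outcome, and all names are illustrative:

```python
# Criteria taken from the list above; the data structure is only illustrative.
CRITERIA = ("concision", "completeness", "consistency", "concreteness", "verifiability")


def review_requirement(req_id: str, results: dict) -> dict:
    """Record a requirement review: one boolean per criterion, plus an overall verdict."""
    missing = [c for c in CRITERIA if c not in results]
    if missing:
        raise ValueError(f"Review of {req_id} is incomplete, missing: {missing}")
    return {
        "requirement": req_id,
        "criteria": dict(results),
        "passed": all(results[c] for c in CRITERIA),
    }


# Example: a requirement that is clear but not verifiable fails the review.
print(review_requirement(
    "REQ-042",
    {"concision": True, "completeness": True, "consistency": True,
     "concreteness": True, "verifiability": False},
))
```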
Software Testing: services applied to a software delivery. Software delivery testing is organized into a wide set of testing services focused on project risk management (this will be covered in another blog entry). The workflow of services could look like this:



  • Build and Static Code Review: source code verification in order to measure coding rule fulfilment and code quality. The build step ensures that the organization is able to build the application from the source code delivered by the development team, which guarantees independence from the development supplier and business continuity.
  • Setup Testing: is it possible to deploy the application with the documentation and sources that have been delivered?
  • Functional Testing: focused on the system's functional behaviour, according to the test plan and the requirement specification.
  • Usability Testing: tests system usability, ease of use, consistent behaviour... if the organization has usability rules, compliance with them should be reviewed.
  • Accessibility Testing: if there are any accessibility requirements, this service reviews their fulfilment.
  • Regression Testing: functional testing of previously delivered functionality, which should remain invariant. Regression testing is usually based on automated scripts aimed at decreasing testing cost (see the sketch after this list).
  • Performance Testing: stress, capacity and continuous load tests in order to predict system behaviour under real conditions.
  • Security Testing: tests for security vulnerabilities in the system implementation (see the OWASP recommendations).
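As an example of the kind of automated script that keeps regression testing cheap, here is a minimal sketch using Python's standard unittest module. The function under test and the expected values are invented for illustration:

```python
import unittest


def apply_discount(price: float, percentage: float) -> float:
    """Hypothetical function from a previous delivery whose behaviour must not change."""
    return round(price * (1 - percentage / 100), 2)


class RegressionSuite(unittest.TestCase):
    """Re-runs the expectations recorded when the functionality was first accepted."""

    def test_standard_discount(self):
        self.assertEqual(apply_discount(100.0, 15), 85.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)


if __name__ == "__main__":
    unittest.main()  # run on every new delivery; any failure flags a regression
```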


Wednesday, January 2, 2013

TMMi Overview

TMMi Overview


As a software testing professional, I have had to organize and manage a testing office for a public administration for the last three years. From the very beginning I focused on reading several documents and getting familiar with testing techniques and services, and then I moved on to a test-oriented method and to managing and communicating with the stakeholders in my organization.

As a horizontal service, we have to be able to adapt to different requirements and different approaches to delivery processes, quality criteria and summary reporting models.

After these years I concluded that we have to focus on several vectors of interest:
  • Technology capabilities: the testing office has to be able to take on, and specialize in, every IT aspect in order to diagnose software characteristics such as functionality, security, usability, accessibility...
  • Planning and Management: we have to be able to plan every delivery and to manage what we are doing (effort, defects, delays...), every aspect related to the testing process as a whole.
  • Stakeholder relations: knowing which roles are involved in a delivery and what has to be done, at each moment, by whom.
  • Methodology base: the projects and every customer of the testing office have to be standardized, to the appropriate degree, so that the main lifecycle steps are known and every main aspect is normalized (architecture, coding, exception management, integration capabilities...).
  • Acceptance model: we have to develop an acceptance model that determines, in an objective way, which deliveries are accepted and which require a new delivery in order to reach the acceptance quality level.
To give an answer to all these issues, Test Maturity Model integration (TMMi) is a good approach; there are other approaches, but I will focus on TMMi because, for me, it is the broadest and most useful of them all.

TMMi presents a staged architecture to improve testing capabilities. Like CMMI, it is organized into maturity levels, process areas, generic goals, generic practices, specific goals and specific practices.




As a brief summary I will present the main content; in the next publications I will try to show how we have implemented it in our organization.

The maturity levels proposed by TMMi are:
  • Level 1. Initial: no testing processes are defined or applied in the organization. Testing may happen, but there is no systematic approach; it could be considered debugging rather than testing.
  • Level 2. Managed: the process areas defined in this level are:
    • Test Policy and Strategy: establishes a test policy (test objectives, goals and strategy); based on the test policy, a test strategy is defined. The strategy defines generic product risks and how to mitigate them. Test Performance Indicators (TPI) are also derived from the test policy.
    • Test Planning: defines a test approach for performing and managing the testing activities; it determines which requirements will be tested, to what degree and in which delivery. A schedule is given for every delivery.
    • Test Monitoring and Control: controls the test progress in order to detect any deviation, and monitors product quality in order to apply the exit criteria.
    • Test Design and Execution: test specification (inputs, preconditions, execution, results, postconditions), test design techniques, execution of tests, defect reporting and collaboration until closure.
    • Test Environment: the test environment consists of the hardware, data sets and everything else needed for testing purposes, with the goal of being independent and ensuring the reliability of the testing results.
  • Level 3. Defined: 
    • Test Organization: the purpose is to identify and organize a group of people responsible for the testing activities.
    • Test Training Program: develops a training program for the testing group in order to improve its knowledge.
    • Test Lifecycle and Integration: integration of the development lifecycle and the testing lifecycle.
    • Non-functional Testing: this process area aims at improving the testing capabilities for non-functional testing, in order to systematize performance, security or installability testing.
    • Peer Reviews: verification of work products among different stakeholders, focused on product and defect understanding.
  • Level 4. Measured
    • Test Measurement: the purpose is to identify, collect, analyze and apply measurements to ensure the effectiveness and efficiency of the test process (see the example after this list).
    • Product Quality Evaluation: objective measurement of product quality using quantitative indicators.
    • Advanced Peer Reviews: adds, on top of the level 3 Peer Reviews process area, an early product quality measurement to enhance the test strategy and test design prior to test execution.
  • Level 5. Optimization
    • Defect Prevention: identifies and analyzes common causes of defects across all software development in the organization and defines actions to prevent similar defects from occurring in the future.
    • Quality Control: statistical management and control of the test process; predicting product quality.
    • Test Process Optimization: continuous test process improvement.
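To illustrate the kind of measurement involved at level 4, here is a small sketch computing defect detection percentage, one common test effectiveness indicator. The formula and names are a generic example, not something prescribed by TMMi:

```python
def defect_detection_percentage(found_in_test: int, found_in_production: int) -> float:
    """Share of all known defects that were caught before release."""
    total = found_in_test + found_in_production
    return 100.0 * found_in_test / total if total else 0.0


# Example: 45 defects found during testing, 5 escaped to production -> 90% DDP.
print(defect_detection_percentage(45, 5))
```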
In order to be exhaustive I will summarize the generic goals and practices:
  • GG2. Institutionalize a Managed Process
    • GP 2.01. Establish an organizational policy
    • GP 2.02. Plan the process
    • GP 2.03. Provide Resources
    • GP 2.04. Assign Responsibilities
    • GP 2.05. Train People
    • GP 2.06. Manage Configurations
    • GP 2.07. Identify and Involve Relevant Stakeholders
    • GP 2.08. Monitor and Control the Process
    • GP 2.09. Objectively Evaluate Adherence
    • GP 2.10 Review Status with Higher Level Management
  • GG3. Institutionalize a Defined Process
    • Establish a Defined Process
    • Collect Improvement Information





Monday, December 10, 2012

From Specification to Implementation (III)

Continuing with the related posts (From Specification to Implementation), I will describe the latest process implementation in a public organization, supported by RedMine.

The final picture (as of today; further work will be done) consists of:

A set of processes:
  • Project Planning (CMMI)
  • Project Monitoring and Control (CMMI)
  • Requirements Development (CMMI)
  • Requirements Management (CMMI)
  • Release Management (ISO 20000)
  • Defect Management 
A group of roles (RedMine roles):
  • Sponsor
  • Expert User
  • Functional Director
  • Project Manager
  • Project Team 
  • System Manager
  • System Team
And several management elements (RedMine trackers):
  • New Initiative
  • Goal
  • Project Plan
  • Requirement
  • Requirement Change
  • Sprint
  • Release
  • Version
  • Defect
  • Request for Change (development)
With these three groups of elements the new methodology is ready to be used. The relationship between them is described in the following picture:

Further related documentation has been generated: templates, reports, user manual, change management... and everything is supported by the wiki and document functionality.
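Because everything is held in RedMine, the management elements can also be queried programmatically through RedMine's REST API. A minimal sketch follows; the host, the API key, the tracker id and the project identifier are placeholders you would replace with your own values:

```python
import requests

REDMINE_URL = "https://redmine.example.org"  # placeholder host
API_KEY = "your-api-key"                     # personal RedMine API key
DEFECT_TRACKER_ID = 9                        # id of the "Defect" tracker in your instance


def open_defects(project: str) -> list:
    """Return the open issues of the Defect tracker for one project."""
    response = requests.get(
        f"{REDMINE_URL}/issues.json",
        headers={"X-Redmine-API-Key": API_KEY},
        params={"project_id": project, "tracker_id": DEFECT_TRACKER_ID, "status_id": "open"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["issues"]


# Example: list the subjects of the open defects of one project.
for issue in open_defects("my-project"):
    print(issue["id"], issue["subject"])
```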




Friday, December 7, 2012

From Specification to Implementation (II)

Starting from the previous post, several approaches can be taken to implement a process in any organization.

From our experience, the process implementation consists of the following main steps:

  • Initial Situation: this set of tasks consists of several techniques and work focused on identifying the organization's needs, how the organization is working, and its strengths and weaknesses.
    • This work generates a document that identifies these points and proposes an improvement process divided into three scenarios.
  • Process Implementation: once the goal of the first implementation has been defined, the work is led by defining a working group for each process, in order to identify in detail how tasks and activities are currently carried out, and how they could be unified, supported and systematized to meet user needs and organization goals.
Process Implementation finishes when the organization has several processes implemented with:
  • Inputs
  • Outputs
  • Tasks
  • Roles
  • Reports
  • Metrics
  • A tool or a set of tools that support it
  • Training sessions - Change management
  • Initial support
From my point of view, the supporting tool is the key to organizing any process, and it allows:
  • Monitoring
  • Measurement
  • Regulation
  • Support
  • Collaboration
Any ticketing tool can help fit your needs; we have used RedMine as our project management tool and have had good experience with it.

Wednesday, October 17, 2012

Testing and Development: sustainable economic model

Test processes increase total cost of projects

This sentence could be the starting point for a long discussion about truths and falsehoods that have to be analysed in detail.
  • Development teams should test every version; if we have an independent testing team for version verification, then we have increased the total cost.
This statement is true and there is not much to add; the key point to realise is that this cost should be budgeted from the very beginning and should be included in the project plan. Another point is that the testing cost should be somewhat less than 15% of the development cost.
  • Where is the advantage of testing? Does it improve the software? What is the ROI of every euro spent on testing?
There are specific economic models that show the value of testing and how one euro spent on testing will save several in maintenance, operation, coding... It depends on when you invest in testing: the earlier you start, the more money you save.
  • Since there is a testing team, developers get lazy about testing their own work.
  • ... 


The main goal of my job in recent months has been to define a sustainable economic model that allows the business to have the benefits of testing at a predictable cost.

The model is made sustainable by the following concepts:

  • For every software version we know the development cost and the testing cost.
  • For every delivery of the same version we have two metrics, derived from testing:
    • Number of corrections until the software has been deployed in the test environment.
    • Percentage of requirements passed as OK after testing is finished.
  • Two indicators and two SLAs are associated with these metrics:
    • The number of corrections has to be less than X (for example, three).
    • The fulfilment of requirements has to be greater than Y (for example, 75%).
With this, it is possible to start a new version: the development team estimates its cost (e.g. 100.000,00 €) and so does the testing team (15%: 15.000,00 €). Then, when the first delivery arrives:
  • If both indicators are better than required by the SLAs, the total version cost is 115.000,00 € and the first delivery is a good candidate to be promoted to the production environment (it will depend, but it could be).
  • If any indicator is worse than expected, a new release has to be delivered; the testing cost increases, but the development team is penalized: (100.000,00 - 10.000,00) [Development] + (15.000,00 + 5.000,00) [Testing] = 110.000,00 €.
With this model, the more releases are needed, the lower the development cost and the higher the testing cost, so the total cost should stay roughly constant for the business.
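The arithmetic of this model is easy to capture in a few lines. Here is a sketch of the cost split using the example figures above; the penalty and increment values are the ones from the worked example, not a general rule:

```python
def version_cost(dev_estimate: float, test_ratio: float = 0.15,
                 extra_releases: int = 0, penalty_per_release: float = 10_000.0,
                 test_increment_per_release: float = 5_000.0) -> dict:
    """Split the total version cost between development and testing.

    Each extra release (an SLA indicator missed) moves money from the
    development budget to the testing budget.
    """
    development = dev_estimate - extra_releases * penalty_per_release
    testing = dev_estimate * test_ratio + extra_releases * test_increment_per_release
    return {"development": development, "testing": testing, "total": development + testing}


# First delivery meets both SLAs: 100,000 + 15,000 = 115,000.
print(version_cost(100_000.0))
# One extra release needed: (100,000 - 10,000) + (15,000 + 5,000) = 110,000.
print(version_cost(100_000.0, extra_releases=1))
```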



Monday, August 13, 2012

From specification to implementation

Process specification is hard work that often ends up as a theoretical exercise, with large amounts of money spent and little or no use in the organization.

The main reasons for this situation are:

  • Process goals are aligned with the organization's goals, but they are not supported by the end users, who are the most affected by the new way of working that a new methodology implies.
  • Methodology approaches are so broad and unspecific that they cannot be implemented as-is; they need customization for each organization.
For me, the reasons why a methodology does not fit in an organization are:
  • The people who have to work with it are not involved in the process specification.
  • The process definition exists as documents, but is not implemented in any tool.
  • The managed elements defined by the methodology are not valuable; they only mean more work, without simplifying work or reporting.
In order to avoid these situations and minimize the methodology pitfalls, I recommend starting this work with the following ingredients:
  • A short list of goals: a few processes in the organization, with a list of identified problems and a set of metrics that show how to improve.
  • A short list of users: the process owner and the process end users. They have to be able to define their requirements and validate your work.
  • A corporate tool to implement, document and support the processes. This tool will be the base of the organization's work.