Thursday, November 22, 2007

Software Quality Management

Quality

Conformance to requirements.
Never-ending improvement.
Fitness for use.
Meeting users' composite expectations.

QMS – Software Organization
1) Primary life cycle processes of acquisition, development, operation and maintenance of software.
2) Organizational processes of management, infrastructure, improvement and training.
3) Support life cycle processes of documentation, configuration management, V&V, joint reviews, audits and problem resolution.
4) Requirements regularly change.
- Not always easy to communicate
- Customer needs change
5) Unique product every time.
- Estimation and planning is difficult.
6) Intangible nature of the software
- Intellectual (Design) approach throughout.
- Inspection can be exhaustive.
7) Fast-changing technologies.
8) Documentation in software organization
- QMS documentation (quality manual, procedures, guidelines, etc.)
-Project documentation (software requirements, design documents, code listing, project plans, quality plans, test plans, test cases etc.)
9) The extent of documentation depends on
- Size and type of the organization.
- Complexity and interaction of processes.
- Competence of personnel.
10) Software in response to specified customer requirements.
- Processes that define the customer requirements and track them throughout the software life cycle.
11) Software in anticipation to customer needs
- Processes that define the software capability and constraints and control changes.

Approaches for defining software quality
Transcendent – excellence invariant over time, cost and requirements.
Product based – a function of product attributes.
User based – the user's perception of system performance.
Manufacturing based – compliance with formal product specification.
Value based – ratio of performance to cost.


Software quality
Software quality is conformance to explicitly stated functional and performance requirements, documented development standards, and implicit characteristics that are expected of all professionally developed software.

Software quality Management
A set of planned and systematic activities that ensure that software processes and products conform to requirements, standards and procedures.

Software quality importance
Software systems are increasingly being developed in areas that affect our daily lives.
The implications of failures of such critical software systems range from annoying inconveniences to serious, life-threatening situations.

Importance of quality- causes of poor quality
Specification phase

- Poorly defined system specification.
- Inadequate, incomplete and ambiguous requirements.
- Lack of specific quality objectives.

Development phase
- Constraints of cost, time and manpower.
- Varying levels of skill.
- Poor or inappropriate development facilities.
- Inadequate testing of system components.
- Uncontrolled frequent changes.
- Poor documentation.

Underlying reasons for poor quality
Lack of discipline
Management related problems
Lack of quality objectives
Inconsistent practices.


What is SQA
SQA is a planned and systematic pattern of actions that are required to ensure quality in the system.
- By developing software standards for procedures and products.
- By assessment of conformance to those standards.
- SQA is an umbrella activity that is applied throughout the life cycle phases.

Goal of SQA
To improve software quality by appropriately monitoring both the software and the development process that produces it.

Software Life Cycle
Three generic phases are
1 definition
2 development
3 maintenance

Definition phase
The definition phase focuses on the 'what':
What information is to be processed?
What functions and performances are desired?
What interfaces are to be established?
What design constraints exist?
What validation criteria are required to define a successful system?

Development phase
The development phase focuses on the 'how':
How are data structures to be designed?
How is the software architecture to be designed?
How are procedural details to be implemented?
How will the design be translated into code?
How will testing be performed?
Three specific steps in development phase
- Design
- Coding
- Testing


Maintenance phase
The maintenance phase focuses on change that is associated with
- Error correction
- Adaptation required as the software environment evolves.
- Enhancements brought about by changing customer requirements.
- Reengineering carried out for performance improvement.
- The maintenance phase re-applies the steps of the definition and development phases.

Software Engineering
A systematic approach indicates that software engineering provides methodologies for developing software as close to the scientific method as possible.
Methodologies are repeatable: if applied by different people, similar software will be produced.
The goal of software engineering is to take software development closer to science and away from being an art.
The focus of software engineering is not developing software per se, but methods (for developing software) that can be used across software projects.


Product Assessment
Ways of directly examining the product - static analysis and dynamic analysis
Static analysis
Audits
Inspection
Reviews (from requirement review to maintenance review)
Proofs (Correctness, …)
Symbolic execution
Code analysis
Dynamic analysis
Functional testing
Structural testing (Control flow, data flow)
Special purpose tests
- Mutation test
- Regression test
- Statistical test
Measurement.

Role of SRS
Bridges the communication gap between the client, the user and the developer.
Helps clients understand their own needs.
SRS must correctly define all the software requirements, but no more.
The SRS should not describe any design, verification or project management details except design constraints.
Characteristics of a good software requirement specification
Unambiguous
Complete
Verifiable
Consistent
Modifiable
Traceable
Usable during operations and maintenance phase.
Software requirement specification
An SRS is a specification for a particular software product, program or set of programs that performs certain functions.
The SRS is a means of translating the ideas in the mind of the clients (the input) into a formal document (the output) of the requirement phase.


Quality through Software Processes
Software quality assurance activities include:
- Standards and documentation
- Contract review
- Vendor review
- Project management (Planning & Tracking)
- Reviews and audits
- Software Testing
- Configuration Management
- Release and Delivery
- Software quality training

Acquisition Process
This process consists of the following activities.
- Initiation
- Request for proposal (Tender) preparation
- Supplier monitoring
- Acceptance and completion

Development Process
This process consists of the following activities.
1 process implementation
2 system requirement analysis
3 system architectural design
4 software requirement analysis
5 software architectural design
6 software detailed design
7 software coding and testing
8 software integration
9 software qualification testing
10 system integration
11 system qualification testing
12 software installation
13 software acceptance support

Operation process
This process consists of the following activities.
Process implementation
Operational testing
System operation
User support
Maintenance process
This process consists of the following activities.
Process implementation
Problem and modification analysis
Modification implementation
Maintenance review / acceptance
Migration
Software retirement
Supporting life cycle processes
These processes include
Documentation
Configuration management
Quality assurance
Verification and validation
Joint review
Audit
Problem resolution

Documentation process
This process consists of the following activities
Process implementation
Design and development
Production
maintenance

Configuration management process
This process consists of the following activities
Process implementation
Configuration identification
Configuration control
Configuration status accounting
Configuration evaluation
Release management and delivery

Quality Assurance Process
This process consists of the following activities
Process implementation
Product assurance
Process assurance
Assurance of the quality system

Organization life cycle process
This clause defines the following organizational life cycle processes
Management process
Infrastructure process
Improvement process
Training process

Management process
This process consists of the following activities
Initiation and scope definition
Planning
Execution and control
Review and evaluation
closure

Improvement process
This process consists of the following activities
Process establishment
Process assessment
Process improvement

Training process
This process consists of the following activities
Process implementation
Training material development
Training plan implementation

Measurement
Measurement is the process by which numbers are assigned to attributes of entities in the real world in order to describe them.
An entity is an object (e.g. a person, a room) or an event (e.g. the testing phase) in the real world.
An attribute is a feature or property of an entity. Each software entity may have multiple attributes (e.g. code inspected, number of defects found, duration of the project).
Metrics
Metrics are measurements: collections of data about project activities, resources and deliverables.
Metrics are used to
Estimate projects
Measure project progress
Measure quality
Types of software metrics
Process
Product
Project
Project metrics
Describe the characteristics of the project and its execution
- Schedule (schedule slippage)
- Size (size slippage)
- Effort (effort slippage)
- Number of software developers
- Staffing pattern

Product metrics
Describe the characteristics of the product
Complexity
Performance
Functionality, usability, efficiency, reliability, portability, maintainability
Defect density

Process metrics
Effectiveness of methods and tools
- Effectiveness of defect removal during the development
- Pattern of testing defect arrival
- Response time of the defect fix process
Process metrics can be used to improve software development and maintenance

Software measurement
May apply to
- Final products
- Intermediate products (predictive methods)
May be
- Relative or binary (does it/ does it not exist?)
- Direct or indirect
- Tightly or loosely coupled

Quality factors
The factors that affect software quality can be categorized into two broad categories:
Factors that can be directly measured (e.g. errors, KLOC)
Factors that can be measured only indirectly (e.g. usability, maintainability)
Always identify the quality factors appropriate to the customers, the product and the stakeholders
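As a small illustrative sketch (not from the original notes): directly measured factors such as defect counts and lines of code can be combined into a simple derived metric like defect density. The module names and figures below are invented.

```python
# Hypothetical illustration: defect density (defects per KLOC) computed from
# directly measured factors. Module names and numbers are made up.

def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Return defects per thousand lines of code (KLOC)."""
    kloc = lines_of_code / 1000.0
    return defects_found / kloc

modules = {
    "billing": {"defects": 14, "loc": 8200},
    "reports": {"defects": 3,  "loc": 2100},
}

for name, m in modules.items():
    print(f"{name}: {defect_density(m['defects'], m['loc']):.2f} defects/KLOC")
```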

Software quality factors – operational characteristics
Correctness – does it do what I want?
Reliability – does it do it accurately?
Efficiency – will it run efficiently on my hardware? (time and resource behavior)
Integrity – is it secure?
Usability – is it designed for the user?

Software quality factors – product revision
Maintainability – can I fix it?
Flexibility – can I change it?
Testability – can I test it?

Software quality factors – product transition
Portability – will I be able to use it on another machine?
Reusability – will I be able to reuse some of the software?
Interoperability – will I be able to interface it with another system?

Metrics for grading the software quality factors
Auditability – the ease with which conformance to standards can be checked
Accuracy – the precision of computations and control
Communication commonality – the degree to which standard interfaces are used
Completeness – the degree to which full implementation has been achieved
Conciseness – the compactness of the program in terms of lines of code
Consistency – the use of uniform design and documentation techniques
Data commonality – the use of standard data structures and types
Error tolerance – the damage that occurs when the program encounters an error
Execution efficiency – the run-time performance of the program
Expandability – the degree to which the design can be extended
Generality – the breadth of potential application of program components
Hardware independence – the degree of decoupling from the hardware
Modularity – the functional independence of program components
Operability – the ease of operation of the program
Security – the existence of mechanisms that protect the data and the program
Simplicity – the degree to which the program can be understood without difficulty
Traceability – the ability to trace a component back to requirements

ISO 9126 software quality characteristics
Functionality
- does it satisfy user needs?
Reliability
- can the software maintain its level of performance?
Usability
- how easy is it to use?
Efficiency
- relates to the physical resources used during execution
Maintainability
- relates to the effort needed to make changes to the software
Portability
- how easy can it be moved to a new environment?
External and internal quality
The external quality attributes below map to the internal quality criteria that support them:
- Changeability – modularity, generality, expandability
- Testability – self-descriptiveness, simplicity, modularity, instrumentation
- Portability – modularity, self-descriptiveness, machine independence, software system independence

Software characteristics
Each of the software characteristics is subdivided into the sub characteristics

Sub characteristics of functionality
Suitability
Accuracy
Interoperability
Compliance
Security

Sub characteristics of Reliability
Maturity
Recoverability
Fault tolerance

Sub characteristics of usability
Learnability
Understandability
Operability

Sub characteristics of efficiency
Time behavior
Resource behavior

Sub characteristics of maintainability
Stability
Analyzability
Changeability
Testability

Sub characteristics of portability
Installability
Replaceability
Adaptability

Constructive QA Techniques
Proven principles, tested techniques, best practices and state-of-the-art tools.
Adherence to development standards
Coding standards
Naming conventions
Documentation
Design standards
Life cycle models
Documentation
Requirements
Development environments (CASE tools)

Configuration management
Peopleware

IEEE software engineering standards
ANSI/IEEE 730 – standard for software quality assurance plans
ANSI/IEEE 828 – standard for software configuration management plans
ANSI/IEEE 830 – guide to software requirements specifications
ANSI/IEEE 1028 – standard for software reviews and audits
ANSI/IEEE 1012 – standard for software verification and validation plans
ANSI/IEEE 1074 – standard for developing software life cycle processes

Software Testing Methodologies

Definition of Testing
  • Establishing Confidence that a program does what it is supposed to do.
  • The process of executing a program or system with the intent of finding errors.
  • Detecting specification errors and deviations from the specifications.
  • Any activity aimed at evaluating an attribute or capability of a program or system.
  • The measurement of software quality.
  • Verifying that a system satisfies its specified requirements or identifying the difference between expected and actual results.

Testing

Testing is the process of executing a program (or part of it) with the intention or goal of finding errors.

Testing objectives

The primary role of testing is not the demonstration of correct performance but the exposure of hidden defects.
– G. J. Myers

Why testing

Testing is primarily a validation task.
The longer a defect remains, the more expensive it is to remove.
Areas impacted by a modification should be identified and retested for regression.
Most usability issues surface during testing.

Test phases

1. Establish test objectives
2. Design test cases
3. Write test cases
4. Review test cases
5. Execute the tests
6. Examine test results

Points to ponder

1. Testing is largely a problem of economics.
2. Exhaustive input testing is impossible
3. Each test case should provide maximum yield.
4. Yield is the probability that the test case will expose a previously undetected error.
5. Investment is measured by the time and cost to produce, execute and verify tests and to communicate results.
6. Investment is limited by schedule and budget.
7. The art of test case design is really the art of selecting those test cases with the highest yield.
8. The second most important testing consideration is the sequence of integration.

Test strategies

1. Black box: test to the specification (don't look inside code)
2. White box: test to the code (don't look at the specification)

Testing strategies

1. Black box testing: testing of a system or component whose inputs, outputs and general functions are known, but whose contents or implementation are unknown or irrelevant.

2. Structural testing / white box / glass box testing: testing that takes into account the internal mechanisms of a system or component. Types include branch testing, path testing, etc.

Black box Vs. White box

Black box (Specification based): use the specification document as your basis for testing coverage.
e.g. test every function described in the document.

White box (implementation based): use the code as your basis for testing coverage.
e.g. test every branch of every if-statement.
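A minimal sketch of the contrast, using a made-up classify_age function: the black box cases come only from its specification (child/adult/senior categories), while the white box cases are chosen so that every branch of the code executes.

```python
# Hypothetical function under test: the spec says it returns "child" for ages
# under 18, "senior" for ages 65 and over, and "adult" otherwise.
def classify_age(age: int) -> str:
    if age < 18:
        return "child"
    elif age >= 65:
        return "senior"
    return "adult"

# Black box cases: derived from the specification only.
spec_cases = [(5, "child"), (30, "adult"), (70, "senior")]

# White box cases: chosen so that every branch of the if/elif/else executes.
branch_cases = [(17, "child"), (18, "adult"), (64, "adult"), (65, "senior")]

for age, expected in spec_cases + branch_cases:
    assert classify_age(age) == expected, f"failed for age {age}"
print("all cases passed")
```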

Black box / white box – a comparison

Black box
1. Tests are derived from the functional design specification.
2. Will fail to detect hidden (undocumented) functions.
3. Data-driven testing.
4. Requires exhaustive input testing to detect all errors.

White box
1. Requires knowledge of the internal program structure and code.
2. Will fail to detect missing functions.
3. Logic-driven testing.
4. Requires exhaustive path testing to detect all errors.

Levels of testing

1. Unit testing- done by the developer at module level.
2. Integration testing: conducted by the project team, in parallel with ongoing development work.
3. System testing: conducted by the project team, or by a separate testing team if any.
4. Acceptance testing: conducted by the client, either at the developer's site or at the client's own site.

Black box testing

1. Conducted for integration testing, system testing, acceptance testing
2. Test case design methods
3. Equivalence partitioning method.
4. Boundary value analysis method.
5. Cause effect graph method
6. State transition testing.
7. Use case based testing.
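A rough sketch of the first two design methods, using an invented requirement (a password is valid when its length is 8 to 16 characters): equivalence partitioning picks one representative value per valid and invalid class, while boundary value analysis adds values at and just around the class edges.

```python
# Invented requirement: a password is valid when its length is 8..16 characters.
def is_valid_password(pw: str) -> bool:
    return 8 <= len(pw) <= 16

# Equivalence partitioning: one representative per class
# (invalid-short < 8, valid 8..16, invalid-long > 16).
ep_cases = {5: False, 12: True, 20: False}

# Boundary value analysis: values at and just around each boundary.
bva_cases = {7: False, 8: True, 9: True, 15: True, 16: True, 17: False}

for length, expected in {**ep_cases, **bva_cases}.items():
    assert is_valid_password("x" * length) == expected, f"length {length}"
print("equivalence and boundary cases passed")
```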

Unit testing
1. Testing logical pieces of work: functions, subroutines or logical units.
2. Generally conducted by the developer.
3. Objectives of unit testing may be stated as:
- Does the logic work properly?
- Is all the necessary logic present?
4. Unit test documentation is to be maintained as a permanent record.
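A minimal unit test sketch using Python's standard unittest module; the discount function and its 10%-off rule are invented for illustration, with one test for each stated objective (does the logic work, is all the necessary logic present).

```python
# Hypothetical unit under test: 10% discount on orders of 100 or more.
import unittest

def discount(amount: float) -> float:
    return amount * 0.9 if amount >= 100 else amount

class DiscountTest(unittest.TestCase):
    def test_logic_works(self):              # does the existing logic work?
        self.assertAlmostEqual(discount(200), 180)

    def test_necessary_logic_present(self):  # is the "no discount" path there?
        self.assertEqual(discount(50), 50)
        self.assertAlmostEqual(discount(100), 90)  # boundary of the rule

if __name__ == "__main__":
    unittest.main()   # the recorded run doubles as the unit test documentation
```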

Integration testing

Both integration and testing are carried out in this phase.
Units are combined one at a time and tested until the entire software is integrated.
Integration proceeds as per policy:
1. Top down
2. Bottom up
3. Modified methods
In top down, functionality can be tested much earlier.
In bottom up, functionality is checked only at the end.
In either case, interfaces are tested.

Bottom up testing

Terminal modules are tested in isolation.
The next set of modules to be tested are those that directly call the tested modules, combined with the previously tested terminal modules.
Repeat the process until the top is reached.
Bottom up testing requires a module driver to feed test case input to the interface of the module being tested.
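A small sketch of such a module driver, assuming a hypothetical terminal module compute_tax is the unit under test: the driver stands in for the not-yet-integrated callers and simply feeds test-case input to the module's interface and checks the outputs.

```python
# Hypothetical terminal module being tested bottom-up.
def compute_tax(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

# Module driver: replaces the higher-level callers that do not exist yet,
# feeding test-case input to the module's interface and checking the result.
def driver():
    cases = [((100.0, 0.18), 18.0), ((0.0, 0.18), 0.0), ((50.0, 0.1), 5.0)]
    for (amount, rate), expected in cases:
        actual = compute_tax(amount, rate)
        status = "PASS" if abs(actual - expected) < 1e-9 else "FAIL"
        print(f"{status}: compute_tax({amount}, {rate}) = {actual}, expected {expected}")

if __name__ == "__main__":
    driver()
```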

Advantages

1. Advantageous if major flaws occur at the bottom of the program.
2. Test conditions are easier to create.
3. Test results are easier to view.
Disadvantages
1. The program as an entity does not exist until the last module is added, so design flaws will be detected only at the end.

Top down testing

1. The only module unit tested in isolation is the top module in the program structure.
2. After this, the modules directly called by this module are merged one by one.
3. Repeat the process until all modules have been combined and tested.

A stub module is rarely as simple as a return statement, because the calling module usually expects output parameters from the called module.
The usual approach is to wire in a fixed output that is always returned from the stub.
In some cases, the stub module may turn out to be bigger than the module it simulates.
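A small sketch of that stub approach, with invented module names: the top-level process_order is tested first, while the not-yet-integrated pricing module is replaced by a stub wired to return a fixed output.

```python
# Top-level module under test (hypothetical).
def process_order(item: str, qty: int, price_lookup) -> float:
    unit_price = price_lookup(item)      # call into a lower-level module
    return unit_price * qty

# Stub for the lower-level pricing module: not just a bare return, because the
# caller expects an output value, so a fixed result is wired in.
def price_lookup_stub(item: str) -> float:
    return 10.0                          # fixed output, always returned

# Exercise the top module with the stub in place of the real module.
assert process_order("widget", 3, price_lookup_stub) == 30.0
print("top-level module tested against stub")
```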

Advantages

It combines module testing, integration testing and a small amount of external function testing.
Test cases can be written as external input once the I/O modules are inserted.
Feasibility or design flaws can be detected early in the project.
No need for module drivers.

Disadvantages

Stubs may not be very simple

Modified top down

Each module is unit tested in isolation before it is integrated using the top down method.
Requires stubs and drivers for each module.

Sandwich testing

A compromise candidate.
Top down and bottom up testing are started simultaneously.
Integration starts from both the top and the bottom and meets somewhere in the middle.
The meeting point is determined by examining the structure of the program.
Suitable for large programs such as operating systems.

Modified sandwich testing

Lower levels are tested in a bottom up fashion.
Modules in the upper levels are unit tested before integration using the top down method.

Regression testing

Testing a defect fix.
Testing that a defect fix has not caused other errors.
Both are done by regression testing: running a set of test cases that were run previously (capture and playback tools can be used).
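A minimal sketch of the idea with an invented saved suite: after a defect fix, the previously run cases are re-executed to confirm the fix works and that nothing else broke.

```python
# Hypothetical function that has just had a defect fix applied.
def parse_quantity(text: str) -> int:
    return int(text.strip())             # fix: leading/trailing spaces now handled

# Previously run test cases, kept as a reusable suite (capture-and-playback
# tools serve the same purpose for GUI-level tests).
regression_suite = [
    ("5", 5),        # original behaviour, must still pass
    ("12", 12),      # original behaviour, must still pass
    (" 7 ", 7),      # the case that exposed the defect
]

failures = [(inp, exp) for inp, exp in regression_suite if parse_quantity(inp) != exp]
print("regression clean" if not failures else f"regressions: {failures}")
```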

System testing

At this stage, both the target hardware and the software are available.
Test cases are derived from the SRS using black box methods.
System testing has to be taken up only with the target hardware.
In addition to functionality, other aspects such as usability, performance and recovery are tested.
System testing is not a process of testing the functions of the complete system, which has already been achieved by function testing.
System testing is the process of attempting to demonstrate how the program does not meet its objectives.

Facility testing

Test – determine if every facility mentioned in the objectives has been implemented.
Procedure – scan the objectives line by line and compare with the user manual.
Examples – the application prompts the user to select one of the alternatives; the user should be able to specify a range of values.

Volume testing

Test - subject the program to heavy volumes of data
Procedure: Check fields, records and files to see if their sizes can accommodate all expected data (use an automated tool to create records)
Example: large volumes of data in client/ server applications

Load / performance testing

Test – load the system with activity that simulates legitimate user activity. Statistics are collected to predict what performance and response times users are likely to get.
Procedure – conduct the load test by creating virtual users. Use a load test tool and create typical scenarios to simulate load. Use think time to simulate authentic user behavior.

Stress testing

Test – subject the application under test to peak volumes of data in a short time. This is an extremely important test for systems that normally operate below capacity but may be severely stressed during peak demand. It is a type of performance testing.
Procedure – conduct the stress test by creating virtual users. Use a test tool and create typical scenarios to simulate. Do not use think times, as the idea is to exercise the system to the fullest extent.
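A rough sketch of the virtual-user idea using only the Python standard library (the place_booking call, the user counts and the timings are invented): the load run inserts a think time between requests, while the stress run omits it to push the system as hard as possible.

```python
# Hypothetical load/stress driver using threads as "virtual users".
import threading, time, random

def place_booking(user_id: int) -> None:
    time.sleep(0.01)                     # stands in for a real transaction

def virtual_user(user_id: int, requests: int, think_time: float) -> None:
    for _ in range(requests):
        place_booking(user_id)
        if think_time:                   # load test: simulate human pauses
            time.sleep(random.uniform(0, think_time))
        # stress test: think_time=0, so requests arrive back to back

def run(users: int, think_time: float) -> None:
    threads = [threading.Thread(target=virtual_user, args=(u, 5, think_time))
               for u in range(users)]
    start = time.time()
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"{users} virtual users finished in {time.time() - start:.2f}s")

run(users=20, think_time=0.5)   # load test with think time
run(users=20, think_time=0.0)   # stress test, no think time
```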

What must be in place before load test/ stress test?

Hardware in terms of the DB server and application server must be functional.
The network must be in place, or the statistics obtained will not be correct.
The application must be ready in terms of core DB functionality such as inserting records, updating records or preparing major reports.
Test scenarios must be ready.
A realistic test DB must be created.

Usability testing

Check for human factor problems
- Are outputs meaningful?
- Are error diagnostics straightforward?
- Does the GUI conform to conventions of syntax, format, style and abbreviations?
-Is it easy to use?

Usability

1. It should not
- Annoy the intended user in function or speed
- Take control from the user without indicating when it will be returned
2. It should
- Provide on-line help or a user manual
- Be consistent in its function and overall design

Security testing

Devise test cases that subvert the program's security checks:
1. Obtain passwords
2. Access idle terminals
3. Imitate valid users
4. Guess passwords
5. Check permissions of different user groups / users
6. Check database security
7. Create more users than allowed in a user group
8. Delete user groups like supervisor / admin
9. Rename user groups like supervisor / admin

Storage testing

Determine the amount of main and secondary storage required by the program.
Determine the capacity of the system to store transaction data on a disk or in other files, e.g. can ten thousand records of 512 bytes fit on a single flexible disk?
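As a quick worked check of the example figure (the diskette capacity used here, 1,474,560 bytes for a standard 3.5-inch high-density diskette, is an added assumption): ten thousand 512-byte records need about 5 MB, so they would not fit on a single diskette.

```python
# Quick capacity check for the example above; the diskette capacity is an
# assumption (a standard 3.5" high-density diskette holds 1,474,560 bytes).
records, record_size = 10_000, 512            # bytes per record
needed = records * record_size                # 5,120,000 bytes, about 4.9 MB
diskette = 1_474_560
print(f"need {needed:,} bytes, diskette holds {diskette:,} bytes,"
      f" fits: {needed <= diskette}")
```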

Recovery testing

Determines the ability of the user to recover data or restart after a failure.
Verifies both the recovery process and the components of that process.
Examples of recovery testing:
-Loss of input capability
-Loss of communication lines
-Loss of database integrity
-Application system failure
-Operator mistake

Recovery ability check

- Adequate backup data is preserved
- Backup is stored in a secure location
- Recovery procedures are documented
- Recovery personnel are assigned and trained
- Procedures for performing the manual aspects of recovery are adequate

Installability testing

This tests the installation procedure on:
-Clean system
-Over a previous version of itself
-Uninstall it completely
-Check for auto play functionality
-Check custom install, partial install, add install if any
-Check online registration if any
-Check installation of third party components

Acceptance testing

Final stage before handing over to the customer
Usually carried out by the customer
Test cases executed with actual data
Generally a subset of the tests conducted during system testing is executed

Alpha testing

Tested at the developer's site by the customer
Developer “looks over shoulder” and records errors and usage problems
Test conducted in a controlled environment

Beta testing

Used when a formal acceptance test of the product by every customer is impossible.
Beta testing is conducted at one or more customer sites by end users of the software.
The live application environment cannot be controlled by the developer.
The customer records all problems encountered and reports them to the developer at regular intervals.

Defect classification- Origin

Requirements defect
Design defect
Coding defect
Documentation defect
Bad fixes

What is a test tool?

Software that aids in planning, developing and executing the testing process for another software package, with the intention of detecting, tracking and reporting errors.
They aid in setting up an unsupervised test facility for software testing.

Advantages of software testing tools

Testing is formalized
Testing process is automatically documented
Test plans can be reused
Defect tracking is systematic
Efficient, because test scripts developed for one build can be reused in subsequent builds

Types of testing tools

Test planning, management and error tracking
Reviews and inspections
Test generation
Test execution

Tuesday, November 20, 2007

Quality Control

Basic Concepts of Quality Control

Quality control describes the directed use of testing to measure the achievement of a specified standard. Quality control is a formal use of testing. Quality control is a superset of testing, although it is often used synonymously with testing. Roughly, you test to see if something is broken, and with quality control you set limits that say, in effect, if this particular stuff is broken then whatever you're testing fails.

Yet another way of looking at the difference between testing and quality control is to consider the difference between a test as an event and a test as a part of a system.


Quality Control – The Importance of Test Cases

You must devise scenarios based on expected user behavior, scenarios that describe how a user will interact with the functionality.
Use these scenarios to create Test Cases consisting of the specific steps a user would follow to accomplish these scenarios.


Quality Control – Some Issues to Consider

Quality control can be difficult when you find your testing resources limited or overextended. You will often find it impossible to test everything. You must develop some consistent Test Cases to check the major problem areas, automating tests if reasonable.
Testing and creating Test Cases is always a learning experience. As you test and refine your Test Cases you will find a balance between not enough testing and just plain overkill, and between extremely detailed Test Cases and simple spot checks.


Testing, Quality Control and Quality Assurance

Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
Quality Control: A set of activities designed to evaluate a developed work product.
Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)


QA Vs. QC

QA

Verification
Prevention-oriented process
It prepares process and guideline documents
It reviews reports to collect feedback
Periodic reviews of the process

QC

Validation
Detection-oriented process
It implements these process and guideline documents
It prepares reports based on testing
It implements the process


Quality Assurance
QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project.
Quality Assurance makes sure you are doing the right things, the right way.
e.g., are requirements being defined at the proper level of detail?

Quality Control

QC activities focus on finding defects in specific deliverables
Quality Control makes sure the results of what you've done are what you expected
e.g., are the defined requirements the right requirements?


Test Suite

A test suite is a set of machines configured as platforms for testing. These machines should represent the client-side environments of the majority of the audience.
A test lab would be a specially equipped and designed facility or space used for testing, specifically usability testing.

Test Suite - Goals

Provide "clean" environments for testing platform/browser compatibility for pages in development and in production, allowing a more objective view of what standard configurations would encounter.

Provide an increase in productivity by providing a means to rapidly test and review prototypes on all common browsers and platforms.

Provide environments for testing network connectivity and performance to the production servers over the open net (as opposed to testing over a "back door" LAN connection). This would duplicate the connections experienced by end-users.

Provide a "lab" for usability testing. This assumes that the test suite will be located within a space that allows for most of the machines to be in use at the same time, and in a way that allows for some level of observations of the users.


Designing the Test Suite


The browsers most likely to be used by your audience
The platforms most likely to be used by your audience
The ways in which different browsers and platforms interact
The relative importance of certain user profiles
The budget for testing


Rating the Importance of Problems

Severity: how bad is the problem?
Priority: how soon should it be fixed?


Severity Guidelines

Severity 1: the widest scope of a problem, with the entire site affected.
infrastructure has failed (a server has crashed, the network is down, etc.)
a functionality critical to the purpose of the website is broken, such as the search or commerce engine on a commerce site
in some cases, a problem interfering with testing might be considered a sev1, if you are in a phase where a deadline hinges on the completion of testing

Severity 2:
a major functionality is broken or misbehaving
one or more pages are missing
a link on a major page is broken
a graphic on a major page is missing

Severity 3:
data transfer problems (like an include file error)
browser inconsistencies, such as table rendering or protocol handling
page formatting problems, including slow pages and graphics
broken links on minor pages
user interface problems (users don't understand which button to click to accomplish an action, or don't understand the navigation in a subsection, etc.)

Severity 4:
display issues, like font inconsistencies or color choice
text issues, like typos, word choice, or grammar mistakes
page layout issues, like alignment or text spacing


Who manages severity assignments?


The quality assurance team should manage severity assignments for logged problems. The QA team will tend to log most of the problems, so they need to be careful and honest in their evaluation of severity.

Priority

Priority describes an assessment of the importance of a problem

Priority Guidelines

Critical priority: the priority is so high it must be done now. Critical items should be tackled first, because the effects of such a problem cascade down the site's functionality and infrastructure.

High priority: these are problems that are very important, and that are required before the next "big" phase, i.e., they must be solved before launch, or grand opening, or before the news conference, etc. Any problem interfering with a major site functionality is a high priority. Any problem that will make you or your site look stupid or incompetent or untrustworthy is a high priority.

Moderate priority: these are problems like a broken graphic or link on a minor page, or a page that displays badly in some browsers. Moderate problems can usually wait until the more important problems are cleaned up, a common approach during "crunch times".

Low priority: these are display issues affecting a few pages, such as typos or grammatical mistakes, or a minor element that is wrong on many pages.
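As a small illustrative sketch (the field names and values are invented, not tied to any particular tracking tool), a defect record can carry the two ratings separately, since severity is assigned by the QA team while priority is assigned outside it.

```python
# Hypothetical defect record carrying separate severity and priority ratings.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):       # how bad the problem is (assigned by the QA team)
    SEV1 = 1
    SEV2 = 2
    SEV3 = 3
    SEV4 = 4

class Priority(Enum):       # how soon it should be fixed (assigned outside QA)
    CRITICAL = 1
    HIGH = 2
    MODERATE = 3
    LOW = 4

@dataclass
class Defect:
    summary: str
    severity: Severity
    priority: Priority

d = Defect("search engine returns no results", Severity.SEV1, Priority.CRITICAL)
print(d)
```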


Who manages priority assignments?


The QA team should not assign priority, for two reasons:
1. Priority is usually a tool for guiding development and maintenance work.
2. The issues considered in the decision of what gets built (or fixed) and when can range far and wide outside of the QA team's focus.




Monday, November 19, 2007

The Test Plan Workflow

Developing a clear and concise test plan is an essential requirement for successful application testing. A good test plan enables you to assess the quality of your application at any point in the testing process.

This section describes how you develop a test plan using the Test Plan module. Developing a test plan consists of the following:

Define Testing Strategy

Define Test Subjects

Define Tests

Create Coverage

Design Test Steps

Automate Test

Analyze Test Plan


Defining Testing Strategy

Outline a strategy for achieving your testing requirements, as defined in the Requirements module. Ask yourself two basic questions:

How should you test your application?
  • Which testing techniques will you use (stress tests, security tests, performance and load tests, etc.)?
  • How will you handle defects (severity classification, authorization to open and close defects, etc.)?

What resources do you require?

  • What resources do you require in order to test (personnel, hardware, etc.)?
  • When will the various tasks be completed?

For example, consider a flight reservation application that lets you manage flight scheduling, passenger bookings, and ticket sales. Testing will require designing both manual and automated tests. You could assign testing personnel with programming experience the task of designing automated tests, while non-programmers could design manual tests.

Defining Test Subjects

Consider the hierarchical relationship of the functions in your application. Divide the functions into subjects and build a test plan tree representing your application's functionality.
The test plan tree is a graphical representation of your test plan. It is a hierarchical list of tests organized according to topic, which describes the set of tests you will implement in order to meet your quality requirements. For example, the flight reservation application could require that you include Flight Finder, Book Flight, Flight Confirmation, and Flight Cost as test subjects.

Planning Tests

Plan tests for each subject in your test plan tree. Decide which types of tests to create for each subject, such as sanity level tests or regression tests. Then create the tests and assign them to a branch of the test plan tree. For example, in the flight reservation application, you could include the following tests under the subject Flight Finder: Airline Preference, Departing and Arriving Locations, Departing Date, Find Flight, Flight Time Preference, and Number of Passengers.
You can associate a test with specific defects. This is useful, for example, when a new test is created specifically for a known defect. By creating an association, you can determine if the test should be run based on the status of the defect.

Creating Requirements Coverage

Link each test in the test plan tree with a requirement or requirements in the requirements tree. By defining requirements coverage for a test, you can keep track of the relationship between the tests in your test plan and your original testing requirements. For example, in the flight reservation application, the tests in the subject Registration cover the requirement topic Customer Personal Information.
In addition, because tests are associated with defects, test coverage provides complete traceability from requirements specification to defect tracking.

Designing Test Steps

Design the tests in your test plan tree. Create test steps describing the operations to perform and the expected results. After you define the test steps, decide whether to perform the test manually or automate it.
For manual tests you define steps, execute them on your application, and record the results of each one. Use manual tests in cases where the test requires a response by the tester. Manual tests include usability tests, one-time tests, tests that need to be run immediately, tests requiring knowledge of the application, and tests without predictable results.
For example, in the flight reservation application, tests that check if a dialog box is user-friendly require user response. Therefore, you could make these tests manual.

Automating Tests

Automating a test allows unattended execution of the test at high speed. It also makes the test reusable and repeatable. For example, you automate functional, benchmark, unit, stress and load tests, as well as tests requiring detailed information about applications.
After designing test steps, you can decide which tests to automate. Factors influencing test automation include frequency of execution, volume of data input, length of execution time, and complexity.
For automated tests, you can first design test steps and automate them by generating a test script. The test script can be WinRunner, QuickTest Professional, QuickTest Professional for MySAP.com Windows Client, LoadRunner, or Visual API-XP.
For example, in the flight reservation application, you can automate a test that checks whether the login mechanism works. After adding test steps, you create a test script. Then, using WinRunner, you complete the automated test script.
You can also create automated system tests that provide system information for a machine, capture a desktop image, or restart a computer.

Analyzing Your Test Plan

Review your test plan to determine how well it meets the goals that you defined at the beginning of the testing process. Then, analyze your test plan by generating reports and graphs.


For example, you can create a report that displays design step data for each test in a test plan tree. You can then use this report to help you determine your test design priorities.
In order to best ensure success of the testing process, it is recommended that you analyze your test plan throughout the testing process. Review the plan, and determine whether or not it matches your testing goals. Make adjustments to your test plan accordingly.

The Requirements Specification Workflow

You begin the application testing process by specifying testing requirements. Requirements describe in detail what needs to be tested in your application and provide the test team with the foundation on which the entire testing process is based.
By defining requirements, you can plan and manage tests that are more focused on business needs. Requirements are then linked to tests and defects to provide complete traceability and aid the decision-making process.
This section describes how you use the Requirements module to specify testing requirements. The requirements specification workflow consists of the following:
Define the Testing Scope
Create Requirements
Detail Requirements
Analyze Requirements



Defining the Testing Scope

The test team begins the testing process by gathering all available documentation on the application under test, such as marketing and business requirements documents, system requirements specifications, and design documents.
Use these documents to obtain a thorough understanding of the application under test and determine your testing scope—test goals, objectives, and strategies.
Ask the following questions when determining your testing scope:
What is the main purpose and direction of the application?
What are the major features of the application?
What is the relative importance of each element in the application functionality?
What are the critical or high-risk functions of the application?
What are your testing priorities?
Do your customers/end-users agree with your testing priorities?
What are your overall quality goals?

Creating the Testing Requirements Outline

Quality Assurance managers use the testing scope to determine the overall testing requirements for the application under test. They define requirement topics and assign them to the QA testers in the test team. Each QA tester uses Quality Center to record the requirement topics for which they are responsible.
Requirement topics are recorded in the Requirements module by creating a requirements tree. The requirements tree is a graphical representation of your requirements specification, displaying the hierarchical relationship between different requirements.
For example, consider a flight reservation application that lets you manage flight scheduling, passenger bookings, and ticket sales. The QA manager may define your major testing requirements as: Application Security, Application Client System, Application Usability, Application Performance, Application Reliability, Profile Management, Booking System, Flights Reservation Service and Reservations Management.

Defining Requirements

For each requirement topic, a QA tester creates a list of detailed testing requirements in the requirements tree. For example, the requirement topic Application Security may be broken down into more detailed security requirements.
Each requirement in the tree is described in detail and can include any relevant attachments. The QA tester assigns the requirement a priority level, which is taken into consideration when the test team creates the test plan.

Analyzing your Requirements Specification
QA managers review the requirements, ensuring that they meet the testing scope defined earlier. They assign the requirement a Reviewed status once it is approved.
To help review the requirements, you can generate reports and graphs.

You can then use the requirements as a basis for your test plan. The tests you create during the test plan phase should cover these requirements. These tests are also associated with defects, thereby providing complete traceability throughout the testing process.

Test Planning

Test Planning contains the following sections:
The Test Plan Workflow
The Test Plan Module at a Glance
Developing the Test Plan Tree
Linking Tests to Requirements
Building Tests
Creating Automated Tests
Working with System Tests

Test Execution

Test Execution contains the following sections:
The Test Lab Workflow
The Test Lab Module at a Glance
Creating Test Sets
Scheduling Test Runs
Running Tests Manually
Running Tests Automatically
Viewing Test Results

The Test Lab Workflow

As your application constantly changes, you run the manual and automated tests in your project in order to locate defects and assess quality.
This section describes how you run tests using the Quality Center Test Lab module. Executing tests consists of the following stages:

Creating Test Sets

Start by creating test sets and choosing which tests to include in each set. A test set is a group of tests in a Quality Center project designed to achieve specific testing goals. In the sample Mercury Tours application, for example, you could create a set of sanity tests that checks the basic functionality of the application. You could include tests that check the login mechanism, and tests that check the flight booking mechanism.

Scheduling Test Runs

Quality Center enables you to control the execution of tests in a test set. You can set conditions, and schedule the date and time for executing your tests. You can also set the sequence in which to execute the tests. For example, you can determine that you want to run test2 only after test1 has finished, and run test3 only if test1 passed.
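A rough sketch of that scheduling logic outside any particular tool (Quality Center configures these conditions through its own UI; the test functions and results below are invented): test2 runs only after test1 has finished, and test3 runs only if test1 passed.

```python
# Hypothetical scheduler honouring execution conditions between tests.
def test1(): return "passed"
def test2(): return "passed"
def test3(): return "passed"

# (test, run_only_if) pairs: test2 needs test1 finished, test3 needs test1 passed.
schedule = [
    (test1, lambda results: True),
    (test2, lambda results: "test1" in results),                 # after test1 finished
    (test3, lambda results: results.get("test1") == "passed"),   # only if test1 passed
]

results = {}
for test, condition in schedule:
    results[test.__name__] = test() if condition(results) else "skipped"
print(results)
```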

Running Tests Manually

Once you have defined test sets, you can begin executing the tests. When you run a test manually, you execute the test steps you defined in test planning. You pass or fail each step, depending on whether the application's actual results match the expected output.
For example, suppose you are testing the process of booking a flight in the sample Mercury Tours application. You open the application, create a new order, and book a flight, following the instructions detailed by the test steps.

Running Tests Automatically

Once you have defined test sets, you can begin executing the tests. You can select all the tests in a test set, or specific tests. Your selection can include both automated and manual tests.
When you run an automated test, the selected testing tool opens automatically, runs the test, and exports the test results to Quality Center. When you run a manual test, an e-mail is sent to a designated tester, requesting him or her to run the manual test.

You can also run an automated system test to provide system information, capture a desktop image, or restart a computer.

Analyzing Test Results

Following a test run, you analyze test results. Your goal is to identify failed steps and to determine whether a defect has been detected in your application, or if the expected results of your test need to be updated. You can validate test results regularly by viewing run data and by generating reports and graphs.


The Defect Tracking Workflow

Locating and repairing application defects efficiently is essential to the development process. Defects can be detected and added to your Quality Center project by users during all stages of the testing process. Using the Quality Center Defects module, you can report design flaws in your application, and track data derived from defect records.
This section describes how you track defects. This includes adding defects, determining repair priorities, repairing open defects, testing a new build of the application, and analyzing defect data.


Adding Defects

When you find a defect in your application, you submit a defect to the Quality Center project. The project stores defect information that can be accessed by authorized users, such as members of the development, quality assurance, and support teams.
For example, suppose you are testing the Mercury Tours application. You just ran a test set that checks the billing information and one of the test runs revealed a defect when entering expiration dates for credit card numbers. You can submit a defect to the project. Note that you can associate this new defect with the test you ran for future reference.
You can also view, update, and analyze defects in the project. For information on adding defects, see Adding and Tracking Defects.

Reviewing New Defects

Review all new defects in the project and decide which ones to fix. This task is usually performed by the quality assurance or project manager. Change the status of a new defect to Open, and assign it to a member of the development team. You can also locate similar defects. If duplicate defects appear in the project, change their status to either Closed or Rejected, or delete them from the project.

Repairing Open Defects

Fix the Open defects. This involves identifying the cause of the defects, and modifying and rebuilding the application. These tasks are performed by application developers. When a defect is repaired, assign it the status Fixed.
For example, suppose the defect detected when entering expiration dates for credit card numbers was repaired in a new application build. You would update the defect status from Open to Fixed.

Testing a New Application Build

Run tests on the new build of the application. If a defect does not recur, assign it the status Closed. If a defect is detected again, assign it the status Reopen, and return to the previous stage (see Repairing Open Defects). This task is usually performed by the quality assurance or project manager.

Analyzing Defect Data

View data from defect reports to see how many defects were repaired, and how many still remain open. As you work, you can save settings that are helpful in the defect-tracking process, and reload them as needed.
Reports and graphs enable you to analyze the progress of defect repairs, and view how long defects have been residing in a project. This helps you determine when the application can be released.

About Adding and Tracking Defects

Defect records inform members of the application development and quality assurance teams of new defects discovered by other members. By sharing defect information, both the application development and defect repair processes are faster, more efficient, and more comprehensive. As you monitor the progress of defect repair, you update the information in your Quality Center project.
Suppose you detect a defect in the Mercury Tours application. When you initially report the defect in Quality Center, by default it is assigned the status New. A quality assurance or project manager reviews the defect, determines a repair priority, changes its status to Open, and assigns it to a member of the development team. A developer repairs the defect and assigns it the status Fixed. You retest the application, making sure that the defect does not recur. The quality assurance or project manager determines that the defect is actually repaired and assigns it the status Closed.

Adding New Defects

You can add a new defect to a Quality Center project at any stage of the testing process.
To add a new defect:
In the Defects module, click the Add Defect button. Alternatively, click the Add Defect button in the Quality Center main toolbar. The Add Defect dialog box opens.
Enter the relevant defect details. Note that a required field is mandatory and is displayed in red.

To clear the data in the Add Defect dialog box, click the Clear button.
You can add an attachment to your defect:
Click the Attach File button to attach a text file.
Click the Attach URL button to attach a URL.
Click the Attach Screen Capture button to capture and attach an image.
Click the Attach SysInfo button to attach information about your computer.
Click the Attach Clipboard Content button to attach an image from the Clipboard.
To eliminate duplicate or highly similar defects, you can:
Click the Find Similar Defects button to conduct a search for similar defects based on keywords from the Summary and Description fields.
Click the Find Similar Defects arrow and choose Find Similar Text to search for similar defects by specifying a text string.
You can check the spelling in the dialog box:
Click the Check Spelling button to check the spelling for the selected word or text box. If there are no errors, a confirmation message opens. If errors are found, the Spelling dialog box opens and displays the word together with replacement suggestions.
Click the Spelling Options button to open the Spelling Options dialog box, enabling you to configure the way Quality Center checks spelling.
Click the Thesaurus button to open the Thesaurus dialog box and display a synonym, antonym, or related word for the selected word. You can replace the selected word or look up new words.
Click the Submit button to add the defect to the project. Quality Center assigns a Defect ID to the new defect.
Click Close.
