Basic Concepts of Quality Control
Quality control describes the directed use of testing to measure the achievement of a specified standard. Quality control is a formal use of testing. Quality control is a superset of testing, although it is often used synonymously with testing. Roughly, you test to see if something is broken, and with quality control you set limits that say, in effect, if this particular stuff is broken, then whatever you're testing fails.
Yet another way of looking at the difference between testing and quality control is to consider the difference between a test as an event and a test as a part of a system.
Quality Control: The Importance of Test Cases
You must devise scenarios based on expected user behavior, scenarios that describe how a user will interact with the functionality.
Use these scenarios to create Test Cases consisting of the specific steps a user would follow to accomplish each scenario.
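As a rough illustration, here is a minimal Test Case sketch in Python (pytest style) for a hypothetical scenario, "a visitor searches the site and sees results." The URL and the expected page text are placeholders, not part of any real site:

    import requests

    BASE_URL = "https://www.example.com"  # placeholder for the site under test

    def test_search_returns_results():
        # Step 1: the user submits a search from the home page.
        response = requests.get(f"{BASE_URL}/search", params={"q": "widgets"}, timeout=10)
        # Step 2: the results page loads successfully.
        assert response.status_code == 200
        # Step 3: the page shows results rather than an error message.
        assert "No results found" not in response.text

Each assert corresponds to one step of the scenario, so a failure points directly at the step that broke.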
Quality Control: Some Issues to Consider
Quality control can be difficult when you find your testing resources limited or overextended. You will often find it impossible to test everything. You must develop some consistent Test Cases to check the major problem areas, automating tests if reasonable (a sketch of such an automated spot check appears below).
Testing and creating Test Cases is always a learning experience. As you test and refine your Test Cases, you will find a balance between not enough testing and just plain overkill, and between extremely detailed Test Cases and simple spot checks.
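Where automation is reasonable, even a crude script that walks the major problem areas can serve as a consistent spot check. A hedged sketch, with an invented page list:

    import requests

    MAJOR_PAGES = ["/", "/products", "/checkout", "/contact"]  # assumed problem areas

    def smoke_check(base_url="https://www.example.com"):
        failures = []
        for path in MAJOR_PAGES:
            try:
                r = requests.get(base_url + path, timeout=10)
                if r.status_code != 200:
                    failures.append((path, r.status_code))
            except requests.RequestException as exc:
                failures.append((path, str(exc)))
        return failures  # an empty list means the spot check passed

    if __name__ == "__main__":
        for path, problem in smoke_check():
            print(f"FAIL {path}: {problem}")

This is the "simple spot check" end of the spectrum; detailed Test Cases still need a human following the scenario steps.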
Testing, Quality Control and Quality Assurance
Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate for a system to meet its objectives.
Quality Control: A set of activities designed to evaluate a developed work product.
Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)
QA Vs. QC
QA:
Verification
A preventive process
Prepares process and guideline documents
Reviews reports to collect feedback
Conducts periodic reviews of the process
QC:
Validation
A detective process
Implements the process and guideline documents
Prepares reports based on testing
Implements the process
Quality Assurance
QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project.
Quality Assurance makes sure you are doing the right things, the right way.
e.g., are requirements being defined at the proper level of detail?
Quality Control
QC activities focus on finding defects in specific deliverables.
Quality Control makes sure the results of what you've done are what you expected.
e.g., are the defined requirements the right requirements?
Test Suite
A test suite is a set of machines configured as platforms for testing. These machines should represent the client-side environments of the majority of the audience.
A test lab would be a specially equipped and designed facility or space used for testing, specifically usability testing.
Test Suite - Goals
Provide "clean" environments for testing platform/browser compatibility for pages in development and in production, allowing a more objective view of what standard configurations would encounter.
Provide an increase in productivity by providing a means to rapidly test and review prototypes on all common browsers and platforms.
Provide environments for testing network connectivity and performance to the production servers over the open net (as opposed to testing over a "back door" LAN connection). This would duplicate the connections experienced by end-users; a rough timing sketch follows this list.
Provide a "lab" for usability testing. This assumes that the test suite will be located within a space that allows for most of the machines to be in use at the same time, and in a way that allows for some level of observations of the users.
Designing the Test Suite
When designing the test suite, weigh the following factors (a sketch of one way to encode them follows this list):
The browsers most likely to be used by your audience
The platforms most likely to be used by your audience
The ways in which different browsers and platforms interact
The relative importance of certain user profiles
The budget for testing
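One way to encode these factors is a weighted platform/browser matrix, then spend the testing budget on the configurations that cover most of the audience. The audience shares below are invented for illustration; in practice they would come from your site's usage logs:

    TEST_MATRIX = [
        # (platform, browser, share of audience)
        ("Windows", "Chrome", 0.45),
        ("Windows", "Edge", 0.15),
        ("macOS", "Safari", 0.20),
        ("macOS", "Chrome", 0.10),
        ("Linux", "Firefox", 0.05),
    ]

    def configurations_to_cover(matrix, coverage_target=0.90):
        # Pick the most common configurations until the target share of
        # the audience is covered; the rest can wait for budget.
        covered, chosen = 0.0, []
        for platform, browser, share in sorted(matrix, key=lambda row: -row[2]):
            chosen.append((platform, browser))
            covered += share
            if covered >= coverage_target:
                break
        return chosen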
Rating the Importance of Problems
Severity: How bad is the problem?
Priority: How soon should it be fixed?
Severity Guidelines
Severity 1: the widest scope of a problem, with the entire site affected.
infrastructure has failed (a server has crashed, the network is down, etc.)
a functionality critical to the purpose of the website is broken, such as the search or commerce engine on a commerce site
in some cases, a problem interfering with testing might be considered a sev1, if you are in a phase where a deadline hinges on the completion of testing
Severity 2:
a major functionality is broken or misbehaving
one or more pages are missing
a link on a major page is broken
a graphic on a major page is missing
Severity 3:
data transfer problems (like an include file error)
browser inconsistencies, such as table rendering or protocol handling
page formatting problems, including slow pages and graphics
broken links on minor pages
user interface problems (users don't understand which button to click to accomplish an action, or don't understand the navigation in a subsection, etc.)
Severity 4:
display issues, like font inconsistencies or color choice
text issues, like typos, word choice, or grammar mistakes
page layout issues, like alignment or text spacing
Who manages severity assignments?
The quality assurance team should manage severity assignments for logged problems. Since the QA team will tend to log most of the problems, it needs to be careful and honest in its evaluation of severity.
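A minimal sketch of how logged problems might carry a severity rating, using the four levels above (the field names are assumptions, not any particular bug tracker's schema):

    from dataclasses import dataclass
    from enum import IntEnum

    class Severity(IntEnum):
        SEV1 = 1  # entire site affected: infrastructure or critical functionality down
        SEV2 = 2  # major functionality broken, missing pages, broken major links
        SEV3 = 3  # include errors, browser inconsistencies, minor broken links
        SEV4 = 4  # display, text, and layout issues

    @dataclass
    class Problem:
        summary: str
        severity: Severity
        logged_by: str = "qa-team"  # QA manages the severity assignment

    problem = Problem("Commerce engine returns an error on every purchase", Severity.SEV1)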
Priority
Priority describes an assessment of the importance of fixing a problem.
Priority Guidelines
Critical priority: the priority is so high it must be done now. Critical items should be tackled first, because the effects of such a problem cascade down the site's functionality and infrastructure.
High priority: these are problems that are very important, and that are required before the next "big" phase, i.e., they must be solved before launch, or grand opening, or before the news conference, etc. Any problem interfering with a major site functionality is a high priority. Any problem that will make you or your site look stupid or incompetent or untrustworthy is a high priority.
Moderate priority: these are problems like a broken graphic or link on a minor page, or a page that displays badly in some browsers. Moderate problems can usually wait until the more important problems are cleaned up, a common approach during "crunch times".
Low priority: these are display issues affecting a few pages, such as typos or grammatical mistakes, or a minor element that is wrong on many pages.
Who manages priority assignments?
The QA team should not assign priority, for two reasons:
1. the QA team should not be directing the work of the development team, and
2. priority is usually a tool for guiding development and maintenance work. The issues considered in the decision of what gets built (or fixed) when can range far and wide outside of the QA team's focus.
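Once priority has been assigned (by whoever directs the development work, rather than QA), triage is largely a sort: critical items first, ties broken by severity. A sketch with invented sample data:

    from enum import IntEnum

    class Priority(IntEnum):
        CRITICAL = 1  # must be done now
        HIGH = 2      # required before the next "big" phase
        MODERATE = 3  # can wait until bigger problems are cleaned up
        LOW = 4       # minor display or text issues

    backlog = [
        # (summary, priority, severity)
        ("Typo on the About page", Priority.LOW, 4),
        ("Checkout broken in one browser", Priority.HIGH, 2),
        ("Production server down", Priority.CRITICAL, 1),
    ]

    for summary, priority, severity in sorted(backlog, key=lambda p: (p[1], p[2])):
        print(f"[{priority.name}/sev{severity}] {summary}")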