Developing a clear and concise test plan is an essential requirement for successful application testing. A good test plan enables you to assess the quality of your application at any point in the testing process.
This section describes how you develop a test plan using the Test Plan module. Developing a test plan consists of the following stages:
Defining a Testing Strategy
Defining Test Subjects
Planning Tests
Creating Requirements Coverage
Designing Test Steps
Automating Tests
Analyzing Your Test Plan
Defining a Testing Strategy
Outline a strategy for achieving your testing requirements, as defined in the Requirements module. Ask yourself two basic questions:
How should you test your application?
- Which testing techniques will you use (stress tests, security tests, performance and load tests, etc.)?
- How will you handle defects (severity classification, authorization to open and close defects, etc.)?
What resources do you require?
- What resources do you require in order to test (personnel, hardware, etc.)?
- When will the various tasks be completed?
For example, consider a flight reservation application that lets you manage flight scheduling, passenger bookings, and ticket sales. Testing will require designing both manual and automated tests. You could assign testing personnel with programming experience the task of designing automated tests, while non-programmers could design manual tests.
Defining Test Subjects
Consider the hierarchical relationship of the functions in your application. Divide the functions into subjects and build a test plan tree representing your application's functionality.
The test plan tree is a graphical representation of your test plan. It is a hierarchical list of tests organized according to topic, which describes the set of tests you will implement in order to meet your quality requirements. For example, the flight reservation application could require that you include Flight Finder, Book Flight, Flight Confirmation, and Flight Cost as test subjects.
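If it helps to picture the structure, here is a minimal sketch of such a tree in plain Python. This is an illustration only, not a Quality Center interface; the Subject class and its fields are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Subject:
    """A folder in the test plan tree; holds tests and child subjects."""
    name: str
    tests: list[str] = field(default_factory=list)
    children: list["Subject"] = field(default_factory=list)

def print_tree(subject: Subject, indent: int = 0) -> None:
    """Walk the tree depth-first, printing each subject and its tests."""
    print(" " * indent + subject.name)
    for test in subject.tests:
        print(" " * (indent + 2) + "- " + test)
    for child in subject.children:
        print_tree(child, indent + 2)

# The flight reservation subjects from the example above.
root = Subject("Flight Reservation", children=[
    Subject("Flight Finder", tests=["Airline Preference", "Find Flight"]),
    Subject("Book Flight"),
    Subject("Flight Confirmation"),
    Subject("Flight Cost"),
])
print_tree(root)
```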
Planning Tests
Plan tests for each subject in your test plan tree. Decide which types of tests to create for each subject, such as sanity level tests or regression tests. Then create the tests and assign them to a branch of the test plan tree. For example, in the flight reservation application, you could include the following tests under the subject Flight Finder: Airline Preference, Departing and Arriving Locations, Departing Date, Find Flight, Flight Time Preference, and Number of Passengers.
You can associate a test with specific defects. This is useful, for example, when a new test is created specifically for a known defect. By creating an association, you can determine if the test should be run based on the status of the defect.
Creating Requirements Coverage
Link each test in the test plan tree to one or more requirements in the requirements tree. By defining requirements coverage for a test, you can keep track of the relationship between the tests in your test plan and your original testing requirements. For example, in the flight reservation application, the tests in the subject Registration cover the requirement topic Customer Personal Information.
In addition, because tests are associated with defects, test coverage provides complete traceability from requirements specification to defect tracking.
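As a loose illustration of what these links buy you, the following plain-Python sketch traces a requirement through its covering tests to any associated defects. The data and names are invented; nothing here reflects a Quality Center API:

```python
# Invented link tables: which tests cover which requirements,
# and which defects are associated with which tests.
coverage = {
    "Customer Personal Information": [
        "Registration - Mandatory Fields",
        "Registration - Password Rules",
    ],
}
defect_links = {
    "Registration - Password Rules": ["DEF-101"],
}

def trace(requirement: str) -> None:
    """Follow a requirement through its covering tests to linked defects."""
    for test in coverage.get(requirement, []):
        defects = defect_links.get(test, []) or "no defects"
        print(f"{requirement} -> {test} -> {defects}")

trace("Customer Personal Information")
```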
Designing Test Steps
Design the tests in your test plan tree. Create test steps describing the operations to perform and the expected results. After you define the test steps, decide whether to perform the test manually or automate it.
For manual tests, you define steps, execute them on your application, and record the results of each one. Use manual tests in cases where the test requires a response by the tester, such as usability tests, one-time tests, tests that must be run immediately, tests requiring knowledge of the application, and tests without predictable results.
For example, in the flight reservation application, tests that check if a dialog box is user-friendly require user response. Therefore, you could make these tests manual.
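A design step is essentially an operation paired with an expected result. The following is a minimal sketch of that pairing in plain Python; the DesignStep class and its fields are invented for illustration, since Quality Center stores steps through its own forms:

```python
from dataclasses import dataclass

@dataclass
class DesignStep:
    description: str   # the operation the tester performs
    expected: str      # the result that should be observed

login_steps = [
    DesignStep("Open the Login dialog box.",
               "The dialog box is displayed."),
    DesignStep("Enter a valid agent name and password, then click OK.",
               "The main application window opens."),
]
for number, step in enumerate(login_steps, start=1):
    print(f"Step {number}: {step.description} Expected: {step.expected}")
```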
Automating Tests
Automating a test allows unattended execution at high speed, and makes the test reusable and repeatable. For example, you can automate functional, benchmark, unit, stress, and load tests, as well as tests requiring detailed information about applications.
After designing test steps, you can decide which tests to automate. Factors influencing test automation include frequency of execution, volume of data input, length of execution time, and complexity.
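These factors can be weighed quite mechanically. The following is a rough, invented scoring heuristic sketched in Python; Quality Center does not compute such a score for you, and the weights here are arbitrary:

```python
def automation_score(runs_per_release: int, data_rows: int,
                     minutes_per_run: int, needs_human_judgment: bool) -> int:
    """Crude score: frequent, data-heavy, long-running tests gain points;
    tests needing human judgment (usability checks, say) lose them."""
    score = 0
    if runs_per_release >= 5:
        score += 2   # run often: automation pays for itself quickly
    if data_rows >= 20:
        score += 2   # large input volume: tedious and error-prone by hand
    if minutes_per_run >= 30:
        score += 1   # long executions tie up testers
    if needs_human_judgment:
        score -= 3   # usability-style tests are better left manual
    return score

# A login check: run every build, little data, quick, fully deterministic.
print(automation_score(runs_per_release=10, data_rows=5,
                       minutes_per_run=2, needs_human_judgment=False))
```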
To automate a test, you first design its test steps and then generate a test script from them. The script can be a WinRunner, QuickTest Professional, QuickTest Professional for mySAP.com Windows Client, LoadRunner, or Visual API-XP script.
For example, in the flight reservation application, you can automate a test that checks whether the login mechanism works. After adding test steps, you create a test script. Then, using WinRunner, you complete the automated test script.
You can also create automated system tests that provide system information for a machine, capture a desktop image, or restart a computer.
Analyzing Your Test Plan
Review your test plan to determine how well it meets the goals that you defined at the beginning of the testing process. Then, analyze your test plan by generating reports and graphs.
For example, you can create a report that displays design step data for each test in a test plan tree. You can then use this report to help you determine your test design priorities.
To help ensure the success of the testing process, it is recommended that you analyze your test plan throughout that process: review the plan, determine whether it still matches your testing goals, and adjust it accordingly.
The Requirements Specification Workflow
You begin the application testing process by specifying testing requirements. Requirements describe in detail what needs to be tested in your application and provide the test team with the foundation on which the entire testing process is based.
By defining requirements, you can plan and manage tests that are more focused on business needs. Requirements are then linked to tests and defects to provide complete traceability and aid the decision-making process.
This section describes how you use the Requirements module to specify testing requirements. The requirements specification workflow consists of the following stages:
Defining the Testing Scope
Creating the Testing Requirements Outline
Defining Requirements
Analyzing Your Requirements Specification
Defining the Testing Scope
The test team begins the testing process by gathering all available documentation on the application under test, such as marketing and business requirements documents, system requirements specifications, and design documents.
Use these documents to obtain a thorough understanding of the application under test and to determine your testing scope: test goals, objectives, and strategies.
Ask the following questions when determining your testing scope:
What is the main purpose and direction of the application?
What are the major features of the application?
What is the relative importance of each element in the application functionality?
What are the critical or high-risk functions of the application?
What are your testing priorities?
Do your customers/end-users agree with your testing priorities?
What are your overall quality goals?
Creating the Testing Requirements Outline
Quality Assurance managers use the testing scope to determine the overall testing requirements for the application under test. They define requirement topics and assign them to the QA testers in the test team. Each QA tester uses Quality Center to record the requirement topics for which they are responsible.
Requirement topics are recorded in the Requirements module by creating a requirements tree. The requirements tree is a graphical representation of your requirements specification, displaying the hierarchical relationship between different requirements.
For example, consider a flight reservation application that lets you manage flight scheduling, passenger bookings, and ticket sales. The QA manager may define your major testing requirements as: Application Security, Application Client System, Application Usability, Application Performance, Application Reliability, Profile Management, Booking System, Flights Reservation Service, and Reservations Management.
Defining Requirements
For each requirement topic, a QA tester creates a list of detailed testing requirements in the requirements tree. For example, the requirement topic Application Security may be broken down into more specific requirements, each covering a particular security function.
Each requirement in the tree is described in detail and can include any relevant attachments. The QA tester assigns the requirement a priority level which is taken into consideration when the test team creates the test plan.
Analyzing Your Requirements Specification
QA managers review the requirements to ensure that they meet the testing scope defined earlier, and assign each requirement a Reviewed status once it is approved.
To help review the requirements, you can generate reports and graphs.
You can then use the requirements as a basis for your test plan. The tests you create during the test plan phase should cover these requirements. These tests are also associated with defects, thereby providing complete traceability throughout the testing process.
Test Planning
Test Planning contains the following sections:
The Test Plan Workflow
The Test Plan Module at a Glance
Developing the Test Plan Tree
Linking Tests to Requirements
Building Tests
Creating Automated Tests
Working with System Tests
Test Execution
Test Execution contains the following sections:
The Test Lab Workflow
The Test Lab Module at a Glance
Creating Test Sets
Scheduling Test Runs
Running Tests Manually
Running Tests Automatically
Viewing Test Results
The Test Lab Workflow
As your application constantly changes, you run the manual and automated tests in your project in order to locate defects and assess quality.
This section describes how you run tests using the Quality Center Test Lab module. Executing tests consists of the following stages:
Creating Test Sets
Start by creating test sets and choosing which tests to include in each set. A test set is a group of tests in a Quality Center project designed to achieve specific testing goals. In the sample Mercury Tours application, for example, you could create a set of sanity tests that checks the basic functionality of the application. You could include tests that check the login mechanism, and tests that check the flight booking mechanism.
Scheduling Test Runs
Quality Center enables you to control the execution of tests in a test set. You can set conditions, and schedule the date and time for executing your tests. You can also set the sequence in which to execute the tests. For example, you can determine that you want to run test2 only after test1 has finished, and run test3 only if test1 passed.
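The following plain-Python sketch mimics that condition logic. The names and the run_test stand-in are invented; in practice you configure these conditions in the Test Lab module rather than writing code:

```python
def run_test(name: str) -> str:
    """Stand-in for a real test run; always reports a pass."""
    print(f"running {name}")
    return "Passed"

results: dict[str, str] = {}

# test1 runs unconditionally.
results["test1"] = run_test("test1")

# test2 runs only after test1 has finished, whatever its outcome.
if "test1" in results:
    results["test2"] = run_test("test2")

# test3 runs only if test1 passed.
if results.get("test1") == "Passed":
    results["test3"] = run_test("test3")
```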
Running Tests Manually
Once you have defined test sets, you can begin executing the tests. When you run a test manually, you execute the test steps you defined in test planning. You pass or fail each step, depending on whether the application's actual results match the expected output.
For example, suppose you are testing the process of booking a flight in the sample Mercury Tours application. You open the application, create a new order, and book a flight, following the instructions detailed by the test steps.
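Marking a step passed or failed amounts to comparing the observed result with the expected one. A toy illustration in plain Python, with invented names and data:

```python
def record_step(description: str, expected: str, actual: str) -> str:
    """Mark a step Passed when the observed result matches the expected one."""
    status = "Passed" if actual == expected else "Failed"
    print(f"{status}: {description}")
    return status

record_step("Book a flight for the new order.",
            expected="A confirmation number is displayed.",
            actual="A confirmation number is displayed.")
```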
Running Tests Automatically
Once you have defined test sets, you can begin executing the tests. You can select all the tests in a test set, or specific tests. Your selection can include both automated and manual tests.
When you run an automated test, the selected testing tool opens automatically, runs the test, and exports the test results to Quality Center. When you run a manual test, an e-mail is sent to the designated tester, requesting that he or she run the test.
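The following sketch illustrates that dispatch decision in plain Python. Quality Center performs this internally; the helper functions here are invented stand-ins, not a real API:

```python
def launch_tool(tool: str, script: str) -> None:
    print(f"Launching {tool} to run {script}; results export to the project.")

def email_tester(tester: str, test_name: str) -> None:
    print(f"Mailing {tester}: please run manual test '{test_name}'.")

def execute(test: dict) -> None:
    """Route each test in the set to the right kind of execution."""
    if test["type"] == "automated":
        launch_tool(test["tool"], test["script"])
    else:
        email_tester(test["tester"], test["name"])

execute({"type": "automated", "tool": "WinRunner", "script": "login_check"})
execute({"type": "manual", "tester": "alice", "name": "Dialog Usability"})
```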
You can also run an automated system test to provide system information, capture a desktop image, or restart a computer.
Analyzing Test Results
Following a test run, you analyze test results. Your goal is to identify failed steps and to determine whether a defect has been detected in your application, or if the expected results of your test need to be updated. You can validate test results regularly by viewing run data and by generating reports and graphs.
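Counting failed steps across runs is a simple aggregation, sketched below in plain Python; the run-data shape is invented for illustration:

```python
runs = [
    {"test": "Airline Preference", "steps": ["Passed", "Passed"]},
    {"test": "Credit Card Expiration", "steps": ["Passed", "Failed"]},
]

for run in runs:
    failed = sum(1 for status in run["steps"] if status == "Failed")
    verdict = "investigate: defect, or stale expected results" if failed else "ok"
    print(f"{run['test']}: {failed} failed step(s) -> {verdict}")
```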
The Defect Tracking Workflow
Locating and repairing application defects efficiently is essential to the development process. Defects can be detected and added to your Quality Center project by users during all stages of the testing process. Using the Quality Center Defects module, you can report design flaws in your application, and track data derived from defect records.
This section describes how you track defects. This includes adding defects, determining repair priorities, repairing open defects, testing a new build of the application, and analyzing defect data.
Adding Defects
When you find a defect in your application, you submit a defect to the Quality Center project. The project stores defect information that can be accessed by authorized users, such as members of the development, quality assurance, and support teams.
For example, suppose you are testing the Mercury Tours application. You just ran a test set that checks the billing information and one of the test runs revealed a defect when entering expiration dates for credit card numbers. You can submit a defect to the project. Note that you can associate this new defect with the test you ran for future reference.
You can also view, update, and analyze defects in the project. For information on adding defects, see Adding and Tracking Defects.
Reviewing New Defects
Review all new defects in the project and decide which ones to fix. This task is usually performed by the quality assurance or project manager. Change the status of a new defect to Open, and assign it to a member of the development team. You can also locate similar defects. If duplicate defects appear in the project, change their status to either Closed or Rejected, or delete them from the project.
Repairing Open Defects
Fix the Open defects. This involves identifying the cause of the defects, and modifying and rebuilding the application. These tasks are performed by application developers. When a defect is repaired, assign it the status Fixed.
For example, suppose the defect detected when entering expiration dates for credit card numbers was repaired in a new application build. You would update the defect status from Open to Fixed.
Testing a New Application Build
Run tests on the new build of the application. If a defect does not recur, assign it the status Closed. If a defect is detected again, assign it the status Reopen, and return to the previous stage (see Repairing Open Defects). This task is usually performed by the quality assurance or project manager.
Analyzing Defect Data
View data from defect reports to see how many defects were repaired, and how many still remain open. As you work, you can save settings that are helpful in the defect-tracking process, and reload them as needed.
Reports and graphs enable you to analyze the progress of defect repairs, and view how long defects have been residing in a project. This helps you determine when the application can be released.
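The time a defect has been residing in the project is just the difference between the report date and its detection date. A small plain-Python sketch of that calculation, with invented sample data:

```python
from datetime import date

defects = [
    {"id": "DEF-101", "status": "Open", "detected": date(2024, 1, 10)},
    {"id": "DEF-102", "status": "Fixed", "detected": date(2024, 2, 1)},
]

report_date = date(2024, 3, 1)
for defect in defects:
    if defect["status"] not in ("Closed", "Rejected"):
        age = (report_date - defect["detected"]).days
        print(f"{defect['id']} ({defect['status']}): in the project {age} days")
```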
About Adding and Tracking Defects
Defect records inform members of the application development and quality assurance teams of new defects discovered by other members. By sharing defect information, both the application development and defect repair processes are faster, more efficient, and more comprehensive. As you monitor the progress of defect repair, you update the information in your Quality Center project.
Suppose you detect a defect in the Mercury Tours application. When you initially report the defect in Quality Center, by default it is assigned the status New. A quality assurance or project manager reviews the defect, determines a repair priority, changes its status to Open, and assigns it to a member of the development team. A developer repairs the defect and assigns it the status Fixed. You retest the application, making sure that the defect does not recur. The quality assurance or project manager determines that the defect is actually repaired and assigns it the status Closed.
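This lifecycle is a small state machine. The following plain-Python sketch mirrors the transitions described above; the code is illustrative only, not part of Quality Center:

```python
# Allowed status transitions, mirroring the workflow described above.
TRANSITIONS = {
    "New": {"Open", "Closed", "Rejected"},   # review; duplicates closed or rejected
    "Open": {"Fixed"},
    "Fixed": {"Closed", "Reopen"},           # retest outcome
    "Reopen": {"Fixed"},
}

def advance(current: str, new: str) -> str:
    """Move a defect to a new status, rejecting illegal jumps."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move a defect from {current} to {new}")
    return new

status = "New"
for next_status in ("Open", "Fixed", "Closed"):
    status = advance(status, next_status)
    print(status)
```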
Adding New Defects
You can add a new defect to a Quality Center project at any stage of the testing process.
To add a new defect:
In the Defects module, click the Add Defect button. Alternatively, click the Add Defect button on the Quality Center main toolbar. The Add Defect dialog box opens.
Enter the relevant defect details. Required fields are displayed in red.
To clear the data in the Add Defect dialog box, click the Clear button.
You can add an attachment to your defect:
Click the Attach File button to attach a text file.
Click the Attach URL button to attach a URL.
Click the Attach Screen Capture button to capture and attach an image.
Click the Attach SysInfo button to attach information about your computer.
Click the Attach Clipboard Content button to attach an image from the Clipboard.
To eliminate duplicate or highly similar defects, you can:
Click the Find Similar Defects button to conduct a search for similar defects based on keywords from the Summary and Description fields. (A rough sketch of this style of keyword matching appears after this list.)
Click the Find Similar Defects arrow and choose Find Similar Text to search for similar defects by specifying a text string.
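Keyword-based similarity of this kind can be approximated very simply. The following plain-Python sketch is an invented approximation, not Quality Center's actual matching logic; it treats two summaries as similar when they share enough words:

```python
def similar(text_a: str, text_b: str, threshold: float = 0.5) -> bool:
    """Treat two defect summaries as similar when they share enough words."""
    words_a = set(text_a.lower().split())
    words_b = set(text_b.lower().split())
    overlap = len(words_a & words_b) / max(len(words_a | words_b), 1)
    return overlap >= threshold

print(similar("Error entering credit card expiration date",
              "Credit card expiration date entry error"))   # True
```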
You can check the spelling in the dialog box:
Click the Check Spelling button to check the spelling for the selected word or text box. If there are no errors, a confirmation message opens. If errors are found, the Spelling dialog box opens and displays the word together with replacement suggestions.
Click the Spelling Options button to open the Spelling Options dialog box, enabling you to configure the way Quality Center checks spelling.
Click the Thesaurus button to open the Thesaurus dialog box and display a synonym, antonym, or related word for the selected word. You can replace the selected word or look up new words.
Click the Submit button to add the defect to the project. Quality Center assigns a Defect ID to the new defect.
Click Close.