Wednesday, July 9, 2008
Introduction to Test Automation Framework
"When developing our test strategy, we must minimize the impact caused by changes in the applications we are testing, and changes in the tools we use to test them."
--Carl J. Nagle
A test automation framework is a set of assumptions, concepts, and practices that provide support for automated software testing.
Frameworks are commonly classified into four types:
1. Functional decomposition
2. Keyword-driven
3. Data-driven
4. Hybrid (a combination of the above)
We will discuss each of these in more detail as we go on; a minimal data-driven sketch follows below.
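To make the data-driven idea concrete, here is a small hypothetical sketch in Python. The login() function, the data values, and the expected results are illustrative assumptions rather than any particular tool's API; the point is simply that the test logic is written once and driven by an external data table.

# Hypothetical data-driven sketch: one test routine, driven by a data table.
LOGIN_DATA = [
    # (username, password, expected_result) -- illustrative values only
    ("valid_user", "valid_pass", "welcome"),
    ("valid_user", "wrong_pass", "error"),
    ("", "", "error"),
]

def login(username, password):
    """Stand-in for driving the application under test."""
    if username == "valid_user" and password == "valid_pass":
        return "welcome"
    return "error"

def run_data_driven_login_test():
    for username, password, expected in LOGIN_DATA:
        actual = login(username, password)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: login({username!r}, {password!r}) -> {actual!r}, expected {expected!r}")

if __name__ == "__main__":
    run_data_driven_login_test()

A keyword-driven framework takes the same idea one step further by also moving the actions themselves (not just the data) into the test design.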
(To be continued...)
Wednesday, July 2, 2008
What to Look For in a Testing Tool?
Choosing an automated software-testing tool is an important step, and one that often has enterprise-wide implications. Here are several key issues that should be addressed when selecting an application testing solution.
Test Planning and Management
A robust testing tool should have the capability to manage the testing process, provide organization for testing components, and create meaningful end-user and management reports. It should also allow users to include non-automated testing procedures within automated test plans and test results. A robust tool will allow users to integrate existing test results into an automated test plan. Finally, an automated testing tool should be able to link business requirements to test results, allowing users to evaluate application readiness based upon the application's ability to support those business requirements.
Testing Product Integration
Testing tools should provide tightly integrated modules that support test component reusability. Test components built for performing functional tests should also support other types of testing including regression and load/stress testing. All products within the testing product environment should be based upon a common, easy-to-understand language. User training and experience gained in performing one testing task should be transferable to other testing tasks. Also, the architecture of the testing tool environment should be open to support interaction with other technologies such as defect or bug tracking packages.
Internet/Intranet Testing
A good tool will have the ability to support testing within the scope of a web browser. The tests created for testing Internet or intranet-based applications should be portable across browsers, and should automatically adjust for different load times and performance levels.
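As an illustration of browser portability and tolerance for varying load times, here is a small hypothetical sketch using Selenium WebDriver (one possible tool choice, not one endorsed by this post). The URL, element IDs, and credentials are made-up assumptions; the key idea is that the same script runs against several browsers and uses explicit waits instead of fixed delays.

# Hypothetical cross-browser smoke test; URL and element IDs are assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def login_smoke_test(driver):
    driver.get("https://example.com/login")          # hypothetical page
    wait = WebDriverWait(driver, 15)                 # adjusts for slow load times
    wait.until(EC.visibility_of_element_located((By.ID, "username"))).send_keys("demo")
    driver.find_element(By.ID, "password").send_keys("demo123")
    driver.find_element(By.ID, "submit").click()
    wait.until(EC.presence_of_element_located((By.ID, "welcome-banner")))

if __name__ == "__main__":
    for browser_factory in (webdriver.Firefox, webdriver.Chrome):
        driver = browser_factory()
        try:
            login_smoke_test(driver)
            print(f"PASS in {driver.name}")
        finally:
            driver.quit()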
Ease of Use
Testing tools should be engineered to be usable by non-programmers and application end-users. With much of the testing responsibility shifting from the development staff to the departmental level, a testing tool that requires programming skills is unusable by most organizations. Even if programmers are responsible for testing, the testing tool itself should have a short learning curve.
GUI and Client/Server Testing
A robust testing tool should support testing with a variety of user interfaces and create simple-to-manage, easy-to-modify tests. Test component reusability should be a cornerstone of the product architecture.
Load and Performance Testing
The selected testing solution should allow users to perform meaningful load and performance tests to accurately measure system performance. It should also provide test results in an easy-to-understand reporting format.
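For a sense of what a meaningful load and performance test boils down to, here is a very small hypothetical sketch in plain Python (real load-testing tools do far more). The target URL and the user and request counts are assumptions; the sketch just fires concurrent requests and reports simple timing statistics.

# Hypothetical mini load test; URL and load levels are illustrative only.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "https://example.com/"   # assumed target, replace as needed

def timed_request(_):
    start = time.perf_counter()
    with urlopen(TARGET_URL, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

def run_load_test(virtual_users=20, total_requests=100):
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        durations = sorted(pool.map(timed_request, range(total_requests)))
    print(f"requests: {len(durations)}")
    print(f"average response time: {statistics.mean(durations):.3f}s")
    print(f"approx. 95th percentile: {durations[int(0.95 * len(durations)) - 1]:.3f}s")

if __name__ == "__main__":
    run_load_test()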
Methodologies and Services
For those situations that require outside expertise, the testing tool vendor should be able to provide extensive consulting, implementation, training, and assessment services. The test tools should also support a structured testing methodology.
Monday, June 30, 2008
General Functional Automation Tool Architecture
The functional test automation tool provides the following functions:
Definition of tests: The definition of tests is done by recording an interaction with the application under test.
Application to test: The recording outputs a test script that can be edited by using the integrated script editor. In order to handle data-driven tests, the functional test automation tool provides data access capabilities that help select the data source accurately. In order to facilitate test result analysis, the tool can be used to define control points that can be set either on graphical objects or on data.
Execution of tests: The execution of tests is automated. Various test cases are executed by reproducing the recorded user interaction. Data-driven tests are executed based on the data access that was set during the definition period.
Reporting on test results: When the test is finished, the reporting compares the actual execution to the reference state (e.g. based on the control points set during the definition period); a brief sketch of this execution and reporting flow follows below.
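To make the control-point and reporting idea concrete, here is a small hypothetical sketch in Python. The replay function, the data row, and the "order total" check are made-up stand-ins for a recorded script; the point is that checkpoints compare observed values to stored reference values, and the report summarizes the comparison.

# Hypothetical sketch of the execution/reporting flow: a recorded script is
# replayed, control points compare observed values to reference values, and a
# report summarizes the run. All names and values are illustrative.
checkpoint_results = []

def checkpoint(name, actual, reference):
    passed = actual == reference
    checkpoint_results.append((name, passed, actual, reference))
    return passed

def replay_recorded_script(data_row):
    """Stand-in for replaying the recorded user interaction with one data row."""
    order_total = 10 * data_row["quantity"]              # pretend application output
    checkpoint("order total", order_total, data_row["expected_total"])
    checkpoint("status message", "Order placed", "Order placed")

def report():
    for name, passed, actual, reference in checkpoint_results:
        label = "PASS" if passed else "FAIL"
        print(f"{label} {name}: actual={actual!r} reference={reference!r}")

if __name__ == "__main__":
    replay_recorded_script({"quantity": 3, "expected_total": 30})
    report()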
Sunday, June 29, 2008
Test Strategy Guidelines
In order to make the most of our test strategy, we need to make it reusable and manageable. To that end, there are some essential guiding principles we should follow when developing our overall test strategy:
- Test automation is a full-time effort, not a sideline
- The test design and the test framework are totally separate entities
- The test framework should be application-independent
- The test framework must be easy to expand, maintain, and perpetuate
- The test strategy/design vocabulary should be framework-independent (a minimal sketch follows this list)
- The test strategy/design should remove most testers from the complexities of the test framework
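As a hypothetical illustration of application independence and a framework-independent vocabulary, the sketch below keeps the test design as plain keyword rows (here an inline CSV, but it could be a spreadsheet maintained by testers) while the framework merely dispatches keywords to registered actions. All keywords, actions, and values are assumptions invented for the example.

# Hypothetical keyword-driven sketch: the test design is expressed in a
# framework-independent vocabulary; the framework only dispatches keywords.
import csv
import io

TEST_DESIGN = """keyword,arg
open_app,calculator
enter_value,2
enter_value,3
verify_total,5
"""

ACTIONS = {}

def action(name):
    """Register a keyword implementation with the framework."""
    def register(func):
        ACTIONS[name] = func
        return func
    return register

@action("open_app")
def open_app(arg):
    print(f"opening {arg}")              # would launch the application under test

@action("enter_value")
def enter_value(arg):
    print(f"entering {arg}")             # would drive the application's UI

@action("verify_total")
def verify_total(arg):
    print(f"verifying total == {arg}")   # would check the application's output

def run(design_text):
    for row in csv.DictReader(io.StringIO(design_text)):
        ACTIONS[row["keyword"]](row["arg"])

if __name__ == "__main__":
    run(TEST_DESIGN)

Because the test design only names keywords and arguments, the same design could be executed by a different framework, or against a different application, without the testers touching framework code.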
Saturday, June 28, 2008
Typical Testing Steps
Most software testing projects can be divided into the following general tasks or steps.
Test Planning – This step determines which applications (or parts of applications) should be tested, what the priority level is for each application to be tested, and when the testing should begin. Applications with high levels of risk or heavy user volumes are identified.
Test Design – This step is for determining how the tests should be built and what level of quality is necessary for application effectiveness. During this phase, individual business requirements and their associated tests should be addressed within an overall test plan.
Test Environment Preparation – This step of the testing process is concerned with establishing the technical environment that the test(s) will be executed in. Without this step, the investment in test automation is at risk because of the inability to re-execute the tests.
Test Construction – At this step, test scripts are generated and test cases are developed based upon the test plans created during the design phase. Most of the time spent in automated testing is typically in the test construction phase.
Test Execution – This step is where the test scripts are executed according to the test plans. As test execution is the most tedious and repetitious step during a manual testing process, the automation of this step is where the most significant time savings are made.
Test Evaluation – After the tests are executed, the test results are compared to the expected results and evaluations can be made about the quality of an application. At this stage, application errors or problems are identified and appropriate corrective actions can be considered. Decisions can be made as to the readiness of the application for release.
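As a hypothetical illustration of the evaluation step, the sketch below compares actual results to expected results, lists the mismatches as candidate defects, and applies an illustrative pass-rate threshold to suggest a readiness verdict. The test cases, values, and threshold are assumptions, not a prescribed standard.

# Hypothetical evaluation sketch: compare actual vs. expected results and
# summarize readiness. Cases, values, and the threshold are illustrative.
RESULTS = [
    {"case": "login - valid user", "expected": "welcome", "actual": "welcome"},
    {"case": "login - bad password", "expected": "error", "actual": "error"},
    {"case": "checkout - empty cart", "expected": "warning", "actual": "crash"},
]

def evaluate(results, required_pass_rate=0.95):
    failures = [r for r in results if r["actual"] != r["expected"]]
    pass_rate = 1 - len(failures) / len(results)
    for failure in failures:
        print(f"DEFECT: {failure['case']}: expected {failure['expected']!r}, "
              f"got {failure['actual']!r}")
    verdict = "ready for release" if pass_rate >= required_pass_rate else "not ready"
    print(f"pass rate {pass_rate:.0%} -> {verdict}")

if __name__ == "__main__":
    evaluate(RESULTS)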