Confirmation testing (re-testing): Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
Regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.
Test Strategy: A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects).
Test Execution: The process of running a test on the component or system under test, producing actual results.
Test approach: The implementation of the test strategy for a specific project.
Test Plan: A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, among other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of test independence, the test environment, the test design techniques, the entry and exit criteria to be used, the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
Test monitoring: A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare actual progress against what was planned.
Test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
Test basis: All documents from which the requirements of a component or system can be inferred, i.e. the documentation on which the test cases are based. If a document can be amended only by way of a formal amendment procedure, then the test basis is called a frozen test basis.
Test Data: Data that exists before a test is executed, and that affects or is affected by the component or system under test, e.g. data in a database.
Coverage (test coverage): The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
Test Suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.
Testware: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clean-up procedures, files, databases, environments, and any additional software or utilities used in testing.
Test log: A chronological record of relevant details about the execution of tests.
Test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.
Verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
V-model: A framework to describe the software development lifecycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development lifecycle.
Test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test, and acceptance test.
Integration: The process of combining components or systems into larger assemblies.
Off-the-shelf software (commercial off-the-shelf software, COTS): A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.
Incremental development model: A development lifecycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some but not all versions of this lifecycle model, each subproject follows a "mini V-model" with its own design, coding, and testing phases.
Iterative development model: A development lifecycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.
Agile software development: A group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.
Agile manifesto: A statement on the values that underpin agile software development. The values are: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; responding to change over following a plan.
Efficiency testing: The process of testing to determine the efficiency of a software product.
Component testing (unit testing, module testing): The testing of individual software components. Synonym to program testing.
Stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
Driver (test driver): A software component or test tool that replaces a component and takes care of the control and/or the calling of a component or system.
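The two definitions above can be illustrated together in a hedged sketch: a component under test that depends on a payment service, with a stub replacing the called service and a driver replacing the caller. All names (`process_order`, `payment_service_stub`) are hypothetical:

```python
def payment_service_stub(amount):
    # Stub: skeletal replacement for the real payment service that the
    # component under test depends on; it always reports success.
    return {"status": "approved", "amount": amount}

def process_order(amount, pay):
    # Component under test: calls a payment service passed in as `pay`.
    result = pay(amount)
    return result["status"] == "approved"

def driver():
    # Driver: stands in for the real caller (e.g. a UI layer) and takes
    # care of calling the component under test with test inputs.
    return process_order(25, payment_service_stub)

print(driver())  # True
```

Stubs replace components *below* the item under test; drivers replace components *above* it.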
Robustness testing: Testing to determine the robustness of the software product.
Test-driven development: A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
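A compressed sketch of one test-driven development cycle, with an illustrative function name (`slugify`) that is not from any real library:

```python
# Step 1: write the test first. At this point it would fail, because
# slugify does not exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2: write the minimal implementation that makes the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3: run the automated test; it now passes, and it remains in place
# as a safety net for later refactoring.
test_slugify()
```

The discipline is the ordering: the failing test exists before the production code that satisfies it.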
Integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
System testing: The process of testing an integrated system to verify that it meets specified requirements.
Functional requirement: A requirement that specifies a function that a component or system must perform.
Non-functional requirement: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability, and portability.
Test environment (test bed): An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
Acceptance testing (acceptance, user acceptance testing): Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.
Maintenance: Modification of a software product after delivery to correct defects, to improve performance or other attributes or to adapt the product to a modified environment.
Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
Beta testing (field testing): Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external testing for off-the-shelf software in order to acquire feedback from the market.
Test Type: A group of test activities aimed at testing a component or system, focused on a specific test objective, e.g. functional testing, usability testing, or regression testing. A test type may take place on one or more test levels or test phases.
Functional testing: Testing based on an analysis of the specification of the functionality of a component or system.
Black-box testing (specification-based testing): Testing, either functional or non-functional, without reference to the internal structure of the component or system.
Functionality testing: The process of testing to determine the functionality of a software product.
Interoperability testing: Also known as compatibility testing. The process of testing to determine the interoperability of a software product.
Security: Attributes of software products that bear on their ability to prevent unauthorized access, whether accidental or deliberate, to programs and data.
Security testing: Testing to determine the security of the software product.
Performance testing: The process of testing to determine the performance of a software product.
Load testing: A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
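A hedged sketch of the load-testing idea: drive a stand-in component with an increasing number of parallel "users" and record throughput at each load level. `handle_request` is a hypothetical placeholder for the component under test:

```python
import threading
import time

def handle_request():
    time.sleep(0.001)  # simulated work of the component under test

def run_load(users, requests_per_user=5):
    # Each thread plays one parallel user issuing several requests.
    def user():
        for _ in range(requests_per_user):
            handle_request()
    threads = [threading.Thread(target=user) for _ in range(users)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return users * requests_per_user / elapsed  # requests per second

# Increase the load and observe how throughput behaves.
for load in (1, 5, 10):
    print(load, "users:", round(run_load(load)), "req/s")
```

Real load tests use dedicated tooling and realistic workloads; the point here is only the shape of the experiment, i.e. measuring behavior as load increases.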
Stress testing: A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers.
Usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to the users under specified conditions.
Maintainability testing: The process of testing to determine the maintainability of a software product.
Reliability testing: The process of testing to determine the reliability of a software product.
Portability testing: Also known as configuration testing. The process of testing to determine the portability of a software product.
Functionality: The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.
Reliability: The ability of the software product to perform its required functions under stated conditions for a specified number of operations.
Robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
Usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
Efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions.
Maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
Portability: The ease with which the software product can be transferred from one hardware or software environment to another.
Black-box (specification-based) test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
White-box testing (structure-based testing): Testing based on an analysis of the internal structure of the component or system.
Code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage, or condition coverage.
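A small sketch of why the coverage measures named above differ. The single call below executes every statement of the illustrative function `classify` (100% statement coverage), yet it exercises only the True outcome of the one decision, so decision coverage is 50%:

```python
def classify(x):
    label = "small"
    if x > 10:            # decision: only the True outcome is exercised below
        label = "large"
    return label

print(classify(42))  # executes every statement -> "large"
# A second test, classify(1), would be needed to take the False outcome
# of the decision and reach full decision coverage.
```

In practice such measurements come from coverage tools rather than inspection, but the distinction between the coverage items is the same.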
White-box (structure-based) test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
Maintenance testing: Testing the changes to an operational system or the impact of a changed environment to an operational system.
Impact Analysis: The assessment of change to the layers of development documentation, test documentation, and components, in order to implement a given change to specified requirements.