ISTQB Chapter 3 Testing Terms

Static testing: Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static analysis. 

Dynamic testing: Testing that involves the execution of the software of a component or system.

Informal review: A review not based on a formal (documented) procedure; also known as an ad hoc review.

Formal review: A review characterized by documented procedures and requirements, e.g. inspection.

Moderator (inspection leader): The leader and main person responsible for an inspection or other review process.

Entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.

Metric: A measurement scale and the method used for measurement. 

Technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. 

Peer review: A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

Inspection: A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure. 

Static analysis: Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software development artifacts. Static analysis is usually carried out by means of a supporting tool. 
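
To illustrate the idea, a minimal sketch in Python (the function and its defects are invented for illustration): the code below is never executed, yet a static analysis tool such as a linter or type checker could report problems in it just by examining the source.

```python
def average(values):
    # Defects a static analysis tool could report without running the code:
    # - 'total' is read before it is ever assigned
    # - 'count' is assigned but never used
    # - an empty 'values' list would cause a division by zero (a reviewer
    #   or some analyzers might flag the missing guard)
    count = 0
    for v in values:
        total = total + v
    return total / len(values)
```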

Compiler: A software tool that translates programs expressed in a high order language into their machine language equivalents. 

Test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. 
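
As a hedged illustration (not an ISTQB-prescribed format), a single entry of such a specification could be captured as a structured record holding the fields listed above; all identifiers and values here are invented.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One entry of a test case specification (illustrative fields only)."""
    identifier: str
    objective: str
    preconditions: list[str]
    inputs: dict
    actions: list[str]
    expected_result: str

login_boundary_case = TestCase(
    identifier="TC-042",
    objective="Reject a password one character shorter than the minimum length",
    preconditions=["User account 'demo' exists", "Login page is reachable"],
    inputs={"username": "demo", "password": "a" * 7},  # minimum length assumed to be 8
    actions=["Open the login page", "Submit the credentials"],
    expected_result="Login is refused with a 'password too short' message",
)
```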

Test design technique: Procedure used to derive and/or select test cases. 
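
As a small sketch of one such technique (boundary value analysis is used here as the example, and the valid range 1..100 is invented), test inputs are derived at and just outside the boundaries of the valid range.

```python
# Invented valid range for an input field: 1..100 inclusive.
LOWER, UPPER = 1, 100

# Boundary value analysis derives test inputs at and just beyond each boundary.
boundary_values = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]
print(boundary_values)  # [0, 1, 2, 99, 100, 101]
```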

Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability. 

Horizontal traceability: The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification, and test procedure specification or test script).

Vertical traceability: The tracing of requirements through the layers of development documentation to components. 
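
A minimal sketch of what traceability links might look like in practice (requirement and test case identifiers are invented): each requirement is mapped to the test cases that cover it, which makes it easy to list requirements whose coverage is still missing.

```python
# Hypothetical requirement-to-test-case links (identifiers are invented).
traceability_matrix = {
    "REQ-001": ["TC-010", "TC-011"],
    "REQ-002": ["TC-020"],
    "REQ-003": [],  # no covering test yet
}

# Requirements with no associated test case break the traceability chain.
uncovered = [req for req, tests in traceability_matrix.items() if not tests]
print("Requirements without test coverage:", uncovered)
```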

Test script: Commonly used to refer to a test procedure specification, especially an automated one. 
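
As an illustration only (the unit under test and its behaviour are assumed for this example), an automated test script might look like the following, using Python's standard unittest framework.

```python
import unittest

def add(a, b):
    """Stand-in for the unit under test; assumed for this example."""
    return a + b

class AddTests(unittest.TestCase):
    # Each test method automates one step of the test procedure.
    def test_adds_two_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_a_negative_number(self):
        self.assertEqual(add(2, -3), -1)

if __name__ == "__main__":
    unittest.main()
```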

Test execution schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.
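
A hedged sketch of the idea (the procedure identifiers are invented): the schedule is simply an ordered list of test procedures, and running them in that order respects their dependencies, e.g. creating test data before the procedures that consume it.

```python
# Invented test procedure identifiers, listed in the order they must run.
test_execution_schedule = [
    "TP-01-create-test-data",
    "TP-02-smoke-test-login",
    "TP-03-regression-orders",
    "TP-04-cleanup-test-data",
]

for procedure_id in test_execution_schedule:
    print(f"Executing {procedure_id} ...")
```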


Farhan Tanvir