# Software Testing Glossary

This glossary aims to be the most comprehensive compilation of software testing terms. Recognizing the evergreen nature of our industry, this document is not static. I invite and encourage readers to provide feedback on each term, so we may continually refine and enrich this resource.

Raise a PR to contribute changes. Modifications will be submitted back upstream to https://ray.run/glossary.

## [A/B Testing](https://ray.run/glossary#ab-testing) [πŸ”—](https://en.wikipedia.org/wiki/A/B_testing)

A/B testing involves creating one or more variants of a webpage to compare against the current version. The goal is to determine which version performs best based on specific metrics, such as revenue per visitor or conversion rate.

## [Acceptance Test Driven Development](https://ray.run/glossary#acceptance-test-driven-development) [πŸ”—](https://en.wikipedia.org/wiki/Acceptance_test-driven_development)

Acceptance Test Driven Development (ATDD) is a development approach in which acceptance tests are defined collaboratively before implementation begins, making testing a core component of the development process. This reduces defects and ensures that the application meets quality standards.

## [Acceptance Testing](https://ray.run/glossary#acceptance-testing) [πŸ”—](https://en.wikipedia.org/wiki/Acceptance_testing)

Acceptance testing is conducted by potential end-users or customers to determine if the software meets the required specifications and is suitable for its intended use.

## [Accessibility Testing](https://ray.run/glossary#accessibility-testing)

Accessibility testing ensures mobile and web applications are usable by everyone, including individuals with disabilities such as visual or hearing impairments, and other physical or cognitive challenges.

## [Actual Result](https://ray.run/glossary#actual-result)

The actual result is the outcome obtained after a test is conducted. During the testing phase, the actual result is documented alongside the test case. After all tests, it's compared with the expected outcome, noting any discrepancies.

## [Ad Hoc Testing](https://ray.run/glossary#ad-hoc-testing)

Ad hoc testing is an informal, spontaneous approach to software testing. Its main objective is to identify vulnerabilities or issues as quickly as possible. This method is unstructured, conducted without detailed planning or documentation.

## [Agile Development](https://ray.run/glossary#agile-development) [πŸ”—](https://en.wikipedia.org/wiki/Agile_software_development)

Agile software development is an iterative method where requirements and solutions are collaboratively developed by cross-functional teams. It emphasizes adaptability and responsiveness over rigid planning.

## [Agile Testing](https://ray.run/glossary#agile-testing) [πŸ”—](https://en.wikipedia.org/wiki/Agile_testing)

Agile testing aligns with the principles of Agile software development. Unlike traditional approaches, testing starts at the project's outset with development and testing occurring simultaneously. This close collaboration ensures tasks are accomplished efficiently.

## [Alpha Testing](https://ray.run/glossary#alpha-testing)

Alpha testing aims to identify bugs before the product reaches the end-users. Conducted late in the development process but before beta testing, it helps ensure that the product is free from major issues.

## [Analytical Test Strategy](https://ray.run/glossary#analytical-test-strategy)

Analytical test strategies involve analyzing the test basis before executing the test. This strategy helps pinpoint potential problems early on, ensuring a more effective testing process.

## [API](https://ray.run/glossary#api) [πŸ”—](https://en.wikipedia.org/wiki/API)

An Application Programming Interface (API) is a set of rules allowing two applications to communicate. The term "Application" in this context denotes any software with a specific function. The API defines how these applications send and receive requests and responses.

## [API Testing](https://ray.run/glossary#api-testing)

API testing involves verifying and validating an API's performance, functionality, reliability, and security. The process includes sending requests to the API and analyzing its responses to ensure they meet expected outcomes. This testing can be done manually or using automated tools, helping identify issues like invalid inputs, poor error handling, and unauthorized access.
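
As a rough illustration, the sketch below shows what a single automated API test might look like in TypeScript using Node's built-in test runner and `fetch`; the endpoint, fields, and expected values are placeholders, not part of any real API.

```typescript
import test from "node:test";
import assert from "node:assert/strict";

test("GET /users/1 returns the expected user", async () => {
  // Placeholder endpoint used purely for illustration.
  const response = await fetch("https://api.example.com/users/1");

  // Verify status code and content type before inspecting the body.
  assert.equal(response.status, 200);
  assert.match(response.headers.get("content-type") ?? "", /application\/json/);

  // Validate that the payload matches the expected outcome.
  const body = await response.json();
  assert.equal(body.id, 1);
  assert.equal(typeof body.name, "string");
});
```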

## [ASTQB](https://ray.run/glossary#astqb) [🌐](https://astqb.org/)

The American Software Testing Qualifications Board (ASTQB) is the U.S. national board for the International Software Testing Qualifications Board (ISTQB). It's responsible for the "ISTQB Certified Tester" program in the U.S. The ASTQB provides and manages certifications, accredits trainers, and approves training materials for software testing professionals. Earning an ISTQB certification through ASTQB ensures that an individual meets internationally recognized standards in software testing. Additionally, ASTQB promotes the value and importance of software testing as a profession within the U.S. software development and IT industries.

## [Automated Testing](https://ray.run/glossary#automated-testing) [πŸ”—](https://en.wikipedia.org/wiki/Test_automation)

Automated testing uses scripts to perform repetitive testing tasks, increasing testing efficiency. It enhances test coverage and execution speed, making the software testing process more effective.

## [Availability Testing](https://ray.run/glossary#availability-testing)

Availability Testing, in the context of software testing, refers to evaluating a system's uptime, ensuring that the application or system remains accessible and operational to users as intended. The primary goal of this testing is to guarantee that the software meets its defined availability criteria and provides a reliable service without prolonged interruptions. This kind of testing often considers scenarios like system failures, maintenance, peak user loads, and network outages, and aims to determine the system's overall reliability and readiness for production deployment. Availability Testing is crucial for applications where continuous accessibility is paramount, such as e-commerce platforms, banking systems, and critical infrastructure services.

## [Back-to-Back Testing](https://ray.run/glossary#back-to-back-testing)

Back-to-back testing compares the results of two or more similar-functioning components to check for differences in their outputs.

## [Backward Compatibility](https://ray.run/glossary#backward-compatibility)

Backward Compatibility, in the context of software testing, refers to the ability of a software application or system to effectively function with earlier versions of itself or interface correctly with older input data formats, configurations, or hardware. In essence, when a software product is backward compatible, it ensures that users employing older versions won't encounter unexpected issues or malfunctions when interfacing with the newer iteration. Testing for backward compatibility is crucial during software upgrades or releases to make certain that changes introduced do not negatively impact existing users or break established functionalities. This practice prioritizes the user experience, ensuring seamless transitions and interactions between software generations.

## [Baseline Testing](https://ray.run/glossary#baseline-testing)

Baseline Testing is a type of non-functional testing where the performance or characteristics of a system or application are measured under specific conditions. This initial measurement serves as a "baseline" or benchmark against which future performance levels can be compared. The primary goal of baseline testing is to understand the current behavior of the system and set a standard for subsequent testing phases. Any deviations in future tests from this baseline can indicate performance issues, regressions, or other anomalies that might need addressing.

## [BDD](https://ray.run/glossary#bdd) [πŸ”—](https://en.wikipedia.org/wiki/Behavior-driven_development)

BDD (Behavior-Driven Development) is an agile software development approach that emphasizes collaboration between developers, testers, and domain experts. It focuses on understanding and defining the desired behavior of a system from the user's perspective. BDD encourages the use of simple, plain-language descriptions of software behavior, often structured as "Given-When-Then" scenarios. These descriptions serve as both requirements documentation and a basis for automated tests, ensuring that software development is aligned with user needs and expectations.

## [Beta Testing](https://ray.run/glossary#beta-testing)

Beta testing is the final testing phase before product release, where a near-complete version is provided to a select group of end-users. It aims to gather feedback on various aspects of the software, ensuring it meets user expectations.

## [Big Bang Testing](https://ray.run/glossary#big-bang-testing)

Big Bang Testing is an integration approach in which all system units are combined at once and tested as a whole rather than incrementally. Because everything is integrated simultaneously, isolating errors to the interfaces of individual units can be challenging.

## [Black Box Testing](https://ray.run/glossary#black-box-testing) [πŸ”—](https://en.wikipedia.org/wiki/Black-box_testing)

Black box testing assesses software without considering its internal workings. Typically focused on functional or acceptance testing, it can be done by anyone, regardless of their familiarity with the codebase.

## [Bottom-up Integration](https://ray.run/glossary#bottom-up-integration)

Bottom-up integration testing starts by testing lower-level modules first, then integrates and tests them with higher-level ones. During this process, "Drivers" may be used to assist in testing.

## [Boundary Testing](https://ray.run/glossary#boundary-testing)

Evaluates software by focusing on the boundary or edge values of the input domain.
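
For example, a minimal TypeScript sketch using Node's test runner; the `isEligibleAge` function and its 18 to 65 range are hypothetical, chosen only to show how tests target values at and immediately around each boundary.

```typescript
import test from "node:test";
import assert from "node:assert/strict";

// Hypothetical function under test: accepts ages in the inclusive range 18-65.
function isEligibleAge(age: number): boolean {
  return age >= 18 && age <= 65;
}

test("boundary values of the age range", () => {
  assert.equal(isEligibleAge(17), false); // just below the lower boundary
  assert.equal(isEligibleAge(18), true);  // lower boundary
  assert.equal(isEligibleAge(65), true);  // upper boundary
  assert.equal(isEligibleAge(66), false); // just above the upper boundary
});
```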

## [BrowserStack](https://ray.run/glossary#browserstack) [🌐](https://www.browserstack.com/) [πŸ”—](https://en.wikipedia.org/wiki/BrowserStack)

BrowserStack is a cloud-based web and mobile testing platform that allows developers and testers to view and interact with their websites and applications across multiple browsers, operating systems, and real mobile devices without the need for an internal lab of virtual machines or devices. It provides instant access to a wide range of browser and OS combinations, ensuring that developers can test their products in real-world conditions. This helps in identifying and resolving compatibility issues that might not be evident on a single platform or browser. BrowserStack is particularly beneficial for ensuring cross-browser and cross-platform compatibility, and it integrates with many popular continuous integration tools to streamline the testing process.

## [BS 7925-2](https://ray.run/glossary#bs-7925-2)

BS 7925-2 is a standard for Software Component Testing. It outlines a process for component testing using specific test designs and measurement techniques, aiming to enhance the quality of both testing and software products.

## [Bug](https://ray.run/glossary#bug) [πŸ”—](https://en.wikipedia.org/wiki/Software_bug)

A bug is an error or fault in a program that leads to incorrect outcomes or crashes. It arises from flawed or incomplete logic and can cause the software to deviate from its expected performance.

## [Build Verification Testing](https://ray.run/glossary#build-verification-testing)

Build Verification Testing (BVT) is a set of preliminary tests performed on a newly built software product to ensure its basic functionality before it undergoes more in-depth testing. The primary goal of BVT is to quickly identify any major issues or showstoppers that might render the software unusable. If the build fails this testing phase, it's considered unstable, and detailed testing is typically postponed until the severe defects are addressed. BVT acts as a quality gate, ensuring that only builds meeting a certain quality threshold move forward in the testing lifecycle, thus saving time and resources on later-stage testing of flawed builds.

## [Canary Testing](https://ray.run/glossary#canary-testing)

Canary Testing is a technique used to detect issues by gradually releasing changes or updates to a subset of users. Often paired with A/B testing, it enables developers to evaluate and refine features based on feedback before a full release.

## [Chai.js](https://ray.run/glossary#chaijs) [🌐](https://www.chaijs.com/)

Chai.js, often simply referred to as Chai, is a BDD/TDD (Behavior-Driven Development/Test-Driven Development) assertion library for Node.js and browsers. It pairs seamlessly with popular JavaScript testing frameworks, such as Mocha and Jasmine. Chai provides developers with the capability to express assertions in a readable language, mimicking natural language constructions.
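
A small sketch of Chai's BDD-style assertions, assuming Mocha as the test runner; the `cart` object is a made-up example.

```typescript
import { expect } from "chai";

describe("shopping cart", () => {
  it("exposes readable, natural-language assertions", () => {
    const cart = { items: ["pen", "notebook"], total: 7.5 };

    // Chai's `expect` chains read close to plain English.
    expect(cart.items).to.have.lengthOf(2);
    expect(cart.items).to.include("pen");
    expect(cart.total).to.be.a("number").and.to.be.greaterThan(0);
  });
});
```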

## [Change Control](https://ray.run/glossary#change-control)

Change Control, in the context of software testing, refers to a formal process used to ensure that modifications or updates to a software product or system are introduced in a controlled and coordinated manner. It involves documenting, evaluating, approving, and overseeing any changes made to the software, its environment, or associated documents during and after the development process.

## [Change Requests](https://ray.run/glossary#change-requests)

Change requests originate from stakeholders wishing to alter a product or its development method. They can range from defect reports to requests for new features or enhancements.

## [Chaos Engineering](https://ray.run/glossary#chaos-engineering) [πŸ”—](https://en.wikipedia.org/wiki/Chaos_engineering)

Chaos engineering tests a system's resilience by introducing random faults and disruptions. This method challenges applications in unpredictable ways, aiming to uncover unanticipated flaws and weaknesses.

## [Clean slate](https://ray.run/glossary#clean-slate)

A clean slate refers to the practice of resetting a system, application, or environment to its original or default state before conducting a test or evaluation. In the context of software testing, a clean slate ensures that tests are performed under consistent and repeatable conditions, devoid of any prior residues or configurations that might influence the outcome. For instance, when testing a web application, using a fresh and cache-cleared web browser ensures that no previously stored data or settings interfere with the current test session. This approach minimizes variables and helps in achieving accurate and reliable test results.

## [CMMI](https://ray.run/glossary#cmmi)

The Capability Maturity Model Integration (CMMI) is a collection of best practices in engineering, service delivery, and management. It aids organizations in enhancing their delivery capabilities, ensuring customer satisfaction through continuous improvement.

## [Code Coverage](https://ray.run/glossary#code-coverage) [πŸ”—](https://en.wikipedia.org/wiki/Code_coverage)

Code coverage measures the extent of code that has been tested, assisting in evaluating the test suite's quality. It identifies areas not executed during testing and is a form of white box testing.

## [Compatibility Testing](https://ray.run/glossary#compatibility-testing)

Assesses software performance in specific hardware, software, OS, or network conditions.

## [Concurrency Testing](https://ray.run/glossary#concurrency-testing)

Measures the system's performance under simultaneous or multi-user loads.

## [Control Flow Testing](https://ray.run/glossary#control-flow-testing)

Examines the paths that a program takes during its execution flow.

## [Cross-Browser Testing](https://ray.run/glossary#cross-browser-testing)

Ensures web applications function correctly across various web browsers.

## [Cypress](https://ray.run/glossary#cypress) [🌐](https://www.cypress.io/) [πŸ”—](https://en.wikipedia.org/wiki/Cypress_(software))

Cypress is an end-to-end testing framework designed for modern web applications. Unlike many other testing solutions, Cypress operates directly within the web browser, ensuring more consistent and accurate real-world testing scenarios. It provides a rich set of features and tools for writing tests, debugging in real time, and capturing screenshots or video recordings of test runs. Cypress supports both unit testing and full end-to-end testing, making it a versatile choice for developers and QA professionals. One of its notable features is its interactive test runner that allows developers to see commands as they execute while also viewing the application under test. Built on top of technologies like Mocha, Chai, and Sinon, Cypress offers a comprehensive and user-friendly environment for web application testing.
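
A minimal sketch of a Cypress spec; the URL, selectors, and page text are placeholders for a hypothetical login flow.

```typescript
// cypress/e2e/login.cy.ts -- hypothetical spec file for illustration.
describe("login flow", () => {
  it("logs in and shows the dashboard", () => {
    cy.visit("https://example.com/login");               // navigate like a real user
    cy.get("input[name=email]").type("user@example.com");
    cy.get("input[name=password]").type("s3cret");
    cy.contains("button", "Sign in").click();
    cy.url().should("include", "/dashboard");            // assert on the resulting page
    cy.contains("Welcome back").should("be.visible");
  });
});
```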

## [Database](https://ray.run/glossary#database) [πŸ”—](https://en.wikipedia.org/wiki/Database)

A database is an organized collection of structured information or data, typically stored electronically in a computer system. It is designed to store, retrieve, and manage data efficiently and securely. Databases allow users to access data in various ways, from simple queries to complex transactions. They can be classified based on their data model, such as relational, document-based, key-value, and graph databases. A relational database, one of the most common types, organizes data into tables with rows and columns. Databases are integral to numerous applications and systems, from websites to banking software, ensuring data integrity, availability, and consistency. They are managed using database management systems (DBMS), which provide tools and interfaces for interacting with the stored data.

## [Data Flow Testing](https://ray.run/glossary#data-flow-testing)

Centers on the variables and their values during computations and storage.

## [Decision Table Testing](https://ray.run/glossary#decision-table-testing)

Decision Table Testing is a black-box software testing technique used to determine the test scenarios for complex business logic. It involves representing conditions and their respective outcomes in a tabular form, simplifying the logic by highlighting every possible combination. Each row in the decision table represents a unique combination of conditions, leading to specific actions or outcomes, ensuring that all possible scenarios are considered. This method is especially useful when dealing with systems that have various input combinations and corresponding outputs, as it helps in systematically identifying and covering all possible test cases, reducing the risk of missed scenarios.
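
As an illustration, the decision table can be expressed directly as test data; the discount rule below is hypothetical, with each row representing one combination of conditions and its expected outcome.

```typescript
import test from "node:test";
import assert from "node:assert/strict";

// Hypothetical business rule: the best discount goes to members with a coupon.
function discount(isMember: boolean, hasCoupon: boolean): number {
  if (isMember && hasCoupon) return 0.2;
  if (isMember || hasCoupon) return 0.1;
  return 0;
}

// Each row of the decision table is one combination of conditions and its outcome.
const decisionTable = [
  { isMember: true,  hasCoupon: true,  expected: 0.2 },
  { isMember: true,  hasCoupon: false, expected: 0.1 },
  { isMember: false, hasCoupon: true,  expected: 0.1 },
  { isMember: false, hasCoupon: false, expected: 0 },
];

test("discount rules cover every condition combination", () => {
  for (const row of decisionTable) {
    assert.equal(discount(row.isMember, row.hasCoupon), row.expected);
  }
});
```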

## [Defect Management](https://ray.run/glossary#defect-management)

Defect Management, in software testing, refers to the systematic process of identifying, recording, tracking, and resolving defects or bugs detected in a software application. It encompasses the entire lifecycle of a defect, from its discovery to closure, ensuring that issues are appropriately addressed and resolved before the software's release.

## [Dependency Testing](https://ray.run/glossary#dependency-testing)

Dependency Testing in software testing refers to the process of examining the interactions and dependencies between different software modules or components to ensure they interact correctly. This type of testing focuses on identifying issues that might arise when one component relies on another to function properly.

## [Documentation Testing](https://ray.run/glossary#documentation-testing)

Documentation Testing in software testing refers to the process of evaluating and verifying the quality, completeness, and accuracy of documentation associated with software products. This can include user manuals, help guides, installation instructions, API documentation, and more. The primary goal is to ensure that the documentation provides clear, consistent, and correct information to its intended audience, be it end-users, administrators, or developers. Inaccuracies or ambiguities in documentation can lead to user frustration, incorrect usage of the software, or even system failures. By conducting documentation testing, organizations aim to provide a seamless user experience, reduce support costs, and enhance the overall usability and understanding of the software product.

## [Dynamic Testing](https://ray.run/glossary#dynamic-testing)

Dynamic Testing, in the context of software testing, refers to the process of evaluating a software application or system through its execution. Unlike static testing, where code is analyzed without being executed, dynamic testing involves running the software to observe its behavior and identify potential defects. This form of testing checks the software's actual functionality and performance under various conditions. Common types of dynamic testing include unit testing, integration testing, system testing, and acceptance testing. The primary objective is to ensure that the software behaves as expected and meets its requirements when it is in operation.

## [Edge Testing](https://ray.run/glossary#edge-testing)

Edge Testing, often confused with "boundary testing," is a testing technique used to identify problems that might occur at the extreme operating parameters, often referred to as the "edges" of the software's capability or limits. It focuses on testing the system's performance or behavior at or near its capacity limits or operational extremes. For instance, if a software claims to support up to 1,000 concurrent users, edge testing would involve testing the system with close to, if not exactly, 1,000 users to observe its behavior. The goal is to ensure the system operates reliably at its boundaries and to uncover potential issues that arise only under extreme conditions.

## [End-to-End Testing](https://ray.run/glossary#end-to-end-testing)

Tests the complete functionality of an application process from start to finish.

## [Endurance Testing](https://ray.run/glossary#endurance-testing)

Endurance Testing, in the context of software, is a type of performance testing where the system is subjected to a consistent workload or stress for an extended period. The primary goal of endurance testing is to identify how the system behaves under sustained use and to uncover potential issues like memory leaks, resource depletion, or performance degradation that might manifest only after prolonged operation. By simulating a real-world long-running environment, endurance testing helps ensure that the software remains stable, reliable, and efficient over time, free from slowdowns or crashes that could result from extended usage.

## [Entry Criteria](https://ray.run/glossary#entry-criteria)

In software testing, Entry Criteria refer to the set of predefined conditions or requirements that must be met before a particular test phase can begin. These conditions ensure that testing is conducted in a structured manner and that the process is initiated only when the prerequisites are in place. Entry Criteria can encompass various aspects, such as the availability of the test environment, the readiness of test tools and test data, the completion of previous phases, or the sign-off of certain documents. Establishing clear Entry Criteria helps in avoiding premature testing, ensuring that resources are utilized efficiently, and maintaining the quality and effectiveness of the testing process.

## [Equivalence Partitioning](https://ray.run/glossary#equivalence-partitioning)

Equivalence Partitioning is a software testing technique used to reduce the number of test cases by dividing the input data of a software unit into partitions of equivalent data. Instead of testing every possible input, equivalence partitioning proposes that test cases can be designed for representative values from each partition. The underlying principle is that if the software behaves correctly for one value in a partition, it will behave correctly for all other values in the same partition, and vice versa.
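
A minimal sketch: the quantity validator and its 1 to 100 range are hypothetical, and one representative value is drawn from each partition instead of testing every possible input.

```typescript
import test from "node:test";
import assert from "node:assert/strict";

// Hypothetical validator: quantities from 1 to 100 are accepted.
function isValidQuantity(qty: number): boolean {
  return Number.isInteger(qty) && qty >= 1 && qty <= 100;
}

// One representative value per equivalence partition.
const partitions = [
  { label: "below the valid range", value: -5,  expected: false },
  { label: "inside the valid range", value: 50, expected: true },
  { label: "above the valid range", value: 250, expected: false },
];

test("quantity validation by equivalence partition", () => {
  for (const p of partitions) {
    assert.equal(isValidQuantity(p.value), p.expected, p.label);
  }
});
```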

## [Error Guessing](https://ray.run/glossary#error-guessing) [πŸ”—](https://en.wikipedia.org/wiki/Error_guessing)

Error Guessing is a software testing technique where the tester, relying on their experience, intuition, and knowledge of the system, tries to predict where defects might occur. Instead of following a systematic testing approach or predefined test cases, testers make educated guesses to identify potential problem areas or scenarios where the software might fail. The technique is based on the tester's familiarity with common errors, past defects, or specific system behavior. Error guessing is often used as a supplementary testing method, complementing more structured techniques, and is particularly effective in identifying unique or unanticipated issues.

## [Expected Result](https://ray.run/glossary#expected-result)

The anticipated outcome when a specific test case runs.

## [Exploratory Testing](https://ray.run/glossary#exploratory-testing)

Exploratory testing is a dynamic process where test design and test execution happen simultaneously. It leverages the tester's experience and is especially useful under tight time constraints.

## [Extreme Programming](https://ray.run/glossary#extreme-programming) [πŸ”—](https://en.wikipedia.org/wiki/Extreme_programming)

Extreme Programming (XP) is an agile software development method. Unlike Scrum which targets project management, XP emphasizes software development best practices.

## [Failover Testing](https://ray.run/glossary#failover-testing)

Failover Testing is a specific type of testing that evaluates a system's ability to automatically transfer control to a backup system or component when a failure occurs. The primary objective of failover testing is to ensure that, in the event of system or component malfunction, the failover process happens seamlessly without data loss or significant downtime. This test helps in validating the system's high availability and fault tolerance capabilities, ensuring that mission-critical applications remain operational even under unplanned adverse conditions. Failover testing is crucial for systems that require high availability, such as financial transaction systems, healthcare applications, and data centers.

## [False Negative](https://ray.run/glossary#false-negative) [πŸ”—](https://en.wikipedia.org/wiki/False_positives_and_false_negatives)

In software testing, a False Negative refers to a situation where a test fails to identify a defect or issue that is actually present in the system. In other words, the test incorrectly indicates that the software is functioning correctly when, in reality, there's a fault or bug. False negatives can give a false sense of security, leading teams to believe the software is of higher quality than it actually is. This type of error is particularly concerning because it might allow critical defects to go unnoticed and reach the production environment, potentially resulting in undesired consequences for users or businesses.

## [False Positive](https://ray.run/glossary#false-positive) [πŸ”—](https://en.wikipedia.org/wiki/False_positives_and_false_negatives)

In software testing, a False Positive refers to a situation where a test incorrectly identifies a defect or issue in the software when, in reality, there isn't one. Essentially, it's a test indicating a problem where none exists. False positives can arise due to various reasons, such as incorrect test data, flawed test conditions, or misconfigurations in the testing environment. While they might seem harmless, false positives can be detrimental as they can lead to wasted effort, resources, and time for development teams, potentially diverting attention away from genuine issues. Thus, it's essential to validate and rectify such occurrences to maintain the efficiency and accuracy of the testing process.

## [FAT](https://ray.run/glossary#fat)

Factory Acceptance Testing (FAT) confirms that newly manufactured equipment operates as intended and fulfills the customer's requirements.

## [Fault Injection Testing](https://ray.run/glossary#fault-injection-testing) [πŸ”—](https://en.wikipedia.org/wiki/Fault_injection)

Introducing faults deliberately to test the system's robustness.

## [Flaky Test](https://ray.run/glossary#flaky-test)

A flaky test in software testing refers to a test that produces inconsistent results: it might pass on one run and fail on another without any changes to the code, configuration, or environment. The unpredictability of flaky tests can undermine the reliability of a testing suite, making it challenging to determine whether a failure is due to a genuine issue in the software or merely the test's inconsistency. Flaky tests can arise from a range of factors, including timing issues, external dependencies, and non-deterministic factors. Addressing and eliminating flakiness is crucial to maintain trust in a testing process and to ensure that genuine defects are promptly identified and addressed.

## [Front-end Testing](https://ray.run/glossary#front-end-testing)

Front-end testing focuses on the user interface (UI) and its interactions within an application.

## [Functional Integration](https://ray.run/glossary#functional-integration)

Functional Integration relates products and services to an ecosystem to attract and retain customers.

## [Functional Requirements](https://ray.run/glossary#functional-requirements)

Functional Requirements define the expected behavior of a software system or application, specifying what the system should do in terms of processes, functionalities, and features. These requirements outline the interactions between the system and its users, as well as any other external systems or interfaces. They serve as a basis for the design, development, and testing phases of the software lifecycle.

## [Functional Testing](https://ray.run/glossary#functional-testing)

Functional testing checks if a software application's functions align with its requirements. It's a type of black box testing, meaning it doesn't involve the application's source code.

## [Future Proof Testing](https://ray.run/glossary#future-proof-testing)

Future-proof testing ensures a software application can adapt to future technological changes without extensive modification.

## [Fuzz Testing](https://ray.run/glossary#fuzz-testing) [πŸ”—](https://en.wikipedia.org/wiki/Fuzzing)

Fuzz Testing is a dynamic software testing technique that involves providing a system with random, malformed, or unexpected input data to identify vulnerabilities and weaknesses. The goal of fuzz testing is to trigger errors, crashes, memory leaks, or other unforeseen behaviors in the software, which can then be analyzed to find potential security threats or software defects. It's especially effective for uncovering issues that might be exploited by malicious attacks, such as buffer overflows or data injection vulnerabilities. Fuzzing is commonly used in security testing and is considered a proactive measure to enhance software robustness and reliability.
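
A naive TypeScript sketch of the idea: feed a hypothetical parser many random strings and check that it never crashes and always honors its basic contract. Dedicated fuzzers and property-based testing tools are far more sophisticated than this loop.

```typescript
import assert from "node:assert/strict";

// Hypothetical parser under test -- any function that accepts untrusted input works here.
function parseCsvLine(line: string): string[] {
  return line.split(",").map((field) => field.trim());
}

// Generate a random string of printable and non-printable characters.
function randomString(maxLength: number): string {
  const length = Math.floor(Math.random() * maxLength);
  return Array.from({ length }, () =>
    String.fromCharCode(Math.floor(Math.random() * 0xffff))
  ).join("");
}

for (let i = 0; i < 10_000; i++) {
  const input = randomString(200);
  const result = parseCsvLine(input); // a thrown error or crash here is a finding
  assert.ok(Array.isArray(result));
}
```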

## [Gherkin](https://ray.run/glossary#gherkin)

Gherkin is a domain-specific language used primarily for behavior-driven development (BDD). It provides a structured and human-readable format to describe and document the desired behavior of software features. Gherkin's syntax uses plain language combined with specific keywordsβ€”such as "Given," "When," "Then," "And," and "But"β€”to define preconditions, actions, and expected outcomes. These Gherkin scenarios can then be utilized as both specifications for the system's behavior and the foundation for automated tests, making it a bridge between non-technical stakeholders and the technical team.
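
As a sketch, the block comment below shows a small Gherkin scenario, followed by matching step definitions written with the @cucumber/cucumber package; the feature, user name, and sign-in logic are invented for illustration.

```typescript
/*
  Feature file (login.feature), written in Gherkin:

    Feature: Login
      Scenario: Successful sign-in
        Given a registered user "alice"
        When she signs in with a valid password
        Then she sees her dashboard
*/

import { Given, When, Then } from "@cucumber/cucumber";
import assert from "node:assert/strict";

let currentUser = "";
let landingPage = "";

Given("a registered user {string}", (name: string) => {
  currentUser = name;
});

When("she signs in with a valid password", () => {
  landingPage = currentUser ? "dashboard" : "login"; // stand-in for real sign-in logic
});

Then("she sees her dashboard", () => {
  assert.equal(landingPage, "dashboard");
});
```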

## [Glass Box Testing](https://ray.run/glossary#glass-box-testing) [πŸ”—](https://en.wikipedia.org/wiki/White-box_testing)

Glass box testing inspects a program's structure and formulates test data based on its logic.

## [Gorilla Testing](https://ray.run/glossary#gorilla-testing)

Intense testing of a specific module or feature, often by a tester or developer.

## [Grey Box Testing](https://ray.run/glossary#grey-box-testing) [πŸ”—](https://en.wikipedia.org/wiki/Gray-box_testing)

Grey box testing involves testing an application with partial knowledge of its internal workings. It aims to identify issues stemming from the code structure or its application.

## [Happy Path](https://ray.run/glossary#happy-path) [πŸ”—](https://en.wikipedia.org/wiki/Happy_path)

The "happy path" refers to the default scenario in which a system or application operates without any errors, exceptions, or unexpected user behavior. It represents the most straightforward and trouble-free journey through a given system or process, resulting in a successful outcome. When testing software, the happy path ensures that the core functionalities work as expected under optimal conditions. However, while it's essential to verify that the happy path operates correctly, comprehensive testing also requires examining edge cases, exceptions, and potential error scenarios to ensure robustness and reliability.

## [Headless Testing](https://ray.run/glossary#headless-testing)

Headless Testing refers to the practice of running browser automation tests without the graphical user interface (GUI) being visible or rendered. In this approach, tests are conducted using a "headless" browserβ€”a browser without a user interface. Since these tests do not require the browser's GUI elements to load visually, they tend to run faster and are particularly useful in environments where display devices, windows, or browsers are unnecessary or unavailable, such as continuous integration pipelines or server environments. Common tools for headless testing include Chrome's headless mode, PhantomJS, and Puppeteer. The primary advantage of headless testing is its efficiency, enabling faster feedback and more frequent test runs.
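
A minimal sketch of a headless run using Puppeteer; the URL is a placeholder, and the script simply opens a page and reads its title without ever rendering a browser window.

```typescript
import puppeteer from "puppeteer";

async function checkTitle(): Promise<void> {
  // No window is rendered, which suits CI pipelines and servers without a display.
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto("https://example.com");

  const title = await page.title();
  console.log(`Page title: ${title}`);

  await browser.close();
}

checkTitle().catch((err) => {
  console.error(err);
  process.exit(1);
});
```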

## [Heuristic Testing](https://ray.run/glossary#heuristic-testing)

Relies on experience-based techniques to identify defects.

## [IEEE 829](https://ray.run/glossary#ieee-829)

IEEE 829 is a standard for Software Test Documentation, dictating the structure for documents throughout the testing life cycle.

## [Impact Analysis](https://ray.run/glossary#impact-analysis)

Impact Analysis, in the context of software testing, refers to the process of identifying and assessing the potential effects of a change in the software. When a code change or a new feature is introduced, it's crucial to understand how this alteration might influence existing functionalities or components. By conducting impact analysis, teams can ensure that modifications don't introduce new defects, make efficient use of testing resources by targeting the affected areas, and reduce the risk of unforeseen issues in the production environment.

## [Incident Management](https://ray.run/glossary#incident-management)

Incident Management, in the context of Quality Assurance (QA), refers to the systematic process of identifying, recording, analyzing, tracking, and resolving incidents or anomalies detected during software testing or post-deployment. An incident in QA might be a defect, a bug, a discrepancy in documentation, or any issue that deviates from the expected behavior or standards.

## [Incident Report](https://ray.run/glossary#incident-report)

An incident report chronicles observed anomalies, capturing details like summary, steps, priority, and status. It's crucial for tracking and informing relevant stakeholders.

## [Incremental Testing](https://ray.run/glossary#incremental-testing)

Incremental testing is an integration testing technique that tests program modules post-unit testing. Using stubs and drivers, it isolates and examines each module for defects.

## [Independent Verification and Validation](https://ray.run/glossary#independent-verification-and-validation)

Independent Verification and Validation (IV&V) refers to a specialized process where an external organization or team evaluates the correctness and quality of a software product, independent of the developers and the development process. The primary goal is to ensure that the system meets its specified requirements and functions as intended.

## [Input Validation Testing](https://ray.run/glossary#input-validation-testing)

Input Validation Testing is a software testing technique that focuses on verifying the correctness and appropriateness of the data entered into a system. The primary objective is to ensure that the system can gracefully handle invalid, unexpected, or malicious input. By doing so, the system not only maintains its integrity and functions correctly but also safeguards against potential vulnerabilities like SQL injections, cross-site scripting, and other forms of attacks that exploit poorly validated input. Through Input Validation Testing, testers aim to identify weaknesses in input validation mechanisms and ensure that only valid and safe data passes through to the application's processing stages.

## [Inspection](https://ray.run/glossary#inspection)

Inspection, sometimes referred to as a Fagan inspection, is a peer review process where trained individuals evaluate a work product looking for defects.

## [Integration Testing](https://ray.run/glossary#integration-testing)

Performed after unit testing, integration testing identifies defects when integrated components or units interact.

## [Interface Testing](https://ray.run/glossary#interface-testing)

Interface testing ensures that two software components communicate correctly. Interfaces, including APIs and Web services, connect these components, and their testing is termed Interface Testing.

## [Internationalization Testing](https://ray.run/glossary#internationalization-testing)

Internationalization Testing, often abbreviated as "i18n testing," is a software testing process that verifies the adaptability of an application for use in different languages, regions, and cultures. The primary goal of internationalization testing is to ensure that the software's architecture is designed in a way that it can seamlessly handle multiple languages, character sets, and cultural conventions without necessitating changes to its core codebase.

## [Interoperability Testing](https://ray.run/glossary#interoperability-testing)

Interoperability Testing is a type of software testing that evaluates the capability of different systems, applications, or components to exchange and utilize information effectively, accurately, and consistently. The primary goal is to ensure that diverse software products and services can work seamlessly together in a given environment, be it within the same organization or across different entities. Interoperability Testing identifies integration issues, incompatibilities, or other hindrances that might prevent systems from interacting as intended. Such testing is crucial in environments where multiple vendors, platforms, or standards coexist and need to cooperate without causing disruptions or data discrepancies.

## [ISTQB](https://ray.run/glossary#istqb)

The International Software Testing Qualifications Board, a non-profit that certifies software testers.

## [Iteration](https://ray.run/glossary#iteration)

Iterative testing involves periodically updating a product based on previous feedback and then testing the changes against set benchmarks.

## [Jasmine](https://ray.run/glossary#jasmine) [🌐](https://jasmine.github.io/) [πŸ”—](https://en.wikipedia.org/wiki/Jasmine_(software))

Jasmine is an open-source testing framework for JavaScript. It is designed to be behavior-driven, allowing developers to write tests in a way that describes the expected behavior of the software in clear, human-readable terms. Jasmine provides functions to structure your tests, set up preconditions, and define assertions.

## [Jest](https://ray.run/glossary#jest) [πŸ”—](https://jestjs.io/)

Jest is a JavaScript unit testing framework by Meta. It's primarily used for writing unit tests to assess individual code segments.
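
A minimal sketch of a Jest unit test against a hypothetical `sum` module; Jest provides `test` and `expect` as globals and discovers `*.test.ts` files automatically.

```typescript
// sum.ts -- hypothetical module under test
export function sum(a: number, b: number): number {
  return a + b;
}

// sum.test.ts -- one test per behavior being checked
import { sum } from "./sum";

test("adds two numbers", () => {
  expect(sum(1, 2)).toBe(3);
});

test("handles negative numbers", () => {
  expect(sum(-1, -2)).toBe(-3);
});
```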

## [Jira](https://ray.run/glossary#jira) [🌐](https://www.atlassian.com/software/jira) [πŸ”—](https://en.wikipedia.org/wiki/Jira_(software))

Jira is a popular software developed by Atlassian, primarily used for bug tracking, issue tracking, and project management. Originating as a tool for software development teams to track defects and tasks, Jira has since evolved to cater to various business functions with customizable workflows, real-time collaboration, and integration capabilities. It allows teams to create, prioritize, and assign work items, such as stories or bugs, and then track their progress through different stages. Jira supports Agile methodologies like Scrum and Kanban, offering features like boards, backlogs, and sprints. Its versatility and extensive plugin ecosystem make it suitable for a wide range of use cases beyond just software development.

## [JMeter](https://ray.run/glossary#jmeter) [🌐](https://jmeter.apache.org/) [πŸ”—](https://en.wikipedia.org/wiki/Apache_JMeter)

JMeter, officially known as Apache JMeter, is an open-source software application developed by the Apache Software Foundation. It is designed for load testing and performance measurement of web applications, but its capabilities extend beyond web protocols. JMeter allows users to simulate multiple users with concurrent threads, create a variety of requests to servers, and analyze the performance of applications under different load conditions.

Features of JMeter include support for various protocols (including HTTP, FTP, JDBC, and more) and a graphical interface for designing and visualizing test plans. Its extensible nature allows developers and testers to integrate additional plugins or write custom code to enhance its functionality. With JMeter, organizations can validate the scalability, responsiveness, and reliability of their software applications and infrastructure.

## [JUnit Testing](https://ray.run/glossary#junit-testing)

JUnit is a Java testing framework enabling developers to craft and run automated tests. Whenever new code is incorporated, tests must be rerun to confirm the code's integrity.

## [Keyword Driven Testing](https://ray.run/glossary#keyword-driven-testing)

Keyword driven testing is a functional testing approach where test case design is separated from its execution. Keywords represent user actions on test objects, making test cases clearer and more maintainable.

## [Lighthouse](https://ray.run/glossary#lighthouse) [πŸ”—](https://en.wikipedia.org/wiki/Google_Lighthouse)

Lighthouse is an open-source, automated tool developed by Google for improving the quality of web pages. It provides audits for performance, accessibility, progressive web apps, SEO, and other aspects of web page quality. By running Lighthouse against a web page, developers and testers can obtain a set of actionable recommendations and insights that help in optimizing the user experience and overall effectiveness of the website.

## [Load Testing](https://ray.run/glossary#load-testing) [πŸ”—](https://en.wikipedia.org/wiki/Load_testing)

Load Testing evaluates how a system, software, or application behaves under multiple concurrent users. It mimics real-life conditions to determine system performance.

## [Localization Testing](https://ray.run/glossary#localization-testing)

Localization testing ensures a software product resonates culturally with users in a specific region, guaranteeing its usability in that locale.

## [Maintainability](https://ray.run/glossary#maintainability)

Maintainability measures how easily a system can be updated or modified. This attribute is crucial as software undergoes changes throughout its lifecycle.

## [Maintenance Testing](https://ray.run/glossary#maintenance-testing)

Maintenance testing helps identify, diagnose, and verify problems in a system that is already in operation, ensuring the effectiveness of repairs and modifications.

## [Manual Testing](https://ray.run/glossary#manual-testing)

Manual testing is the process of manually checking software functionalities against expected outcomes.

## [Microservices Testing](https://ray.run/glossary#microservices-testing)

Microservices testing evaluates each individual microservice's functionality, ensuring they cohesively function as a unified application and are resilient to individual failures.

## [Mobile App Testing](https://ray.run/glossary#mobile-app-testing)

Mobile app testing involves verifying a mobile application's functionalities before its public release, ensuring both technical and business requirements are met.

## [Mobile Device Testing](https://ray.run/glossary#mobile-device-testing)

Mobile Device Testing assesses a device's features and qualities, ensuring it fulfills its intended purpose.

## [Mock Testing](https://ray.run/glossary#mock-testing)

Utilizes mock objects to mimic real objects in tests.
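
For example, a hypothetical payment gateway can be replaced by a mock so the test stays fast and deterministic; the sketch below uses Jest mock functions as one way to do this.

```typescript
// Hypothetical service that depends on a payment gateway.
type PaymentGateway = { charge: (amount: number) => Promise<boolean> };

async function checkout(gateway: PaymentGateway, amount: number): Promise<string> {
  const ok = await gateway.charge(amount);
  return ok ? "paid" : "declined";
}

test("checkout charges the gateway and reports success", async () => {
  // The real gateway is replaced by a mock object.
  const mockGateway = { charge: jest.fn().mockResolvedValue(true) };

  await expect(checkout(mockGateway, 42)).resolves.toBe("paid");
  expect(mockGateway.charge).toHaveBeenCalledWith(42);
});
```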

## [Monkey Testing](https://ray.run/glossary#monkey-testing)

Involves providing random inputs to a system to check if it crashes.

## [MTBF](https://ray.run/glossary#mtbf)

Mean Time Between Failures (MTBF) calculates the average duration between equipment failures, aiding in predicting future failures or replacement needs.

## [Mutation Testing](https://ray.run/glossary#mutation-testing)

Mutation testing evaluates the quality of software tests. It involves creating slight modifications in a program and checking if existing tests can detect these changes.
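
A conceptual sketch of the idea (mutation testing tools generate and run mutants automatically): a small change to a hypothetical function produces a mutant, and a good test suite "kills" it by failing against the mutated version.

```typescript
import test from "node:test";
import assert from "node:assert/strict";

// Original implementation.
function isAdult(age: number): boolean {
  return age >= 18;
}

// A mutation tool might produce this mutant by changing `>=` to `>`.
function isAdultMutant(age: number): boolean {
  return age > 18;
}

test("exactly 18 counts as an adult", () => {
  assert.equal(isAdult(18), true);          // passes against the original
  // assert.equal(isAdultMutant(18), true); // would fail -- the mutant is detected
});
```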

## [Negative Testing](https://ray.run/glossary#negative-testing)

Negative testing verifies an application's ability to handle incorrect input, comparing expected outcomes with actual results.

## [Node.js](https://ray.run/glossary#nodejs) [🌐](https://nodejs.org/) [πŸ”—](https://en.wikipedia.org/wiki/Node.js)

Node.js is an open-source, cross-platform JavaScript runtime environment that allows developers to execute JavaScript code server-side. Traditionally, JavaScript was primarily used for client-side scripting in web browsers. Node.js, however, enables JavaScript to be used for building scalable network applications outside the browser. Built on Chrome's V8 JavaScript engine, Node.js is designed for building fast and efficient web applications, especially I/O-bound applications.

## [Non-functional Testing](https://ray.run/glossary#non-functional-testing)

Non-functional testing evaluates software's non-functional attributes, such as usability and performance, ensuring the software's overall competence and effectiveness.

## [NUnit](https://ray.run/glossary#nunit) [🌐](https://nunit.org/) [πŸ”—](https://en.wikipedia.org/wiki/NUnit)

NUnit is an open-source unit testing framework for C# derived from JUnit. It facilitates writing and executing tests in .NET, with tools like NUnit-console.exe for batch test executions.

## [Operational Testing](https://ray.run/glossary#operational-testing)

Operational testing ensures a product or service meets its operational requirements, like security, performance, and maintainability. It's a subset of non-functional acceptance testing.

## [Orthogonal Array Testing](https://ray.run/glossary#orthogonal-array-testing)

A statistical approach to testing that maximizes coverage with minimal test cases.

## [OTT Testing](https://ray.run/glossary#ott-testing)

OTT testing assesses the quality of video, data, and voice services provided over the internet, ensuring customer experience, security, and connectivity across multiple components and infrastructure.

## [Page Object Model](https://ray.run/glossary#page-object-model)

The Page Object Model (POM) is a design pattern that consolidates web elements into an object repository, promoting code reusability and simplifying test maintenance.
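
A minimal sketch of the pattern using Playwright; the page URL, selectors, and login flow are invented. The point is that tests call named methods on the page object, so markup changes only require updating one class.

```typescript
import { test, expect, Page } from "@playwright/test";

// LoginPage collects the selectors and actions for one page in a single class.
class LoginPage {
  constructor(private readonly page: Page) {}

  async goto(): Promise<void> {
    await this.page.goto("https://example.com/login");
  }

  async login(email: string, password: string): Promise<void> {
    await this.page.fill("input[name=email]", email);
    await this.page.fill("input[name=password]", password);
    await this.page.click("button[type=submit]");
  }
}

// The test reads as a sequence of user intentions rather than raw locators.
test("user can sign in", async ({ page }) => {
  const loginPage = new LoginPage(page);
  await loginPage.goto();
  await loginPage.login("user@example.com", "s3cret");
  await expect(page).toHaveURL(/dashboard/);
});
```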

## [Pair Testing](https://ray.run/glossary#pair-testing)

Pair testing is a collaborative approach where two team members, typically a tester and a developer or analyst, work together on testing efforts.

## [Parameterized Testing](https://ray.run/glossary#parameterized-testing)

Executing the same test using varied data sets.
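
For instance, using Jest's `test.each` (other frameworks offer equivalents), the same assertion runs once per row of a data table; the `slugify` function below is a hypothetical example.

```typescript
// Hypothetical function under test.
function slugify(title: string): string {
  return title.toLowerCase().trim().replace(/\s+/g, "-");
}

test.each([
  ["Hello World", "hello-world"],
  ["  Spaces  around  ", "spaces-around"],
  ["already-a-slug", "already-a-slug"],
])("slugify(%p) returns %p", (input, expected) => {
  expect(slugify(input)).toBe(expected);
});
```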

## [Path Testing](https://ray.run/glossary#path-testing)

Assesses the distinct paths software can take during its execution.

## [Peer Testing](https://ray.run/glossary#peer-testing)

Peer testing involves team members evaluating each other's work, ensuring code consistency and pursuing shared goals.

## [Penetration Testing](https://ray.run/glossary#penetration-testing) [πŸ”—](https://en.wikipedia.org/wiki/Penetration_test)

Penetration Testing is a cybersecurity practice where trained professionals simulate cyberattacks on a system, network, or application to identify vulnerabilities that could be exploited by malicious actors. The primary objective of penetration testing is to discover security weaknesses from an attacker's perspective, thereby allowing organizations to better understand potential risks and take corrective actions before real-world malicious attacks occur. Penetration tests can be manual or automated and are often categorized by their scope and the knowledge level of the tester, such as black box (tester has limited knowledge about the system) or white box (tester has complete knowledge about the system).

## [Performance Indicator](https://ray.run/glossary#performance-indicator)

A performance indicator or KPI is a metric testers use to measure the efficacy and quality of their testing process.

## [Performance Testing](https://ray.run/glossary#performance-testing)

Performance testing gauges a product's capability and responsiveness under varying workloads, predicting how it would manage future demands.

## [Playwright](https://ray.run/glossary#playwright) [🌐](https://playwright.dev/)

Playwright is an open-source testing framework developed by Microsoft for end-to-end testing of web applications. It enables automated browser testing across multiple browsers, including Chrome, Firefox, and WebKit. With Playwright, testers and developers can write scripts that interact with web pages in a manner similar to real users, performing actions like clicking buttons, filling forms, and navigating between pages.
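
A minimal sketch of a Playwright test; the site, accessible names, and expected text are placeholders for a hypothetical search flow.

```typescript
import { test, expect } from "@playwright/test";

test("search leads to a results page", async ({ page }) => {
  await page.goto("https://example.com");                          // open the site
  await page.getByRole("textbox", { name: "Search" }).fill("playwright");
  await page.getByRole("button", { name: "Search" }).click();

  // Playwright assertions auto-wait for the condition, reducing flaky timing issues.
  await expect(page).toHaveURL(/results/);
  await expect(page.getByRole("heading", { level: 1 })).toContainText("Results");
});
```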

## [Postcondition](https://ray.run/glossary#postcondition)

A postcondition is a condition that must hold true after a segment of code runs, often verified through code predicates.

## [Postman](https://ray.run/glossary#postman) [🌐](https://www.postman.com/) [πŸ”—](https://en.wikipedia.org/wiki/Postman_(software))

Postman is a widely-used software tool that facilitates API (Application Programming Interface) development and testing. It offers a user-friendly interface that allows developers and testers to send requests to and receive responses from web services. With Postman, users can create, save, and organize HTTP requests, test APIs by sending various request types (GET, POST, PUT, DELETE, etc.), and inspect the responses. Additional features include the ability to automate tests, create mock servers, and document APIs. Postman also provides collaboration capabilities for teams, enabling them to share collections of requests, environments, and other data. Over time, Postman has evolved from a simple API testing tool into a comprehensive API development environment.

## [Priority](https://ray.run/glossary#priority)

Priority denotes the order or significance of an issue based on user needs, while severity indicates its system impact. Decisions on priority and severity may vary based on roles and processes.

## [QA Metrics](https://ray.run/glossary#qa-metrics)

QA metrics are tools that developers use to enhance their product quality by refining testing processes, helping in identifying or forecasting product flaws.

## [Quality Assurance](https://ray.run/glossary#quality-assurance) [πŸ”—](https://en.wikipedia.org/wiki/Quality_assurance)

Quality Assurance (QA) is a process ensuring the highest possible product or service quality. It emphasizes refining processes for consistent quality deliverables.

## [Quality Management](https://ray.run/glossary#quality-management)

Quality management ensures that an organization's products or services consistently meet a certain quality standard.

## [Regression Testing](https://ray.run/glossary#regression-testing) [πŸ”—](https://en.wikipedia.org/wiki/Regression_testing)

Regression testing checks if existing functionalities remain intact after new changes. It ensures that new additions don't disrupt existing software operations.

## [Release Testing](https://ray.run/glossary#release-testing)

Release testing evaluates a new software version to determine its readiness for release, examining its complete functionality.

## [Reliability Testing](https://ray.run/glossary#reliability-testing)

Reliability Testing assesses a software's capacity to function under specific conditions. It aims to identify issues related to the software's design and functionality.

## [Requirements Management Tool](https://ray.run/glossary#requirements-management-tool)

Requirements management tools oversee requirements, inform stakeholders about changes, and regulate new or adjusted requirements.

## [Responsive Design](https://ray.run/glossary#responsive-design)

Responsive design involves dynamically adjusting a website's appearance based on screen size and device orientation, ensuring compatibility between content and display.

## [Retesting](https://ray.run/glossary#retesting)

Retesting involves re-running previously failed tests against modified software to verify that identified defects have been resolved; checking that the changes haven't introduced new issues is the role of regression testing.

## [Reviewer](https://ray.run/glossary#reviewer)

Reviewers are experts who evaluate code to detect bugs, enhance quality, and guide developers. If code spans multiple domains, it should be assessed by several experts.

## [Risk-based Testing](https://ray.run/glossary#risk-based-testing)

Prioritizes testing based on potential risk of feature or function failure.

## [Robustness Testing](https://ray.run/glossary#robustness-testing) [πŸ”—](https://en.wikipedia.org/wiki/Robustness_testing)

Evaluates software's performance under extreme or unexpected inputs.

## [RUP](https://ray.run/glossary#rup) [πŸ”—](https://en.wikipedia.org/wiki/Rational_unified_process)

RUP (Rational Unified Process), developed by Rational (an IBM division), is a software development process organized into four phases: inception, elaboration, construction, and transition. Its disciplines include business modeling, analysis and design, implementation, testing, and deployment.

## [Sanity Testing](https://ray.run/glossary#sanity-testing)

Sanity testing, a subset of regression testing, quickly checks that code modifications function correctly. If issues arise, the build is rejected rather than passed on for deeper testing.

## [Scalability Testing](https://ray.run/glossary#scalability-testing)

Scalability testing determines whether a software application can handle growth in load, data volume, or users. It often encompasses performance and reliability assessments.

## [Screenshot Testing](https://ray.run/glossary#screenshot-testing)

Screenshot testing automates the assessment of a web page or application's visual elements by comparing current visuals with baseline images, identifying visual regressions and other discrepancies.

## [Scrum](https://ray.run/glossary#scrum) [🌐](https://www.scrum.org/) [πŸ”—](https://en.wikipedia.org/wiki/Scrum_(software_development))

Scrum is an iterative and incremental Agile framework that facilitates collaboration among a cross-functional team to develop and deliver high-quality products. In Scrum, work is broken down into cycles called "sprints," typically lasting two to four weeks, during which a predetermined set of features are developed and tested. Key roles in Scrum include the Product Owner (who defines product requirements and prioritizes tasks), the Scrum Master (who ensures the team follows Scrum practices and principles), and the Development Team (which includes testers, developers, and other necessary roles). Regular ceremonies, such as Daily Stand-ups, Sprint Planning, Sprint Review, and Sprint Retrospective, ensure consistent communication and reflection on progress and processes. In the context of software testing, Scrum emphasizes the integration of testing throughout the sprint, ensuring that features are potentially shippable by the sprint's end.

## [Security Testing](https://ray.run/glossary#security-testing)

Security Testing aims to reveal potential vulnerabilities in a software system which may lead to information loss, revenue reduction, or reputational damage.

## [Selenium](https://ray.run/glossary#selenium) [🌐](https://www.selenium.dev/) [πŸ”—](https://en.wikipedia.org/wiki/Selenium_(software))

Selenium is an open-source software suite of browser automation tools primarily used for automating web browsers in the context of web application testing. It provides a way for developers and testers to write scripts in various programming languages (such as Java, C#, Python, and Ruby) to simulate user interactions with web pages and web applications.
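
A small illustrative script in Python (assuming the selenium package and a local Chrome installation; the URL and assertion are placeholders):

```python
# Drive a browser with Selenium: open a page, locate an element, verify its text.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")                  # navigate to the page under test
    heading = driver.find_element(By.TAG_NAME, "h1")   # locate an element
    assert "Example Domain" in heading.text            # verify the expected content
finally:
    driver.quit()                                      # always release the browser
```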

## [Selenium IDE](https://ray.run/glossary#selenium-ide)

Selenium IDE is a browser extension that records and plays back user interactions with a web application, letting testers create and edit Selenium test cases (such as logging in, searching for items, and other UI interactions) without writing code.

## [Session-Based Testing](https://ray.run/glossary#session-based-testing) [πŸ”—](https://en.wikipedia.org/wiki/Session-based_testing)

A structured form of exploratory testing carried out in time-boxed sessions, each guided by a charter and followed by a debrief.

## [Setup](https://ray.run/glossary#setup)

Arranging the necessary conditions for test cases to run.
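
One common way to express setup is a pytest fixture, sketched below (pytest assumed; the in-memory database is a hypothetical precondition):

```python
# The fixture arranges the preconditions and hands the prepared object to each
# test that requests it, then tears it down afterwards.
import sqlite3
import pytest

@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")              # arrange: fresh database
    conn.execute("CREATE TABLE users (name TEXT)")
    yield conn                                      # hand control to the test
    conn.close()                                    # teardown after the test

def test_insert_user(db):
    db.execute("INSERT INTO users (name) VALUES ('Ada')")
    count = db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 1
```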

## [Severity](https://ray.run/glossary#severity)

Severity gauges a defect's impact on an application's system. Defects with major system repercussions are assigned higher severity levels, typically determined by the Quality Assurance Engineer.

## [Shift-left Testing](https://ray.run/glossary#shift-left-testing)

Shift-left Testing integrates testing early in the software development process. By testing frequently and early, critical issues are identified before the deployment phase, promoting better code quality.

## [Software Development Life Cycle](https://ray.run/glossary#software-development-life-cycle)

The SDLC (Software Development Life Cycle) is a framework for software creation that encompasses planning, implementation, testing, and product release, ensuring quality, timely delivery, and adaptability to evolving user needs.

## [Software Quality](https://ray.run/glossary#software-quality)

Software quality reflects a software's capability to meet user requirements as documented in the SRS (Software Requirement Specifications). High-quality software aligns with user specifications and is maintainable, timely, and cost-effective.

## [Software Quality Management](https://ray.run/glossary#software-quality-management)

Software quality management focuses on ensuring that a software application meets quality benchmarks set by users and adheres to both regulatory and development standards.

## [Software Risk Analysis](https://ray.run/glossary#software-risk-analysis)

Software risk analysis inspects code violations that could compromise software stability, security, or performance.

## [Software Testing](https://ray.run/glossary#software-testing)

Software testing confirms that a software product or application functions correctly, achieves its intended goals, and is free of defects.

## [SQL](https://ray.run/glossary#sql) [πŸ”—](https://en.wikipedia.org/wiki/SQL)

SQL (Structured Query Language) is a standardized programming language specifically designed for managing and manipulating relational databases. SQL is used to perform tasks such as querying data, updating data, inserting data, and deleting data from a database. It also involves creating and modifying schemas (database structures) and controlling access to data. SQL provides a consistent interface to relational database management systems (RDBMS) and is supported by most modern RDBMS platforms like MySQL, PostgreSQL, SQL Server, Oracle, and many others. Through SQL, users can define, retrieve, and manipulate data within the database efficiently and effectively.
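
A few representative SQL statements, run here through Python's built-in sqlite3 module purely for illustration (the table and column names are made up):

```python
# Create, insert, update, query, and delete rows with SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
conn.execute("INSERT INTO users (name, active) VALUES (?, ?)", ("Ada", 1))
conn.execute("UPDATE users SET active = 0 WHERE name = ?", ("Ada",))
rows = conn.execute("SELECT id, name FROM users WHERE active = 0").fetchall()
print(rows)   # [(1, 'Ada')]
conn.execute("DELETE FROM users WHERE id = ?", (1,))
conn.close()
```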

## [State Transition Testing](https://ray.run/glossary#state-transition-testing)

State transition testing is a black-box technique that models the system as a set of states and verifies its behavior as it moves between them in response to sequences of inputs, covering both valid (positive) and invalid (negative) transitions.
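
A toy example: the two-state lock below is hypothetical, with one test exercising a valid transition and another exercising a negative input that must leave the state unchanged.

```python
class Lock:
    def __init__(self):
        self.state = "locked"

    def enter_code(self, code):
        if self.state == "locked" and code == "1234":
            self.state = "unlocked"      # valid transition
        # any other input leaves the state unchanged (negative case)

def test_valid_code_transitions_to_unlocked():
    lock = Lock()
    lock.enter_code("1234")
    assert lock.state == "unlocked"

def test_invalid_code_keeps_lock_locked():
    lock = Lock()
    lock.enter_code("0000")
    assert lock.state == "locked"
```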

## [Static Testing](https://ray.run/glossary#static-testing)

Static Testing involves early-cycle assessment of software artifacts like requirements, design documents, and source code without execution. This technique identifies defects and elevates product quality, and can be manual or automated.

## [STLC](https://ray.run/glossary#stlc)

The STLC (Software Testing Life Cycle) outlines the sequential tasks and stages in testing software. By systematically covering tasks like planning, requirements analysis, test design, execution, and reporting, the STLC aids in risk identification and ensures the software meets its objectives.

## [Stress Testing](https://ray.run/glossary#stress-testing)

Stress testing (Intrusive Testing) gauges the stability and resilience of a system, infrastructure, or entity under extreme conditions.

## [Structural Testing](https://ray.run/glossary#structural-testing)

Structural testing evaluates the internal structure of the software's code. Also known as white-box or glass-box testing, it is performed primarily by developers to verify how the system is built rather than only what it does.

## [Swagger](https://ray.run/glossary#swagger) [🌐](https://swagger.io/)

Swagger, now often referred to as OpenAPI, is a set of tools and specifications for building, designing, and documenting RESTful APIs. It offers a standard, language-agnostic interface to RESTful APIs, allowing both humans and computers to understand the capabilities of a service without accessing its source code or further detailed documentation.

## [System Integration Testing](https://ray.run/glossary#system-integration-testing)

System integration testing evaluates the software application as a whole, checking that its software components and any associated hardware work together correctly.

## [System Testing](https://ray.run/glossary#system-testing)

System testing evaluates the fully integrated software in an environment that reflects how it will be used, checking it against functional or design criteria to identify shortcomings in the overall software functionality.

## [Test Approach](https://ray.run/glossary#test-approach)

A test approach outlines the strategy for how testing will be conducted. It specifies the tasks to achieve specific testing goals in a project.

## [Test Automation](https://ray.run/glossary#test-automation)

Test automation involves using tools to run tests and compare actual outcomes to expected results. These tools can streamline manual processes or integrate with continuous integration systems.

## [Test Case](https://ray.run/glossary#test-case) [πŸ”—](https://en.wikipedia.org/wiki/Test_case)

A test case is a detailed specification of test inputs, conditions, procedures, and expected outcomes. It ensures comprehensive program evaluation and identifies potential missed errors.
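
Expressed in code, a test case names its inputs, procedure, and expected outcome; the sketch below uses pytest conventions, and the discount function is a hypothetical unit under test:

```python
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # Inputs / preconditions
    price, percent = 200.00, 10
    # Procedure: exercise the behaviour under test
    total = apply_discount(price, percent)
    # Expected outcome
    assert total == 180.00
```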

## [Test Case Management](https://ray.run/glossary#test-case-management)

Test Case Management, in the realm of software testing, refers to the process of documenting, organizing, tracking, and maintaining test cases throughout the software development lifecycle. It involves creating a structured repository of test cases, associating them with specific requirements or user stories, tracking their execution status, and managing their versions and iterations.

## [Test Class](https://ray.run/glossary#test-class)

Test classes are code units written to verify the behavior of the classes they cover; in Salesforce development, for example, each test class validates its associated Apex class.

## [Test Comparison](https://ray.run/glossary#test-comparison)

Test comparison is the process of comparing actual test results against expected results, including data captured in previously executed test runs.

## [Test Coverage](https://ray.run/glossary#test-coverage)

Test coverage measures the portion of a program’s code tested. It identifies which sections of code are exercised during test cases, ensuring thorough evaluation.
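
One way to gather this measurement is coverage.py's programmatic API, sketched below (assuming the coverage package is installed; `my_module` is a placeholder for the code under test):

```python
# Measure which lines execute while the code under test runs.
import coverage

cov = coverage.Coverage()
cov.start()

import my_module            # placeholder: code exercised while measurement is active
my_module.do_something()

cov.stop()
cov.save()
cov.report()                # prints the percentage of lines executed per file
```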

## [Test Data](https://ray.run/glossary#test-data)

Test data is the input provided to systems or software for testing purposes. Varying this data ensures comprehensive application evaluation and error handling.
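
Varying test data is often automated with parameterized tests; the sketch below uses pytest's parametrize marker, and the validation function and sample values are hypothetical:

```python
import pytest

def is_valid_username(name):
    return name.isalnum() and 3 <= len(name) <= 20

@pytest.mark.parametrize("name, expected", [
    ("ada", True),          # typical value
    ("ab", False),          # too short
    ("a" * 21, False),      # too long
    ("ada!", False),        # invalid character
])
def test_username_validation(name, expected):
    assert is_valid_username(name) == expected
```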

## [Test Design Specification](https://ray.run/glossary#test-design-specification)

This is a detailed plan outlining the testing approach, features to test, and necessary requirements, cases, and procedures. It defines the testing success criteria.

## [Test Design Tool](https://ray.run/glossary#test-design-tool)

Test design tools aid in creating test cases or inputs. With an automated oracle, they can determine expected results, effectively generating test cases.

## [Test-Driven Development](https://ray.run/glossary#test-driven-development)

TDD (Test-Driven Development) is a development methodology that prioritizes writing tests before production code. The process involves writing a test, creating minimum code to pass it, and then refining the code.
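
A compressed red/green cycle, sketched with pytest (the `slugify` function is a toy example):

```python
# Step 1 (red): the test is written first and fails because slugify does not exist yet.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write the minimum code that makes the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): clean up the implementation while keeping the test green.
```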

## [Test Environment](https://ray.run/glossary#test-environment)

A test environment is a configured setting where tests are executed. It encompasses the necessary hardware, software, and network configurations tailored for the application under test.

## [Test Execution](https://ray.run/glossary#test-execution)

Test execution is the process of running software test cases to verify adherence to user requirements. It is pivotal in the software testing and development life cycles, starting after test planning.

## [Test Execution Automation](https://ray.run/glossary#test-execution-automation)

This involves using automation tools for test execution, either directly or via a management tool. The concluding test report offers a summarized account of the project’s testing.

## [Test Execution Schedule](https://ray.run/glossary#test-execution-schedule)

This schedule orchestrates sequential test steps, either at preset times or upon build completion triggers.

## [Test Execution Technique](https://ray.run/glossary#test-execution-technique)

These techniques enhance test execution through planning, strategies, and tactics, impacting how tests are conducted rather than the test run itself.

## [Test Execution Tool](https://ray.run/glossary#test-execution-tool)

Such tools run the software against specific test scenarios and compare actual results with expected outcomes. Also known as capture/playback or record/playback tools, they can record manual test steps for later automated replay.

## [Test Harness](https://ray.run/glossary#test-harness)

A test harness is a suite of auxiliary tools, including stubs and drivers, used during testing. It utilizes a test library to run tests and generate reports.

## [Test Infrastructure](https://ray.run/glossary#test-infrastructure)

Test infrastructure encompasses both software and hardware required for smooth software application operations. It integrates activities and methods to optimize test speed, enabling quicker releases.

## [Test Log](https://ray.run/glossary#test-log)

A test log is an essential document detailing a test run’s summary, capturing both successful and failed tests. It provides insights into test operations, issues’ origins, and failure reasons, facilitating post-run analysis.

## [Test Management](https://ray.run/glossary#test-management)

Test management supervises the testing processes, documentation, and other software aspects, ensuring thorough testing and high-quality software delivery.

## [Test Observability](https://ray.run/glossary#test-observability)

Test observability denotes the capability to monitor a system during testing, analyzing its performance to pinpoint and rectify issues. It aggregates data like logs, metrics, and traces for insights and improvements.

## [Test Oracles](https://ray.run/glossary#test-oracles)

Mechanisms used to determine whether a test has passed or failed, typically by supplying the expected result against which the actual outcome is compared.

## [Test Plan](https://ray.run/glossary#test-plan)

A document detailing the objectives and activities of testing. Prepared by the test lead, it communicates the testing approach, pass/fail criteria, stages, and other vital information to the project team and stakeholders. It also covers potential risks and contingency plans.

## [Test Policy](https://ray.run/glossary#test-policy)

A document set by senior management, outlining the principles and approaches the organization will adopt for testing its products.

## [Test Process](https://ray.run/glossary#test-process)

A systematic set of tasks and activities aimed at ensuring a software application adheres to its requirements and quality standards. This process includes test preparation, creation, execution, and reporting.

## [Test Process Improvement](https://ray.run/glossary#test-process-improvement)

An evaluation that compares an organization's testing activities to industry standards, offering an objective assessment of the organization's testing performance.

## [Test Pyramid](https://ray.run/glossary#test-pyramid)

A framework that assists developers in evaluating the mix of tests in an automated suite. It aims to expedite the detection of issues when changes are made to the codebase.

## [Test Report](https://ray.run/glossary#test-report)

A summary of testing objectives, activities, and results, designed to inform stakeholders about product quality and its readiness for release.

## [Test Runner](https://ray.run/glossary#test-runner)

A tool that automates the running of test cases and the collection of results, ensuring software functions as intended. Can be a GUI or command-line tool.

## [Test Scenario](https://ray.run/glossary#test-scenario)

Outlines a user action at a high level. It is broader than a detailed test case.

## [Test Script](https://ray.run/glossary#test-script)

Contains specific instructions for the system during a test.

## [Test Specification](https://ray.run/glossary#test-specification)

A detailed description of what to test and how: the objectives, inputs, procedures, and expected results for a set of tests. As tests evolve iteratively, the specification serves as a baseline for comparing new versions of a test with previous ones.

## [Test Strategy](https://ray.run/glossary#test-strategy)

A document detailing the methodology adopted for software testing. It provides clarity on the testing approach tailored to achieve organizational testing objectives.

## [Test Stub](https://ray.run/glossary#test-stub)

Simulates the behavior of a component that the code under test depends on but that is missing or not yet implemented.
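
For instance, a hand-written stub might stand in for an unfinished payment gateway so the calling code can be tested in isolation (all names below are hypothetical):

```python
class PaymentGatewayStub:
    def charge(self, amount):
        # Always report success; the real gateway does not exist yet.
        return {"status": "approved", "amount": amount}

class CheckoutService:
    def __init__(self, gateway):
        self.gateway = gateway

    def pay(self, amount):
        return self.gateway.charge(amount)["status"] == "approved"

def test_checkout_with_stubbed_gateway():
    service = CheckoutService(PaymentGatewayStub())
    assert service.pay(42) is True
```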

## [Test Suite](https://ray.run/glossary#test-suite)

A collection of tests examining application features. Automated test suites execute these tests to provide pass/fail results. Automated suites offer repeatability and reduce human error.
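
A sketch of grouping tests into a suite with the standard unittest module (the test classes here are trivial placeholders):

```python
import unittest

class LoginTests(unittest.TestCase):
    def test_password_is_required(self):
        self.assertTrue(len("secret") > 0)

class SearchTests(unittest.TestCase):
    def test_search_returns_results(self):
        self.assertIn("needle", ["needle", "hay"])

def build_suite():
    # Collect related test cases into a single runnable suite.
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(LoginTests))
    suite.addTests(loader.loadTestsFromTestCase(SearchTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```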

## [Test Tool](https://ray.run/glossary#test-tool)

Test tools assist in various test activities, from planning to analysis. They identify input fields and their valid value ranges, often in tandem with test management or CASE tools.

## [Timebox Testing](https://ray.run/glossary#timebox-testing)

Conducting tests within a predefined time frame.

## [Top-Down Integration](https://ray.run/glossary#top-down-integration)

A testing method starting with high-level modules, progressing to lower-level ones. Stubs are used to simulate lower module responses until they are integrated.

## [Traceability Matrix](https://ray.run/glossary#traceability-matrix)

A table-style document that tracks software requirements, supporting forward tracing from requirements to the code and tests that implement them, and backward tracing from code and tests back to requirements.

## [UI Testing](https://ray.run/glossary#ui-testing)

Evaluation of a web application's user interface to identify glitches and ensure it aligns with specified requirements.

## [Unit Test Framework](https://ray.run/glossary#unit-test-framework)

Tools designed for creating and executing unit tests, offering foundational structures for testing and reporting outcomes.

## [Unit Testing](https://ray.run/glossary#unit-testing)

The practice of testing individual software units or components to validate their functionality.

## [Usability Testing](https://ray.run/glossary#usability-testing) [πŸ”—](https://en.wikipedia.org/wiki/Usability_testing)

A qualitative research method providing insights into user interactions with software. It identifies usability issues and evaluates user-friendliness.

## [Use Case](https://ray.run/glossary#use-case)

A description detailing how a user interacts with a system. It forms a foundation for system development and tests.

## [Use Case Testing](https://ray.run/glossary#use-case-testing)

A testing approach examining all potential user interactions with software. It is especially useful for assessing error-handling and system robustness.

## [User Acceptance Testing](https://ray.run/glossary#user-acceptance-testing) [πŸ”—](https://en.wikipedia.org/wiki/Acceptance_testing)

A testing phase where the customer validates the software in its intended environment before release, ensuring alignment with their expectations.

## [Validation Testing](https://ray.run/glossary#validation-testing)

An evaluation of specific development stage requirements, ensuring the final product aligns with customer expectations.

## [Verification](https://ray.run/glossary#verification)

Activities focused on ensuring software correctly implements specific functionalities by comparing it against design specifications.

## [Visual Regression Testing](https://ray.run/glossary#visual-regression-testing)

Evaluation of the user interface after code changes. It checks for appearance and usability impacts, ensuring new changes don't disrupt existing functions.

## [V-Model](https://ray.run/glossary#v-model) [πŸ”—](https://en.wikipedia.org/wiki/V-model)

A development model in which each development phase is paired with a corresponding testing phase: drawn as a V, design activities descend on the left-hand side while verification and validation activities rise on the right.

## [Volume Testing](https://ray.run/glossary#volume-testing)

Challenges the system by subjecting it to large amounts of data.

## [Web Automation](https://ray.run/glossary#web-automation)

Programmatic operation of websites through test scripts and tools, replacing manual tasks to save time and reduce costs.

## [WebDriver](https://ray.run/glossary#webdriver) [🌐](https://www.selenium.dev/documentation/webdriver/)

An open-source framework for browser automation, enabling automated tests for web pages across various browsers and operating systems.

## [Web Performance Testing](https://ray.run/glossary#web-performance-testing)

Evaluation of a web application's speed, responsiveness, and stability under varying loads. It identifies and addresses potential bottlenecks.

## [Web Test Automation Tools](https://ray.run/glossary#web-test-automation-tools)

Tools that automate the testing of web applications to help assure product quality. They support continuous integration, agile development, and DevOps practices amid evolving demands.

## [Web Testing](https://ray.run/glossary#web-testing)

A crucial evaluation for web developers, assessing the functionality, usability, compatibility, security, and performance of web applications.

## [White Box Testing](https://ray.run/glossary#white-box-testing)

Evaluation of software's internal coding and architecture. It emphasizes security, input-output flow, design, and usability.

## [XPath Query](https://ray.run/glossary#xpath-query) [πŸ”—](https://en.wikipedia.org/wiki/XPath)

A query language for selecting and extracting data from XML documents, widely used in test automation to locate elements within HTML or XML content.
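
A short sketch of an XPath query in Python (assuming the lxml package; the XML document is a made-up example):

```python
# Select the title of the book whose id attribute is "1".
from lxml import etree

xml = """
<library>
  <book id="1"><title>Test Driven Development</title></book>
  <book id="2"><title>Clean Code</title></book>
</library>
"""
doc = etree.fromstring(xml)
titles = doc.xpath("//book[@id='1']/title/text()")   # the XPath expression
print(titles)   # ['Test Driven Development']
```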