Key terms and concepts in software testing, explained clearly.
CI Pipeline: An automated sequence of steps — build, test, lint, and deploy — that runs every time code is pushed to a repository.
Continuous Integration (CI): A development practice where developers frequently merge code changes into a shared repository, with each merge automatically built and tested.
CTRF (Common Test Report Format): A modern, JSON-based specification for test results designed to be consistent, extensible, and easy to parse.
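A minimal sketch of what a CTRF-style payload can look like, built and serialized in Python. The tool name and test names here are hypothetical, and the field set is a small subset of the published schema; consult the CTRF specification for the full list of fields.

```python
import json

# Hypothetical results shaped like a CTRF report: a top-level
# "results" object containing "tool", "summary", and "tests".
report = {
    "results": {
        "tool": {"name": "example-runner"},
        "summary": {"tests": 2, "passed": 1, "failed": 1, "skipped": 0},
        "tests": [
            {"name": "test_login", "status": "passed", "duration": 12},
            {"name": "test_checkout", "status": "failed", "duration": 87},
        ],
    }
}

# Serialize to JSON so any CTRF-aware tool can consume the report.
print(json.dumps(report, indent=2))
```

Because the format is plain JSON, any language with a JSON library can produce or consume it without a format-specific parser.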
End-to-End (E2E) Test: A test that validates an entire user workflow from start to finish, exercising the full application stack including the UI, APIs, and databases.
Flaky Test: A test that produces inconsistent results — sometimes passing, sometimes failing — without any changes to the code under test.
A composite metric that summarizes the overall reliability and performance of a test suite into a single, easy-to-interpret value.
Integration Test: A test that verifies the interaction between two or more components, modules, or services to ensure they work correctly together.
JUnit XML: A widely adopted XML format for reporting test results, originally created for JUnit but now supported by virtually every test framework and CI system.
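A small illustration of the format, parsed with Python's standard library. The suite and test names are hypothetical, and the report uses the common `testsuite`/`testcase` structure with `failure` and `skipped` child elements; exact attributes vary slightly between tools.

```python
import xml.etree.ElementTree as ET

# A tiny JUnit-style report with one failure and one skipped test.
junit_xml = """\
<testsuite name="checkout" tests="3" failures="1" skipped="1" time="0.42">
  <testcase classname="checkout.CartTest" name="test_add_item" time="0.10"/>
  <testcase classname="checkout.CartTest" name="test_apply_coupon" time="0.25">
    <failure message="expected 90, got 100"/>
  </testcase>
  <testcase classname="checkout.CartTest" name="test_gift_wrap" time="0.07">
    <skipped/>
  </testcase>
</testsuite>
"""

suite = ET.fromstring(junit_xml)
# A testcase failed if it contains a <failure> child element.
failed = [tc.get("name") for tc in suite if tc.find("failure") is not None]
print(f"{suite.get('tests')} tests, failed: {failed}")
```

This is why CI systems can ingest results from almost any framework: they only need to understand this one XML shape.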
Parallel Execution: The practice of running multiple tests or test suites simultaneously across threads, processes, or machines to reduce total execution time.
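The time savings can be demonstrated with Python's `concurrent.futures`. Each stand-in "test" here sleeps for 0.1 seconds to simulate I/O-bound work; eight of them on four workers finish in roughly two batches rather than eight sequential runs.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def run_test(name):
    # Stand-in for a real test: sleep simulates I/O-bound work
    # such as network calls or database queries.
    time.sleep(0.1)
    return name, "passed"

tests = [f"test_{i}" for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_test, tests))
elapsed = time.perf_counter() - start

# Sequentially this would take ~0.8 s; with 4 workers, roughly 0.2 s.
print(f"{len(results)} tests in {elapsed:.2f}s")
```

Threads suit I/O-bound tests; CPU-bound suites parallelize better across processes or machines, which is what most test runners' worker options do.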
Regression Test: A test designed to verify that previously working functionality has not been broken by recent code changes.
Smoke Test: A quick, high-level test that verifies the most critical functionality of an application works before running the full test suite.
Test Case: A single, atomic verification that checks whether a specific behavior or condition in the software works as expected.
Test Coverage: A metric that measures the percentage of source code executed by automated tests, indicating how thoroughly the codebase is tested.
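The arithmetic behind the metric is simple. This sketch uses hypothetical line numbers to show the calculation that coverage tools (such as coverage.py) perform after recording which lines a test run executed.

```python
# Hypothetical data: the executable lines in a module, and the
# subset of them that the test run actually hit.
executable_lines = set(range(1, 51))   # 50 executable lines
executed_lines = set(range(1, 41))     # tests executed 40 of them

# Line coverage = executed / executable, expressed as a percentage.
coverage = 100 * len(executed_lines & executable_lines) / len(executable_lines)
print(f"line coverage: {coverage:.1f}%")
```

Note that high line coverage means the code ran under test, not that its behavior was meaningfully asserted; branch and condition coverage are stricter variants of the same idea.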
Test Duration: The time a test or test suite takes to execute, a critical metric for maintaining fast CI feedback loops.
Test Isolation: The practice of ensuring each test runs independently, with no shared state or side effects that could cause other tests to pass or fail incorrectly.
Test Pyramid: A testing strategy model that recommends having many fast unit tests, fewer integration tests, and even fewer slow end-to-end tests.
Test Report: A structured output from a test run that summarizes which tests passed, failed, or were skipped, along with execution details.
Test Retry: The practice of automatically re-running a failed test one or more times to determine whether the failure is persistent or intermittent.
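A retry policy can be sketched as a small wrapper: re-run the test on failure, report how many attempts it took, and re-raise only if every attempt fails. The intermittent test below is a contrived stand-in that fails twice and then passes, mimicking the behavior a retry is meant to distinguish from a persistent failure.

```python
def retry(test_fn, attempts=3):
    """Re-run test_fn until it passes or attempts are exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            test_fn()
            return attempt          # how many runs it took to pass
        except AssertionError:
            if attempt == attempts:
                raise               # persistent failure: surface it

calls = {"n": 0}

def intermittent_test():
    # Contrived flakiness: fails on the first two runs, passes on the third.
    calls["n"] += 1
    assert calls["n"] >= 3

print(f"passed on attempt {retry(intermittent_test)}")
```

A pass-on-retry is a useful signal to record rather than discard: it unblocks the pipeline while flagging the test as flaky and in need of a real fix.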
Test Suite: A collection of test cases grouped together to validate a specific feature, module, or entire application.
Unit Test: A fast, isolated test that verifies the correctness of a single function, method, or small piece of logic independently from the rest of the system.
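A minimal example of the idea, using a hypothetical pure function and plain assertions: each test exercises one behavior of one function, with no database, network, or other components involved.

```python
def apply_discount(price, percent):
    """Function under test: return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# Unit tests: each checks exactly one behavior, in isolation.
def test_basic_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_zero_discount():
    assert apply_discount(59.99, 0) == 59.99

def test_rejects_invalid_percent():
    try:
        apply_discount(10.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_basic_discount()
test_zero_discount()
test_rejects_invalid_percent()
print("all unit tests passed")
```

Because such tests touch no external systems, thousands of them can run in seconds, which is what makes them the wide base of the test pyramid.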