The Art of Knowing Which TDD Tests to Delete
Written by Lucian Ghinda
Not all tests written during TDD belong in your codebase. This may be familiar wisdom, but it bears repeating.
A commit is an artifact
I see a commit as an artifact of my work: it can deliver value, or it can be examined later to understand what the code does and how to fix issues. From this perspective, the tests included in a commit are essential for understanding how the code works and for ensuring it performs as intended, so they should be carefully selected.
TDD is a practice, not a test design technique
TDD is a software development practice, not a testing or test design technique. Tests written during TDD serve different purposes depending on when they are written.
At the end of this practice, you will have tests that drove your development and a piece of business logic that satisfies some functional or non-functional requirements.
During your TDD process, you might have written a range of test types at various levels:
- You may have written some functional unit testing
- You may have written some non-functional integration testing
- You may have written some functional system or acceptance testing
Keep the tests that satisfy these criteria
When preparing to commit, you have to remove the tests that were driving your process and keep the ones that satisfy at a minimum the following criteria:
- They verify that the implementation satisfies the requirements. Ask: “Does this test ensure my code accomplishes what it’s supposed to do based on the specifications?”
- They document code behaviour to help maintainability (including debugging, support, and future changes). Consider: “Will this test help a developer understand the code or diagnose issues in the future?”
- They check a specific shape, attribute, or form of the code or architecture that is important for the component you are developing (such as testing that an object inheriting from a parent must define a specific method). Ask: “Is this structural test crucial for ensuring the component functions correctly in its intended context?”
For example, a test that verifies a method produces correct results for given inputs would meet these criteria, as it ensures the functionality aligns with the requirements. If you have tests that guided your code design but offer no further benefit, delete them.
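To make the criteria concrete, here is a minimal Python sketch built around a hypothetical `discounted_total` function (the function, its rule, and the names are illustrative, not from the article). The test verifies the stated requirement rather than any implementation detail, so it would be worth keeping in the commit:

```python
# Hypothetical requirement: "orders of 100 or more get a 10% discount".
def discounted_total(amount):
    """Apply a 10% discount to orders of 100 or more."""
    if amount >= 100:
        return round(amount * 0.9, 2)
    return amount

# Worth keeping: it verifies the documented requirement and will
# catch a regression in the business rule, regardless of how
# discounted_total is implemented internally.
def test_orders_of_100_or_more_get_ten_percent_off():
    assert discounted_total(100) == 90.0
    assert discounted_total(99) == 99
```

If the discount rule ever regresses, this test fails for a reason a reader can trace straight back to the specification.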
Example of tests to remove
Here are some categories of tests that could be created during a TDD session and that you might consider removing. This is not an exhaustive list, and depending on your context, you might want to keep some, for example, to pin specific implementation details for performance or security reasons.
There are instances when keeping such tests is justified, such as when dealing with legacy code where comprehensive documentation is lacking. Compliance may also require specific tests to meet regulatory standards or to demonstrate that certain conditions have been met. Additionally, in scenarios where performance is critical, keeping some of these tests can help ensure that the system performs optimally under various conditions.
Scaffolding tests used purely for design exploration:
- Existence/scaffolding tests (from the category of make it work, like testing that an object exists)
- Tests written to explore different API designs before settling on the final interface
- Experimental tests for approaches you ultimately rejected
- “Spike” tests used to understand how a third-party library works
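As a hypothetical Python illustration of the first category (the `OrderValidator` class is invented for the example), an existence test like the one below proves nothing once behavioural tests exist, since every behavioural test already instantiates the object:

```python
class OrderValidator:
    """Toy validator: an order is valid if it is non-empty."""
    def validate(self, order):
        return bool(order)

# Scaffolding test from the "make it work" phase: it only proves the
# class can be instantiated, which every other test proves implicitly.
# A candidate for deletion before committing.
def test_order_validator_exists():
    assert OrderValidator() is not None
```

Such a test was useful for a few minutes while the class did not yet exist; afterwards it only adds noise to the suite.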
Tests that verify implementation details rather than behaviour:
- Tests that check private method implementations
- Tests that verify the exact sequence of internal method calls
- Tests that assert on specific data structure choices (e.g., “must use a HashMap”) when the requirement is just “fast lookups”
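A hypothetical Python sketch of the difference (the `PriceIndex` class is invented to mirror the "fast lookups" example): the first test pins the internal data structure, while the second asserts only the observable behaviour the requirement asks for:

```python
class PriceIndex:
    """Looks up prices by SKU; the requirement is fast lookups,
    not any particular data structure."""
    def __init__(self):
        self._prices = {}  # a dict today, but that is an implementation detail

    def add(self, sku, price):
        self._prices[sku] = price

    def price_of(self, sku):
        return self._prices.get(sku)

# Brittle: breaks if we swap the dict for another structure with the
# same behaviour. A candidate for deletion.
def test_uses_a_dict_internally():
    assert isinstance(PriceIndex()._prices, dict)

# Worth keeping: verifies the observable behaviour the requirement names.
def test_returns_the_price_stored_for_a_sku():
    index = PriceIndex()
    index.add("ABC-1", 9.99)
    assert index.price_of("ABC-1") == 9.99
```

Replacing the dict with, say, a sorted structure would break only the first test, which is exactly the signal that it tests the wrong thing.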
Over-specified tests that constrain refactoring:
- Tests that verify exact error message wording when the requirement is just “must fail gracefully”
- Tests that assert on specific class structures when the requirement is about behaviour
- Tests that mock every dependency when testing at a higher level would be more valuable
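A hypothetical Python sketch of the error-message case (the `withdraw` function and its message are invented): the first test pins exact wording and breaks on any rephrasing, while the second asserts only the requirement, that overdrawing fails:

```python
def withdraw(balance, amount):
    """Hypothetical requirement: withdrawing more than the balance
    must fail gracefully."""
    if amount > balance:
        raise ValueError(f"Insufficient funds: balance is {balance}")
    return balance - amount

# Over-specified: pins the exact wording, so rewording the message
# breaks the test even though the behaviour is unchanged.
def test_exact_error_message():
    try:
        withdraw(10, 20)
    except ValueError as error:
        assert str(error) == "Insufficient funds: balance is 10"

# Worth keeping: asserts only what the requirement asks for.
def test_overdraw_fails():
    try:
        withdraw(10, 20)
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

The second test survives copy edits, localisation, and refactoring; the first fails on all three without telling you anything broke.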
Duplicate tests at the wrong abstraction level:
- Low-level unit tests that duplicate what acceptance tests already verify
- Tests for trivial getters/setters with no business logic
- Tests for framework behaviour rather than your code’s behaviour
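A small hypothetical Python example of the trivial-getter case (the `Invoice` class is invented): the test below re-tests the language's attribute assignment rather than any logic of ours, and any higher-level test of invoices exercises it anyway:

```python
class Invoice:
    def __init__(self, number):
        self.number = number

# Trivial: there is no business logic here to protect. If a
# higher-level test creates invoices at all, this is redundant.
def test_number_getter():
    assert Invoice("INV-1").number == "INV-1"
```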
The key question to ask:
If this test breaks, does it tell me something valuable broke, or just that I changed how I implemented something?