
Tests as the line of defense for AI generated code

Written by Lucian Ghinda

Being good at testing is a skill developers should sharpen even more when using LLMs to generate code. Especially if you:

  1. Use the LLM in agentic mode.
  2. Use LLMs to solve hard problems or implement complicated business logic.

Your test cases are the best line of defense to ensure the generated code works as intended. Good tests should be fast, isolated, and intention-revealing. They should quickly identify faults in the code without dependencies that can skew results and clearly communicate what they are testing. Here’s a brief checklist to determine if your tests are effective in defending against faulty LLM outputs:

  1. Is the test fast enough to run frequently?

  2. Does it function independently without impacting or being affected by other tests?

  3. Does it clearly state the intention of what it is verifying?

  4. Does it cover critical business requirements or edge cases?

  5. Is it easy to understand and maintain over the long term?
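The checklist above can be illustrated with a small test. This is a hedged sketch: `Discount` is a hypothetical class standing in for LLM-generated code, and the example uses Minitest, though the same ideas apply to RSpec or any other framework. Each test runs in milliseconds, shares no state with other tests, and its name states the behavior it verifies, including the out-of-range edge case.

```ruby
require "minitest/autorun"

# Hypothetical business logic standing in for LLM-generated code under test.
class Discount
  def self.apply(total, percent)
    raise ArgumentError, "percent out of range" unless (0..100).cover?(percent)
    (total * (100 - percent) / 100.0).round(2)
  end
end

# Fast: no I/O or setup. Isolated: no shared state between tests.
# Intention-revealing: each test name states the expected behavior.
class DiscountTest < Minitest::Test
  def test_applies_percentage_discount_to_total
    assert_in_delta 90.0, Discount.apply(100.0, 10)
  end

  def test_rejects_discount_above_one_hundred_percent
    assert_raises(ArgumentError) { Discount.apply(100.0, 150) }
  end
end
```

If an LLM regenerates `Discount.apply` and breaks the range check, the second test fails immediately and points at exactly which expectation was violated.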

Always review the code generated by an LLM. While writing good test cases is essential, it does not replace the need for a thorough code review. Targeted tests can effectively guide your LLMs toward better solutions with a lower chance of dangerous bugs.

In the end, you still have to take responsibility for and own the solution created with LLMs.

Next Workshop

Reliable Test Case Generation With AI

31 October 2025 - 16:00 UTC

3 hours, online
