Reliable Test Case Generation with AI
Join a practical 2-hour live online workshop that teaches you how to guide LLMs like Claude Code, Codex CLI, Gemini CLI, and others to write smarter, more effective Ruby tests using proven test design techniques.

Led by Lucian Ghinda
I’ve spent 15+ years in Ruby: building, teaching, and mentoring. I love exploring how testing can make code and teams better and faster.
Objectives
What are the objectives of this workshop?
Many developers are using AI tools like ChatGPT, Cursor, Gemini, or Claude to generate tests. However, these AI-generated tests often prove incorrect or superficial, or overlook important aspects of the code.
Without proper guidance, LLMs guess which tests to create based on patterns and biases rather than following established testing principles. This workshop will teach you how to move beyond luck: you will apply test design techniques to refine your prompts and get more accurate, reliable test cases from LLMs.
Intended audience
This workshop is designed for developers who are already using or considering using AI tools for test generation.
- The main audience is Ruby developers, Rails engineers, and QA engineers who want to leverage AI tools more effectively.
- If you are already using ChatGPT, Claude, Cursor, or other LLMs to generate tests but find the results inconsistent or superficial, this workshop will teach you how to guide these tools systematically.
- If you are curious about AI-assisted testing but skeptical about the quality, this workshop will show you how to move beyond generic AI outputs to create meaningful, reliable test cases.
- If you want to speed up your testing workflow without sacrificing quality, you'll learn practical prompting techniques that yield better results consistently.
- If you are a team lead or engineering manager looking to standardize AI-assisted testing practices across your team, this workshop provides a repeatable framework that anyone can apply.
- Whether you are curious, skeptical, or already integrating LLMs into your workflow, this workshop will provide insights on how to improve their performance for Ruby and Rails testing.
Learn
What do you learn?
When working with Ruby and Rails code, you will learn how to guide LLMs to:
- Prompt LLMs with structured guidance instead of generic requests like "write tests".
- Apply test design techniques to create comprehensive prompts that yield reliable results.
- Generate high-quality, systematic test cases that cover edge cases and business logic effectively.
- Compare outputs from different LLMs and understand how structured prompts improve consistency.
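As a taste of the difference, here is a minimal sketch of a generic request versus a structured one. This is not the workshop's exact template, and the Discount class and its `amount` parameter are hypothetical placeholders:

```ruby
# Hypothetical example: the class name and parameter are placeholders,
# not the workshop's real exercises.
GENERIC_PROMPT = "Write tests for the Discount class."

# A structured prompt names a test design technique and tells the LLM
# exactly which inputs to analyze and how to report its reasoning.
STRUCTURED_PROMPT = <<~PROMPT
  Write Minitest tests for the Discount class below.
  Apply Boundary Value Analysis to the `amount` parameter:
  1. Identify the valid range and its boundaries.
  2. Write one test just below, at, and just above each boundary.
  List the partitions you identified before writing the tests.
PROMPT
```

The structured version constrains the model to a technique instead of leaving it to guess, which is the core move the workshop practices.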
You will learn to incorporate these test design techniques into your AI prompts:

- Equivalence Partitioning: guide LLMs to identify groups of similar inputs and generate representative test cases for each partition
- Boundary Value Analysis: prompt AI tools to focus on edge cases and boundary conditions that often reveal bugs
- Decision Tables: structure prompts to ensure LLMs cover all combinations of boolean conditions in your Ruby code
- State Transition Testing: direct AI tools to test all possible state changes and transitions in your Rails applications
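To make the first two techniques concrete, here is a small hand-worked sketch of the analysis you will learn to ask an LLM to perform. The shipping-fee rules are invented for illustration, not taken from the workshop materials:

```ruby
# Assumed rules for this sketch: 0 < weight <= 1 kg costs 5,
# up to 10 kg costs 9, anything heavier costs 20.
def shipping_fee(kg)
  raise ArgumentError, "weight must be positive" if kg <= 0
  return 5 if kg <= 1
  return 9 if kg <= 10
  20
end

# Equivalence Partitioning: valid partitions (0,1], (1,10], (10,inf)
# plus the invalid partition kg <= 0.
# Boundary Value Analysis: the boundaries sit at 0, 1, and 10, so we
# test just below, at, and just above each of them.
cases = { 0.1 => 5, 1 => 5, 1.1 => 9, 10 => 9, 10.1 => 20 }
cases.each do |kg, expected|
  raise "#{kg} kg failed" unless shipping_fee(kg) == expected
end
```

In a prompt, you would hand the LLM the rules and ask it to list the partitions and boundaries before writing any tests, so you can check its analysis instead of eyeballing a wall of generated specs.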
You will see real examples from Ruby and Rails code, compare outputs from different LLMs, and leave with practical tips to enhance your AI-assisted testing, making it more systematic, focused, and repeatable.
Key Takeaways
What you'll walk away with
- Better prompts for LLMs like ChatGPT, Claude, or DeepSeek to generate high-quality, meaningful test cases for your Ruby and Rails code.
- Understanding why generic AI-generated tests fail and how test design techniques can address this issue systematically.
- Four essential test design techniques (Boundary Value Analysis, Equivalence Partitioning, Decision Tables, and State Transition Testing) and how to incorporate them into your AI prompts.
- Real Ruby and Rails examples showing these techniques in action, plus comparisons of how different LLMs respond to structured prompts.
- A repeatable process for prompting LLMs to generate effective, systematic, and reliable tests that you can apply immediately to your projects.
Register
Join the next live session
31 October 2025 - 16:00 UTC
2 hours
online
4 Going
Florent Guilleux, Christian Billen and 2 others
8 spots remaining
Workshop Details
31 October 2025
16:00 UTC
(17:00 CEST/09:00 PDT/12:00 EDT)
This live online workshop runs for 2 hours and walks through the exact prompting frameworks we use to guide LLMs to generate reliable test cases.
Seats are limited to 12 participants so we can keep exercises interactive. There is a minimum of 5 participants.
The price for this edition is USD 100 per person.
You'll get access to the recording, the prompt templates, and comparison worksheets we use during the session.
Planning to expense the workshop? Grab a reimbursement request template.
If the time does not work for you, fill out this short survey so I can plan future sessions that work better for you.
Testimonials
What experts are saying about the workshop
"At the moment, LLMs might go unchecked and suggest tests that are either redundant or that miss important corner cases. During the workshop, Lucian introduced useful techniques to help developers and AI reason about our tests and the problem space."

Instructor
Why learn with me?
I have been working with Ruby since 2006/2007 and am a certified ISTQB Trainer. Since 2013, I have led testing workshops and training sessions, helping developers bring structure to their testing processes without slowing down their work.
My approach is practical and balanced: I experiment with LLMs in real projects to understand where they add value, where they fall short, and how to guide them effectively—especially in Ruby and Rails environments where speed and clarity are crucial.
This workshop is grounded in my day-to-day experience of helping teams use LLMs to generate tests that are genuinely useful, moving beyond hype to practical, actionable techniques that improve your development workflow.
Newsletter
Subscribe
Subscribe to get access to free content and be notified when the next workshop is scheduled.
The workshops are usually fully booked, so it is best to register as early as possible.
FAQ
Frequently asked questions
-
Will this workshop be recorded?
Yes, the workshop will be recorded, and participants can download the recording. It will be available in the participants' area on this website; you will get access with the email you used to register.
-
What programming language do I need to know to attend the workshop?
You need to know Ruby at least at a Junior level. This specific training will use examples from open source Ruby on Rails repositories, so familiarity with Ruby syntax and basic Rails concepts is essential.
For example, you should be comfortable reading and understanding code like this, where the valid? method returns true or false based on the account age:
class Validator
  def initialize(account)
    @account = account
  end

  def valid?
    return false if @account.age < 18
    true
  end
end
If you can comfortably read and understand Ruby code at this level, you'll be able to follow all the workshop examples and exercises.
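For context, here is the kind of boundary-focused test the workshop teaches you to prompt for, applied to a validator like the one above. This sketch is self-contained; `Account` is a stand-in struct invented for the example:

```ruby
# Stand-in for a real account object: only the age matters here.
Account = Struct.new(:age)

class Validator
  def initialize(account)
    @account = account
  end

  def valid?
    return false if @account.age < 18
    true
  end
end

# Boundary Value Analysis on age: the boundary is 18, so we check
# just below, at, and just above it.
raise "17 should be invalid" if Validator.new(Account.new(17)).valid?
raise "18 should be valid" unless Validator.new(Account.new(18)).valid?
raise "19 should be valid" unless Validator.new(Account.new(19)).valid?
```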
-
Are there any prerequisites or software needed?
We will not execute code during the workshop; we will focus on the fundamentals of testing and how to design effective tests.
What is more important is to make sure you have Zoom installed and your microphone, audio, and video settings working.
-
Is prior testing experience required?
No prior testing experience is required. This workshop will teach you how to design test cases and how to cover requirements or code with efficient and effective tests.
-
Will this workshop teach me TDD?
Not directly. Test-Driven Development is a development process where you write tests before writing the actual code. This workshop focuses instead on test design: identifying test conditions, covering business logic or code with tests, and learning how to reduce the number of tests while maintaining high coverage.
-
How long is the workshop, and what is the schedule?
The workshop is 2 hours long with a 15-minute break approximately in the middle. Each session has a specific starting time, listed on the event page.