Talk Description
How can we move faster from requirements to test cases using AI—without losing visibility or control?
In this hands-on session, we’ll build a PRD-to-Test-Case Generator using RAG (Retrieval-Augmented Generation) and AI agents. The goal is to show how QA workflows can be augmented with AI while still maintaining traceability and confidence in outputs.
This workshop is designed to be practical and interactive, giving you a real feel for “vibe coding” — where you collaborate with AI to build working solutions quickly.
What You’ll Learn
- How to use RAG pipelines in QA workflows
- How to turn requirement docs (PRDs) into structured test cases
- How to design agent-based systems for test generation
- How to maintain observability with LangSmith
- How accumulated context improves AI output quality over time
- Practical patterns you can apply immediately in your QE workflows
Workshop Flow
We’ll build and walk through a working system step by step:
- RAG Agent ingests reference docs and requirements
- Test Generator Agent retrieves context and generates test cases
- Outputs are traced in LangSmith and stored
- Generated test cases are added back to the system
- Process repeats with improving context
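The loop above can be sketched in a few lines of plain Python. This is a hedged illustration only: ChromaDB, GPT-4o, and LangSmith are stubbed with simple functions and an in-memory list, and all names here (`ingest`, `retrieve`, `generate_test_cases`) are hypothetical stand-ins, not the workshop's actual code.

```python
# Minimal sketch of the PRD-to-test-case feedback loop.
# The real system uses ChromaDB for storage/retrieval and GPT-4o for
# generation; both are stubbed here so the shape of the loop is visible.

knowledge_base: list[str] = []  # stand-in for the ChromaDB vector store

def ingest(doc: str) -> None:
    """RAG Agent step: add a reference doc or PRD to the store."""
    knowledge_base.append(doc)

def retrieve(query: str, k: int = 2) -> list[str]:
    """RAG Agent step: naive keyword match (vector search in the real system)."""
    hits = [d for d in knowledge_base
            if any(w in d.lower() for w in query.lower().split())]
    return hits[:k]

def generate_test_cases(context: list[str]) -> list[str]:
    """Test Generator Agent step: stub for the GPT-4o call."""
    return [f"Verify behavior described in: {c}" for c in context]

# One iteration of the loop: ingest -> retrieve -> generate -> feed back
ingest("PRD: users can reset passwords via an emailed link")
cases = generate_test_cases(retrieve("password reset"))
for case in cases:
    ingest(case)  # generated cases flow back in, enriching future context
```

The key design point is the last step: because generated test cases are re-ingested, every later retrieval sees richer context, which is why the process improves over iterations.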
Agents & Orchestration
- RAG Agent: Handles document ingestion, storage, and retrieval using ChromaDB
- Test Generator Agent: Uses retrieved context to generate test cases with GPT-4o
- LangGraph: The orchestration engine that manages the agentic loop — deciding when to call tools, running them in parallel, and knowing when the job is done
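To make the orchestration role concrete, here is a hand-rolled sketch of the decide-act-repeat loop that LangGraph manages for us. This is not the LangGraph API: `orchestrate` and the `tools` dict are hypothetical stand-ins that only illustrate the shape of the agentic loop (pick a tool, run it, check whether the job is done).

```python
# Illustrative stand-in for a LangGraph-style agentic loop:
# inspect state, decide which tool to call, stop when the goal is met.

def orchestrate(state: dict, tools: dict, max_steps: int = 5) -> dict:
    """Run tools until the state contains test cases (termination condition)."""
    for _ in range(max_steps):
        if state.get("test_cases"):      # job done: stop looping
            break
        if not state.get("context"):     # no context yet -> call retrieval
            state["context"] = tools["retrieve"](state["query"])
        else:                            # context present -> call generation
            state["test_cases"] = tools["generate"](state["context"])
    return state

# Toy tools standing in for the RAG Agent and Test Generator Agent
tools = {
    "retrieve": lambda q: [f"doc matching '{q}'"],
    "generate": lambda ctx: [f"Test case from {c}" for c in ctx],
}
result = orchestrate({"query": "login flow"}, tools)
```

LangGraph replaces this hand-written loop with a declared graph of nodes and edges, which is what buys the parallel tool execution and clean termination handling mentioned above.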
🚀 15+ years breaking software before it breaks users.
I'm a Senior SDET who's evolved from mobile test automation expert to AI-powered quality architect — having shipped quality at DocuSign, Abbott, Lyft, Cisco, and Pearson.