An LLM (large language model) is an AI system that is, at its core, a massive, high-speed pattern-recognition engine trained on a mountain of text. It’s not "thinking" in the way you or I do; it produces what it predicts the answer would look like, based on everything it has ever read plus the instructions and context you give it. For us, it’s like having a pair programmer who has read every technical manual and Stack Overflow thread in existence, but sometimes forgets to check whether the advice is actually useful, practical, or just 'out there'.
Developers use LLMs primarily as a productivity multiplier. It’s brilliant at the "boilerplate" stuff that usually bores us to tears. For example, you can ask an LLM to "Write a Python function to sort a list of dictionaries by a specific key," and it’ll spit out a working version in seconds. But it can also "hallucinate" (make things up). It might suggest a library that doesn't exist, or recommend a deprecated method with known security vulnerabilities. You still need to be the adult in the room and review the code. Rahul wrote a great piece on this subject, "Human in the loop vs AI in the loop."
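To make that concrete, here is the sort of function that prompt might produce, sketched by hand rather than taken from any model's output. The `sort_records` name and the missing-key handling are choices made here for illustration; whether a real LLM answer handles missing keys gracefully is exactly the kind of thing you should check in review.

```python
from operator import itemgetter

def sort_records(records, key, reverse=False):
    """Sort a list of dictionaries by the given key.

    Records missing the key are placed at the end rather than
    raising a KeyError mid-sort.
    """
    present = [r for r in records if key in r]
    missing = [r for r in records if key not in r]
    return sorted(present, key=itemgetter(key), reverse=reverse) + missing

people = [
    {"name": "Grace", "age": 45},
    {"name": "Ada", "age": 36},
    {"name": "Alan"},  # no "age" key at all
]
print(sort_records(people, "age"))
```

A naive one-liner like `sorted(records, key=lambda r: r[key])` looks identical in a quick skim but crashes on the third record; that gap between "looks right" and "is right" is where the human reviewer earns their keep.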
For a Quality Engineer, or tester of any kind, an LLM can be a powerful tool for generating test ideas and data, as long as you don't let it drive the bus. For example, you could feed a set of requirements into an LLM and ask, "What are ten edge cases for this login feature?" It might suggest things you hadn't considered, like handling emojis in usernames or SQL injection attempts. But if you use it to generate your automated tests, it might create "brittle" code that looks right but fails the moment your UI changes.
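One practical middle ground is to let the LLM suggest the test *data* while you keep ownership of the test *logic*. The sketch below assumes a hypothetical `validate_username` rule (3 to 32 ASCII word characters); both the function and the edge-case list are illustrative stand-ins, not output from any real model or product.

```python
import re

# Edge-case usernames of the kind an LLM might suggest for a login feature.
EDGE_CASE_USERNAMES = [
    "",                               # empty input
    " " * 64,                         # whitespace only
    "a" * 256,                        # excessively long
    "user😀name",                     # emoji in the middle
    "Robert'); DROP TABLE users;--",  # classic SQL-injection attempt
    "ユーザー",                        # non-Latin script
]

def validate_username(name: str) -> bool:
    """Stand-in rule: 3-32 ASCII word characters.

    Replace this with your application's real validation logic.
    """
    return bool(re.fullmatch(r"\w{3,32}", name, flags=re.ASCII))

# Treat the suggestions as review input: under this rule, every one
# of them should be rejected.
for name in EDGE_CASE_USERNAMES:
    assert not validate_username(name), f"unexpectedly accepted: {name!r}"
```

Note what stays human here: the model brainstorms inputs, but you decide what the correct behaviour is for each one, which is the part brittle auto-generated tests tend to get wrong.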
The biggest risk with LLMs, as with many things, is a loss of context. The model doesn't know your specific business logic, your security constraints, or your "unwritten" team rules, so be careful how you use it.
Use it to bounce ideas off, draft documentation, or create code snippets. It’s an assistant, not a replacement for the critical thinking and scepticism that a human brings to the party. Just because the LLM gave you an answer doesn't mean it is right, or that you can stop being a thought worker.