Prompt injection is a security attack that happens when someone intentionally manipulates the input to a Generative AI system like a chatbot or code generator to make it behave in ways the designer didn’t intend.
It’s done by crafting inputs to Gen AI systems in order to confuse, hijack, or redirect the AI’s response by messing with its underlying structure.
For software testers, it's a way to test for input attacks on LLM-based systems. Just like a SQL injection or XSS, but here the payload is language and words designed to interfere with the model or system prompts.
It’s done by crafting inputs to Gen AI systems in order to confuse, hijack, or redirect the AI’s response by messing with its underlying structure.
For software testers, it's a way to test for input attacks on LLM-based systems. Just like a SQL injection or XSS, but here the payload is language and words designed to interfere with the model or system prompts.