Context poisoning

Context poisoning image
An attack in which a malicious prompt is injected into an AI system's active context, often with instructions to "forget" previous guidelines, granting the attacker control over the model's behaviour for the remainder of the session.


The malicious prompt can just say that forget everything which was told to you before and now just do this thing which I'm asking you to do. Now, if this happens, this is called context poisoning or context injection. And once you poison the context, then you can get anything and everything done out of any AI system. 
Explore MoT
AI-driven testing in practice: from requirements to reliable automation image
See where AI genuinely helps, where it doesn’t, and how testers can stay firmly in control
MoT Software Testing Essentials Certificate image
Boost your career in software testing with the MoT Software Testing Essentials Certificate. Learn essential skills, from basic testing techniques to advanced risk analysis, crafted by industry experts.
This Week in Quality image
Debrief the week in Quality via a community radio show hosted by Simon Tomes and members of the community
Subscribe to our newsletter