An attack in which a malicious prompt is injected into an AI system's active context, often with instructions to "forget" previous guidelines, granting the attacker control over the model's behaviour for the remainder of the session.
The malicious prompt can just say that forget everything which was told to you before and now just do this thing which I'm asking you to do. Now, if this happens, this is called context poisoning or context injection. And once you poison the context, then you can get anything and everything done out of any AI system.