Never trust an LLM: Prompt injections are here to stay
Prompt injections are not like other application attacks. If an LLM is involved in processing your app inputs, the right combination of words could be all it takes to reveal sensitive data or perform a malicious operation. In his ebook, Invicti’s Bogdan Calin shows examples of known prompt injections and looks at possible mitigations.