Improving LLM Security Against Prompt Injection: AppSec Guidance For Pentesters and Developers – Part 2

In Part 2 of our series on improving LLM security against prompt injection, we take a deeper dive into transformers, attention, and the role these mechanisms play in prompt injection attacks. This post aims to provide more under-the-hood context on why prompt injection attacks are so effective and why they are so difficult to mitigate.