Improving LLM Security Against Prompt Injection: AppSec Guidance For Pentesters and Developers

Many developers are leveraging LLMs without taking advantage of system roles, making their applications vulnerable by design. Security researches may be missing severe issues with prompt design and implementation by not testing the LLM APIs and focusing on the web user interfaces of LLM providers. Our latest blog post provides prescriptive advice to LLM application developers to help them minimize the security risk of their applications. It also helps security researchers focus on the issues that are important to developers of LLM applications. This post is the first in a series of two, where in future posts we’ll cover the concept of attention in transformer models.

IncludeSec’s free training in Buenos Aries for our Argentine hacker friends.

One of the things that has always been important in IncludeSec’s progress as a company is finding the best talent for the task at hand. We decided early on that if the best Python hacker in the world was not in the US then we would go find that person and work with them! Or … Read more