Spelunking in Comments and Documentation for Security Footguns

Join us as we explore seemingly safe but deceptively tricky ground in Elixir, Python, and the Go standard library. We cover officially documented, or at least previously discussed, code functionality that could unexpectedly introduce vulnerabilities. Well-documented behavior is not always what it appears to be!
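As a flavor of the "documented footgun" idea, here is one classic Python example, a hypothetical illustration and not necessarily one of the cases covered in the post: the `random` module's documentation explicitly warns that it should not be used for security purposes, yet it is an easy trap when generating tokens.

```python
import random
import secrets

def insecure_token(n: int = 16) -> str:
    # Mersenne Twister output is documented as NOT cryptographically
    # secure; its state can be recovered from observed outputs.
    return "".join(random.choice("0123456789abcdef") for _ in range(n))

def secure_token(n: int = 16) -> str:
    # The `secrets` module draws from the OS CSPRNG, as the Python
    # documentation itself recommends for security-sensitive tokens.
    return secrets.token_hex(n // 2)

print(insecure_token())
print(secure_token())
```

Both functions return a 16-character hex string, which is exactly why the unsafe one slips through review: the documented caveat lives in the docs, not in the output.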

Vulnerabilities in Open Source C2 Frameworks

Our latest post focuses on the command-and-control (C2) software frameworks used by professional offensive security red teams and criminal organizations alike. We dove into the source code of multiple high-profile, open-source C2s and discovered vulnerabilities in most of them. In this post, we provide a brief overview of C2 concepts, review the details of the vulnerabilities identified in each framework (with nifty reproduction GIFs included!), and conclude with some thoughts on the current state of the C2 landscape and what future developments might look like.

Coverage-Guided Fuzzing – Extending Instrumentation to Hunt Down Bugs Faster!

In our latest blog post, we introduce coverage-guided fuzzing with a brief description of the fundamentals and a demonstration of how modifying program instrumentation can make it easier to track down the source of vulnerabilities and identify interesting fuzzing paths.
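The core loop of coverage-guided fuzzing can be sketched in a few dozen lines. Everything below (the toy `target`, the `sys.settrace`-based tracer, and the byte-flipping mutator) is a hypothetical, minimal illustration of the concept, not the instrumentation approach the post actually uses:

```python
import random
import sys

def target(data: bytes) -> None:
    # Toy target: a "bug" hidden behind nested byte comparisons.
    if len(data) > 0 and data[0] == ord("F"):
        if len(data) > 1 and data[1] == ord("U"):
            if len(data) > 2 and data[2] == ord("Z"):
                raise RuntimeError("bug reached")

def run_with_coverage(data: bytes) -> set:
    # Record (function, line) pairs executed while running the target.
    covered = set()
    def tracer(frame, event, arg):
        if event == "line":
            covered.add((frame.f_code.co_name, frame.f_lineno))
        return tracer
    sys.settrace(tracer)
    try:
        target(data)
    except RuntimeError:
        pass  # crash found; a real fuzzer would save this input
    finally:
        sys.settrace(None)
    return covered

def mutate(data: bytes) -> bytes:
    # Flip one random byte; occasionally grow the input.
    buf = bytearray(data or b"\x00")
    buf[random.randrange(len(buf))] = random.randrange(256)
    if random.random() < 0.3:
        buf.append(random.randrange(256))
    return bytes(buf)

def fuzz(iterations: int = 20000) -> list:
    corpus = [b"seed"]
    seen = run_with_coverage(corpus[0])
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        cov = run_with_coverage(candidate)
        if not cov <= seen:  # new coverage: keep this input in the corpus
            seen |= cov
            corpus.append(candidate)
    return corpus
```

The key feedback signal is the `not cov <= seen` check: inputs that reach new lines are kept and mutated further, which is what lets the fuzzer incrementally solve the nested conditions instead of guessing all three bytes at once.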

Discovering Deserialization Gadget Chains in Rubyland

Finding deserialization functions accepting user input can be exciting, but what’s your plan if well-known gadget chains aren’t an option for exploitation? In this post, we explore the process of building a custom gadget chain to exploit deserialization vulnerabilities in Ruby.
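The post targets Ruby's deserialization machinery, but the "gadget" concept is language-agnostic. The sketch below is a hypothetical illustration using Python's `pickle` (the class name and command are invented for the example): deserializing attacker-controlled bytes can invoke methods on classes the attacker chooses.

```python
import pickle

class Gadget:
    # __reduce__ tells pickle how to reconstruct this object; whoever
    # controls the serialized bytes controls what callable runs here.
    def __init__(self, cmd: str):
        self.cmd = cmd

    def __reduce__(self):
        import os
        return (os.system, (self.cmd,))

payload = pickle.dumps(Gadget("echo pwned"))
# pickle.loads(payload) would execute `echo pwned`; never deserialize
# untrusted input with pickle, Marshal, or similar formats.
```

A gadget *chain* extends this idea: when no single class gives direct code execution, the attacker links together classes already loaded in the application whose deserialization-time behaviors compose into something dangerous, which is the process the post walks through for Ruby.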

Improving LLM Security Against Prompt Injection: AppSec Guidance For Pentesters and Developers – Part 2

In Part 2 of our series on improving LLM security against prompt injection, we take a deeper dive into transformers, attention, and the role these concepts play in prompt injection attacks. This post aims to provide more under-the-hood context about why prompt injection attacks are effective and why they are so difficult to mitigate.