-
Mohit Gupta
Tom Taylor-MacLean
- 19 Aug 2025
Another ECS Privilege Escalation Path
ECS has a range of known privilege escalation vectors. We discovered another which relies on using functionality designed for the ECS agent to self-register a compromised EC2 and override a task definition. A variant of this for ECS on Fargate is also discussed.
-
Donato Capitella
- 14 Aug 2025
Design Patterns to Secure LLM Agents In Action
A practical walkthrough of six security design patterns for building resilient LLM agents. We explore how structural controls, not just model-level defenses, can mitigate prompt injection, and introduce a hands-on code repository to see these patterns in action.
-
Thomas Byrne
- 6 Aug 2025
Breaking Down Azure DevOps: Techniques for Extracting Pipeline Credentials
Workload Identity Federation - is it all it makes out to be? Does it *really* prevent attackers from extracting credentials from pipeline identities that use modern authentication technique?
-
Tom Taylor-MacLean
- 30 Jul 2025
Elevating Attack Path Mapping to the Clouds
An introduction to Reversec's Cloud Attack Path Mapping (APM) service, looking at where it originated from, why it works and how it compares to other styles of testing. After looking at the current state of testing, consideration is given to how effective our future-looking service can be for both cloud-native and hybrid environments. Examples are given of previous success stories where interesting, and sometimes unusual, results have occurred!
-
David Alves
- 24 Jul 2025
Mapping Oracle’s Forgotten Pathways: Lateral Movement with ORACrawl
This article explores lateral movement in Oracle databases using chained database links - an area with little prior research or tooling. It introduces ORACrawl, a tool that automates discovery and query execution across multiple database link paths, bypassing Oracle’s constraints and enabling deeper security assessments.
-
Leonidas Tsaousis
- 15 Jul 2025
High-Profile Cloud Privesc
Revisiting PowerShell Profile Tricks in Entra Environments
-
TERE Team
- 9 Jul 2025
AtivarSpy - Swimming With Delphins
A piece of undocumented Delphi malware was analysed to understand its functionality. In doing so, some interesting techniques were identified, alongside poor coding practices and potential vulnerabilities in the backend malware server.
-
Donato Capitella
- 28 Jan 2025
Spikee: Testing LLM Applications for Prompt Injection
A step-by-step guide using the open-source tool spikee (v0.2) for prompt injection testing in LLM applications. Explores a webmail summarization case study, covering custom dataset creation, testing with Burp Suite and spikee's custom targets, interpreting results, and noting key updates from v0.1 to v0.2 like the Judge system and dynamic attacks.
-
Leonidas Tsaousis
- 6 Jan 2025
CloudWatch Dashboard (Over)Sharing
A security vulnerability was discovered in AWS CloudWatch dashboard sharing that allowed unauthorized viewers to access EC2 tags. The issue stemmed from a misconfiguration in Cognito Identity Pools' authentication flow, specifically an undefined setting for the Classic authentication flow. By exploiting this misconfiguration, attackers could retrieve sensitive account information through a multi-step authentication process.
-
Donato Capitella
- 6 Dec 2024
Multi-Chain Prompt Injection Attacks
Multi-chain prompt injection is a novel attack technique targeting complex LLM applications with multiple chained language models. The technique exploits interactions between LLM chains to bypass safeguards and propagate malicious content through entire systems. A sample workout planner application demonstrates how attackers can manipulate multi-chain LLM workflows to inject and propagate adversarial prompts across different processing stages.
-
Donato Capitella
Lily Bradshaw
- 21 Oct 2024
Fine-Tuning LLMs to Resist Indirect Prompt Injection Attacks
A fine-tuning approach was developed to enhance Llama3-8B's resistance to indirect prompt injection attacks. The method uses data delimiters in the system prompt to help the model ignore malicious instructions within user-provided content. The fine-tuned model achieved a 100% pass rate in resisting tested prompt injection attacks. The model and training scripts have been publicly released.
-
Donato Capitella
- 4 Jun 2024
When your AI Assistant has an evil twin
An indirect prompt injection attack against Google Gemini Advanced demonstrates how malicious emails can manipulate the AI assistant into displaying social engineering messages. The attack tricks users into revealing confidential information by exploiting Gemini's email summarization capabilities. The vulnerability highlights potential security risks in AI assistants with data access capabilities.