-
Leo Tsaousis
- 6 Jan 2025
CloudWatch Dashboard (Over)Sharing
A security vulnerability was discovered in AWS CloudWatch dashboard sharing that allowed unauthorized viewers to access EC2 tags. The issue stemmed from a misconfiguration in Cognito Identity Pools' authentication flow, specifically an undefined setting for the Classic authentication flow. By exploiting this misconfiguration, attackers could retrieve sensitive account information through a multi-step authentication process.
-
Donato Capitella
- 6 Dec 2024
Multi-Chain Prompt Injection Attacks
Multi-chain prompt injection is a novel attack technique targeting complex LLM applications with multiple chained language models. The technique exploits interactions between LLM chains to bypass safeguards and propagate malicious content through entire systems. A sample workout planner application demonstrates how attackers can manipulate multi-chain LLM workflows to inject and propagate adversarial prompts across different processing stages.
-
Donato Capitella
- 21 Oct 2024
Fine-Tuning LLMs to Resist Indirect Prompt Injection Attacks
A fine-tuning approach was developed to enhance Llama3-8B's resistance to indirect prompt injection attacks. The method uses data delimiters in the system prompt to help the model ignore malicious instructions within user-provided content. The fine-tuned model achieved a 100% pass rate in resisting tested prompt injection attacks. The model and training scripts have been publicly released.
-
Donato Capitella
- 4 Jun 2024
When your AI Assistant has an evil twin
An indirect prompt injection attack against Google Gemini Advanced demonstrates how malicious emails can manipulate the AI assistant into displaying social engineering messages. The attack tricks users into revealing confidential information by exploiting Gemini's email summarization capabilities. The vulnerability highlights potential security risks in AI assistants with data access capabilities.
-
Tom Taylor-Maclean
- 22 May 2024
Generative AI - An Attacker's View
Generative AI is increasingly being used by threat actors for cyber attacks. Attackers can leverage AI for reconnaissance, gathering personal information quickly and creating targeted phishing emails. The technology enables sophisticated social engineering through deepfakes, voice cloning, and malicious code generation, with potential for more advanced attacks in the near future.
- 12 Apr 2024
Exploiting the AWS Client VPN on macOS for Local Privilege Escalation (CVE-2024-30165)
A local privilege escalation vulnerability was discovered in AWS Client VPN 3.9.0 for macOS. The flaw stemmed from an XPC service lacking proper client verification, allowing an attacker to uninstall the application and execute malicious scripts with root privileges. The vulnerability enabled unauthorized root-level actions through the XPC service's insufficient validation of message origins.
- 10 Apr 2024
Abusing search permissions on Docker directories for privilege escalation
A privilege escalation vulnerability was discovered in Docker environments where the /var/lib/docker directory has search permissions for other users. Low-privileged attackers can access container filesystems by exploiting these permissions. By modifying container startup scripts and leveraging host reboot capabilities, attackers can potentially gain root access on the host system.
-
Benjamin Hull
- 8 Apr 2024
Domain-specific prompt injection detection
A domain-specific machine learning approach was developed to detect prompt injection attacks in job application contexts using a fine-tuned DistilBERT classifier. The model was trained on a custom dataset of job applications and prompt injection examples, achieving approximately 80% accuracy in identifying potential injection attempts. The research highlights the challenges of detecting prompt injection in large language models and emphasizes that such detection methods are just one part of a comprehensive security strategy.
- 29 Feb 2024
Binary Exploitation for SPECIAL Occasions: Privilege Escalation in z/OS
This article explores a privilege escalation technique in z/OS mainframe systems by manipulating the Accessor Environment Element (ACEE). The technique involves creating an APF-authorized assembly program that modifies user flags in memory to gain SPECIAL privileges. The exploit demonstrates how low-level memory structures and system internals can be leveraged to escalate system access.
- 29 Feb 2024
The Hidden Depths of Mainframe Application Testing: More Than (Green) Screen-Deep
Mainframe application security testing requires looking beyond surface-level "green screen" interfaces. The article explores three key vulnerability areas in mainframe environments: application breakouts that allow unauthorized transaction access, surrogate chaining that can bypass environment segregation controls, and downstream misconfigurations in database and system components. Comprehensive security assessments must take a holistic approach to mainframe application testing.
-
Donato Capitella
- 21 Feb 2024
Should you let ChatGPT control your browser?
This article explores the security risks of granting Large Language Models (LLMs) control over web browsers. Two attack scenarios demonstrate how prompt injection vulnerabilities can be exploited to hijack browser agents and perform malicious actions. The article highlights critical security challenges in LLM-driven browser automation and proposes potential defense strategies.
-
Alex Pettifer
Miłosz Gaczkowski
- 6 Feb 2024
eLinkSmart - Unlocking Bluetooth LE padlocks with polite requests
A critical security analysis of eLinkSmart Bluetooth padlocks revealed multiple severe vulnerabilities. The locks have hardcoded encryption keys, an insecure web API with SQL injection flaws, and weak authentication controls. These vulnerabilities allow attackers to unlock any lock within Bluetooth range and access sensitive user information.