Similar Items:
- ARGUS: Defending LLM Agents Against Context-Aware Prompt Injection
- Tailored Prompts, Targeted Protection: Vulnerability-Specific LLM Analysis for Smart Contracts
- FlashRT: Towards Computationally and Memory Efficient Red-Teaming for Prompt Injection and Knowledge Corruption
- MAGE: Safeguarding LLM Agents against Long-Horizon Threats via Shadow Memory
- Safety Anchor: Defending Harmful Fine-tuning via Geometric Bottlenecks
- PragLocker: Protecting Agent Intellectual Property in Untrusted Deployments via Non-Portable Prompts
- LoopTrap: Termination Poisoning Attacks on LLM Agents