Similar Items:
- Backdoor Mitigation in Object Detection via Adversarial Fine-Tuning
- Secret Stealing Attacks on Local LLM Fine-Tuning through Supply-Chain Model Code Backdoors
- Stateful Agent Backdoor
- Detecting Adversarial Data via Provable Adversarial Noise Amplification
- Safety Anchor: Defending Harmful Fine-tuning via Geometric Bottlenecks
- PACZero: PAC-Private Fine-Tuning of Language Models via Sign Quantization
- Cross-Modal Backdoors in Multimodal Large Language Models