Similar Items:
- Detecting Hallucinations in Large Language Models via Internal Attention Divergence Signals
- The First Token Knows: Single-Decode Confidence for Hallucination Detection
- Logical Consistency as a Bridge: Improving LLM Hallucination Detection via Label Constraint Modeling between Responses and Self-Judgments
- GazeVLM: Active Vision via Internal Attention Control for Multimodal Reasoning
- A Multilingual Hallucination Benchmark: MultiWikiQHalluA
- Text Corpora as Concept Fields: Black-Box Hallucination and Novelty Measurement
- Training-Free Cultural Alignment of Large Language Models via Persona Disagreement