The Illusion of Power Capping in LLM Decode: A Phase-Aware Energy Characterisation Across Attention Architectures