Similar Items:
- Why Expert Alignment Is Hard: Evidence from Subjective Evaluation
- EMO: Pretraining Mixture of Experts for Emergent Modularity
- PairAlign: A Framework for Sequence Tokenization via Self-Alignment with Applications to Audio Tokenization
- Training-Free Cultural Alignment of Large Language Models via Persona Disagreement
- Why Low-Resource NLP Needs More Than Cross-Lingual Transfer: Lessons Learned from Luxembourgish
- Why Geometric Continuity Emerges in Deep Neural Networks: Residual Connections and Rotational Symmetry Breaking
- Foundation Models to Unlock Real-World Evidence from Nationwide Medical Claims