Similar Items:
- FedPLT: Scalable, Resource-Efficient, and Heterogeneity-Aware Federated Learning via Partial Layer Training
- FedQueue: Queue-Aware Federated Learning for Cross-Facility HPC Training
- FATE: Future-State-Aware Scheduling for Heterogeneous LLM Workflows
- A Scalable Recipe on SuperMUC-NG Phase 2: Efficient Large-Scale Training of Language Models
- From Coordinate Matching to Structural Alignment: Rethinking Prototype Alignment in Heterogeneous Federated Learning
- Cross-Layer Energy Analysis of Multimodal Training on Grace Hopper Superchips
- HexiSeq: Accommodating Long Context Training of LLMs over Heterogeneous Hardware