Similar Items:
- Silicon Showdown: Performance, Efficiency, and Ecosystem Barriers in Consumer-Grade LLM Inference
- XtraMAC: An Efficient MAC Architecture for Mixed-Precision LLM Inference on FPGA
- TokenStack: A Heterogeneous HBM-PIM Architecture and Runtime for Efficient LLM Inference
- VitaLLM: A Versatile and Tiny Accelerator for Mixed-Precision LLM Inference on Edge Devices
- Efficient, VRAM-Constrained LLM Inference on Clients
- NVLLM: A 3D NAND-Centric Architecture Enabling Edge on-Device LLM Inference
- VitaLLM: A Versatile, Ultra-Compact Ternary LLM Accelerator with Dependency-Aware Scheduling