Similar Items:
- KEET: Explaining Performance of GPU Kernels Using LLM Agents
- PipeMax: Enhancing Offline LLM Inference on Commodity GPU Servers
- SAGA: Workflow-Atomic Scheduling for AI Agent Inference on GPU Clusters
- Microbenchmark-Driven Analytical Performance Modeling Across Modern GPU Architectures
- VDCores: Resource Decoupled Programming and Execution for Asynchronous GPU
- Metal-Sci: A Scientific Compute Benchmark for Evolutionary LLM Kernel Search on Apple Silicon
- MERBIT: A GPU-Based SpMV Method for Iterative Workloads