Hugging Face has officially launched Kernels Hub, a hub-hosted service for pre-compiled GPU kernels, as announced by CEO Clem Delangue. The service aims to simplify the installation of GPU kernels, the low-level compute routines that determine how efficiently models run on accelerators. Traditionally, compiling kernels such as FlashAttention from source demanded significant time and compute, and builds often failed because of mismatches between CUDA, compiler, and framework versions. Kernels Hub addresses these problems by distributing pre-compiled kernels for a range of GPU and system environments, letting developers load them with a single line of code.

The service supports multiple hardware acceleration platforms, including NVIDIA CUDA, AMD ROCm, Apple Metal, and Intel XPU, and is integrated into Hugging Face's inference framework TGI and the Transformers library. First launched in testing last June, Kernels Hub has now been promoted to a first-class repository type on the Hugging Face Hub, alongside Models, Datasets, and Spaces. Currently, 61 pre-compiled kernels are available, covering core use cases such as attention mechanisms and quantization.