Tether has launched the BitNet LoRA framework, enabling AI model training across smartphones, GPUs, and other consumer devices. The system cuts VRAM requirements by up to 77.8%, allowing users to fine-tune models of up to 13 billion parameters on mobile hardware.

Announced through Tether's QVAC Fabric platform, the framework supports cross-platform AI training, expanding edge AI capabilities. The QVAC Fabric update adds BitNet LoRA fine-tuning across a range of hardware and operating systems, including GPUs from AMD, Intel, and Apple, with Vulkan and Metal backends providing compatibility across devices. Tether CEO Paolo Ardoino emphasized the reduced costs and broader access to AI tools, highlighting the framework's ability to run billion-parameter models on everyday hardware such as smartphones and consumer GPUs.

The BitNet LoRA framework combines BitNet-style low-bit weight quantization with LoRA's low-rank adapters to lower hardware requirements, enabling faster GPU inference and reduced memory usage. Tether demonstrated the system's capability by fine-tuning 125-million-parameter models in minutes on smartphones such as the Samsung S25. This development allows larger models to run on edge devices, reducing reliance on centralized platforms and enabling local data processing.
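To see why this pairing saves so much memory, consider the two techniques separately: BitNet-style quantization stores each frozen base weight in roughly 1.58 bits (values restricted to -1, 0, or +1 plus a scale), while a LoRA adapter makes only a small low-rank matrix pair trainable instead of the full weight matrix. The sketch below is purely illustrative; the function names, threshold rule, and layer dimensions are assumptions for exposition, not Tether's actual API.

```python
# Illustrative sketch only -- not Tether's BitNet LoRA implementation.

def ternary_quantize(w, scale=1.0):
    """BitNet-style 1.58-bit quantization: map a weight to {-1, 0, +1}.

    The threshold rule here (half the scale) is a simplifying assumption.
    """
    if w > scale / 2:
        return 1
    if w < -scale / 2:
        return -1
    return 0


def lora_param_counts(d_in, d_out, rank):
    """Compare trainable parameters: full fine-tune vs. a rank-r LoRA adapter.

    LoRA freezes the d_out x d_in base matrix W and trains two small
    matrices A (rank x d_in) and B (d_out x rank); the effective weight
    is W + B @ A, so only rank * (d_in + d_out) parameters update.
    """
    full = d_in * d_out
    lora = rank * (d_in + d_out)
    return full, lora


# One hypothetical 4096 x 4096 layer with a rank-8 adapter:
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora, f"{100 * (1 - lora / full):.1f}% fewer trainable params")
# -> 16777216 65536 99.6% fewer trainable params
```

Because the frozen base weights can sit in low-bit form while gradients and optimizer state exist only for the tiny adapter matrices, the total training footprint shrinks enough to fit consumer GPUs and phones.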