Gradient has launched Echo-2, a distributed reinforcement learning framework aimed at making AI post-training research dramatically cheaper. By decoupling Learners from Actors, Echo-2 cuts the post-training cost of a 30B-parameter model from $4,500 to $425, delivering over 10x greater research throughput. The framework uses compute-storage separation to train asynchronously and offloads sampling workloads onto cheap, interruption-prone GPU instances, preserving model accuracy through bounded staleness and instance-fault-tolerant scheduling.

Alongside Echo-2, Gradient is set to release Logits, its reinforcement-learning-as-a-service (RLaaS) platform, which aims to shift AI research from capital-intensive to efficiency-driven innovation. Logits is now accepting waitlist sign-ups from students and researchers worldwide.
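The bounded-staleness mechanism mentioned above can be sketched roughly as follows. Echo-2's actual implementation is not public, so every name here (`accept`, `STALENESS_BOUND`, the rollout dict layout) is an illustrative assumption, not the Echo-2 API: actors tag each rollout with the policy version it was sampled under, and the learner discards samples whose version lags too far behind its current one.

```python
# Hypothetical sketch of bounded-staleness filtering in a decoupled
# Actor/Learner pipeline. All names are illustrative assumptions,
# not Echo-2's real API.

STALENESS_BOUND = 2  # max policy-version lag a sample may have


def accept(sample_version: int, learner_version: int,
           bound: int = STALENESS_BOUND) -> bool:
    """Keep a rollout only if its policy version is within `bound`
    steps of the learner's current version."""
    return learner_version - sample_version <= bound


# Actors running on interruption-prone instances tag each rollout
# with the policy version it was sampled under; the learner filters
# the buffer before taking a gradient step.
learner_version = 5
rollouts = [{"obs": i, "version": i} for i in range(6)]
fresh = [r for r in rollouts if accept(r["version"], learner_version)]
# With a bound of 2, only rollouts from versions 3, 4, 5 survive.
```

The bound trades throughput for accuracy: a larger bound lets slow or restarted actors contribute more samples, while a smaller bound keeps updates closer to on-policy data.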