Gradient has launched Echo-2, a distributed reinforcement learning framework designed to make AI post-training research dramatically cheaper. By decoupling Learners from Actors, Echo-2 cuts the post-training cost of a 30B-parameter model from $4,500 to $425, delivering more than 10x greater research throughput. The framework separates compute from storage to enable asynchronous training and offloads sampling workloads onto unstable GPU instances, preserving model accuracy through techniques such as bounded staleness and instance-fault-tolerant scheduling.
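Gradient has not published Echo-2's internals, but the bounded-staleness idea behind asynchronous Learner/Actor training can be sketched in a few lines. The class and parameter names below (`BoundedStalenessBuffer`, `max_staleness`, and so on) are illustrative assumptions, not Echo-2's actual API: actors push trajectories tagged with the policy version that generated them, and the learner discards anything more than a fixed number of versions stale before training.

```python
from collections import deque

class BoundedStalenessBuffer:
    """Illustrative sketch (not Echo-2's API): the learner trains only on
    trajectories produced by a policy at most `max_staleness` versions old."""

    def __init__(self, max_staleness: int):
        self.max_staleness = max_staleness
        self.learner_version = 0
        self.buffer = deque()

    def actor_push(self, policy_version: int, trajectory) -> None:
        # Actors sample asynchronously (possibly on preemptible instances)
        # and tag each trajectory with the policy version that produced it.
        self.buffer.append((policy_version, trajectory))

    def learner_pull(self) -> list:
        # Drop anything staler than the bound before a training step.
        fresh = [(v, t) for v, t in self.buffer
                 if self.learner_version - v <= self.max_staleness]
        self.buffer = deque(fresh)
        return [t for _, t in fresh]

    def step(self) -> None:
        # Each gradient update advances the learner's policy version.
        self.learner_version += 1
```

Because stale data is filtered rather than waited on, actors never block the learner, which is what allows sampling to run on cheap, interruptible hardware without corrupting the update stream.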
In conjunction with Echo-2, Gradient is set to release its RLaaS platform, Logits, which aims to shift AI research from capital-intensive to efficiency-driven innovation. Logits is now accepting waitlist sign-ups globally for students and researchers.
Gradient Unveils Echo-2 Framework, Boosting AI Research Efficiency by 10x
Disclaimer: The content provided on Phemex News is for informational purposes only. We do not guarantee the quality, accuracy, or completeness of the information sourced from third-party articles. The content on this page does not constitute financial or investment advice. We strongly encourage you to conduct your own research and consult with a qualified financial advisor before making any investment decisions.
