Gonka Network Surpasses 12,000 H100 Equivalent GPUs, Expands AI Infrastructure
Gonka Network has reached a significant milestone: its total computing power now exceeds 12,000 H100-equivalent GPUs, according to gonkascan.com. The network hosts over 9,000 high-memory GPUs, including H100 and H200 models, strengthening its capacity to handle high-concurrency inference for AI models with billions of parameters. This expansion positions Gonka on par with traditional large-scale AI computing centers.
The network's infrastructure supports nearly 40 GPU models, ranging from data-center GPUs such as the H200 and A100 to consumer-grade graphics cards such as the RTX 4090. This broad hardware support lowers the entry barrier for node operators, allowing wider participation in MLNode operations. Gonka's decentralized approach aims to aggregate global GPU computing power efficiently, providing flexible infrastructure for AI inference and training.
Incubated by Product Science Inc., the network has raised over $69 million from investors including Coatue and Slow Ventures. With more than 3,000 daily users of its AI inference models, Gonka is moving beyond the testing phase toward mainstream AI API production workloads.
Disclaimer: The content provided on Phemex News is for informational purposes only. We do not guarantee the quality, accuracy, or completeness of the information sourced from third-party articles. The content on this page does not constitute financial or investment advice. We strongly encourage you to conduct your own research and consult with a qualified financial advisor before making any investment decisions.
