Meituan Unveils Trillion-Parameter AI Model LongCat-2.0-Preview for Testing

Meituan has launched its LongCat-2.0-Preview AI model for testing. With over one trillion parameters, it ranks among the world's largest models. The launch marks a significant milestone: the entire training and inference process was carried out on domestic computing infrastructure, with Meituan using between 50,000 and 60,000 computing chips, reportedly the largest model-training task yet run on domestic hardware.

On the same day, DeepSeek announced its V4 large model, which reportedly has a parameter count similar to Meituan's LongCat-2.0-Preview. The twin announcements underscore the competitive landscape in AI model development, with both companies pushing the boundaries of computational capability.
Disclaimer: The content provided on Phemex News is for informational purposes only. We do not guarantee the quality, accuracy, or completeness of information sourced from third-party articles. The content on this page does not constitute financial or investment advice. We strongly encourage you to conduct your own research and consult a qualified financial advisor before making any investment decisions.
