A recent study by Li Bojie, Chief Scientist at Pine AI, estimates the parameter counts of several closed-source large language models using a novel method. The research, published in a paper titled "Incompressible Knowledge Probes," used 1,400 obscure factual questions to reverse-engineer the parameter sizes of these models. The study found that GPT-5.5 leads with an estimated 9.7 trillion parameters, significantly ahead of Claude Opus 4.6 at approximately 5.3 trillion.
The research also placed Grok-4 at around 3.2 trillion parameters, with other models such as GPT-5 and Claude Opus 4.7 following closely. The methodology mapped the performance of closed-source models onto a curve derived from 89 open-source models with known parameter counts. The author argues that this approach yields meaningful parameter estimates despite its margin of error. The findings highlight the substantial parameter growth in newer models, with GPT-5.5 marking a significant leap in capacity.
New Study Estimates GPT-5.5 at 9.7 Trillion Parameters, Grok-4 at 3.2 Trillion
Disclaimer: The content provided on Phemex News is for informational purposes only. We do not guarantee the quality, accuracy, or completeness of the information sourced from third-party articles. The content on this page does not constitute financial or investment advice. We strongly encourage you to conduct your own research and consult with a qualified financial advisor before making any investment decisions.
