A recent study by Li Bojie, Chief Scientist at Pine AI, estimates the parameter counts of several closed-source large language models using a novel method. The research, presented in a paper titled "Incompressible Knowledge Probes," used 1,400 obscure factual questions to reverse-engineer the models' parameter sizes. GPT-5.5 leads with an estimated 9.7 trillion parameters, well ahead of Claude Opus 4.6 at approximately 5.3 trillion; Grok-4 comes in at around 3.2 trillion, with other models such as GPT-5 and Claude Opus 4.7 close behind. The methodology maps each closed-source model's performance on the probe set onto a curve fit to 89 open-source models of known parameter count. Although any such fit carries inherent uncertainty, it yields meaningful parameter estimates, and the findings underscore the substantial parameter growth in newer models, with GPT-5.5 marking a significant leap in capacity.
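The paper's exact functional form is not specified here, so the following is a minimal sketch of the general approach under stated assumptions: probe accuracy is fit as a saturating (logistic) function of log parameter count on open-source models, and the fitted curve is then inverted to map a closed model's accuracy back to an estimated size. All numbers, the curve shape, and the `probe_curve`/`estimate_params` names are illustrative, not from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical accuracies on a 1,400-question probe set for open-source
# models with known parameter counts (illustrative values only).
known_params = np.array([7e9, 13e9, 34e9, 70e9, 180e9, 405e9, 1e12])
known_accuracy = np.array([0.18, 0.24, 0.31, 0.38, 0.47, 0.55, 0.63])

def probe_curve(log_params, a, b, c):
    """Assumed saturating curve: accuracy as a function of log10(parameters)."""
    return c / (1.0 + np.exp(-a * (log_params - b)))

# Fit the curve using only models whose parameter counts are public.
popt, _ = curve_fit(
    probe_curve, np.log10(known_params), known_accuracy,
    p0=[1.0, 11.0, 0.9], maxfev=10000,
)

def estimate_params(accuracy, a, b, c):
    """Invert the fitted curve to map a closed model's accuracy to parameters.

    Solves accuracy = c / (1 + exp(-a * (x - b))) for x = log10(params).
    """
    log_params = b - np.log(c / accuracy - 1.0) / a
    return 10 ** log_params

# Example: a closed-source model scoring 0.58 on the same probe set.
print(f"Estimated parameters: {estimate_params(0.58, *popt):.3e}")
```

One design point worth noting: fitting in log-parameter space is what makes the inversion stable, since probe accuracy in published scaling results tends to vary smoothly with the logarithm of model size rather than with the raw count.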