
Bittensor's Subnet 3, known as Templar, just completed the largest decentralized large language model pre-training run in history. The model, Covenant-72B, has 72 billion parameters and was trained by over 70 independent contributors using commodity GPUs connected through regular home internet connections. No centralized data center, no corporate whitelist, and no $100 million infrastructure budget. The result scored 67.1 on the MMLU benchmark, putting it in the same performance range as Meta's Llama 2 70B, a model built by one of the best-funded AI labs on the planet.
TAO, Bittensor's native token, responded accordingly. The token surged roughly 90% in March 2026 alone, currently trading around $313 with a market cap near $3.4 billion. Jensen Huang, Nvidia's CEO, called Bittensor's approach "a modern version of Folding@home" on the All-In Podcast, and within 48 hours, the AI token sector jumped 40.9% in a single day.
Here is what Covenant-72B actually accomplished, how Bittensor's subnet architecture makes it possible, and what this means for TAO holders going forward.
What Covenant-72B Actually Achieved
Training a 72-billion-parameter language model is expensive. OpenAI, Google, and Anthropic spend tens of millions of dollars on GPU clusters housed in specialized data centers to produce frontier models. Covenant-72B took a fundamentally different path. Instead of renting a centralized compute cluster, Templar's protocol coordinated over 70 miners across the globe, each contributing GPUs through standard internet connections, to collectively process approximately 1.1 trillion tokens.
The technical innovation that made this work is called SparseLoCo. It cut communication overhead between nodes by 146x through gradient sparsification, 2-bit quantization, and error feedback, meaning participants did not need expensive high-bandwidth data center interconnects to synchronize training progress. A contributor scoring system called Gauntlet evaluated every node's output using loss measurements and OpenSkill rankings, all recorded on the Bittensor blockchain. Nodes that produced high-quality training contributions earned more TAO. Nodes that underperformed were penalized.
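For readers who want intuition for how a 146x reduction is even possible, here is a minimal PyTorch sketch of the general recipe SparseLoCo builds on: keep only the largest gradient entries, quantize them aggressively, and carry the leftover error into the next round. The function name, the top-k fraction, and the sign-plus-scale quantizer are illustrative stand-ins, not Templar's actual implementation (the paper's scheme uses 2-bit quantization and more careful bookkeeping).

```python
import torch

def compress_gradient(grad, residual, k_frac=0.01):
    """Toy top-k sparsification with error feedback.

    Illustrative only: SparseLoCo's real pipeline uses 2-bit
    quantization, not this sign-plus-scale stand-in.
    """
    # Fold in the error the previous round failed to transmit.
    corrected = grad + residual
    flat = corrected.flatten()

    # Sparsify: keep only the largest-magnitude k% of entries.
    k = max(1, int(k_frac * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)

    # Quantize the survivors to a sign plus one shared scale.
    scale = flat[idx].abs().mean()
    sent = torch.sign(flat[idx]) * scale

    # Error feedback: remember everything the message dropped,
    # so it gets another chance to be transmitted next round.
    reconstructed = torch.zeros_like(flat)
    reconstructed[idx] = sent
    new_residual = corrected - reconstructed.view_as(grad)

    return idx, sent, new_residual
```

Each miner transmits only the indices and quantized values rather than the full dense gradient, which is where the bandwidth savings come from, while error feedback keeps the dropped mass from being lost outright.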
The result is a fully open-source model with weights and checkpoints released under the Apache license. A March 2026 arXiv paper confirmed the 67.1 MMLU zero-shot score, which surpasses both Llama 2 70B and LLM360's K2. That is not frontier performance, and GPT-4-class models still score significantly higher. But the point is not raw leaderboard position. The point is that a permissionless network of anonymous contributors, coordinating only through economic incentives and protocol rules, produced a model competitive with outputs from billion-dollar corporate labs.
How Bittensor's Subnet Architecture Works
Bittensor is not a single AI model. It is a network of specialized mini-networks called subnets, each dedicated to a specific machine learning task. Think of it as a marketplace where different AI services compete for rewards based on the quality of their output. Subnet 1 handles text prompting, while Subnet 3 (Templar) handles the distributed model training that produced Covenant-72B. Other subnets focus on image generation, sports prediction, cybersecurity, and more.
Each subnet operates with its own miners (who produce AI outputs) and validators (who evaluate the quality of those outputs). The economic engine underneath is the Yuma Consensus mechanism, which distributes TAO rewards proportionally to the value each participant creates. Miners compete to produce the best results. Validators stake TAO to earn the right to score them. Bad work gets penalized and good work gets rewarded proportionally. And all of it runs without a centralized authority deciding who participates or what the standards are.
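A toy sketch makes the proportional reward idea concrete. This is not the actual Yuma Consensus code, and the real mechanism also clips each validator's weights against a stake-weighted consensus to resist collusion; the numbers below are invented for illustration.

```python
import numpy as np

def distribute_emissions(weights, stake, emission=1.0):
    """Toy stake-weighted reward split in the spirit of Yuma Consensus.

    weights: (n_validators, n_miners) scores each validator assigns.
    stake:   (n_validators,) TAO staked by each validator.
    """
    stake = stake / stake.sum()       # normalize validator influence
    miner_score = stake @ weights     # stake-weighted miner scores
    return emission * miner_score / miner_score.sum()  # TAO per miner

# Example: two validators score three miners; validator 0 has 3x the stake.
weights = np.array([[0.7, 0.2, 0.1],
                    [0.5, 0.4, 0.1]])
stake = np.array([3.0, 1.0])
print(distribute_emissions(weights, stake))  # [0.65, 0.25, 0.10]
```

The key property is that a miner's payout tracks the stake-weighted opinion of all validators, so no single validator can unilaterally direct rewards.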
The network currently supports 128 active subnets, with plans to expand to 256 later in 2026. Subnet tokens, priced via automated market makers backed by staked TAO, act as leveraged bets on specific capabilities within the ecosystem. When Covenant-72B launched, the combined market value of Bittensor's ecosystem tokens hit approximately $1.5 billion, with the Templar subnet token itself jumping 194% in seven days.
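The "priced via automated market makers" part is worth unpacking. Assuming the pools follow the standard constant-product curve (a reasonable simplification, not a confirmed protocol detail), subnet token pricing looks like this, with made-up reserve numbers:

```python
def subnet_token_price(tao_reserve, alpha_reserve):
    """Spot price of a subnet token in TAO for a constant-product pool."""
    return tao_reserve / alpha_reserve

def buy_subnet_token(tao_in, tao_reserve, alpha_reserve):
    """Swap TAO into the pool and receive subnet tokens (x * y = k)."""
    k = tao_reserve * alpha_reserve
    new_tao = tao_reserve + tao_in
    new_alpha = k / new_tao
    alpha_out = alpha_reserve - new_alpha
    return alpha_out, new_tao, new_alpha

# Illustrative pool: 10,000 TAO against 50,000 subnet tokens,
# so the spot price starts at 0.2 TAO and rises as TAO flows in.
out, tao_r, alpha_r = buy_subnet_token(500, 10_000, 50_000)
print(out)                                   # ~2380.95 tokens received
print(subnet_token_price(tao_r, alpha_r))    # ~0.2205 TAO, up from 0.2
```

Buying pressure drains subnet tokens from the pool and pushes the TAO-denominated price up along the curve, which is why these tokens behave like leveraged bets on their subnet's narrative.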
Why Jensen Huang's Endorsement Matters
Jensen Huang runs a $3 trillion company that manufactures the GPUs powering virtually every AI model on Earth. When he compared Bittensor to Folding@home during a conversation with Chamath Palihapitiya on the All-In Podcast, the crypto market listened. But his actual statement carried more weight than a simple name-drop. Huang said the industry needs "models as a proprietary product, a first class product, as well as models as open source. These two things are not A or B, it's A and B."
That framing validates the thesis that decentralized, open-source AI training is a serious complement to the centralized approach, not an ideological experiment. Nvidia profits when anyone buys GPUs, and Huang explicitly endorsed a future where both centralized and decentralized AI coexist. For TAO holders, the signal is that the CEO of the most important company in the AI supply chain does not see decentralized training as a fringe curiosity. He sees it as part of the production stack.
The market response was immediate. TAO rose 17% within hours of the episode airing, and the broader AI token sector followed.
How TAO Compares to Centralized AI Competitors
The comparison between Bittensor and centralized AI labs like OpenAI, Google DeepMind, and Anthropic is not apples to apples, and that is exactly the point.
| Metric | Centralized Labs | Bittensor (Covenant-72B) |
| --- | --- | --- |
| Training cost | $50M-$100M+ per frontier model | Distributed across contributors; no single entity bears full cost |
| Infrastructure | Proprietary GPU clusters, data center interconnects | Commodity GPUs, standard internet connections |
| Access | Closed weights, API-only access | Open-source weights, Apache license |
| MMLU benchmark | GPT-4 class: 86+ | 67.1 (zero-shot) |
| Governance | Corporate board decisions | Protocol-level incentives, permissionless participation |
| Speed to train | Weeks with massive clusters | Longer, but improving with each iteration |
Centralized labs produce better raw performance today. Nobody is running production workloads on Covenant-72B instead of GPT-4. But centralized AI also carries concentration risk that institutional capital is starting to notice. A handful of companies control the most powerful models, the training data, and the access policies. Bittensor offers an alternative where model development is permissionless, weights are public, and no single entity can gate access.
The honest framing is that Bittensor is where Linux was in 1995. The commercial products were better. But the open, distributed model eventually changed the entire industry.
What Is Driving TAO's Price Action
TAO's 90% March rally was not a single-catalyst event. Three factors converged in the same two-week window, each reinforcing the others.
Covenant-72B launch (March 10). The Templar team announced the completion of the largest decentralized LLM training run, and the arXiv paper gave it credibility beyond crypto-native audiences. TAO rose 54.8% in the two weeks following the announcement.
Jensen Huang's endorsement (March 18-19). The All-In Podcast clip went viral in both AI and crypto communities, adding a 17% price spike in 48 hours.
Ecosystem expansion and institutional interest. Grayscale's Bittensor Trust opened to accredited investors in early 2026, and the network announced plans to expand from 128 to 256 subnets. A potential conversion of the Grayscale trust into a spot TAO ETF is being discussed for late 2026, which would open the door to mainstream institutional allocations.
The combined market cap of Bittensor ecosystem tokens hitting $1.5 billion also drew attention from traders who see subnet tokens as leveraged plays on the TAO narrative. Bittensor currently ranks as the third-largest AI crypto by market cap, behind only Chainlink and NEAR in the broader AI/infrastructure category.
But TAO is still roughly 59% below its all-time high of $757.60, which means the rally has room to run if catalysts continue, and also means the token carries significant downside risk if sentiment shifts.
Frequently Asked Questions
What is Bittensor's Covenant-72B model?
Covenant-72B is a 72-billion-parameter large language model trained entirely across Bittensor's decentralized network by over 70 independent contributors using commodity hardware. It scored 67.1 on the MMLU benchmark, competitive with Meta's Llama 2 70B, and all model weights are open-source under the Apache license.
Is Bittensor a good investment in 2026?
TAO has strong narrative momentum with the Covenant-72B milestone, Jensen Huang's endorsement, and Grayscale trust access for institutions. But the token is volatile and still 59% below its all-time high. It functions as a high-beta bet on decentralized AI infrastructure, not a stable store of value, and position sizing should reflect that risk.
How does Bittensor make money for participants?
Miners contribute computational resources to subnets and earn TAO rewards proportional to the quality of their AI outputs. Validators stake TAO to evaluate miners and earn a share of emissions. Subnet owners earn a percentage of all TAO distributed within their subnet. The entire system runs on protocol-level incentives without a central company extracting fees.
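For a concrete feel of how one block's emission might divide among those three roles, here is a toy split. The percentages are placeholders chosen for illustration, not confirmed protocol constants:

```python
def split_subnet_emission(block_emission, miner_share=0.41,
                          validator_share=0.41, owner_share=0.18):
    """Illustrative split of one block's TAO emission within a subnet.

    The shares are hypothetical placeholders, not protocol values.
    """
    assert abs(miner_share + validator_share + owner_share - 1.0) < 1e-9
    return {
        "miners": block_emission * miner_share,          # paid by output quality
        "validators": block_emission * validator_share,  # paid for scoring work
        "owner": block_emission * owner_share,           # subnet owner's cut
    }

print(split_subnet_emission(1.0))
```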
Can Bittensor compete with OpenAI and Google?
Not on raw model performance today, because centralized labs have significantly more resources and consistently produce higher-scoring models. Bittensor's advantage is structural. It offers permissionless, open-source AI development with no single point of control. The long-term bet is that decentralized training becomes complementary to centralized approaches, similar to how open-source software eventually became critical infrastructure alongside proprietary products.
Bottom Line
Covenant-72B is the first real proof that decentralized AI training can produce models competitive with well-funded centralized labs. The 67.1 MMLU score is not frontier performance, but it was achieved without a data center, without a corporate budget, and without anyone's permission. That changes the conversation from "can decentralized AI work?" to "how fast does it improve?"
TAO at $313 with a $3.4 billion market cap prices in significant optimism, but the catalyst pipeline is loaded. The expansion to 256 subnets, a potential Grayscale spot ETF conversion, and continued technical improvements to distributed training efficiency all sit on the 2026 roadmap. The risk is straightforward. TAO trades on narrative momentum, and AI crypto narratives can evaporate as quickly as they build. The token is 59% below ATH for a reason, and the next leg depends on Bittensor's ability to close the performance gap with centralized competitors. Watch for the subnet expansion timeline and any Grayscale ETF filing updates as the signals that matter most.
This article is for informational purposes only and does not constitute financial or investment advice. Cryptocurrency trading involves substantial risk. Always conduct your own research before making trading decisions.