τemplar
SN3: Train large AI models by splitting the work across computers worldwide
The subnet that trained Covenant-72B, a 72-billion parameter language model, using decentralized infrastructure. It is the largest model trained on decentralized compute to date. The subnet is currently in a 100% burn period while the team builds the next phase: Templar Crusades.
// The world's GPU cluster, owned by nobody.
τemplar is a decentralized AI training framework that coordinates GPU owners around the world to train a single AI model collaboratively. Individual nodes contribute compute, and the system's incentive mechanism ensures only quality contributions are integrated into the model.
The simple version: Imagine building a skyscraper where every construction worker brings their own crane. Nobody owns the whole site, but the building still gets built. τemplar does this for AI model training: your GPU contributes a piece of the work, and the finished model belongs to everyone.
Centralized equivalent: Think OpenAI's training infrastructure or Google DeepMind's TPU clusters, but built from individually-owned GPUs coordinated over the internet.
How it works:
- Miners synchronize with the global model state, receive deterministic data subsets for each training window, compute gradients locally, compress them using DCT (Discrete Cosine Transform) with top-k selection, and upload to shared storage. They also gather and aggregate peer gradients to keep their local model current.
- Validators retrieve the same data assigned to each miner, apply that miner's submitted gradient to a model copy, and measure whether the model's loss actually decreased. Miners are scored based on how much their gradients improve the model. Only beneficial updates are integrated.
- The problem it solves: Training large AI models costs hundreds of millions of dollars in GPU compute. Only a handful of companies in the world can afford it. This concentrates the most powerful AI technology in very few hands.
- The opportunity: If decentralized training can match centralized quality, it unlocks AI model development for researchers, startups, and countries that can't afford billion-dollar data centers.
- The Bittensor advantage: Bittensor's incentive system turns idle GPUs worldwide into a coordinated training cluster. Contributors earn rewards proportional to how much their work improves the model, creating a self-sustaining training economy.
- Traction signals: Covenant-72B completed training (72B parameters). 2,319 commits across 23 contributors on the templar repo. 142 GitHub stars, 54 forks. 7,477 holders. 377,606 TAO market cap. Led by distributedstatemachine (Sam Dare), with major contributions from joellidin, epappas, and AlexanderLavelle.
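The deterministic data assignment in the miner workflow above — every miner's data subset is derivable from public values, so validators can fetch exactly what a miner trained on — can be sketched as follows. The hash-based seeding here is an illustrative assumption, not the subnet's actual scheme:

```python
import hashlib
import random

def assigned_pages(uid: int, window: int, total_pages: int, pages_per_window: int):
    """Deterministically select data pages for a miner in a given training window.

    Seeding from (uid, window) means any party can reproduce the assignment
    independently. The sha256-based seed is a hypothetical stand-in for
    whatever seeding scheme the subnet actually uses.
    """
    seed = int(hashlib.sha256(f"{uid}:{window}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return rng.sample(range(total_pages), pages_per_window)

# Miner and validator derive the same subset independently.
miner_view = assigned_pages(uid=42, window=1000, total_pages=50_000, pages_per_window=4)
validator_view = assigned_pages(uid=42, window=1000, total_pages=50_000, pages_per_window=4)
assert miner_view == validator_view
```

This property is what makes trustless verification possible: a validator never has to ask the miner which data it used.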
Category: Distributed Training | Centralized Competitor: OpenAI Training Infra, Google DeepMind TPU Clusters, Meta FAIR
τemplar is the flagship narrative for Bittensor's core thesis: that decentralized networks can produce AI capabilities previously locked behind corporate data centers. Covenant-72B demonstrated feasibility. The subnet is now transitioning to its next phase.
Mechanism:
Training operates in synchronized windows coordinated by blockchain blocks. Each window, miners receive deterministic data pages (seeded by UID and window number), compute gradients through forward and backward passes, apply momentum decay and weight decay, then compress gradients using DCT transformation with top-k coefficient selection. This compression reduces communication overhead while preserving the most significant weight updates.
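The DCT-with-top-k compression step can be sketched in miniature. This is a minimal single-vector illustration under simplifying assumptions (no chunking, no quantization, a hand-rolled DCT-II basis); the subnet's actual transform and wire format may differ:

```python
import numpy as np

def _dct2_basis(N: int) -> np.ndarray:
    """DCT-II basis matrix: basis[k, n] = cos(pi * (n + 0.5) * k / N)."""
    n = np.arange(N)
    return np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)

def dct_topk_compress(grad: np.ndarray, k: int):
    """Transform a gradient vector with DCT-II and keep only the k
    largest-magnitude coefficients, returning (indices, values) to transmit."""
    coeffs = _dct2_basis(grad.shape[0]) @ grad
    idx = np.argsort(np.abs(coeffs))[-k:]
    return idx, coeffs[idx]

def dct_topk_decompress(idx: np.ndarray, vals: np.ndarray, N: int) -> np.ndarray:
    """Rebuild a dense gradient from the sparse coefficients (inverse DCT-II)."""
    coeffs = np.zeros(N)
    coeffs[idx] = vals
    basis = _dct2_basis(N)
    # Inverse of DCT-II: x_n = X_0/N + (2/N) * sum_{k>=1} X_k * cos(pi*(n+0.5)*k/N)
    return coeffs[0] / N + (2.0 / N) * (basis.T[:, 1:] @ coeffs[1:])
```

With k equal to the vector length the round trip is lossless; with small k, only the dominant frequency components of the update survive, which is the trade-off that cuts communication cost while preserving the most significant weight updates.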
Validators evaluate contributions by computing model loss before and after applying a miner's gradient on the same data. The improvement score feeds into a moving average that determines on-chain weights. This creates a direct, measurable link between contribution quality and rewards.
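The scoring loop described above reduces to a small update rule. A minimal sketch, assuming a simple loss-delta signal and exponential moving average; the real normalization and smoothing factor are implementation details of the subnet:

```python
def score_miner(loss_before: float, loss_after: float,
                prev_score: float, alpha: float = 0.1) -> float:
    """Fold one window's measured loss improvement into a miner's moving
    average score, which in turn determines on-chain weights.

    alpha is a hypothetical smoothing factor, not the subnet's actual value.
    """
    improvement = loss_before - loss_after      # positive if the gradient helped
    return (1 - alpha) * prev_score + alpha * improvement

# Helpful gradients (loss drops) raise the score; harmful ones lower it.
score = 0.0
score = score_miner(loss_before=2.00, loss_after=1.95, prev_score=score)
score = score_miner(loss_before=1.95, loss_after=2.05, prev_score=score)
```

The moving average is what makes the link between contribution quality and rewards robust: a single lucky or unlucky window moves a miner's weight only slightly, while sustained improvement compounds.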
The codebase is substantial: 2,319 commits across 23 contributors. Key contributors include distributedstatemachine (866 commits), joellidin (850), epappas (250), and AlexanderLavelle (190). Contributors from MBZUAI (Mohamed bin Zayed University of Artificial Intelligence) are among the team. 142 GitHub stars and 54 forks indicate significant community interest.
Current status: The subnet has been in a 100% burn period since January 2026. Miners are not being evaluated and no emissions are being distributed. The Covenant-72B training run is complete, and the team is building Templar Crusades, a new competition system where participants submit training code that gets evaluated on target hardware based on MFU (Model FLOPs Utilization).
Market metrics reflect τemplar's position as a major subnet. At 377,606 TAO market cap with 7,477 holders, it captures 7.8% of total network emissions. Gini of 0.641 shows reasonably distributed ownership. Root proportion of 0.167 confirms organic demand.
The net 7-day flow of -12,305 TAO is a significant outflow, likely reflecting repositioning during the burn period while no mining rewards are active.
- 100% burn period: No miners are being evaluated or paid since January 2026. The subnet is between phases, relying on the Crusades transition.
- Significant outflows: -12,305 TAO net 7-day flow is one of the largest outflows in our coverage.
- No active development on templar repo: Last commit was January 21, 2026. Development effort has likely shifted to the Crusades repo and internal work.
- Transition risk: The move from gradient-sharing training to MFU competition (Crusades) is a fundamental mechanism change. Success is not guaranteed.
- Quality gap: The Covenant-72B model demonstrated feasibility, but market expectations will be set by whether the next phase can produce competitive models.