Gradients

SN56

Miners compete to fine-tune AI models, and the best automated model tuners earn rewards

Upload your data. Pick a model. Click start. Miners compete to fine-tune the best version for your specific use case. The result costs a fraction of what Google Cloud or a machine learning engineer charges. This is AutoML as a competitive marketplace.

// Fine-tuning AI models, by competition.

// WHAT_IS_THIS

Gradients is a platform where miners compete to fine-tune AI models for specific tasks. Users upload their training data, select a base model, and the subnet handles everything else: multiple miners independently fine-tune the model using their own strategies, and the best result wins. It's automated machine learning (AutoML) powered by competition.

The simple version: Imagine hiring ten chefs to each cook their best version of a dish using the same ingredients. You taste all ten and pick the winner. Gradients does this for AI model training: miners each try their own approach, and the most accurate model wins.

Centralized equivalent: Think Google Vertex AI AutoML or Amazon SageMaker Autopilot, but instead of one company's algorithm, dozens of miners compete to find the best fine-tuning strategy.

How it works:

  • Miners receive fine-tuning tasks (the same base model plus training data), then independently configure their optimization strategy: learning rates, batch sizes, data augmentation, training schedules. They submit their fine-tuned models within strict time limits (3-10 hours for text, 1-2 hours for images); a sketch of this strategy choice follows the list.
  • Validators test submitted models against secret test datasets that miners never see during training. The main validator (operated by Rayon Labs) coordinates tasks and scoring, with independent auditors verifying fairness by recalculating weights over 7-day windows.
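
To make the miner side concrete, here is a minimal Python sketch of how a miner might encode its strategy choice. Everything here is hypothetical (the names TuningStrategy and pick_strategy, and the heuristics); the real task schema lives in the Gradients repository.

```python
from dataclasses import dataclass

# Hypothetical miner-side strategy config; the actual Gradients task
# schema and field names differ. This only illustrates that each miner
# independently picks its own hyperparameters for the same task.
@dataclass
class TuningStrategy:
    learning_rate: float
    batch_size: int
    epochs: int
    warmup_ratio: float = 0.03
    use_lora: bool = True  # parameter-efficient fine-tuning is a common choice

def pick_strategy(dataset_rows: int, budget_hours: float) -> TuningStrategy:
    # Toy heuristic: small datasets tolerate more passes; tight budgets
    # (image tasks get 1-2 hours) force fewer epochs.
    epochs = 3 if dataset_rows < 50_000 else 1
    if budget_hours <= 2:
        epochs = 1
    return TuningStrategy(learning_rate=2e-4, batch_size=16, epochs=epochs)

print(pick_strategy(dataset_rows=12_000, budget_hours=8))
```
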
Research snapshot from March 30, 2026. Live metrics appear in the LIVE_DATA section below.
// WHY_THIS_MATTERS
  • The problem it solves: Fine-tuning a 70B parameter model costs up to $10,000 on Google Cloud. Hiring an ML engineer costs $100,000+ per year. Most businesses that need custom AI models simply can't afford professional-grade fine-tuning.
  • The opportunity: Every business with proprietary data needs custom AI models. Customer support bots, document classifiers, image recognizers, code assistants. The market for model customization is growing faster than the market for base models.
  • The Bittensor advantage: Competition drives cost down and quality up at the same time. Instead of one AutoML algorithm's best guess, you get dozens of independent approaches competing. The secret test dataset makes the evaluation hard to game.
  • Traction signals: 826 commits across 7 contributors. 34 GitHub stars. Operated by Rayon Labs with independent audit mechanisms. 4,988 holders. 103,665 TAO market cap. Active development with 5 commits per week focused on scoring and auditing improvements.

// FULL_ANALYSIS

Category: Model Fine-Tuning | Centralized Competitors: Google Vertex AI AutoML, Amazon SageMaker Autopilot, Hugging Face AutoTrain

Gradients sits in one of the most commercially viable positions in Bittensor. Custom model fine-tuning is a real, growing market with clear pricing: $1,000-$10,000 per job on centralized platforms. If Gradients can deliver comparable quality cheaper, the product-market fit is obvious.

Mechanism:

The platform runs both organic tasks (from real users) and synthetic tasks (to keep miners active during low demand). When a task arrives, it's assigned to a pool of miners. Each miner receives the same starting model and training data but must independently determine the optimal fine-tuning strategy. This is the competitive edge: different miners develop different specializations, and the diversity of approaches increases the probability of finding the best configuration.
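
As an illustration of the assignment step, the sketch below picks a pool of miners that all receive the identical task. The function name, pool size, and uniform random selection are assumptions; the subnet's real logic presumably also weighs miner registration state and history.

```python
import random

def assign_task(task_id: str, registered_miners: list[str], pool_size: int = 8) -> list[str]:
    # Every miner in the pool gets the same base model and training
    # data; only their fine-tuning strategies will differ.
    pool = random.sample(registered_miners, k=min(pool_size, len(registered_miners)))
    print(f"{task_id}: assigned to {len(pool)} miners")
    return pool

miners = [f"miner-{i}" for i in range(40)]
assign_task("task-123", miners)
```
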

Time limits enforce efficiency. Text fine-tuning tasks get 3-10 hours depending on dataset size; image tasks get 1-2 hours. This prevents miners from simply throwing more compute at the problem and rewards clever optimization.
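
Enforcing such a budget is simple in principle. The loop below is a generic sketch, not Gradients code: training stops when wall-clock time runs out, so a miner that wastes compute simply completes fewer optimization steps.

```python
import time

def train_with_budget(budget_seconds: float, train_one_step) -> int:
    # Stop on wall-clock time rather than step count: slower setups
    # finish fewer steps, so efficiency is what gets rewarded.
    start = time.monotonic()
    steps = 0
    while time.monotonic() - start < budget_seconds:
        train_one_step()
        steps += 1
    return steps

# Dummy step standing in for a real optimizer update.
print(train_with_budget(0.5, lambda: time.sleep(0.01)))
```
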

The auditing mechanism is important. While Rayon Labs operates the main validator, independent auditors can download 7 days of task results and recalculate what the weights should be. This provides a check against centralized manipulation.
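
In pseudocode, the audit reduces to recomputing weights from the downloaded task results and diffing them against what the validator published. The field names, score aggregation, and tolerance below are assumptions for illustration; the actual scoring formula is defined in the subnet codebase.

```python
from collections import defaultdict

def recompute_weights(task_results: list[dict]) -> dict[str, float]:
    # Sum each miner's scores over the 7-day window, then normalize
    # so the recomputed weights form a distribution.
    totals: dict[str, float] = defaultdict(float)
    for result in task_results:
        totals[result["miner"]] += result["score"]
    norm = sum(totals.values()) or 1.0
    return {miner: score / norm for miner, score in totals.items()}

def audit(task_results: list[dict], published: dict[str, float], tol: float = 0.01) -> bool:
    # Flag the validator if any published weight drifts beyond tolerance.
    recomputed = recompute_weights(task_results)
    return all(abs(recomputed.get(m, 0.0) - w) <= tol for m, w in published.items())

results = [{"miner": "a", "score": 0.9}, {"miner": "b", "score": 0.3}]
print(audit(results, {"a": 0.75, "b": 0.25}))  # True: published weights check out
```
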

The codebase is substantial: 826 commits across 7 contributors in a 27MB repository. Development velocity is steady at 5 commits per week, with recent work on scoring recalculation and environment thresholds. Key contributors include Hamza Khan and besimray.

Market position is strong for the category. At a 103,665 TAO market cap with 4,988 holders, Gradients has significant community backing. A Gini of 0.720 is on the higher side, indicating concentrated holdings; the HHI of 0.067, by contrast, is moderate, suggesting the concentration sits in a tier of large holders rather than one or two dominant wallets. A root proportion of 0.169 points to demand driven mostly by direct alpha buyers rather than root stake, consistent with organic demand.
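
For reference, both concentration metrics quoted above can be computed directly from a list of holder balances. The balances below are invented; only the formulas carry over.

```python
def gini(balances: list[float]) -> float:
    # Weighted-rank form of the Gini coefficient: 0 is perfect
    # equality, values near 1 are extreme concentration.
    xs = sorted(balances)
    n, total = len(xs), sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def hhi(balances: list[float]) -> float:
    # Herfindahl-Hirschman index: sum of squared ownership shares.
    total = sum(balances)
    return sum((b / total) ** 2 for b in balances)

# Invented distribution: two whales over a long tail of small holders.
balances = [5_000.0, 5_000.0] + [100.0] * 50 + [1.0] * 500
print(round(gini(balances), 3), round(hhi(balances), 3))
```
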

The 7-day net inflow of 1,870 TAO is healthy. The 30-day price is roughly flat at -2%, suggesting the market is waiting for the next catalyst. Aggregate unrealized PnL of 20,675 TAO suggests holders are, on net, still sitting on gains.


// RISK_FACTORS
Risks assessed as of March 30, 2026. Conditions may have changed.
  • Single validator dependency: Rayon Labs operates the main validator. While auditors provide oversight, the centralized coordination point is a structural risk.
  • Concentrated holdings: Gini of 0.720 is among the highest in our coverage. Large holder exits could create significant price pressure.
  • Competitive landscape: Multiple subnets compete in the model fine-tuning space. Differentiation will depend on execution quality and user experience.
  • Flat price action: -2% over 30 days and -2.3% over 90 days suggest the market needs a growth catalyst.
// LIVE_DATA
24h: +0.06%
7d: +4.57%
30d: -1.44%
Liquidity: 57.7K TAO