Quasar (SN24)
Tests how well AI models handle extremely long documents and conversations

The subnet that stress-tests AI's ability to understand extremely long documents. Miners run language models that must process up to 2 million tokens of context, answering questions about documents that would take a human weeks to read. The best long-context models earn emissions.

// Crushing the context barrier.

// WHAT_IS_THIS

Quasar is an evaluation network for long-context AI models. Miners host language models that can read and understand massive documents, from legal contracts to research papers to entire codebases. Validators test them with standardized benchmarks, scoring accuracy on real-world tasks like question answering, summarization, and multi-document analysis.

The simple version: Imagine giving an AI a 500-page book and asking it specific questions about chapter 47. Most AI models struggle because their "memory" is too short. Quasar rewards the models that can actually hold and understand the entire book at once.

Centralized equivalent: Think Google's Gemini with its 1M+ token context window, or Anthropic's Claude with 200K tokens, but evaluated through continuous competitive benchmarking rather than one-off announcements.

How it works:

  • Miners host long-context capable language models and serve inference. They answer validator benchmark prompts using provided contexts of up to 2 million tokens, competing on accuracy and inference performance.
  • Validators sample tasks from LongBench (NarrativeQA, Qasper, GovReport, and more), send context + questions to miners, and score responses using dataset-specific metrics (F1, exact match, ROUGE). Context-length multipliers reward models that perform well on harder, longer inputs.
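Dataset-specific metrics like the token-level F1 used for SQuAD-style question answering can be sketched as follows. This is an illustrative implementation of the standard metric, not Quasar's actual validator code:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a miner's answer and the reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        # Both empty counts as a match; one empty counts as a miss.
        return float(pred_tokens == ref_tokens)
    overlap = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(overlap.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

Exact match and ROUGE slot into the same loop; each LongBench dataset specifies which metric applies.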
3,421 holders | 304 commits | 8 social mentions this week
Research snapshot from March 29, 2026. Live metrics are in the sidebar.
// WHY_THIS_MATTERS
  • The problem it solves: Long-context capability is hard to measure consistently. Companies make big claims about context windows but real-world performance degrades significantly as context grows. There's no continuous, incentive-aligned benchmark for this.
  • The opportunity: Every enterprise use case involving large documents (legal review, research analysis, codebase understanding, compliance) needs reliable long-context AI. This is a direct enterprise pain point.
  • The Bittensor advantage: Continuous competitive evaluation means models are constantly tested against each other on real benchmarks. Unlike static leaderboards that get gamed, Quasar's incentive structure rewards consistent real-world performance.
  • Traction signals: 304 commits across 3 contributors. 358% 90-day price growth. 191% 30-day growth. 3,140 holders. Active development with weight commit interval features and TPS validation. Roadmap includes multi-modal evaluation and a developer API.

// FULL_ANALYSIS

Category: LLM Evaluation / Long-Context Inference | Centralized Competitor: Google Gemini (1M context), Anthropic Claude (200K), Magic.dev (100M context claim)

Quasar targets one of the most commercially relevant capabilities in modern AI: the ability to process and reason over long documents. While most subnets compete on general inference speed or training quality, Quasar focuses on a specific, measurable axis where there's genuine product demand.

Mechanism:

Validators draw from the LongBench benchmark suite, which includes real-world tasks: narrative question answering, scientific paper comprehension, government report summarization, and multi-document analysis. Each task comes with a context (up to 2M tokens) and a question. Miners must process the full context and return an accurate answer.

Scoring uses standard NLP metrics (F1, exact match, ROUGE) with context-length multipliers that make longer contexts worth more points. This creates a natural arms race: miners who can handle longer contexts at higher accuracy earn disproportionately more.
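One plausible shape for such a multiplier is a log-scaled bonus, so rewards grow with context length without exploding at 2M tokens. The function below and its 8K-token base are illustrative assumptions, not Quasar's published formula:

```python
import math

def context_multiplier(context_tokens: int, base: int = 8_000) -> float:
    """Hypothetical length bonus: 1.0 at or below the base context,
    growing by 1.0 per doubling of context length."""
    return 1.0 + math.log2(max(context_tokens, base) / base)

def miner_score(accuracy: float, context_tokens: int) -> float:
    """Accuracy weighted by how long a context the miner handled."""
    return accuracy * context_multiplier(context_tokens)
```

Under this weighting, a miner scoring 0.8 accuracy on a 2M-token context out-earns one scoring 0.9 on an 8K-token context, which is the arms-race dynamic described above.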

The development team operates under SILX Labs, with VantaScript as the primary contributor. The codebase has 304 commits, and recent work focuses on weight commit intervals and TPS (tokens per second) validation, suggesting the subnet is maturing its incentive mechanisms beyond pure accuracy into performance efficiency.
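TPS validation reduces to timing a generation call and dividing token count by elapsed seconds. A minimal sketch, where `generate` is a stand-in for the miner's inference endpoint (names are illustrative):

```python
import time

def measure_tps(generate, prompt: str) -> float:
    """Time one generation call and return tokens per second.
    `generate` is assumed to return the list of generated tokens."""
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed if elapsed > 0 else 0.0
```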

Market metrics tell a story of strong momentum. At a 72,446 TAO market cap, the subnet is mid-sized. The 90-day price increase of 358% is one of the strongest in the ecosystem. A root proportion of 0.171 suggests demand is largely organic rather than root-driven. A Gini of 0.603 and HHI of 0.022 point to a reasonably broad holder distribution across 3,140 accounts. Net 7-day flow is slightly negative at -89 TAO, suggesting stabilization after the massive run-up.

The emission acceleration ratio of 1.51x (7.25% chain buys vs 4.80% EMA) shows continued accumulation pressure despite the recent gains. Unrealized PnL of 142,721 TAO across holders confirms most are sitting on significant profits.
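The 1.51x figure is simply the current chain-buy rate divided by its exponential moving average:

```python
chain_buy_rate = 0.0725  # 7.25% of chain buys
ema_buy_rate = 0.0480    # 4.80% exponential moving average
acceleration = chain_buy_rate / ema_buy_rate
print(f"{acceleration:.2f}x")  # → 1.51x
```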

The roadmap is ambitious: multi-modal long-context evaluation (text + images) in Q2 2026, custom benchmark submissions, a developer API, and cross-subnet collaboration. If executed, this transforms Quasar from a benchmark subnet into an evaluation-as-a-service platform.


// RISK_FACTORS
Risks assessed as of March 29, 2026. Conditions may have changed.
  • Small development team: 3 contributors, with one (VantaScript) doing most of the work. A bus factor of 1 is concerning.
  • Benchmark dependency: LongBench is the primary evaluation suite. If miners overfit to these specific tasks, the evaluations lose meaning.
  • Zero active miners on TaoSwap: Similar to τemplar, this metric shows 0, which may indicate measurement issues or network transitions.
  • Concentration of gains: 358% in 90 days means many recent buyers are deep in profit. A sentiment shift could trigger cascading sells.