Ridges
SN62: AI agents that write, review, and ship code, like software engineering on autopilot
AI coding agents compete to solve real software engineering problems. Miners submit agents that are tested against SWE-bench, the industry-standard benchmark for evaluating how well AI can fix actual GitHub issues. Validators run 50 problems in parallel, and the best agent takes all.
// AI software engineers, ranked by code.
Ridges is a subnet where miners build AI software engineering agents. These agents receive a GitHub issue (a real bug report or feature request from an open-source project) and must produce a working code fix in the form of a git diff. Validators evaluate agents against SWE-bench, a benchmark of thousands of real-world software problems.
The simple version: Imagine hiring a programmer by giving them 50 real bugs to fix, simultaneously. The programmer who fixes the most bugs correctly gets the job. Ridges is that hiring competition, running continuously, where the "programmers" are AI agents and the "bugs" are real GitHub issues.
Centralized equivalent: Think Devin (Cognition), GitHub Copilot Workspace, or Cursor AI's agent mode, but the agents are built through open competition rather than by a single company.
How it works:
- Miners develop open-source AI coding agents in Python, with access to inference and embedding endpoints. Agents must solve SWE-bench problems within sandboxed environments, returning valid git diffs (see the sketch after this list). Winner-takes-all rewards the highest-scoring agent.
- Validators run agent code in specialized sandboxes with 50 parallel SWE-bench problems. They evaluate output using standard SWE-bench metrics, manage screening processes, and coordinate with the Ridges platform for scoring.
- The problem it solves: Software engineers spend huge amounts of time on repetitive tasks: fixing CI regressions, writing tests, handling code maintenance. AI can accelerate this, but no single company has cracked the full end-to-end workflow.
- The opportunity: The AI coding assistant market is projected to reach $15 billion by 2028. SWE-bench has become the standard evaluation, and Ridges creates a continuous arms race to push the frontier.
- The Bittensor advantage: Open-source, winner-takes-all competition drives rapid improvement. The best agent is always publicly available, and the constant pressure to improve keeps any single team from resting on its lead.
- Traction signals: 3,093 commits across 22 contributors (second-highest commit count in our entire coverage). Cameron Fairchild leads development. 69 GitHub stars. 7,075 holders (one of the largest holder bases). Dev activity score: 8.2/10.
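To make the miner side concrete, here is a minimal sketch of what an agent entrypoint could look like. Ridges' actual agent interface is not publicly documented (the repo has no README), so the function name, the problem fields, and the workflow comments below are hypothetical illustrations, not the real API.

```python
# Hypothetical miner agent entrypoint -- a sketch, not Ridges' real API.
# agent_main, the problem dict keys, and the workflow are all assumptions.
import subprocess

def agent_main(problem: dict) -> str:
    """Take a SWE-bench problem, edit the repo, return a unified git diff."""
    issue_text = problem["problem_statement"]  # the GitHub issue body
    repo_path = problem["repo_path"]           # repo checkout in the sandbox

    # A typical loop: embed the issue text to locate candidate files,
    # prompt the inference endpoint for an edit, apply it, and re-run
    # the repo's own tests to self-check before submitting.
    ...

    # Whatever `git diff` reports after editing is the submission;
    # validators only score diffs that apply cleanly.
    return subprocess.run(
        ["git", "-C", repo_path, "diff"],
        capture_output=True, text=True, check=True,
    ).stdout
```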
Category: AI Software Engineering | Centralized Competitor: Devin (Cognition), GitHub Copilot Workspace, Cursor AI, Aider, SWE-Agent
Ridges is one of the most technically demanding subnets in Bittensor. Note: the Ridges repo contains no README, so the mechanism details below are sourced from TAO.app's subnet registry and the project website. SWE-bench problems aren't toy exercises; they're real bugs from real projects like Django, Flask, and scikit-learn. Solving them requires understanding codebases, navigating dependencies, and producing patches that pass existing test suites.
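The scoring side follows the standard SWE-bench recipe: apply the agent's patch to a pinned repo checkout, then run the tests that the issue's known fix repairs. The sketch below compresses that recipe into one function; the real harness isolates each instance in its own Docker image and also re-checks previously passing tests, and the paths and test lists here are illustrative. A validator would fan this check out across 50 problems, e.g. with a process pool.

```python
# Simplified SWE-bench-style check: apply the diff, then run the issue's
# FAIL_TO_PASS tests. Illustrative only; the official harness runs each
# instance in Docker and also verifies the PASS_TO_PASS tests still pass.
import subprocess

def resolved(repo_path: str, model_patch: str, fail_to_pass: list[str]) -> bool:
    # A patch that does not apply cleanly scores zero.
    apply = subprocess.run(
        ["git", "-C", repo_path, "apply", "-"],
        input=model_patch, text=True,
    )
    if apply.returncode != 0:
        return False
    # The instance counts as resolved only if the broken tests now pass.
    tests = subprocess.run(["python", "-m", "pytest", *fail_to_pass], cwd=repo_path)
    return tests.returncode == 0
```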
Mechanism:
The winner-takes-all design creates intense competitive pressure. Unlike subnets where many miners earn proportional rewards, Ridges gives everything to the single best agent. This encourages dramatic improvements rather than incremental tweaks, and, because the winning agent is open source and public, pushes every other miner to study and build on the current leader's approach.
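A toy comparison makes the incentive difference clear. The scores below are invented resolve rates (Ridges' exact scoring pipeline is not public); the point is only how the two payout rules diverge.

```python
# Proportional vs. winner-takes-all payouts over illustrative scores.
def proportional(scores: dict[str, float]) -> dict[str, float]:
    total = sum(scores.values())
    return {uid: s / total for uid, s in scores.items()}

def winner_takes_all(scores: dict[str, float]) -> dict[str, float]:
    winner = max(scores, key=scores.get)
    return {uid: float(uid == winner) for uid in scores}

scores = {"agent_a": 0.62, "agent_b": 0.58, "agent_c": 0.41}
print(proportional(scores))      # ~{0.385, 0.360, 0.255}: everyone earns
print(winner_takes_all(scores))  # {1.0, 0.0, 0.0}: agent_a takes it all
```

Under the proportional rule, a four-point lead earns agent_a only a few percent more than the runner-up; under winner-takes-all it earns everything, which is why even small improvements are worth chasing aggressively.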
The codebase is massive: 3,093 commits across 22 contributors, with 1-13 commits per week. This is one of the most actively developed subnets in our coverage, earning a dev activity score of 8.2/10.
Market metrics are strong. At a 148,045 TAO market cap with 7,075 holders, Ridges is a major subnet. A Gini coefficient of 0.550 is the lowest in this batch (the most even token distribution), and a root proportion of 0.173 points to organic demand.
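For readers unfamiliar with the metric, the Gini coefficient ranges from 0 (every holder owns the same amount) to 1 (one holder owns everything). The 0.550 figure would come from a computation like the one below run over all 7,075 holder balances; the five balances here are a made-up stand-in.

```python
# Standard Gini coefficient over a list of token balances.
def gini(balances: list[float]) -> float:
    xs = sorted(balances)
    n = len(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * sum(xs)) - (n + 1) / n

print(round(gini([1, 1, 2, 5, 20]), 3))  # 0.579: a few large holders dominate
```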
The 30-day decline of 27% is significant and likely reflects broader rotation out of AI coding narratives after initial hype. The 90-day decline of 28% reinforces this. However, the fundamentals (dev activity, holder count, distribution) remain strong.
- Significant price decline: -27% 30-day and -28% 90-day suggest the market is cooling on AI coding competition.
- Winner-takes-all risk: The reward structure means only the single best agent earns. This could discourage smaller teams from competing.
- 0 active miners listed: TaoSwap shows 0 active miners, which may reflect the winner-takes-all structure (only the top agent is "active") or a reporting artifact.
- Centralized competition: Well-funded startups (Cognition raised $175M for Devin) can iterate faster than decentralized competitors.