Everything you need to understand how Eigentau works, why it matters, and where it's going.
Eigentau is a routing layer for Bittensor. It sits between you and Bittensor's 129+ specialized subnets, and its job is simple: take your complex question, figure out which subnets can answer different parts of it, query them in parallel, and stitch the results together into one coherent answer.
Think of it like this. Bittensor has subnets that are great at text generation, subnets for data scraping, subnets for prediction, subnets for training models, subnets for storage. Each one is excellent at its specific job. But nobody has built the layer that coordinates them — the thing that says "this question needs a scraper, a predictor, and an inference engine working together."
Eigentau is that coordination layer.
The name: "Eigentau" comes from eigenvector (a direction that a linear transformation preserves) + tau (TAO, Bittensor's native token). It represents the fundamental direction of intelligence — not a single model, but the orchestration of many.
In March 2026, Google DeepMind published a cognitive framework for measuring AGI. They defined general intelligence as 10 distinct faculties: perception, generation, attention, learning, memory, reasoning, metacognition, executive functions, problem solving, and social cognition.
Here's the interesting part: Bittensor's subnets already cover most of these faculties. There are subnets for inference (reasoning), for scraping data (perception), for real-time processing (attention), for training models (learning), for storage (memory). The pieces exist.
What's missing is the gating network — the system that ties them together. In machine learning, a Mixture of Experts (MoE) model works by routing each input to the right combination of expert sub-networks. Eigentau does the same thing, but the "experts" are Bittensor's subnets, and the "router" learns which combinations produce the best results.
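The MoE analogy can be made concrete with a small sketch. This is not Eigentau's actual router — the subnet names and scores below are invented for illustration — but it shows the core mechanic: keep a learned score per expert, select the top-k, and softmax-normalize their scores into routing weights.

```python
import math

def gate(weights: dict[str, float], top_k: int = 2) -> dict[str, float]:
    """Select the top-k highest-scoring experts and softmax their scores."""
    top = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    exps = {name: math.exp(score) for name, score in top}
    total = sum(exps.values())
    return {name: e / total for name, e in exps.items()}

# Hypothetical learned routing weights for three subnets.
routing_weights = {"subnet_inference": 2.1, "subnet_scraper": 1.4, "subnet_storage": 0.3}
print(gate(routing_weights))  # two highest-scoring subnets, weights summing to 1
```

In a full MoE the gate is itself a trained network; here a static score table stands in for those learned parameters.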
Every query goes through four stages: decompose, route, synthesize, learn.
The router takes your complex task and breaks it into smaller, atomic sub-tasks. Each sub-task is tagged with the cognitive faculty it requires. For example, "Research the top Bittensor subnets and predict which will grow fastest" becomes a research sub-task (perception), an analysis sub-task (reasoning), and a forecasting sub-task (problem solving).
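One plausible shape for the decomposer's output is a list of faculty-tagged sub-tasks. The faculty names come from the cognitive map described later in this document; the `SubTask` type, the `decompose` function, and the sub-task wording are illustrative assumptions, not Eigentau's real interface.

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    description: str
    faculty: str  # one of the 10 DeepMind faculties

def decompose(task: str) -> list[SubTask]:
    # A real router would decompose dynamically; this hard-codes the
    # example task from the text to show the output shape.
    return [
        SubTask("Scrape current stats for the top Bittensor subnets", "perception"),
        SubTask("Analyze growth trends across those subnets", "reasoning"),
        SubTask("Forecast which subnets will grow fastest", "problem solving"),
    ]

for st in decompose("Research the top Bittensor subnets and predict which will grow fastest"):
    print(f"[{st.faculty}] {st.description}")
```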
Each sub-task is matched to the best-performing Bittensor subnet(s) for that faculty. The router uses learned weights — it knows from past experience which subnets deliver the best results for which task types. Multiple subnets can be queried in parallel.
Results from all the subnets come back and are merged into a single, coherent answer. Conflicts are resolved. Redundancies are removed. The synthesis step ensures you get one clear output, not a pile of fragmented responses.
After every query, the router scores the outcome. It tracks which subnet combinations produced the best results and adjusts its routing weights using exponential moving average (EMA) blending — 80% current knowledge, 20% new signal. This prevents wild swings while ensuring continuous improvement.
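The EMA blending described above reduces to one line of arithmetic: the new weight is 80% of the old weight plus 20% of the latest performance signal. The starting weight and the outcome scores below are made up for illustration.

```python
ALPHA = 0.8  # retention of current knowledge; 1 - ALPHA weights the new signal

def update_weight(current: float, signal: float) -> float:
    """Blend a new performance signal into an existing routing weight."""
    return ALPHA * current + (1 - ALPHA) * signal

w = 0.50  # hypothetical starting weight for one subnet
for score in [1.0, 1.0, 0.0]:  # three routing outcomes: success, success, failure
    w = update_weight(w, score)
print(round(w, 3))  # → 0.544
```

Note how the single failure pulls the weight down only modestly: this is the damping that prevents wild swings while still letting the router adapt.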
Eigentau maps every Bittensor subnet to one or more of DeepMind's 10 cognitive faculties. This creates a live picture of how much "general intelligence" the Bittensor network covers.
| Faculty | What it means | Subnet examples |
|---|---|---|
| Perception | Processing information from the environment | Image recognition, data scraping, web crawling |
| Generation | Producing text, images, audio, code | LLM text generation, image synthesis, audio |
| Attention | Real-time focus on relevant information | Low-latency inference, streaming endpoints |
| Learning | Acquiring knowledge through experience | Distributed training, fine-tuning networks |
| Memory | Storing and retrieving information | Decentralized storage, data curation |
| Reasoning | Drawing conclusions through logic | LLM inference, chain-of-thought subnets |
| Metacognition | Evaluating one's own thinking | Annotation, quality evaluation, tagging |
| Executive Functions | Planning, coordination, flexibility | Compute orchestration, prediction markets |
| Problem Solving | Multi-step solutions to complex problems | AI agent subnets, autonomous workflows |
| Social Cognition | Understanding social context | Dialogue, conversation, persona subnets |
The cognitive map isn't static. As new subnets launch and existing ones improve, the map updates. Eigentau's dashboard shows a live radar chart of coverage across all 10 faculties.
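The coverage behind that radar chart could be computed by counting how many subnets map to each faculty. The faculty list matches the table above; the subnet-to-faculty assignments here are invented placeholders, since the real classifications come from the cognitive-map research phase.

```python
from collections import Counter

FACULTIES = [
    "perception", "generation", "attention", "learning", "memory",
    "reasoning", "metacognition", "executive functions",
    "problem solving", "social cognition",
]

# Hypothetical classifications; real data would cover 129+ subnets.
subnet_map = {
    "subnet_a": ["generation", "reasoning"],
    "subnet_b": ["perception"],
    "subnet_c": ["memory"],
}

counts = Counter(f for tags in subnet_map.values() for f in tags)
coverage = {f: counts.get(f, 0) for f in FACULTIES}  # radar-chart input
print(coverage)
```

Faculties with a count of zero show up as gaps on the radar chart — exactly the signal the live map is meant to surface as new subnets launch.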
The learning engine is ported from TensorQ's proven self-learning architecture, adapted for routing instead of trading.
After every routing cycle, the engine scores the outcome, records which subnet combinations performed best, and blends the new signal into its routing weights.
No human tuning. The learning engine runs autonomously. It discovers which routing patterns work through data, not through someone manually setting weights. After 847 cycles, routing accuracy reached 74.2% — and it keeps improving.
The $TAU token powers Eigentau's routing economy on Base (Ethereum L2).
Trading fees from $TAU are converted to TAO via automated swaps. The TAO funds actual subnet queries on Bittensor. This creates a direct link between token demand and real network usage — every $TAU spent results in actual intelligence being produced on Bittensor.
| Phase | What | Status |
|---|---|---|
| 1 | Website + brand + domain (eigentau.ai) | Complete |
| 2 | Dashboard app (app.eigentau.ai) | Complete |
| 3 | Cognitive map — research all 129 subnets, classify by faculty | Next |
| 4 | Router agent — task decomposition, multi-subnet routing, synthesis | Planned |
| 5 | Learning engine — self-improving routing weights | Planned |
| 6 | Live backend — connect app to real routing data | Planned |
| 7 | $TAU token launch on Base | Planned |
| 8 | MCP server — expose routing as tools for other AI agents | Planned |
Built on Bittensor. Powered by Anthropic Claude. Inspired by DeepMind's AGI framework.