By 2035, AI could be everywhere—transforming science, healthcare, education, and governance—without ever becoming fully autonomous. This scenario explores how building AI as tools, not agents, might deliver breakthroughs while keeping humans in control.
Why imagine a Tool AI future at all? This section explains why many experts see controllable, narrow AI as both safer and more realistic than agentic alternatives, and what’s at stake if we don’t pursue it. It also defines what we mean by Tool AI.
This timeline shows what it would actually take for a Tool AI world to emerge—the early failures, liability laws, market pivots, and institutional choices that could make controllable AI the path of least resistance.
If Tool AI did take root, what would change in practice? This section shows how science, healthcare, education, governance, and more could see extraordinary progress—universal flu vaccines, personalized medicine, adaptive classrooms, transparent governance—through advanced tools that accelerate discovery while keeping humans in the loop.
Beyond institutions, how would daily life feel? Here we explore shorter work weeks, healthier lives, new freedoms in identity and community—and the challenges that come with them.
This section outlines the biggest challenges to making a Tool AI world real. Can controllable systems deliver AGI-level benefits without drifting into autonomy? Will human oversight scale, or collapse into rubber-stamping? And can Tool AI survive against market incentives that push toward riskier, more agentic models? These are the fault lines that could keep this future from emerging.
The appendices include short explainers of key terms, a list of scenario contributors, methodological notes on how the scenario was developed, and references for further reading. They also connect this report to the broader AI Pathways project.