Investing Notes

Public scratchpad of notes, memos, and references.

# Two Paths to AGI — Karpathy’s Architecture vs. Recursive Learning Acceleration

Updated: 2025-10-28T16:04:24Z
#ai-buildout #agi

| **Category** | **Karpathy / Architecture-First Camp** ***(Andrej Karpathy, Demis Hassabis, Yann LeCun, Geoffrey Hinton, Elon Musk, Dario Amodei)*** | **Recursive Learning / Accelerationist Camp** ***(Sam Altman, Ilya Sutskever, John Schulman, Shane Legg, Eliezer Yudkowsky, Nick Bostrom, Emad Mostaque)*** | **How They Can Both Emerge** |
|:-:|:-:|:-:|:-:|
| **Core Belief** | **AGI requires new architecture** — long-term memory, multimodality, and continual learning must exist before true general intelligence. | **AGI will emerge once systems recursively improve themselves** — optimizing data, architecture, and training beyond human supervision. | Architecture builds the brain; recursion ignites the improvement loop once the brain is stable. |
| **Primary Bottleneck** | Missing cognitive components: memory persistence, context distillation ("sleep"), and identity continuity. | Lack of closed feedback loops that allow models to train themselves autonomously. | Build cognition first, then ignite recursion. |
| **Definition of AGI** | A system that **perceives, remembers, and acts coherently** across time — a digital organism with continuity. | A system that **self-optimizes and evolves** — recursively improving versions of itself. | AGI = continuity + self-improvement. |
| **Trigger for Takeoff** | Building "the brain": hybrid architecture combining working, episodic, and consolidated memory. | Closing "the loop": models design, evaluate, and train themselves (auto-research). | Architecture enables safe recursion; recursion unlocks exponential growth. |
| **Analogy** | **Engineer a brain before consciousness arises.** | **Spark the fire once fuel is ready.** | Sequential: structure → ignition. |
| **Technical Focus** | Memory-centric compute (HBM, vector DBs, consolidation loops, multimodal agents). | Recursive training loops, self-rewarded RL, reflective evolutionary models. | Fusion occurs in **meta-learning systems** that both persist and adapt. |
| **Time Horizon** | ≈ **10 years** — "The Decade of Agents." | **2–5 years** once recursive learning stabilizes. | Phase 1 → Architecture; Phase 2 → Recursion. |
| **Leading Examples / Companies** | DeepMind (Gemini & SIMA), Anthropic (Memory Claude), xAI (Tesla Dojo), Eureka Labs, Meta FAIR continual learning. | OpenAI (O-series / Q*), Anthropic (Claude Reflexion), Sakana AI (evolutionary loops), Google DeepMind self-play agents, OpenDevin auto-research. | DeepMind & Anthropic bridge both — adding reflection and memory simultaneously. |
| **Philosophical Tone** | **Engineering realism:** design cognition like biology. | **Emergent optimism:** let intelligence evolve itself. | Both describe different phases of emergence. |
| **Ultimate Outcome** | Stable, interpretable, memory-driven AGI — a "digital organism." | Rapidly self-optimizing AGI — a "recursive scientist." | Convergence yields **sustainable evolving intelligence:** memory + recursion. |

## Summary

* Karpathy Camp: "**Build the brain before it can think about itself.**"
* Recursive Camp: "**Once it can think about itself, it will rebuild itself.**"

Both can be true — **architecture enables recursion, and recursion accelerates architecture**. **Phase 1:** *Engineering cognition* → **Phase 2:** *Igniting self-improvement.*