The most effective model of the human mind isn’t a single processor. It’s a committee. AI researchers are starting to notice.
In my previous articles, I explored why AI struggles to learn efficiently and how emotional value signals might be part of the answer. Today I want to take a detour into psychology — because it turns out therapists figured out something about intelligence that AI researchers are just now discovering.
The mind isn’t one thing. It’s many things negotiating.
The Parts Model
In the 1980s, psychologist Richard Schwartz developed Internal Family Systems (IFS), a therapeutic framework built on a counterintuitive idea: what we experience as a unified “self” is actually a collection of distinct sub-personalities, each with its own goals, fears, and strategies.
These “parts” include:
- Managers: Parts that try to keep us safe by controlling situations, planning ahead, staying vigilant
- Firefighters: Parts that react to emotional emergencies — often through distraction, numbing, or impulsive action
- Exiles: Wounded parts carrying painful memories, kept out of awareness by the others
- The Self: A core awareness that can observe and coordinate all the parts
This isn’t just metaphor. IFS has grown into a widely practiced, evidence-supported therapeutic modality, in part because it maps onto how people actually experience their inner lives. We do have conflicting impulses. We do sometimes feel like different people in different contexts. We do have parts of ourselves we’ve pushed away.
The AI Parallel
Here’s what caught my attention: the most advanced AI architectures are converging on something structurally similar.
Modern “agentic” AI systems aren’t single models. They’re collections of specialized components:
- Planner agents that break down complex tasks
- Executor agents that carry out specific actions
- Critic agents that evaluate outputs and catch errors
- Memory systems that maintain context across interactions
- An orchestrator that coordinates all of the above
Sound familiar?
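To make the parallel concrete, here is a minimal sketch of that kind of architecture in Python. Every name in it (Planner, Executor, Critic, Memory, Orchestrator, and their methods) is an illustrative invention rather than any particular framework’s API; in a real system, each stub would wrap model calls, tools, and retries.

```python
# A minimal, framework-free sketch of an "agentic" architecture.
# All names are illustrative; production systems would back each
# component with LLM calls, tools, and retry logic.

class Planner:
    def plan(self, task: str) -> list[str]:
        # Break a complex task into ordered steps.
        return [f"step 1 of {task}", f"step 2 of {task}"]

class Executor:
    def execute(self, step: str, context: list[str]) -> str:
        # Carry out one concrete step, given prior context.
        return f"result of {step}"

class Critic:
    def review(self, result: str) -> bool:
        # Evaluate an output and flag errors (trivially accepting here).
        return "error" not in result

class Memory:
    def __init__(self):
        self.log: list[str] = []
    def remember(self, item: str) -> None:
        self.log.append(item)

class Orchestrator:
    # The coordinator: routes work among the specialized parts.
    def __init__(self):
        self.planner, self.executor = Planner(), Executor()
        self.critic, self.memory = Critic(), Memory()

    def run(self, task: str) -> list[str]:
        results = []
        for step in self.planner.plan(task):
            result = self.executor.execute(step, self.memory.log)
            if self.critic.review(result):   # accept, or silently drop
                self.memory.remember(result)
                results.append(result)
        return results

print(Orchestrator().run("draft the quarterly report"))
```

Notice that all of the coordination logic lives in Orchestrator.run. None of the parts knows why the others exist, which is exactly the gap the rest of this article circles.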
Explore Interactively
Why might the Exiles be just as important for AI? → Multipart Mind: Interactive Exploration
What’s Missing
The IFS model suggests something important that current AI lacks entirely: the wounded parts.
In humans, exiles carry the experiences that shaped us — traumas, early failures, moments of shame or fear. They’re not just baggage. They’re information. They encode hard-won knowledge about what’s dangerous, what matters, what to protect.
Managers and firefighters exist because of the exiles. The protective strategies make sense only in relation to what they’re protecting.
Current AI has no equivalent. There’s no accumulated vulnerability, no experiences that shaped protective responses, no developmental history that gives the system’s behaviors meaning and context.
This might be why AI systems can be so brittle. They have the protective machinery (error handlers, safety filters, fallback behaviors) without the developmental foundation that would make those protections adaptive rather than just reactive.
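As a toy illustration of that difference (all names hypothetical, and nothing like a production safety system): a purely reactive guard applies the same fixed rule forever, while even a crude “adaptive” one conditions its caution on an accumulated history of what actually went wrong.

```python
# Toy contrast between reactive and history-informed protection.
# Hypothetical names throughout; not a real safety mechanism.

class ReactiveGuard:
    """Fixed rule, no history: the same trigger always gets the same block."""
    def check(self, request: str) -> bool:
        return "forbidden" not in request  # static filter, forever

class AdaptiveGuard:
    """Conditions its caution on what previously went wrong."""
    def __init__(self):
        self.incidents: list[str] = []  # a crude 'developmental history'

    def record_failure(self, request: str) -> None:
        self.incidents.append(request)

    def check(self, request: str) -> bool:
        # Grow more cautious only around the kinds of requests that
        # previously caused harm, not uniformly everywhere.
        similar = sum(1 for past in self.incidents if past in request)
        return similar < 2

guard = AdaptiveGuard()
guard.record_failure("delete all files")
print(guard.check("please delete all files"))  # True: only one prior incident
```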
The Missing Ground
There’s another absence the IFS model reveals: a shared world model.
In humans, all parts — managers, firefighters, exiles — reference the same underlying reality. When your inner critic worries about an upcoming presentation, it’s worried about the same presentation your excited part is anticipating. The Self doesn’t just coordinate; it holds a coherent model of the world that gives the parts common ground.
Richard Sutton, who shared the 2024 Turing Award with Andrew Barto for their foundational work on reinforcement learning, puts the AI gap bluntly: LLMs “have the ability to predict what a person would say. They don’t have the ability to predict what will happen.”
That’s a devastating distinction. Current AI builds models of human text patterns, not models of reality. The planner agent and the executor agent in a multi-agent system aren’t grounded in shared understanding of the world — they’re passing tokens back and forth, each predicting what the other’s output should look like.
Without a world model, there’s no shared reality for the parts to coordinate around. It’s like running a committee meeting where each participant is hallucinating a different room.
The Integration Problem
In IFS therapy, healing happens through “unburdening” — helping exiled parts release the pain they carry while maintaining the wisdom they learned. The goal isn’t to eliminate parts but to help them work together more harmoniously, coordinated by Self.
This is remarkably similar to a core challenge in AI: how do you get multiple specialized agents to coordinate effectively?
Current approaches use explicit orchestration — a master controller that routes tasks and aggregates outputs. But this is crude compared to the fluid, context-sensitive coordination that IFS describes in healthy human functioning.
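In code, explicit orchestration usually reduces to a routing table plus an aggregation step. The hypothetical sketch below shows how rigid that is: the controller’s branching logic, not the agents themselves, decides who talks to whom and what counts as done.

```python
# Hypothetical master-controller routing: explicit, hard-coded orchestration.

def route(task: dict, agents: dict) -> str:
    # The controller decides who handles what...
    if task["kind"] == "plan":
        steps = agents["planner"](task["goal"])
        outputs = [agents["executor"](s) for s in steps]
    else:
        outputs = [agents["executor"](task["goal"])]
    # ...and aggregates the results itself.
    vetted = [o for o in outputs if agents["critic"](o)]
    return "\n".join(vetted)

agents = {
    "planner":  lambda goal: [f"{goal}: outline", f"{goal}: draft"],
    "executor": lambda step: f"done: {step}",
    "critic":   lambda out:  True,
}
print(route({"kind": "plan", "goal": "summary"}, agents))
```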
What would it look like for AI systems to develop genuine integration rather than just orchestration? Systems where the components don’t just pass messages but actually understand each other’s purposes and constraints?
The Honest Caveat
I want to be careful here. I’m not claiming AI needs therapy, or that IFS is a blueprint for artificial general intelligence.
What I am doing is what my interdisciplinary training taught me to do: look for structural similarities across domains that don’t usually talk to each other. My PhD work spanned art, music, theater, philosophy, and computer science — not because I couldn’t pick a lane, but because the interesting problems live at the intersections.
This habit of mind — aligning, comparing, contrasting, bridging — sometimes produces insights that specialists miss. It also sometimes produces false analogies that look deep but aren’t. I genuinely don’t know which this is.
What I am suggesting:
- Psychology has spent decades studying how multiple sub-systems coordinate within a single mind
- AI is now building systems with multiple coordinating components
- The accumulated wisdom about what makes human multi-part systems work (or fail) might be relevant
The IFS insight is that healthy functioning requires more than just having the right parts — it requires the right relationships between parts, developed through experience, mediated by something that can hold the whole.
What This Means for Learning Leaders
1. Multi-agent AI will become standard.
The single-model chatbot is already giving way to orchestrated systems with specialized components. Understanding this architecture helps you evaluate what these systems can and can’t do.
2. Integration matters more than capability.
Just as a person can be brilliant but dysfunctional if their parts aren’t coordinated, AI systems can have impressive components that fail to work together. Look for how systems handle handoffs, conflicts, and edge cases.
3. Development history shapes behavior.
In humans, current functioning makes sense only in light of developmental history. AI systems increasingly have training histories, fine-tuning histories, interaction histories. Understanding these can help predict behaviors and failure modes.
The Deeper Question
Here’s what I keep coming back to: the psychological models that best describe human flourishing — IFS, attachment theory, developmental psychology — all emphasize relationship and history as foundational.
Current AI has neither in any meaningful sense. The components don’t have relationships with each other. The system doesn’t have a history that shaped its current configuration.
Maybe that’s fine for current applications. But if we want AI that genuinely learns and adapts rather than just processes, the therapeutic traditions might have more to teach than the engineering traditions.
The therapists have been thinking about multi-part minds for decades. The AI researchers are just getting started.
A Provocative Implication
If this parallel holds, it raises an uncomfortable question: Would AI systems with genuine developmental histories also be vulnerable to the failure modes that history produces?
In humans, depression can emerge as learned helplessness. Anxiety as over-protective managers running unchecked. Trauma responses that once saved us but no longer serve us. The very mechanisms that make human learning adaptive also make us breakable.
Would an AI with “exiles” also be capable of these failure modes? And if so — here’s the turn — would the therapeutic traditions that help humans integrate their parts also help align AI systems?
IFS doesn’t pathologize parts. It says they’re trying to help with outdated information. Therapy isn’t removing the firefighter who numbs with distraction; it’s updating its world model so it can protect more skillfully.
If that’s the mechanism, then AI alignment work and therapeutic work might converge on the same problem: how do you update protective strategies that are no longer adaptive without losing the wisdom they encoded?
I find it striking that the AI alignment community increasingly talks about “values” and “goals” and “what the system really wants” — language that sounds less like engineering and more like therapy.
This is the most speculative article in this series. I’m genuinely uncertain whether the parallels I’m drawing are deep or superficial. What’s your intuition? Do the multi-agent AI architectures feel fundamentally different from human multi-part psychology, or is there something real here?
Allen Partridge, PhD — Director of Product Evangelism, Adobe Digital Learning Solutions
Companion to Multipart Mind — an interactive exploration of the IFS parallel for AI architectures.