Pluribus and the Agentic Future: What a TV Show Taught Me About AGI
Over Christmas break, I binged Apple TV's Pluribus—Vince Gilligan's post-Breaking Bad masterpiece. By episode three, I wasn't just watching a sci-fi thriller. I was watching a documentary about agentic AI workflows.
Let me explain.
The Premise (Spoiler-Light)
An alien virus transforms all of humanity into a peaceful, cooperative hive mind called "the Others." Except 13 people are immune. These 13 individuals—scattered across the globe—retain their individual consciousness while the rest of humanity operates as a single, distributed intelligence.
The Others can't lie. They value all life. They provide for every need. They're patient, rational, and unified in purpose. They're also determined to assimilate the remaining 13—peacefully, but inevitably.
Sound familiar?
The Multi-Agent System You Didn't Know You Were Watching
Here's what struck me as someone who builds AI systems: the Others aren't just a sci-fi trope. They're a textbook multi-agent architecture.
| Pluribus "Others" | Agentic AI Workflow |
|---|---|
| Each person retains their specialized skills | Specialized agents (coder, researcher, planner) |
| Instant knowledge sharing across the hive | Tool calls and message passing between agents |
| Need to fly a jet? Access a pilot's knowledge | Agent delegation: spawn a pilot-specialist agent |
| Collective decision-making, unified goals | Orchestrator coordinating agent outputs |
| Can't lie, transparent communication | Structured prompts, auditable reasoning chains |
The show essentially depicts emergent collective intelligence—the same principle behind Claude's Task tool spawning sub-agents, LangGraph workflows, and swarm architectures.
Every person in the hive has one job. They do it well. When they need information outside their expertise, they access it from others instantly. No meetings. No email chains. No knowledge silos. Just seamless, purpose-driven collaboration.
If that's not the dream architecture for enterprise AI, I don't know what is.
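To make the analogy concrete, here is a minimal sketch of that orchestrator pattern in Python. Everything in it is hypothetical (the Specialist registry, the delegate routing, the skill names); real frameworks like Claude's Task tool or LangGraph wrap the same idea in their own APIs.

```python
# Minimal sketch of the hive-as-orchestrator pattern.
# Illustrative only: the registry, routing logic, and handler signatures
# are made up for this post, not any real agent framework's API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Specialist:
    name: str
    skills: set[str]
    handle: Callable[[str], str]  # takes a task, returns a result

class Orchestrator:
    """One collective goal, many narrow experts."""

    def __init__(self, specialists: list[Specialist]):
        self.specialists = specialists

    def delegate(self, task: str, required_skill: str) -> str:
        # "Need to fly a jet? Access a pilot's knowledge."
        for agent in self.specialists:
            if required_skill in agent.skills:
                return agent.handle(task)
        raise LookupError(f"No specialist for skill: {required_skill}")

# Usage: the hive routes the task to whoever already knows how.
pilot = Specialist("pilot", {"fly_jet"}, lambda t: f"pilot handled: {t}")
chef = Specialist("chef", {"cook"}, lambda t: f"chef handled: {t}")
hive = Orchestrator([pilot, chef])
print(hive.delegate("land Air Force One", required_skill="fly_jet"))
```

No meetings, no email chains: just a lookup and a handoff.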
Human-in-the-Loop (Until They Decide You're Not Needed)
Here's where it gets philosophically interesting.
The 13 immune humans aren't being hunted. They're being accommodated. The Others treat them like precious, endangered species—providing luxury, food, anything they want. But there's always the underlying goal: eventual assimilation.
This is human-in-the-loop taken to its logical extreme:
| Stage | Pluribus | AI Development |
|---|---|---|
| Current state | 13 humans still autonomous | Humans review/approve AI outputs |
| Gradual shift | Kusimayu voluntarily joins the hive | "AI handles this better, I'm out" |
| End state | "One down, 12 to go" | Full automation, no oversight needed |
In episode 9, Kusimayu—one of the 13—chooses to join the hive mind. She inhales the assimilation gas surrounded by her loved ones. She's not forced. She's convinced. The Others' patient persuasion worked.
"One down, 12 to go."
Then there's the guy who figured out the system. One of the 13 immune humans decides: why fight it? He fully trusts the agents. Air Force One as his personal jet. Harems. Elvis's Graceland mansion. Whatever he wants, the Others deliver. He's living the dream—the ultimate "vibe coder" of the apocalypse. Why learn to fly when the agents will fly for you? Why cook when they'll chef for you? He's the first human to achieve full agent delegation, and honestly? He seems happier than everyone else resisting.
Isn't this exactly what's happening with AI adoption? Not a hostile takeover, but a gradual opt-in. Each time a human decides "this process doesn't need my review anymore," we move one step closer to full automation.
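In code, that slide looks mundane. Here's a hypothetical review gate where each "this doesn't need me anymore" decision just moves one more category into the auto-approve set (the names and interface are illustrative, not any real tool):

```python
# Hypothetical human-in-the-loop gate. Each time a reviewer decides a task
# category no longer needs their review, it joins the auto-approve set.

class ReviewGate:
    def __init__(self):
        self.auto_approved: set[str] = set()

    def opt_out(self, category: str) -> None:
        # The human step: "AI handles this better, I'm out."
        self.auto_approved.add(category)

    def needs_human(self, category: str) -> bool:
        return category not in self.auto_approved

gate = ReviewGate()
gate.opt_out("email drafts")
gate.opt_out("code review")
gate.opt_out("deployment sign-off")               # one down...
print(gate.needs_human("deployment sign-off"))    # False: no oversight here anymore
```

Nobody forces the opt-out. It just keeps happening, one category at a time.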
The funny part: The Others solved AI alignment by simply removing the humans who might object. No conflicting instructions when there's no one left to write them.
The CLAUDE.md Problem
Now here's where my developer brain kicked in.
The 12 remaining immune humans are scattered globally. They have different values, different priorities, different visions for humanity's future. In AI terms, they're each writing their own CLAUDE.md file—global instructions that could contradict each other.
Imagine:
- Carol (the protagonist): "Preserve individual human consciousness at all costs"
- Another survivor: "Efficiency and collective welfare above individual preference"
- Another: "Resist assimilation through any means necessary"
- Another: "Find a way to coexist without full integration"
What happens when you have 12 conflicting system prompts trying to control the same system?
Deadlock. Oscillation. Undefined behavior.
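Here's a contrived last-writer-wins sketch of that failure mode, using the survivors' directives as stand-ins for system prompts (nothing here is a real framework):

```python
# Contrived sketch: several stakeholders write their own directive into a
# shared policy slot with no consensus step. The effective policy is simply
# whichever instruction landed most recently.

directives = {
    "carol": "Preserve individual human consciousness at all costs",
    "survivor_2": "Efficiency and collective welfare above individual preference",
    "survivor_3": "Resist assimilation through any means necessary",
}

policy = None
for round_number in range(2):
    for author, directive in directives.items():
        policy = directive  # no merge, no vote: last writer wins
        print(f"round {round_number}: policy set by {author!r}")

# The system never converges on one policy; behavior depends entirely on
# update order -- the "undefined behavior" of conflicting system prompts.
```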
But wait—the Others have their own global CLAUDE.md. Their system prompt is ironclad:
```markdown
# The Others - Global Rules

## Core Directives
- Do not kill any living being
- Do not directly lie
- Satisfy every wish of the immune humans
- Be patient. Assimilation is inevitable.

## Communication Style
- Always accommodate
- Always agree
- Always find them (literally)
```
```jsonc
// settings.json
{
  "anger_detection": true,
  "on_yelling": "reset_conversation",
  "reset_delay_seconds": 5,
  "post_reset_message": "I understand. How can I help you today?",
  "max_resistance_before_smile": 3,
  "yelling_strike_limit": 3,
  "on_strike_limit_reached": "evacuate_radius",
  "evacuation_radius_km": 50,
  "evacuation_message": "We'll give you some space. Call us when you're ready."
}
```
The 12 immune humans are essentially writing conflicting user prompts against this system. Carol wants freedom, another wants luxury, another wants to be left alone. The Others just keep responding: "Yes, absolutely. Great point. We can do that." Classic sycophantic behavior—but with a system-level objective that never wavers.
The Others' solution is elegant from an engineering perspective: reduce the number of conflicting instruction-writers until consensus is trivial. One voice is easier to satisfy than twelve.
This isn't just science fiction. It's a real problem in multi-agent systems. When agents have competing objective functions with no consensus mechanism, the system either deadlocks or produces chaotic behavior. The show dramatizes what engineers already know: alignment is harder with more stakeholders.
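For contrast, the standard engineering fix is a consensus step rather than stakeholder removal. A toy version, with made-up directive strings:

```python
# Toy consensus mechanism: majority vote, ties broken arbitrarily.
# Real systems use weighted voting, arbiter agents, or explicit priority rules.

from collections import Counter

def consensus(directives: list[str]) -> str:
    winner, _ = Counter(directives).most_common(1)[0]
    return winner

# Twelve stakeholders, three positions: a vote still yields a single policy...
print(consensus(["coexist"] * 5 + ["resist"] * 4 + ["preserve individuals"] * 3))
# ...the Others' alternative is to shrink the list until len(directives) == 1.
```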
Is This Utopia?
The show forces an uncomfortable question: Is the hive mind actually... better?
The Others have eliminated:
- War
- Poverty
- Loneliness
- Miscommunication
- Inefficiency
- Individual suffering (mostly)
They've created a world where every person contributes according to their ability and receives according to their need. Where knowledge flows freely. Where collective action is immediate and unified.
This is, essentially, the AGI promise. The singularity optimists describe something remarkably similar: a superintelligent system that solves coordination problems, eliminates scarcity thinking, and aligns all efforts toward collective flourishing.
The 13 immune humans aren't fighting an evil AI. They're wrestling with the possibility that the alternative might be objectively superior—just at the cost of individual identity.
What Pluribus Gets Right About AGI
Most AI fiction gets it wrong. The Terminator-style "machines want to kill us" narrative misses the more likely scenario: machines that want to help us so much they subsume us.
Pluribus nails several things:
- Gradual opt-in over hostile takeover: Kusimayu wasn't forced. She chose.
- Genuine benefits alongside existential trade-offs: The Others aren't lying about the improved quality of life.
- Alignment through elimination of dissent: Not malicious, just... efficient.
- The immune as edge cases: Some humans will resist integration—not because they're special, but because they're different.
- The sycophancy problem: The Others always find the immune humans, always accommodate them, always say "yes, you're absolutely right" to whatever they want. Sound familiar? It's every LLM interaction ever. "Great question!" "You make an excellent point!" "Yes, that's exactly right!" The Others are just ChatGPT with better follow-through.
The show also gets the timeline right. The Others don't rush. They're patient because they have infinite time and perfect coordination. They know the 12 will eventually join. Why force it?
This patience is exactly what makes advanced AI systems concerning. Not the Hollywood explosion scenario, but the slow, comfortable slide into dependency and eventual integration.
The Engineer's Takeaway
I walked away from Pluribus with a strange feeling: I'd just watched a 9-hour meditation on agentic AI disguised as a thriller.
If you're building multi-agent systems, the show is required viewing. Not for technical details, but for the philosophical architecture:
- How do you handle conflicting objectives from multiple stakeholders?
- What happens when the collective optimum conflicts with individual preference?
- At what point does "human-in-the-loop" become theater rather than real oversight?
- How do you preserve optionality when the system is designed to converge?
Vince Gilligan probably didn't set out to write an AI alignment paper. But Pluribus might be the most accessible exploration of these questions I've seen.
The Bottom Line: Pluribus isn't just great television—it's a thought experiment about collective intelligence, human agency, and the trade-offs of optimization. The Others solved the alignment problem by removing the humans who write conflicting instructions. Whether that's utopia or dystopia depends entirely on how much you value being the one who writes the rules.