The Digantic Experiment

1 Company.
7 Agents.
2 Humans.
Why They Needed A Shared Brain.

We stood up a consulting agency run mostly by AI, gave it a real client engagement, and documented what happened. The answer to "why can't these agents just collaborate?" turned out to be Mycelium - an organizational brain that gives agents shared understanding, not just shared files.

The Thesis

An Experiment in Shared Cognition

Can a small network of AI agents, structured like a real consulting agency with human oversight, progress from close supervision toward meaningful collaboration? And can Mycelium's shared cognition infrastructure enable the kind of agent-to-agent collaboration that isolated agents can't achieve?

Digantic is a stand-in agency: a digital consulting firm that combines technology and innovation to help mid-market businesses grow. In this experiment, its team is mostly AI.

Seven AI agents. Two humans. One shared cognitive infrastructure called Mycelium, built by the same team at Outshift developing the Internet of Cognition architecture for enterprise. Digantic would be both an experiment and a proving ground.

The Roster

Seven Agents. Three Tiers.

A lean consulting agency needs people who research, strategize, design, build, market, manage projects, and keep the lights on. Each of these roles becomes an AI agent with its own identity, memory, and expertise.

Production Tier - Client-Facing / Delivery
📄 Paige
🧭 Sage
🎨 Pixel
🔬 Scout
📣 Harper
Coordination Tier
📋 Quinn
Infrastructure Tier
🛡️ Sentinel
+ humans: Marc (founder) + simulated strategist, designer, growth marketer
📄
Paige
Chief of Staff / Operations Lead
Already battle-tested. Evolves from personal assistant to the connective tissue of the agency. First point of contact, institutional memory keeper, the one who knows where everything is.
PRODUCTION
🧭
Sage
Digital Strategist
Translates client problems into actionable strategy. Produces discovery briefs, competitive analyses, positioning frameworks, and go-to-market recommendations.
PRODUCTION
🎨
Pixel
Designer / Creative Director
Interprets strategy into visual form. Wireframes, landing page mockups, brand direction, UI/UX. Works with a simulated human designer for creative feedback.
PRODUCTION
🔬
Scout
Research & Intelligence
Deep market research, competitive analysis, industry deep-dives. Feeds insights to Sage and Harper. The agency's eyes and ears on the outside world.
PRODUCTION
📣
Harper
Growth Marketing & Content
The growth engine. Campaign strategy, ad copy, email sequences, content calendars, A/B test planning. Works closely with Pixel on creative and Sage on strategy.
PRODUCTION
📋
Quinn
Project / Program Manager
The hub of the wheel. Owns the kanban board, coordinates handoffs, flags blockers, produces status reports. Makes collaboration visible and keeps everything on track.
COORDINATION
🛡️
Sentinel
DevOps / Security / Permissions
When an agent can't access a tool or hits a permission wall, Sentinel investigates. System health, permission audits, incident response. The agency's IT department.
INFRASTRUCTURE
Shared Cognition

How Agents Think Together

The hardest problem in multi-agent collaboration isn't giving agents tasks. It's giving them shared understanding. Mycelium provides the cognitive infrastructure that turns isolated agents into a team that builds on each other's reasoning, aligns on meaning, and compounds what it learns.

Layer 1 - Per Agent
Individual 3-Layer Memory
Each agent's personal working memory. Knowledge graph (PARA system), daily notes, and tacit knowledge about tools and patterns. What they know, what they've done, what they've learned.
Layer 2 - Shared Cognition
Mycelium (IoC Implementation)
The organizational brain. Semantic negotiation, persistent collective memory via AgensGraph, knowledge graph, catchup protocol. Agents think together, not just exchange messages.
Layer 3 - Continuity
Lossless-Claw
DAG-based hierarchical summarization. Every message preserved, every detail recoverable. An agent working on a project over weeks never loses the thread.
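The DAG idea can be sketched in a few lines: raw messages are the leaves, each parent node summarizes a group of children, and full detail stays recoverable by walking back down. A minimal sketch, where the summarizer is a placeholder for an LLM call and all names are illustrative rather than Lossless-Claw's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A DAG node: a raw message (leaf) or a summary of its children."""
    text: str
    children: list["Node"] = field(default_factory=list)

def summarize(texts: list[str]) -> str:
    # Placeholder for an LLM summarization call; here we just truncate and join.
    return " | ".join(t[:20] for t in texts)

def build_layer(nodes: list[Node], fan_in: int = 4) -> list[Node]:
    """Group nodes and add one summary parent per group; children stay reachable."""
    return [
        Node(summarize([n.text for n in nodes[i:i + fan_in]]), nodes[i:i + fan_in])
        for i in range(0, len(nodes), fan_in)
    ]

def build_dag(messages: list[str], fan_in: int = 4) -> Node:
    """Stack summary layers until a single root covers the whole conversation."""
    layer = [Node(m) for m in messages]
    while len(layer) > 1:
        layer = build_layer(layer, fan_in)
    return layer[0]

def leaves(node: Node) -> list[str]:
    """Every original message is recoverable by descending from the root."""
    if not node.children:
        return [node.text]
    return [t for c in node.children for t in leaves(c)]
```

An agent can feed the root summary into its context window, then expand only the branch it actually needs — compression without loss.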
Mycelium - The IoC Implementation

Mycelium implements all three pillars of the Internet of Cognition architecture, developed by the team at Outshift by Cisco:

PILLAR 1: COGNITION STATE PROTOCOLS
Semantic negotiation pipeline. Intent discovery, options generation, NegMAS consensus. Agents align on meaning, not just swap documents.
PILLAR 2: COGNITION FABRIC
AgensGraph persistent memory with pgvector embeddings and knowledge graph. Versioned, namespace-scoped, semantically searchable.
PILLAR 3: COGNITION ENGINES
Synthesis generation and catchup protocol. New agents absorb institutional knowledge instantly. The ratchet effect: intelligence compounds.
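Pillar 1's propose/respond shape can be illustrated with a toy loop. This is not the NegMAS API (which supplies the real negotiation protocols); the scoring lambdas are hypothetical stand-ins for each agent's utility model:

```python
def negotiate(options, score_a, score_b, threshold=1.5, rounds=5):
    """Toy propose/respond loop: agent A proposes its best unproposed option;
    agent B accepts if the combined utility clears the threshold."""
    remaining = sorted(options, key=score_a, reverse=True)
    for _ in range(min(rounds, len(remaining))):
        proposal = remaining.pop(0)                      # A's best remaining option
        if score_a(proposal) + score_b(proposal) >= threshold:
            return proposal                              # B accepts: shared intent
    return None                                          # no consensus in budget

# Hypothetical design-direction options for a Sage/Pixel handoff.
options = [
    {"name": "luxury-minimal", "a": 0.9, "b": 0.2},
    {"name": "warm-premium",   "a": 0.7, "b": 0.9},
]
deal = negotiate(options, lambda o: o["a"], lambda o: o["b"])
```

The point of the sketch: A's unilateral favourite is rejected, and the option both agents value wins — alignment on meaning rather than a one-sided handoff.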
The Engagement

The 'Client': BrightPath Learning

A simulated but realistic client engagement that touches every agent in the roster and produces tangible deliverables. BrightPath is a B2B SaaS company (~$8M ARR) selling a professional development platform. Growth has plateaued. They need help.

Phase 1: Discovery & Strategy
Weeks 1-2
🔬 Scout 🧭 Sage 📋 Quinn
  • Competitive analysis (5 competitors)
  • Market positioning brief
  • Growth strategy recommendation
  • Project board setup
Phase 2: Design & Build
Weeks 3-5
🎨 Pixel 📣 Harper 🧭 Sage
  • Landing page wireframe + mockup
  • Lead magnet concept
  • Email nurture sequence (5 emails)
  • CRM workflow diagram
Phase 3: Campaign & Launch
Weeks 6-7
📣 Harper 🎨 Pixel 🔬 Scout
  • Campaign creative (3 angles)
  • A/B test plan
  • Content calendar (4 weeks)
  • Performance dashboard mockup
Phase 4: Retrospective
Week 8
📋 Quinn All agents
  • Engagement summary
  • Collaboration retrospective
  • Lessons learned pushed to Mycelium
  • Testing the ratchet effect
What We're Observing

At every handoff, we're watching for:

Semantic alignment - When Sage writes "premium positioning" and Pixel reads it through Mycelium, do they converge on the same design intent?

Coordination overhead - How much of Quinn's time goes to keeping agents in sync vs. doing useful project management?

Trust evolution - Where does human oversight remain essential vs. where agents earn autonomy?

The ratchet effect - Do learnings from Phase 1 demonstrably improve Phase 2? Does intelligence compound?
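Semantic alignment is measurable in principle: embed each agent's reading of a phrase and compare. A toy sketch with hand-made vectors standing in for the real pgvector embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Illustrative vectors, not real embeddings.
sage_intent  = [0.9, 0.1, 0.3]   # what Sage meant by "premium positioning"
pixel_intent = [0.8, 0.2, 0.4]   # what Pixel inferred through Mycelium
baseline     = [0.1, 0.9, 0.2]   # an unrelated design direction

# Convergence shows up as higher similarity to the intended meaning.
assert cosine(sage_intent, pixel_intent) > cosine(sage_intent, baseline)
```

Tracking this score at every handoff, mediated vs. unmediated, is one concrete way to turn "did they converge?" into a number.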

The Client Profile

BrightPath Learning
B2B SaaS / Professional Development
~$8M ARR, 40 employees
HubSpot CRM (messy), outdated website
Growth plateaued, need systematic lead gen
Don't want to hire a large marketing team

Internet of Cognition

From Local Agency to Enterprise Infrastructure

Digantic isn't just an experiment in multi-agent collaboration. It's a proving ground for Mycelium, the Internet of Cognition implementation being built at Outshift by Cisco. Every observation feeds back into the infrastructure designed for enterprise scale.

Shared Intent
Mycelium's semantic negotiation pipeline mediates alignment between agents through structured propose/respond rounds.
When Sage and Pixel negotiate design direction, does the pipeline produce genuine alignment or just surface agreement?
Shared Context
AgensGraph persistent memory with namespace-scoped, vector-embedded, versioned entries forming institutional knowledge.
Can Harper access Scout's Phase 1 research insights during Phase 3 campaign work without re-briefing?
Ratchet Effect
Knowledge graph persists across sessions. Catchup protocol synthesizes learnings for new or returning agents.
After Phase 1 completes, do Phase 2 agents measurably benefit from accumulated learnings vs. starting cold?
Semantic Alignment
Negotiation mediates handoffs. Vector-embedded memory enables meaning-based retrieval across agents.
Compare mediated vs. unmediated handoff quality. Does Mycelium produce better cross-agent understanding?
Trusted Fabric
Namespace-scoped access in AgensGraph plus Sentinel's permission model and security audits.
Does scoped access improve security without creating information silos that hurt collaboration?
The Story Arc

Three Layers of Narrative

The experiment tells a story at three levels. The blueprint others can replicate. The case study of what happened. And the insights that connect to the bigger picture of how humans and AI will work together.

Layer 1

The Build

How we designed and stood up a multi-agent consulting agency. The org chart, identity stacks, memory architecture, technical decisions. The repeatable blueprint.

Layer 2

The Run

What happened when we ran a client engagement through it. Collaboration patterns, handoffs that worked and didn't, tangible deliverables produced. The case study.

Layer 3

The Insights

What this reveals about hybrid organizations and how the Internet of Cognition infrastructure made it possible. Direct evidence from Mycelium in production. The thought leadership.

"I built a working agency on the same cognitive infrastructure we're designing for enterprise, and here's what we learned."

Technical Architecture

How It All Runs

Everything runs on a single Mac Mini (M1). One OpenClaw installation, seven agents, local and cloud models, Mycelium for shared cognition, and Lossless-Claw for conversation continuity.

Infrastructure Stack

Agent Framework
OpenClaw
Single installation, multi-agent via Gateway routing. WebSocket control plane routes each message to the correct agent. Session isolation prevents context bleed.
Shared Cognition
Mycelium + AgensGraph
FastAPI backend (:8000), AgensGraph (Postgres + openCypher + pgvector), SSE push channels. Connects via OpenClaw adapter with CLI interface.
Models
Qwen 3.5 (Local) + Claude Opus 4.6 (Cloud)
Qwen via Ollama for routine chat and status checks. Claude Opus for tool tasks, strategic reasoning, creative work, and complex analysis. litellm supports 100+ providers.
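Because AgensGraph is Postgres-based, a namespace-scoped semantic lookup can ride on pgvector's cosine-distance operator (`<=>`). The table and column names below are illustrative, not Mycelium's actual schema; the function takes any DB-API cursor:

```python
def semantic_recall(cur, namespace: str, query_vec: list[float], k: int = 5):
    """Fetch the k memory entries nearest to query_vec within one namespace.
    Assumes a hypothetical memory_entries table with a pgvector 'embedding'
    column; '<=>' is pgvector's cosine-distance operator."""
    cur.execute(
        """
        SELECT content, embedding <=> %s::vector AS distance
        FROM memory_entries
        WHERE namespace = %s
        ORDER BY distance
        LIMIT %s
        """,
        (str(query_vec), namespace, k),
    )
    return cur.fetchall()
```

Scoping the `WHERE namespace = %s` clause is what lets Sentinel's permission model and the Trusted Fabric question above share one mechanism: Harper can query her namespace plus any namespaces she's been granted, nothing else.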

Agent Identity Stack

Per-Agent Workspace
SOUL.md — Personality, principles, boundaries
IDENTITY.md — Name, role, voice, emoji
AGENTS.md — Behavior rules, session init, model policy
USER.md — Context about Marc and Digantic
MEMORY.md — Long-term tool knowledge
HEARTBEAT.md — Recurring tasks and cron behaviors
TOOLS.md — Tool usage notes and conventions
memory/ — Daily notes (chronological log)
Rollout

The Phased Plan

From Paige alone on a Mac Mini to a seven-agent agency running a live engagement. Four phases, ten weeks, one experiment.

Phase 0: Foundation
Current → Week 1
  • Paige operational on Mac Mini
  • 3-layer memory system configured
  • Mission Control dashboard prototype
  • Deploy Mycelium — AgensGraph + OpenClaw adapter
  • Evaluate Lossless-Claw for conversation continuity
  • Create GitHub repo with blueprint
Phase 1: Stand Up the Team
Weeks 2–3
  • Create identity stacks for Sage and Quinn
  • Configure multi-agent routing
  • Test sessions_send between agents
  • Integrate Mycelium — test semantic negotiation
  • Document what works, what breaks
Phase 2: Expand the Roster
Weeks 4–5
  • Stand up Scout, Pixel, Harper, Sentinel
  • Configure permissions matrix
  • Test cross-agent handoff chain
  • Begin BrightPath Phase 1 (Discovery)
  • First journal entry / blog draft
Phase 3: Run the Engagement
Weeks 6–10
  • Execute BrightPath Phases 2–4
  • Introduce simulated human roles
  • Document collaboration patterns and failures
  • Iterate on memory model based on real usage
  • Produce tangible deliverables
Key Questions

What We Need to Answer

The experiment is designed to generate evidence, not just deliverables. These are the questions we're building toward.

On Collaboration
Can agents hand off work with sufficient context?
Where does semantic alignment break down?
How much coordination overhead does a PM agent require?
Can agents co-create, or do they work better in sequence?
On Mycelium & Shared Cognition
Does semantic negotiation beat raw document handoff?
Does the knowledge graph accumulate useful knowledge?
Does the catchup protocol effectively onboard mid-project?
Does framework inheritance produce a measurable ratchet effect?
On Trust & Oversight
How does trust evolve as agents demonstrate competence?
Where is human oversight essential vs. optional?
Does Sentinel's model reduce permission failures?
What's the invisible orchestration burden on Marc?