
The enterprise software world is converging on a thesis: as AI agents take over workflows, the most valuable layer isn't the database underneath or the application on top; it's the context layer in the middle.
Evan Armstrong calls it the institutional knowledge that makes coordination valuable: "the email threads, wiki pages, Slack channels, onboarding docs, and tribal knowledge where organizational truth actually lived." Foundation Capital goes further, arguing that the real prize is decision traces: not just what a company knows, but why specific decisions were made. The exceptions, overrides, precedents, and cross-system reasoning that currently live in people's heads.
Both are right. And both are missing a piece.
Armstrong captures it perfectly: "A markdown file can describe your sales process. It can't encode that deals over $500K stall when legal reviews before procurement." Foundation Capital adds: "The reasoning connecting data to action was never treated as data in the first place."
So here's the question neither article asks: if the most valuable context lives in people's heads (in contradictions they haven't articulated, in trade-offs they resolve differently, in assumptions they don't know they're making), how do you get it out?
The context layer thesis is compelling. But it raises a question about how the context layer actually gets built. Armstrong's vision compounds through workflow traces: every agent execution adds context that makes the next one smarter. Foundation Capital's decision traces accumulate as agents observe cross-system reasoning. Both assume the hardest organizational knowledge will eventually surface through observation and accumulation.
But will it? The hardest, most valuable organizational context is pre-articulation — it exists as tacit knowledge that people carry but have never been asked to express. John Cutler captured this well: "Documents don't create meaning, people do. Process artifacts gain significance from the discussions, decisions, and behaviors surrounding them." His principle that "words, diagrams, and models only become meaningful through conversation and collective sense-making, not in isolation on a page" points at something the context layer conversation keeps missing: you can't just capture organizational meaning. You have to produce it, through structured interaction between the people who hold it.
This is what facilitation does. Not facilitation as a soft skill or a meeting technique, but as systematic infrastructure for extracting, structuring, and synthesizing human knowledge, especially when humans need to stay in the loop.
Consider what a structured facilitated session produces: each participant has a private 1-on-1 conversation with an AI facilitator that asks targeted questions, probes for specifics, and follows up on vague answers. The facilitator elicits reasoning, not mere opinions: Why do you think that? What would change your mind? Where does your experience differ from what others might say?
This matters because, as Cutler puts it, "there is no single source of truth — only perspectives." Agreement on terms is not agreement on meaning; people can use the same words while holding different interpretations. A context layer built from workflow traces will never surface this. You need structured dialogue to discover that when the sales team says "qualified lead" and the product team says "qualified lead," they mean fundamentally different things.
Cross-pollination takes it further: as participants contribute, emerging themes and contrasting perspectives are shared across threads, prompting people to engage with viewpoints they wouldn't have encountered otherwise. The output is a synthesis of how a group actually thinks about a problem: where they agree, where they diverge, and why.
That synthesis is a decision trace. It captures the reasoning that Foundation Capital says was "never treated as data." And unlike the traces their portfolio companies capture from agent workflows, facilitation generates traces from the humans who carry the context that agents can't observe.
We've been working on making facilitation accessible to AI agents, and discovered a surprisingly good alternative to building everything in a web app.
Harmonica is a structured conversation platform: you create a session with a topic and goal, share a link, and each participant has a private 1-on-1 conversation with an AI facilitator. Responses are then synthesized into actionable insights. Sessions range from team retrospectives and brainstorming to stakeholder interviews and community consultations: any scenario where you need to hear from multiple people and make sense of what they said. It now has not only a web UI but also a REST API and an MCP server that let AI agents create and manage sessions programmatically.
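As a rough illustration, an agent creating a session programmatically might assemble a request like the sketch below. The field names ("topic", "goal", "context") and the payload shape are our invention for this post, not Harmonica's documented API schema.

```python
# Hypothetical sketch: field names are assumptions for illustration,
# not Harmonica's actual REST or MCP schema.
import json

def build_session_request(topic: str, goal: str, context: str = "") -> str:
    """Assemble a JSON payload an agent could send to a session-creation endpoint."""
    payload = {"topic": topic, "goal": goal}
    if context:
        # Background the AI facilitator should know before talking to participants.
        payload["context"] = context
    return json.dumps(payload)

req = build_session_request(
    topic="API redesign retro",
    goal="Surface what worked, what stalled, and why",
    context="Team of six; redesign shipped last sprint",
)
```

The point of the sketch is how little a human has to supply once an agent holds the surrounding context: two or three fields, and the rest can be filled in from what the agent already knows.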
The MCP server unlocks a completely new user experience. We just shipped harmonica-chat, a companion for Claude Code and similar agents that uses the MCP server to design sessions, and in building it we realized that an AI coding agent is a surprisingly effective facilitation client. It already knows your project: your codebase, your recent commits, your CLAUDE.md, your conversation history. When you say "let's run a retro on the API redesign," it doesn't need you to fill out a form; it reads your project context and generates a session pre-loaded with relevant background.

This makes process design remarkably smooth, and more powerful than our standard web "session creation flow" in one specific way: the context is already there. A CLI-agent user who needs to facilitate something — a team retro, a design review, stakeholder alignment — can do it without leaving their workflow. The agent handles template matching, goal refinement, and context generation, then creates the session with a tailored facilitation prompt that understands exactly what's being discussed.
That last part matters more than it sounds. A facilitation session where the facilitator doesn't understand what's being discussed is exactly the gap Armstrong and Foundation Capital describe, playing out in miniature. Context-aware prompts are the difference between generic questions and ones that probe the specific trade-offs, history, and tensions a group actually needs to work through. For example, a session about “planning a local vibecoding meetup” now gets prompts that ask about neighborhood needs and community connections, not generic facilitation questions. The agent understands what it's facilitating.
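To make that flow concrete, here is a toy sketch of the agent-side steps: match a template, then fold project context into the facilitation prompt. The template names and prompt format are invented for illustration; harmonica-chat's real logic is richer than keyword matching.

```python
# Illustrative only: template names and prompt wording are assumptions,
# not harmonica-chat's actual implementation.
TEMPLATES = {
    "retro": ["what went well", "what stalled", "what to change"],
    "design review": ["goals", "trade-offs", "open risks"],
    "alignment": ["priorities", "constraints", "disagreements"],
}

def match_template(request: str) -> str:
    """Pick the closest template by a simple keyword match."""
    for name in TEMPLATES:
        if name.split()[0] in request.lower():
            return name
    return "retro"  # fall back to a sensible default

def build_facilitation_prompt(request: str, project_context: str) -> str:
    """Combine template structure with context the agent already holds."""
    template = match_template(request)
    sections = "; ".join(TEMPLATES[template])
    return (
        f"You are facilitating a {template} session.\n"
        f"Background: {project_context}\n"
        f"Probe each participant on: {sections}."
    )

prompt = build_facilitation_prompt(
    "let's run a retro on the API redesign",
    project_context="Recent commits touch the v2 REST endpoints",
)
```

Even in this reduced form, the shape of the advantage is visible: the `project_context` argument costs the user nothing, because the agent already has it.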
But the MCP server unlocks more than just a better creation flow. Because it gives AI agents full programmatic access to sessions — creating them, reading responses, fetching summaries — it becomes a building block for experimentation. Maria Milosh, a researcher collaborating with us through the Open Facilitation Library, used our MCP server to implement a novel cross-pollination approach: a two-phase workflow where the first phase gathers ideas and extracts structured reasoning, and the second phase presents curated "packets" of others' perspectives back to participants for reflective dialogue. The MCP integration made it possible to orchestrate this entirely from an agent, no platform changes required.
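Reduced to plain data structures, the curation step of that second phase might look like the sketch below. The function name and data shapes are ours; the real workflow drives these steps through Harmonica's MCP tools rather than local dicts.

```python
# Minimal sketch of cross-pollination "packet" curation, with the
# orchestration reduced to plain dicts. Names are illustrative.
def build_packets(responses: dict, k: int = 2) -> dict:
    """Phase 2: for each participant, curate a packet of up to k
    perspectives contributed by *other* participants."""
    packets = {}
    for person in responses:
        others = [r for p, r in responses.items() if p != person]
        packets[person] = others[:k]
    return packets

# Phase 1 output: one structured position per participant.
phase_one = {
    "ana": "Ship weekly; slow releases hide integration risk.",
    "ben": "Batch releases; weekly shipping burns review time.",
    "caz": "Release cadence matters less than rollback speed.",
}

packets = build_packets(phase_one)
```

Each participant then reflects on their packet in a second round of dialogue, which is what turns a pile of individual opinions into the engagement-with-other-viewpoints described earlier.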
We think facilitation is infrastructure that should be as open and accessible as the context layer tools everyone is building. harmonica-chat and harmonica-mcp are both open source. Anyone can create and manage structured facilitated sessions or build new facilitation methods on top.
The research foundations matter too. Through our partnership with Metagov, we're contributing to the Open Facilitation Library — a research project working on open standards for AI-assisted facilitation. The library will include facilitation patterns, evals, and agent skills that inform how prosocial "deliberation agents" or collective response systems could work. Our goal is to make AI facilitation a shared discipline with transparent, peer-reviewed methods, not a proprietary black box.
Armstrong's context layer compounds: every workflow execution adds traces that make the next one smarter. Facilitation compounds differently. Every session doesn't just produce a record of what people think; it changes how they think. Participants encounter perspectives they wouldn't have otherwise, articulate reasoning they hadn't formalized, discover disagreements they didn't know existed. Follow-up sessions build on previous findings. Over time, an organization develops a shared capacity for sense-making.
The context layer thesis is right: the most valuable software layer is the one that holds institutional meaning. But meaning doesn't accumulate passively. As Cutler warns, we should "focus on cultivating the conversations you want to see, not just building systems that track activity." The context layer needs more than observation and workflow traces. It needs facilitation infrastructure for the messy, irreducible process of people making sense of things together. That's the piece we're building.


