
2025 was hard.
Our technical lead quit in June. Development slowed to a crawl. We're still pre-revenue. We didn't raise the pre-seed round we'd hoped for. There were months where we weren't sure if we'd make it to the end of the year.
And yet, here we are. Still building Harmonica. Still believing that teams and communities deserve better ways to listen to (if not deliberate with) their stakeholders than forms and surveys.
After 12 months of public alpha, we're entering public beta with the launch of Harmonica Pro. It's not the triumphant "we crushed it" announcement you usually see from startups. It's more like: we survived, we learned a lot, and we're clearer than ever about what we're building toward.
Here's the honest story of our year.
The year started with a pilot in Paris. Missions Publiques used Harmonica as part of a hybrid workshop with ~100 students of the École Normale Supérieure (in parallel to the AI Action Summit). They wanted to explore how AI might shape a deliberative process focused on the risks and opportunities of AI from the students' perspectives. It was small, experimental, and exactly the kind of use case we'd dreamed about. Yale's Institution for Social and Policy Studies wrote a blog post about it — our first real external validation that this thing we were building mattered to people beyond our tiny team.
Working on the ground with a real deliberative process revealed gaps in our product we hadn't seen from the outside:
Professional facilitators need more control. We watched manual documentation create bottlenecks in real time. That's why we added editing of system prompts and started developing AI-powered transcription for voice input.
Cross-pollination matters. During the workshop, some of the richest moments came when groups could react to statements from parallel discussions. We've since built follow-up questions powered by vector search, so participants can build on each other's ideas in async sessions.
Real deliberation spans multiple sessions. Complex processes need shared knowledge and continuity. Our projects feature came directly from seeing how hard it was to maintain context across a multi-day event.
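The cross-pollination idea above can be pictured as a small retrieval step: embed a new contribution, find the most similar statements from parallel discussions, and hand the best match to the facilitator agent as material for a follow-up question. This is only a toy sketch, not Harmonica's implementation — a bag-of-words embedding stands in for a real sentence-embedding model, and `followup_candidates` is a hypothetical name.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding; a real system would use a
    # sentence-embedding model and an actual vector index.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(count * b[token] for token, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def followup_candidates(new_statement, parallel_statements, k=1):
    """Rank statements from parallel discussions by similarity to the
    new one, so a facilitator agent can ask: 'another group said X —
    how does that relate to your point?'"""
    query = embed(new_statement)
    ranked = sorted(parallel_statements,
                    key=lambda s: cosine(query, embed(s)), reverse=True)
    return ranked[:k]

# Statements collected from other groups in the same session:
pool = [
    "We worry AI tutors will widen inequality between schools",
    "Open models make local-language education tools possible",
    "Exam fraud is the biggest short-term risk",
]
best = followup_candidates("AI could deepen inequality in education", pool)
```

With real embeddings the ranking step stays the same; only `embed` and the index behind it change, which is why the vector-search layer is easy to swap out.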
We also participated in the OECD bootcamp in Lisbon. Members of our team helped build 2 out of 3 prototypes that were selected by “challenge owners” from EU governments. Then nothing happened.
We learned something important: passion doesn't pay invoices. The deliberative democracy space has funding: we've seen the grants, the EU programs, the institutional budgets. But between the bureaucracy, the closed networks, and possibly the extra friction of having a Russian-named founder in a space that runs on trust and warm intros, we couldn't convert excitement into contracts. We needed a different path.
Meanwhile, Andrea Gallagher, who has been supporting us throughout all of 2025 and deserves more gratitude than we can express here, represented Harmonica at two high-profile events on the other side of the pond.
First, the workshop in Berkeley organized by Plurality Institute and the Council on Technology & Social Cohesion. Over 70 researchers and technologists gathered to map the landscape of LLM tools for public discourse. That gathering eventually produced the report "Mapping LLM Tools for Public Discourse, Pluralism & Social Cohesion", where Harmonica was mentioned as one of the new tools for consensus building through AI-powered facilitation.
Not gonna lie: being included alongside platforms from Google, academic institutions like MIT and Stanford, and leading civic tech organizations felt surreal for a three-person team running on pure enthusiasm.

Another highlight of the year: Harmonica was used at Edge Esmeralda to generate seed statements for a Polis conversation. Audrey Tang discussed our approach during the event. Moments like this remind us why we're doing this work: seeing Harmonica create value in real deliberative processes alongside people who've spent decades thinking about augmented deliberation and conversation networks.
Our technical lead left. Suddenly Harry had to step up his vibecoding game. Development slowed dramatically. Features we'd planned for summer got pushed to fall, then to "someday." Startups don't usually talk about this stuff. But it's the reality for most early-stage teams, and pretending otherwise would be dishonest.
Our largest deployment came in July: Scroll's Co-Creation Cycle 3, where we facilitated organizational design for their DAO. Over two weeks, 40+ delegates used Harmonica to explore key domains, prioritize initiatives, and deliberate on optimal org design, combining async AI-facilitated surveys with live workshops.
The results surprised even us. One delegate put it perfectly:
"When I interact with the Public Forum, I always measure my words. But this interface is really cozy. It feels like ChatGPT. I can be more assertive and honest. This is the first time I'm writing with more subjectivity. I don't have to consider the DAO members' susceptibility as an organization and the subsequent political tensions. You can be direct and brief. AI could be a neutral field in which to express opinions."
Spanish-speaking delegates could respond in their native language — not just translation, but genuine accessibility that surfaced perspectives we'd never have captured in English-only surveys. We adjusted prompts mid-session when early responses revealed our initial framing wasn't working. We pre-populated Miro boards with AI-synthesized insights so workshops could focus on decisions, not brainstorming from scratch.
The CCC3 produced real outcomes: clear prioritization across six domains, actionable recommendations for council formation, and a validated framework Scroll is now using for future governance iterations.
We're deeply grateful to Eugene Leventhal, Scroll's head of governance at the time, whose forward-thinking approach made this possible. Not every governance lead would bet on an untested AI facilitation tool for a major deliberative process. Eugene did, and he's continued championing deliberative-first governance in web3, which he presented in his keynote at DevConnect in Argentina.
But Scroll also taught us something important: our customers need facilitation services, not just a tool. Great technology isn't enough if organizations don't know how to design deliberative processes. This realization is shaping our 2026 strategy—we want to partner with expert facilitators and co-create templates that encode their expertise.
We did a Foundation Sprint with Chris Lunney, based on a new methodology from the authors of the Design Sprint, which helped us refine our positioning and strategy. Sometimes you need to stop building and figure out what you're actually building for.
We spent weeks interviewing our primary ICP: consultants. After around 20 conversations, we realized there was no problem-solution fit. Consultants confirmed the pain of "harvesting deeper insights from many stakeholders" but weren't eager to pay for our solution.
Hence our latest pivot: change management. We are shifting our focus to selling Harmonica to companies, particularly change management leaders and people ops who need to engage employees at scale.
In 2026, we’re aiming to build a community of service providers who can design processes and facilitate workshops for our enterprise customers, and who want to create Harmonica templates for their own use or for our future template store. We're also exploring an affiliate model where consultants sell Harmonica to their clients and we share revenue.
An idea we first discussed with Colleen from AI Objectives Institute (and presented at the Interoperable Deliberative Tools symposium in January) has slowly evolved into a multi-track research project.
Seminars with experts. Metagov helped us organize seminars with amazing experts such as Andy Paice, Lisa Schirch, Jorim from Dembrane, Alice Siu from Stanford's Online Deliberation Platform, and researchers from Google Jigsaw.
Evals and prompt experimentation. We started working with Maria Milosh, a talented data scientist from Chicago who is even more passionate about deliberative democracy than we are. She's been helping us develop rigorous evaluations and experiment with different facilitation approaches.
Cross-pollination research. Maria partnered with Oxford Computational Political Science Group to study how different approaches to synthesizing and sharing insights between participants affect social choice in deliberative processes. We also connected with Asya Gergel through this project, strengthening our agent-building capabilities.
Cooperative AI fellowship. Joseph Low, another contributor to OFL, won a fellowship at the Cooperative AI Foundation (yay!). He'll be researching AI facilitation with mentorship from Michiel Bakker, a well-known researcher at DeepMind and co-creator of the Habermas Machine.
We are also happy to extend our collaboration with the Metagov ecosystem by powering an experimental version of Harmonica with Apertus, the Swiss LLM accessible through Public AI inference utility, exploring what it means to run AI-powered sensemaking on public infrastructure.
We're a tiny team, but we're lucky to work with brilliant people:
Jonas — our full-stack developer from New Zealand, who has been responsible for our backend and DevOps
Chris — product designer and facilitator from Brooklyn, helping us refine our GTM strategy and building out facilitation services
Umar — data scientist and engineer who used to work with Harry at Gitcoin, helping us get serious about analytics
Maria — helping with evals and cross-pollination workflows, bringing rigor to our ML pipelines
And of course our advisor Andrea Gallagher, who has believed in Harmonica since before it made any sense to believe in us. Thank you so much.
We're entering 2026 with clear priorities:
Harmonica Pro is live. New signups go to Pro now; we'll migrate everyone in January. There's now a free tier with limitations, and a Pro tier for teams who need Harmonica for more serious projects. If you're a nonprofit or community organization, do reach out; we're committed to giving you free access until we add BYOM (so you'll be able to power Harmonica with the LLM of your choice, including public AI).
Facilitation partnerships. We're looking for expert facilitators who want to work with us to design and run workshops with our clients and co-create templates. The tool is only as good as the process design behind it.
Pre-seed round. We're preparing to raise in early 2026. If you're an angel investor interested in the future of work, collaborative tech and sensemaking, we'd love to talk.
In 2026, multiplayer AI will eat single-player AI. Greg Isenberg and Fareed Mosavat from a16z have both made this call recently, pointing at the same pattern we've seen before: Google Docs beat Word, Figma beat Sketch, Notion beat Evernote.
A few weeks ago Anthropic introduced their "AI interviewer", proving there's massive demand for conversational research at scale. But their approach is single-player: one person, one AI, one conversation in isolation.
We think the next frontier is AI that facilitates structured conversations across many people, asynchronously, and synthesizes the patterns and tensions that emerge. Not replacing human facilitators, but making great facilitation accessible to every team, at any scale.
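One way to picture the synthesis step is as grouping many async contributions by theme and flagging where a group agrees versus where it genuinely splits. This is a deliberately minimal sketch under stated assumptions: the theme and stance labels are assumed to come from an upstream LLM or human tagging pass, and `synthesize` is a hypothetical name, not a Harmonica API.

```python
from collections import defaultdict

# Each async contribution is assumed to arrive pre-tagged with a
# theme and a stance (e.g. by an upstream LLM classification step).
contributions = [
    {"theme": "remote work", "stance": "support",
     "text": "Async lets us hire globally"},
    {"theme": "remote work", "stance": "oppose",
     "text": "We lose spontaneous collaboration"},
    {"theme": "tooling", "stance": "support",
     "text": "Shared docs keep everyone aligned"},
]

def synthesize(contributions):
    """Group contributions by theme. A theme with both support and
    opposition is surfaced as a tension; a one-sided theme is
    reported as an emerging pattern."""
    by_theme = defaultdict(lambda: defaultdict(list))
    for c in contributions:
        by_theme[c["theme"]][c["stance"]].append(c["text"])
    report = {"tensions": [], "patterns": []}
    for theme, stances in by_theme.items():
        if "support" in stances and "oppose" in stances:
            report["tensions"].append(theme)
        else:
            report["patterns"].append(theme)
    return report

report = synthesize(contributions)
```

The point of the sketch is the shape of the output: a facilitator (human or AI) gets back not a pile of transcripts but a short list of where the group converges and where it needs to keep talking.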
2025 was a year of learning what doesn't work. 2026 is about shipping what does.
As Harry put it: "We can't perfect everything. New Year's resolution: ship."
See you on the other side.
Happy holidays from Harry and Artem!
Harmonica's 2025 in Review