2026

Decide, Evolve, Repeat

Almost every Event Sourcing implementation starts with aggregates. You define a class, give it a method for each command, mutate internal state when events are applied, and wire it all up with a framework. This works. Countless systems have been built this way. But if you step back and ask what Event Sourcing actually needs at its core, the answer is surprisingly minimal.

The Decider pattern, introduced by Jérémie Chassaing, strips Event Sourcing down to three functions. No classes. No inheritance. No framework. Just three pure functions that capture everything an event-sourced component does. It is one of the most elegant ideas in the Event Sourcing space, and once you see it, it changes how you think about the entire approach.
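To make the shape of the pattern concrete, here is a minimal sketch in Python for a hypothetical bank-account domain. The names `decide`, `evolve`, and `initial_state` follow the pattern's usual vocabulary; the domain itself is purely illustrative.

```python
from dataclasses import dataclass

# Commands express intent; events record what happened.
@dataclass(frozen=True)
class Deposit:          # command
    amount: int

@dataclass(frozen=True)
class Withdraw:         # command
    amount: int

@dataclass(frozen=True)
class Deposited:        # event
    amount: int

@dataclass(frozen=True)
class Withdrawn:        # event
    amount: int

initial_state = 0  # the balance before any events

def decide(command, state):
    """Pure function: (command, state) -> list of events."""
    if isinstance(command, Deposit):
        return [Deposited(command.amount)]
    if isinstance(command, Withdraw):
        if command.amount > state:
            return []  # reject: insufficient funds, so no event is produced
        return [Withdrawn(command.amount)]
    return []

def evolve(state, event):
    """Pure function: (state, event) -> new state."""
    if isinstance(event, Deposited):
        return state + event.amount
    if isinstance(event, Withdrawn):
        return state - event.amount
    return state

# Replaying all past events through evolve rebuilds the current state:
events = [Deposited(100), Withdrawn(30)]
state = initial_state
for event in events:
    state = evolve(state, event)
print(state)  # 70
```

Note that nothing here touches a database or a framework: `decide` and `evolve` are plain functions you can unit-test with literal values, which is precisely the appeal of the pattern.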

Introducing the MCP Server 1.0 for EventSourcingDB

AI-powered agents are no longer a future promise. They are here, embedded in development workflows, automating decisions, analyzing data, and helping teams move faster. Tools like Claude, ChatGPT, and Gemini have become part of the daily toolkit for developers and architects alike. But until now, these agents had no way to talk to your event store. They could reason about code, summarize documents, and generate queries, but they could not read your events, explore your subjects, or run an EventQL query against live data. The most valuable data source in an event-sourced system was invisible to AI.

Today, we are changing that. We are releasing the MCP Server 1.0 for EventSourcingDB, an extension that connects large language models directly to a running EventSourcingDB instance. It implements the Model Context Protocol, an open standard that defines how AI models discover and invoke external tools. With this release, your AI agents can read events, write events, browse subjects, inspect event types, register schemas, and run EventQL queries, all through natural language.
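For a sense of what "invoke external tools" means on the wire: MCP is built on JSON-RPC 2.0, and `tools/call` is the standard method for invoking a tool by name. The tool name `read_events` and its arguments below are assumptions for illustration only; the actual tools exposed by the MCP Server for EventSourcingDB may be named differently.

```python
import json

# A sketch of an MCP tool invocation as a JSON-RPC 2.0 request.
# "tools/call" is defined by the Model Context Protocol; the tool name
# "read_events" and the "subject" argument are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_events",
        "arguments": {"subject": "/orders/4711"},
    },
}

payload = json.dumps(request)
print(payload)
```

The model never sees this envelope directly; the MCP client constructs it from the model's natural-language intent, which is what makes the integration feel like a conversation rather than an API call.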

All Models Are Wrong, Some Are Useful

"All models are wrong, but some are useful." The statistician George Box wrote this in 1976, and it remains one of the most underappreciated truths in software engineering. We spend weeks, sometimes months, trying to build the perfect domain model before writing a single line of code. We draw diagrams, debate naming, argue about boundaries. The intention is good: get it right upfront so you don't have to fix it later.

But the pursuit of the perfect model is itself a trap. Models are always incomplete, always a simplification of a reality that is too complex to capture fully. The question was never whether your model is right. It was always whether your model is useful enough to start, and whether you know how to evolve it when reality teaches you what you missed.

It Was Never About the Database

We build a database. We spend our days thinking about storage engines, query languages, and wire formats. We obsess over write throughput, replay performance, and consistency guarantees. This is what we do, and we care deeply about getting the technical details right.

But when we look back at the most impactful conversations we've had with teams adopting Event Sourcing, they were never about technology. They were about how people work together. About how a team that had been talking past each other for months suddenly found a shared vocabulary. About how a business process that had been invisible for years became something everyone could see, discuss, and improve. That is the story we want to tell today.

One Line at a Time

When you build a database for Event Sourcing, one of the early design decisions is deceptively simple: how do you send data from the server to the client? JSON is the obvious answer. Every language has a parser, every developer knows the format, and every HTTP client handles it out of the box. But standard JSON has a fundamental limitation that becomes a showstopper the moment you deal with event streams.

We chose NDJSON as EventSourcingDB's wire format for streaming data. It's not a new technology. It's not exciting. It's barely even a specification. But it turned out to be exactly the right choice, and the story of how we got there is worth telling.
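The core property is easy to demonstrate: because each line is a complete JSON document, a client can parse events as they arrive instead of waiting for a closing bracket. The event fields below are illustrative, not EventSourcingDB's actual wire schema.

```python
import io
import json

# Simulate a streamed NDJSON response: one JSON document per line.
stream = io.StringIO(
    '{"type":"item-added","data":{"item":"book"}}\n'
    '{"type":"item-added","data":{"item":"pen"}}\n'
    '{"type":"checked-out","data":{}}\n'
)

events = []
for line in stream:  # one parse per line; no buffering of the whole response
    line = line.strip()
    if line:
        events.append(json.loads(line))

print([e["type"] for e in events])
```

Contrast this with a standard JSON array: the parser cannot hand you the first element until the final `]` arrives, which for an unbounded event stream means never.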

The Snapshot Paradox

When developers discover Event Sourcing, one of the first concerns that arises is replay performance. "What if a subject accumulates thousands of events? Won't rebuilding state become painfully slow?" The answer that usually follows is snapshots. Store the current state periodically, and start replaying from there instead of from the beginning. Problem solved, right?

Not quite. In our experience, snapshots are one of the most overrated concepts in the Event Sourcing toolbox. They solve a problem that rarely exists, and when they seem necessary, they often point to a deeper issue that snapshots merely paper over. That's the paradox: the feature designed to optimize performance frequently masks a modeling mistake that, if fixed, would make the optimization unnecessary.
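Mechanically, a snapshot changes very little, which is part of the argument. Assuming a simple fold over events, replaying from a snapshot just means starting the fold from a stored intermediate state instead of the initial one:

```python
# A sketch of snapshot-based replay, assuming a counter-style state and
# a fold over events. The "amount" field is illustrative.
def replay(events, initial=0):
    state = initial
    for event in events:
        state = state + event["amount"]  # illustrative apply step
    return state

all_events = [{"amount": n} for n in range(1, 1001)]

# Full replay from the beginning:
full = replay(all_events)

# Replay from a snapshot taken after event 900:
snapshot_state = replay(all_events[:900])
from_snapshot = replay(all_events[900:], initial=snapshot_state)

assert full == from_snapshot  # same result, fewer events folded
```

The sketch also hints at the paradox: if a subject genuinely accumulates so many events that this fold becomes slow, the more interesting question is usually why the subject is that long-lived in the first place.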

Consistency Is a Business Decision

You have probably heard of eventual consistency. The short version: in a distributed system, when data changes in one place, other parts of the system might not see that change immediately. For a brief moment, different components hold different views of the truth. Eventually, they all catch up. Eventually, they all agree. But not instantly.

This concept makes many developers nervous. "Eventually consistent" sounds like "temporarily wrong." It sounds like a bug waiting to happen. In German, it gets even worse: "eventual consistency" is often translated as "eventuell konsistent," which means "possibly consistent," implying the data might never be correct. No wonder people reach for stronger guarantees.

But "eventual" in English means "ultimately" or "in the end," not "possibly." Eventual consistency means the system will become consistent, given enough time. The question is not whether consistency happens, but when. And here is the uncomfortable truth: your system is already eventually consistent. You just have not admitted it yet.
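A toy illustration of what "eventually" means here: the write side records the change immediately, while a read model catches up only when it processes the pending events. In between, the two views disagree; afterwards, they agree. The in-memory queue stands in for whatever asynchronous mechanism a real system would use.

```python
# Write side: an append-only log. Read side: a derived view that is
# updated asynchronously via a pending-events queue (simulated here).
store = []
read_model = {"count": 0}
pending = []

def write(event):
    store.append(event)
    pending.append(event)  # projection happens later, not in-line

def project():
    """Catch the read model up with everything written so far."""
    while pending:
        pending.pop(0)
        read_model["count"] += 1

write("order-placed")
stale = read_model["count"]   # 0: the read model has not caught up yet
project()
fresh = read_model["count"]   # 1: now both sides agree

print(stale, fresh)
```

The guarantee is about the end state, not the interval: given that `project` eventually runs, the read model will converge on the truth the write side already recorded.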

Training AI Without the Data You Don't Have

Tesla's self-driving cars have driven hundreds of millions of miles on real roads. Impressive, right? But here is the problem: most of those miles are on sunny highways with clear lane markings and predictable traffic. The cars have seen thousands of variations of "blue sky, straight road, normal behavior." What they have not seen, or at least not nearly enough, is the moose that jumps in front of your car at 2 AM on a snow-covered country road in northern Sweden. That is the one-in-a-million scenario. And it is exactly the scenario where your AI needs to get it right.

This is not just a Tesla problem. It is a fundamental paradox of machine learning. For the normal cases, you have plenty of data. For the critical edge cases, you have almost none. Your fraud detection model has seen a million legitimate transactions, but how many sophisticated fraud attempts has it actually encountered? Your medical diagnosis system has processed countless routine cases, but how many rare diseases has it learned to recognize? The scenarios where a model failure is most costly are precisely the scenarios where you have the least training data.

Data Is the New Gold, Here's How to Mine It

Picture this: you work at a mid-sized e-commerce company. The marketing team needs customer purchase patterns. The logistics team needs order fulfillment timelines. The finance team needs revenue breakdowns by product category. All of this data exists somewhere in your organization. But when marketing asks the data engineering team, they get "file a Jira ticket." When logistics asks the backend team, they get "we can export a CSV next week." Everyone knows data is gold, but getting to it feels like mining with a spoon.

This scenario plays out in companies of every size, every day. The data is there. The value is obvious. But the organizational structure turns every data request into a negotiation. And the solutions we have built over the past two decades have not fixed the problem. They have made it worse.

DDD: Back to Basics

A few months ago, I wrote about what went wrong with Domain-Driven Design. In If You Apply DDD to DDD, You Won't Get DDD, I argued that the patterns became the goal, the terminology became a barrier, and the human work got buried under technical abstractions. That criticism stands. But criticism alone is incomplete.

Because not everything in DDD is noise. Strip away the pattern theater, the academic language, the certification industry, and you find a core that actually matters. Commands. Events. State. Aggregates. Ubiquitous Language. Bounded Contexts. These concepts are worth understanding, worth keeping, worth building on. This post is about that core.