In Practice

It's None of Your Business

Three weeks ago, we introduced Simple Mode, a new mode for EventSourcingDB that replaced expressive domain events with three universal operations: row-inserted, row-updated, and row-deleted. Many of you laughed. Some of you forwarded it to colleagues who were not sure whether to laugh or cry. But beneath the satire, the frustrations we exaggerated were real. Teams genuinely struggle with Event Sourcing, and the reason they give is almost always the same: "It's too complex. We just want to store data."

We have heard that sentence dozens of times. And every time, we have come to the same conclusion: the complexity is not the problem. The problem is a misunderstanding about whose job it is to make business decisions. Because right now, in codebases all over the world, developers are making those decisions every single day, quietly, implicitly, and almost always without realizing it. And that is where things go wrong.

A Skill for Easter: Teaching Claude to Speak EventSourcingDB

Two weeks ago, we introduced the MCP Server 1.0 for EventSourcingDB, connecting AI agents to your event store through the Model Context Protocol. It showed what becomes possible when LLMs can query events, inspect subjects, and run EventQL in natural language. But setting up a Docker container and configuring an MCP client is not always what you want when you just need to quickly interact with your database. Sometimes you want something lighter.

As a small Easter gift, we are releasing a Claude Code Plugin for EventSourcingDB. It is a set of Skills that teach Claude how to use the entire EventSourcingDB API. No SDK installation, no Docker container, no MCP configuration. You install the plugin and start talking to your event store. Think of it as the lightest possible bridge between natural language and your events.

Debugging Event-Sourced Systems: A Detective's Guide

In a traditional CRUD system, debugging starts with a familiar question: "What is the current state?" You open the database, look at the row, and see that the order status is "cancelled." But you do not know why. Was it the customer? The payment provider? An automated process? The database shows you the crime scene, but not the crime. All you have is a body and no witnesses.

Event-sourced systems turn this on its head. Instead of inspecting the current state and guessing what went wrong, you follow the trail of events. Every change that ever happened is recorded, timestamped, and preserved. Debugging becomes less about guessing and more about reading. You are not a detective arriving at a cold case. You have a complete surveillance tape.
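Reading the tape can be sketched in a few lines of TypeScript. The event names, timestamps, and payloads below are invented for illustration; they are not part of any real stream:

```typescript
// A hypothetical event stream for one order, as it might be read
// back from an event store (names and payloads are illustrative).
type StoredEvent = {
  type: string;
  time: string;
  data: Record<string, unknown>;
};

const stream: StoredEvent[] = [
  { type: "OrderPlaced", time: "2024-04-02T09:14:00Z", data: { total: 79.9 } },
  { type: "PaymentAuthorized", time: "2024-04-02T09:14:03Z", data: {} },
  { type: "PaymentCaptureFailed", time: "2024-04-03T09:14:05Z", data: { code: "card_expired" } },
  { type: "OrderCancelledByPaymentProvider", time: "2024-04-03T09:15:00Z", data: { code: "card_expired" } },
];

// Instead of staring at status = "cancelled", replay the trail and
// find the event that explains it.
const cause = stream.find((event) => event.type.startsWith("OrderCancelled"));

console.log(cause?.type);
// → "OrderCancelledByPaymentProvider"
```

The snapshot would only have said "cancelled"; the stream says who cancelled, when, and why the capture failed first.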

Naming Events Beyond CRUD

You open the event store and see UserUpdated. What happened? Did the user change their email? Did they accept the terms of service? Did an admin reset their password? The event name tells you nothing. It is CRUD in disguise, a technical label that hides what actually occurred.

Now consider OrderDeleted. Why was it deleted? Did the customer cancel? Did the seller reject it? Was it flagged for fraud? Behind the same technical verb lie completely different business realities, each with its own consequences, its own rules, its own downstream effects. The name you give an event determines whether your system tells a story or keeps a secret.
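The contrast can be sketched in TypeScript. The event names and fields here are illustrative, not part of any EventSourcingDB schema:

```typescript
// Intent-revealing alternatives to an opaque "OrderDeleted":
// each business reality becomes its own event, and downstream
// consumers can react to each one differently.
type OrderCancelledByCustomer = {
  type: "OrderCancelledByCustomer";
  orderId: string;
  reason: string;
};

type OrderRejectedBySeller = {
  type: "OrderRejectedBySeller";
  orderId: string;
  outOfStock: boolean;
};

type OrderFlaggedForFraud = {
  type: "OrderFlaggedForFraud";
  orderId: string;
  riskScore: number;
};

type OrderEvent =
  | OrderCancelledByCustomer
  | OrderRejectedBySeller
  | OrderFlaggedForFraud;

// A consumer can now distinguish cases the old name collapsed.
const describe = (event: OrderEvent): string => {
  switch (event.type) {
    case "OrderCancelledByCustomer":
      return `Customer cancelled order ${event.orderId}: ${event.reason}`;
    case "OrderRejectedBySeller":
      return `Seller rejected order ${event.orderId}`;
    case "OrderFlaggedForFraud":
      return `Order ${event.orderId} flagged, risk ${event.riskScore}`;
  }
};
```

With a single `OrderDeleted`, all three branches would be one branch, and the "why" would have to live somewhere outside the event.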

Training AI Without the Data You Don't Have

Tesla's self-driving cars have driven hundreds of millions of miles on real roads. Impressive, right? But here is the problem: most of those miles are on sunny highways with clear lane markings and predictable traffic. The cars have seen thousands of variations of "blue sky, straight road, normal behavior." What they have not seen, or at least not nearly enough, is the moose that jumps in front of your car at 2 AM on a snow-covered country road in northern Sweden. That is the one-in-a-million scenario. And it is exactly the scenario where your AI needs to get it right.

This is not just a Tesla problem. It is a fundamental paradox of machine learning. For the normal cases, you have plenty of data. For the critical edge cases, you have almost none. Your fraud detection model has seen a million legitimate transactions, but how many sophisticated fraud attempts has it actually encountered? Your medical diagnosis system has processed countless routine cases, but how many rare diseases has it learned to recognize? The scenarios where a failure of your model costs the most are precisely the scenarios where you have the least training data.

Data Is the New Gold: Here's How to Mine It

Picture this: you work at a mid-sized e-commerce company. The marketing team needs customer purchase patterns. The logistics team needs order fulfillment timelines. The finance team needs revenue breakdowns by product category. All of this data exists somewhere in your organization. But when marketing asks the data engineering team, they get "file a Jira ticket." When logistics asks the backend team, they get "we can export a CSV next week." Everyone knows data is gold, but getting to it feels like mining with a spoon.

This scenario plays out in companies of every size, every day. The data is there. The value is obvious. But the organizational structure turns every data request into a negotiation. And the solutions we have built over the past two decades have not fixed the problem. They have made it worse.

Predicting Failures Before They Happen

A machine fails. You know it failed. But do you know why? Traditional systems store only the current state: Temperature 72°C, RPM 1,200, last service three months ago. You see the end state, but not the journey. You see where the machine is now, but not how it got there. We work with a customer who builds digital twins for industrial machines, and they faced exactly this challenge.
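The difference between end state and journey can be sketched in TypeScript. The readings and the trend check below are invented for illustration, not taken from the customer's system:

```typescript
// Current-state view: one row, no history (illustrative values).
const currentState = { temperature: 72, rpm: 1200 };

// Event-sourced view: the journey that led there.
type TemperatureMeasured = { temperature: number; time: string };

const readings: TemperatureMeasured[] = [
  { temperature: 58, time: "08:00" },
  { temperature: 63, time: "10:00" },
  { temperature: 68, time: "12:00" },
  { temperature: 72, time: "14:00" },
];

// With history, a trend becomes visible that a snapshot hides:
// a naive sketch that flags a steady rise and extrapolates one step.
const deltas = readings
  .slice(1)
  .map((reading, i) => reading.temperature - readings[i].temperature);
const steadilyRising = deltas.every((delta) => delta > 0);
const projectedNext = readings.at(-1)!.temperature + deltas.at(-1)!;

console.log({ steadilyRising, projectedNext });
// → { steadilyRising: true, projectedNext: 76 }
```

The snapshot says "72°C, nominal." The journey says "rising all morning" — which is the signal a predictive model actually needs.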

Versioning Events Without Breaking Everything

Imagine a city library that has been collecting catalog cards for over a hundred years. In 1920, librarians recorded "Author" and "Title." In 1970, they added the ISBN. In 1990, "Author" became "Authors" (plural, to accommodate co-authors). In 2020, they introduced e-book formats and licensing information.

Here's the thing: the old cards are still there. You can't "update" a card from 1920. And yet, the modern library system must understand all of them, from the handwritten notes of a century ago to yesterday's digital acquisition.

Event sourcing faces the same challenge. Events are immutable facts. Once written, they stay forever. But requirements change, domains evolve, and mistakes get discovered. How do you version something that can't be changed?
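One common answer is upcasting: leave the old events untouched and translate them into the current shape on read. A minimal sketch in TypeScript, using made-up event shapes that mirror the catalog-card analogy (single "author" in v1, "authors" plus optional ISBN in v2):

```typescript
// Two versions of the same fact. Old events stay on disk as v1;
// new code wants v2. Shapes are illustrative.
type BookCatalogedV1 = { version: 1; author: string; title: string };
type BookCatalogedV2 = { version: 2; authors: string[]; title: string; isbn?: string };

// An upcaster: translate old events on read, never rewrite them.
const upcast = (event: BookCatalogedV1 | BookCatalogedV2): BookCatalogedV2 =>
  event.version === 2
    ? event
    : { version: 2, authors: [event.author], title: event.title };

const oldCard: BookCatalogedV1 = { version: 1, author: "Kafka", title: "The Trial" };

console.log(upcast(oldCard));
// → { version: 2, authors: [ "Kafka" ], title: "The Trial" }
```

The stored card from 1920 is never modified; only the in-memory view is modern. The cost is that every reader must know the whole version history, which is exactly the trade-off the article explores.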