What's New

One Database to Rule Them All

"Should we use one EventSourcingDB for all our services, or should each service have its own?" This question comes up in almost every conversation about service-based architectures. Teams want to keep things simple. One database sounds easier to manage than five. The appeal is understandable.

But here's the thing: this isn't a question specific to EventSourcingDB. It's one of the most debated topics in software architecture, regardless of database technology. Whether you're using PostgreSQL, MongoDB, or an event store, the fundamental question remains the same: should multiple services share a database, or should each service have its own?

If you've read Tolkien, you know that "One Ring to Rule Them All" didn't end well. The same is true for shared databases. What looks like elegant simplicity becomes a source of coupling, conflict, and constraint. Let's explore why.

Versioning Events Without Breaking Everything

Imagine a city library that has been collecting catalog cards for over a hundred years. In 1920, librarians recorded "Author" and "Title." In 1970, they added the ISBN. In 1990, "Author" became "Authors" (plural, to accommodate co-authors). In 2020, they introduced e-book formats and licensing information.

Here's the thing: the old cards are still there. You can't "update" a card from 1920. And yet, the modern library system must understand all of them, from the handwritten notes of a century ago to yesterday's digital acquisition.

Event sourcing faces the same challenge. Events are immutable facts. Once written, they stay forever. But requirements change, domains evolve, and mistakes get discovered. How do you version something that can't be changed?
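One common answer is upcasting: every event carries an explicit schema version, and readers translate older versions into the current shape on the fly, leaving the stored events untouched. A minimal sketch, mirroring the library-card evolution above (the event shape, field names, and version history are hypothetical):

```python
# Minimal upcasting sketch: stored events are never modified; readers
# bring old versions up to the current schema (v3 here) on the fly.
# Event shape and field names are made up for illustration.

def upcast_book_catalogued(event: dict) -> dict:
    """Translate a BookCatalogued event to the current schema (v3)."""
    data = dict(event)
    version = data.get("version", 1)
    if version < 2:
        # v2 added the ISBN; cards from before 1970 simply have none.
        data["isbn"] = None
        version = 2
    if version < 3:
        # v3 renamed the singular "author" to a list of "authors".
        data["authors"] = [data.pop("author")]
        version = 3
    data["version"] = version
    return data

old_event = {"version": 1, "title": "The Hobbit", "author": "J. R. R. Tolkien"}
current = upcast_book_catalogued(old_event)
# The 1937 "card" is still stored as written; only the read model changed.
```

The stored history stays immutable; only the interpretation is versioned.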

... And Then the Wolf DELETED Grandma

Last week, I had the pleasure of speaking at the Software Architecture Gathering 2025 in Berlin. The conference is organized by the iSAQB (International Software Architecture Qualification Board) and brought together around 400 attendees from numerous countries. My talk, titled "... And Then the Wolf DELETED Grandma," explored why CRUD falls short when modeling real-world processes, and about 120 people joined me in the room to discuss fairy tales, databases, and the limits of our industry's favorite paradigm.

The response was overwhelming. Conversations continued long after the session ended, and many attendees shared similar frustrations with CRUD in their own projects. This post is the written version of that talk: for everyone who couldn't be there, and for those who were and wanted to revisit the ideas.

18 Months of Events Fit on Four Floppy Disks

"Event Sourcing uses too much storage." We hear this all the time. The argument goes like this: since you never delete anything and only append new events, your storage requirements grow indefinitely. Eventually, you'll run out of space. It sounds logical. It's also almost always wrong.

The append-only nature of Event Sourcing is real. But the conclusion that this leads to storage problems is based on three fundamental misconceptions that we see over and over again. Let's examine them, and then look at real production data that might surprise you.

Event Sourcing is Not For Everyone

A few days ago, Martin Dilger published an article on LinkedIn titled "When Event Sourcing Doesn't Make Sense (And How to Know the Difference)". It's a thoughtful piece that addresses an important question: when should you not use Event Sourcing? The article sparked several private conversations, and one in particular revealed a confusion I see far too often.

Someone described a ride-sharing application where a driver continuously transmits GPS coordinates while traveling to pick up a passenger. The question was: are these GPS updates events? And if so, should they be stored using Event Sourcing? The answer reveals a fundamental distinction that many teams overlook: not everything that looks like data is an event, and not every event belongs in an event-sourced system.
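One way to make the distinction concrete: a GPS reading is a continuous measurement where only the latest value matters, while a domain event is a discrete fact worth remembering forever. A sketch of that separation, with hypothetical names, coordinates, and thresholds:

```python
# Hypothetical sketch: telemetry is overwritten, domain events are
# appended. The 50-metre arrival threshold and names are made up.
import math

def distance_m(a: tuple, b: tuple) -> float:
    """Rough planar distance in metres between two (lat, lon) pairs."""
    dlat = (a[0] - b[0]) * 111_000  # ~111 km per degree of latitude
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

pickup = (52.5200, 13.4050)   # Berlin
current_position = None       # telemetry: latest value wins
events = []                   # domain events: append-only facts

for reading in [(52.5300, 13.4050), (52.5210, 13.4052), (52.5201, 13.4050)]:
    current_position = reading  # overwrite, don't accumulate
    if distance_m(current_position, pickup) < 50 and not events:
        # The fact worth recording is the arrival, not every coordinate.
        events.append({"type": "DriverArrivedAtPickup", "position": reading})
```

Thousands of coordinate updates collapse into a single meaningful event; the raw readings belong in a time-series store, not the event log.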

Event-Driven Data Science: EventSourcingDB Meets Python and Pandas

Data analysis is more important than ever. Data science and AI have become essential tools for many companies. The tools keep getting better: more powerful models, faster computers, smarter algorithms.

But here's the problem: the underlying data is often garbage. And as always: garbage in, garbage out. The best models, the fastest computers, the smartest algorithms – none of it matters if your data doesn't tell the real story.
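This is where an event log shines for analysis: because history is preserved, questions about behavior over time can be answered directly. A small sketch of loading events into pandas (the events are a hand-written list here, not output of any particular client API):

```python
# Minimal sketch of event-driven analysis with pandas. The events are
# hand-written for illustration; in practice they would be read from
# an event store.
import pandas as pd

events = [
    {"type": "CartCreated",   "cart": "c1", "time": "2025-01-01T10:00:00"},
    {"type": "ItemAdded",     "cart": "c1", "time": "2025-01-01T10:01:00"},
    {"type": "ItemAdded",     "cart": "c1", "time": "2025-01-01T10:03:00"},
    {"type": "CartAbandoned", "cart": "c1", "time": "2025-01-01T10:30:00"},
]

df = pd.DataFrame(events)
df["time"] = pd.to_datetime(df["time"])

# A CRUD row would only say the cart is gone; the event log also says
# what happened before, and when.
duration = df["time"].max() - df["time"].min()
counts = df["type"].value_counts()
```

A single "cart" row updated in place could never tell you that two items were added before the abandonment, or that the whole session took thirty minutes.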

Exactly Once is a Lie

Imagine you're placing an order in an online shop. You click the "Submit Order" button. Nothing happens. You wait a few seconds. Still nothing. So you click again. And maybe once more, just to be sure. Finally, a confirmation page appears. You've successfully placed your order – or have you? Did you place one order, or three?

This scenario plays out millions of times every day across the internet, and it reveals one of the most persistent myths in distributed systems: the promise of exactly-once delivery. Message queues advertise it. Streaming platforms claim it. Enterprise architectures depend on it. But here's the uncomfortable truth: exactly-once delivery is impossible in distributed systems. The good news? That's perfectly okay, and there are practical ways to handle it.
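The standard remedy is at-least-once delivery combined with idempotent processing: the client sends the same idempotency key on every retry, and the server deduplicates. A sketch under that assumption (names are hypothetical, and a real system would persist the key store durably):

```python
# At-least-once delivery plus idempotent handling: retries reuse the
# same idempotency key, so duplicates collapse into one order.
# Sketch only; real systems persist processed_keys durably.
import uuid

processed_keys: dict = {}  # idempotency key -> order id

def submit_order(idempotency_key: str, items: list) -> str:
    # A retried request with a known key returns the original result
    # instead of placing a second order.
    if idempotency_key in processed_keys:
        return processed_keys[idempotency_key]
    order_id = f"order-{uuid.uuid4().hex[:8]}"
    processed_keys[idempotency_key] = order_id
    return order_id

key = "checkout-session-42"           # generated once per checkout
first = submit_order(key, ["book"])
second = submit_order(key, ["book"])  # the impatient extra click
```

The delivery guarantee stays at-least-once; it is the *effect* that becomes exactly-once, which is all the business ever needed.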

Proving Without Revealing: Merkle Trees for Event-Sourced Systems

Imagine it's January 2026. You run a platform with millions of users. An auditor walks in with a specific request: "Show me proof that you captured a GDPR consent event for user #12847 on March 15th, 2024." You know the event exists – it's sitting in your event store. But here's the problem: you can't just hand over your complete event log. That log contains millions of events with sensitive customer data, financial transactions, business secrets, and personal information from thousands of other users.
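This is exactly the problem a Merkle tree solves: the auditor receives the one event plus a short path of sibling hashes, and can verify inclusion against a published root hash without ever seeing the rest of the log. A self-contained sketch (the event payloads are made up):

```python
# Merkle inclusion proof sketch: prove one leaf is in the tree while
# revealing only O(log n) sibling hashes, never the other events.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _pad(level: list) -> list:
    # Duplicate the last hash so every level has an even length.
    return level + [level[-1]] if len(level) % 2 else level

def merkle_root(leaves: list) -> bytes:
    level = [h(l) for l in leaves]
    while len(level) > 1:
        level = _pad(level)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves: list, index: int) -> list:
    """Sibling hashes bottom-up; the bool marks a right-hand sibling."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        level = _pad(level)
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list, root: bytes) -> bool:
    acc = h(leaf)
    for sibling, is_right in proof:
        acc = h(acc + sibling) if is_right else h(sibling + acc)
    return acc == root

events = [b"event-0", b"consent-user-12847", b"event-2", b"event-3"]
root = merkle_root(events)
proof = inclusion_proof(events, 1)  # prove event #1 is in the log
```

The auditor needs only the event in question, the proof, and the root: millions of other events stay private, yet the proof is cryptographically binding.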

If You Apply DDD to DDD, You Won't Get DDD

Domain-Driven Design (DDD) promises better software through a focus on the business domain and a shared understanding between developers and domain experts. That's the essence, distilled to one sentence. But if you actually apply this principle to DDD itself – asking what the domain is, what matters, what can be discarded – you won't end up with what we call "DDD" today. You'll end up with something much simpler.

So why has DDD, after more than two decades, never escaped its niche? Why do so many developers feel overwhelmed by it, confused by it, or think they're not smart enough for it? The answer is uncomfortable but clear: DDD fails at its own claim.