
Fundamentals

It Was Never About the Database

We build a database. We spend our days thinking about storage engines, query languages, and wire formats. We obsess over write throughput, replay performance, and consistency guarantees. This is what we do, and we care deeply about getting the technical details right.

But when we look back at the most impactful conversations we've had with teams adopting Event Sourcing, they were never about technology. They were about how people work together. About how a team that had been talking past each other for months suddenly found a shared vocabulary. About how a business process that had been invisible for years became something everyone could see, discuss, and improve. That is the story we want to tell today.

The Snapshot Paradox

When developers discover Event Sourcing, one of the first concerns to arise is replay performance. "What if a subject accumulates thousands of events? Won't rebuilding state become painfully slow?" The answer that usually follows is snapshots: store the current state periodically, and start replaying from there instead of from the beginning. Problem solved, right?

Not quite. In our experience, snapshots are one of the most overrated concepts in the Event Sourcing toolbox. They solve a problem that rarely exists, and when they seem necessary, they often point to a deeper issue that snapshots merely paper over. That's the paradox: the feature designed to optimize performance frequently masks a modeling mistake that, if fixed, would make the optimization unnecessary.
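To make the mechanism concrete, here is a minimal sketch of both variants. The types and names are invented for illustration, not any particular database's API: state is a fold over events, and a snapshot merely changes where the fold starts.

```typescript
// Minimal sketch of replay with an optional snapshot.
// All types and names are illustrative, not a specific database's API.

interface Event {
  type: string;
  data: Record<string, unknown>;
}

interface Snapshot<S> {
  state: S;
  version: number; // number of events already folded into the state
}

// Rebuild state by folding events over an initial state.
function replay<S>(
  initial: S,
  events: Event[],
  apply: (state: S, event: Event) => S,
): S {
  return events.reduce(apply, initial);
}

// With a snapshot, replay only the events recorded after it.
function replayFromSnapshot<S>(
  snapshot: Snapshot<S>,
  allEvents: Event[],
  apply: (state: S, event: Event) => S,
): S {
  return replay(snapshot.state, allEvents.slice(snapshot.version), apply);
}
```

If even that shortened slice is too long to fold quickly, that is usually the deeper issue the paradox points at: the subject is modeled too broadly, and splitting it helps more than snapshotting it.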

Consistency Is a Business Decision

You have probably heard of eventual consistency. The short version: in a distributed system, when data changes in one place, other parts of the system might not see that change immediately. For a brief moment, different components have different views of the truth. Eventually, they all catch up. Eventually, they all agree. But not instantly.

This concept makes many developers nervous. "Eventually consistent" sounds like "temporarily wrong." It sounds like a bug waiting to happen. In German, it gets even worse: "eventual consistency" is often translated as "eventuell konsistent," which means "possibly consistent," implying the data might never be correct. No wonder people reach for stronger guarantees.

But "eventual" in English means "ultimately" or "in the end," not "possibly." Eventual consistency means the system will become consistent, given enough time. The question is not whether consistency happens, but when. And here is the uncomfortable truth: your system is already eventually consistent. You just have not admitted it yet.

Training AI Without the Data You Don't Have

Tesla's self-driving cars have driven hundreds of millions of miles on real roads. Impressive, right? But here is the problem: most of those miles are on sunny highways with clear lane markings and predictable traffic. The cars have seen thousands of variations of "blue sky, straight road, normal behavior." What they have not seen, or at least not nearly enough, is the moose that jumps in front of your car at 2 AM on a snow-covered country road in northern Sweden. That is the one-in-a-million scenario. And it is exactly the scenario where your AI needs to get it right.

This is not just a Tesla problem. It is a fundamental paradox of machine learning. For the normal cases, you have plenty of data. For the critical edge cases, you have almost none. Your fraud detection model has seen a million legitimate transactions, but how many sophisticated fraud attempts has it actually encountered? Your medical diagnosis system has processed countless routine cases, but how many rare diseases has it learned to recognize? The scenarios where a model failure costs the most are precisely the scenarios for which you have the least training data.

Data Is the New Gold, Here's How to Mine It

Picture this: you work at a mid-sized e-commerce company. The marketing team needs customer purchase patterns. The logistics team needs order fulfillment timelines. The finance team needs revenue breakdowns by product category. All of this data exists somewhere in your organization. But when marketing asks the data engineering team, they get "file a Jira ticket." When logistics asks the backend team, they get "we can export a CSV next week." Everyone knows data is gold, but getting to it feels like mining with a spoon.

This scenario plays out in companies of every size, every day. The data is there. The value is obvious. But the organizational structure turns every data request into a negotiation. And the solutions we have built over the past two decades have not fixed the problem. They have made it worse.

DDD: Back to Basics

A few months ago, I wrote about what went wrong with Domain-Driven Design. In "If You Apply DDD to DDD, You Won't Get DDD", I argued that the patterns became the goal, the terminology became a barrier, and the human work got buried under technical abstractions. That criticism stands. But criticism alone is incomplete.

Because not everything in DDD is noise. Strip away the pattern theater, the academic language, the certification industry, and you find a core that actually matters. Commands. Events. State. Aggregates. Ubiquitous Language. Bounded Contexts. These concepts are worth understanding, worth keeping, worth building on. This post is about that core.
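To keep this teaser self-contained, here is a deliberately tiny sketch of how the first four concepts fit together, using an invented bank-account domain: a command expresses intent, the aggregate decides, events record facts, and state is derived by replaying them.

```typescript
// A tiny model of the core concepts, using an invented bank-account domain.
// Commands express intent, events record facts, state is derived by replay.

type Command = { type: "withdraw"; amount: number };
type AccountEvent = { type: "withdrawn"; amount: number };
type State = { balance: number };

// State is a fold over past events, never the source of truth itself.
function apply(state: State, event: AccountEvent): State {
  return { balance: state.balance - event.amount };
}

// The aggregate is the decision point: it checks a command against
// current state and either emits new events or rejects the intent.
function decide(state: State, command: Command): AccountEvent[] {
  if (command.amount > state.balance) {
    throw new Error("insufficient funds");
  }
  return [{ type: "withdrawn", amount: command.amount }];
}

const history: AccountEvent[] = [{ type: "withdrawn", amount: 20 }];
const state = history.reduce(apply, { balance: 100 }); // { balance: 80 }
const newEvents = decide(state, { type: "withdraw", amount: 30 });
console.log(state, newEvents);
```

Ubiquitous Language and Bounded Contexts do not show up as code at all, which is rather the point: they live in the names you choose and in where you draw the boundaries.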

Three Conversations Worth Having With Your CTO

You have built something good. The architecture is solid, the code is clean, the team knows what they are doing. But when you suggest a foundational change, like rethinking how data is stored, the conversation stalls. "What is the business case?" "What problem does this solve?" Fair questions. Hard to answer in a hallway conversation.

This post is for that conversation. Not the technical one (you already understand that), but the business one. Three problems that cost real money, that your CTO cares about even if they do not know the technical details, and that have solutions your team can implement.

Soft Delete Is a Workaround

Three weeks ago, Alex Buchanan published a thoughtful blog post about soft delete strategies. In "The challenges of soft delete", he describes the problem carefully and offers four creative solutions. His analysis is thorough, his examples concrete, and his engineering instincts sound.

And yet all four strategies have something in common: they carefully optimize a problem that, under a different architectural approach, does not exist in the first place.
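A short sketch shows the contrast; the names are invented and this is not Alex's code. With soft delete, deletion is a flag that every query must remember to filter. In an event-sourced model, deletion is just another recorded fact, and current state falls out of the replay.

```typescript
// The soft-delete side, shown for contrast: a tombstone column
// that every single query must remember to filter out.
interface UserRow {
  id: string;
  name: string;
  deletedAt: Date | null;
}

// The event-sourced side: deletion is just another recorded fact.
type UserEvent =
  | { type: "registered"; name: string }
  | { type: "deleted" };

// Current state is derived, so "does this user exist?" falls out of
// the replay, while the when and why of the deletion stay queryable.
function exists(events: UserEvent[]): boolean {
  let alive = false;
  for (const event of events) {
    if (event.type === "registered") alive = true;
    if (event.type === "deleted") alive = false;
  }
  return alive;
}

console.log(exists([{ type: "registered", name: "Ada" }]));     // true
console.log(exists([{ type: "registered", name: "Ada" },
                    { type: "deleted" }]));                     // false
```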

Predicting Failures Before They Happen

A machine fails. You know it failed. But do you know why? Traditional systems store only the current state: Temperature 72°C, RPM 1,200, last service three months ago. You see the end state, but not the journey. You see where the machine is now, but not how it got there. We work with a customer who builds digital twins for industrial machines, and they faced exactly this challenge.
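A minimal sketch, with invented data, shows what the history buys you: a question the current state alone cannot answer.

```typescript
// Invented example: the same machine, seen two ways.

// Traditional: only the end state survives.
const current = { temperature: 72, rpm: 1200, lastServiceDaysAgo: 90 };

// Event-sourced: every reading is a fact, so the journey is queryable.
type Reading = { at: string; temperature: number; rpm: number };

const history: Reading[] = [
  { at: "2024-05-01T08:00Z", temperature: 65, rpm: 1180 },
  { at: "2024-05-01T09:00Z", temperature: 68, rpm: 1195 },
  { at: "2024-05-01T10:00Z", temperature: 72, rpm: 1200 },
];

// A question the current state cannot answer: how fast is it heating up?
function warmingRatePerHour(readings: Reading[]): number {
  const first = readings[0];
  const last = readings[readings.length - 1];
  const hours =
    (Date.parse(last.at) - Date.parse(first.at)) / (1000 * 60 * 60);
  return (last.temperature - first.temperature) / hours;
}

console.log(warmingRatePerHour(history)); // 3.5 °C per hour
```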