Perspectives

Great Minds Should Not Think Alike, They Should Think Together

The Event Sourcing, Domain-Driven Design, and CQRS community is full of brilliant people. Thoughtful practitioners, passionate speakers, prolific authors. People who genuinely care about building better software through deeper domain understanding. And yet, for a community built around the idea of shared understanding, we have a remarkably hard time understanding each other.

The irony is hard to miss. We preach Ubiquitous Language as a cornerstone of Domain-Driven Design, insisting that teams must develop a precise, shared vocabulary for their domain. But when it comes to our own discipline, we can't even agree on what to call the things we work with every day. And that's just the beginning.

REST in Peace

REST has become the de facto standard for building web APIs. Almost every tutorial, framework, and job listing treats it as the obvious choice. But what most developers call "REST" has very little to do with what Roy Fielding described in his dissertation back in 2000. Today, REST typically means HTTP verbs, JSON payloads, and CRUD operations mapped to resources. And that is exactly where the problem begins.

Because once you strip away the buzzword, what remains is a thin wrapper around database operations, exposed over HTTP. We have spent years criticizing CRUD at the data level. It is time to look at what CRUD does to our APIs.
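To make that concrete, compare the two styles below. This is a minimal sketch assuming an Express-style app; the routes and payloads are hypothetical, not taken from any real API. The CRUD route accepts arbitrary field changes, so the business intent behind the update never reaches the server; the second route carries that intent in the API itself.

```typescript
// Minimal sketch, assuming an Express-style app; all routes and
// payloads here are hypothetical.
import express from "express";

const app = express();
app.use(express.json());

// CRUD style: "something about user 42 changed". Which business
// action caused it? The payload cannot say; the intent is lost.
app.put("/users/:id", (req, res) => {
  // req.body could be an address correction, a relocation,
  // or a typo fix. The server cannot tell the difference.
  res.sendStatus(204);
});

// Intent-revealing style: the route itself names the business action,
// so the server (and any event store behind it) knows what happened.
app.post("/users/:id/relocate", (req, res) => {
  const { newAddress } = req.body;
  console.log(`user ${req.params.id} relocated to ${newAddress}`);
  res.sendStatus(202);
});

app.listen(3000);
```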

Hidden in Plain Sight: The Events You Forgot to Model

There's a very specific moment that most teams working with Event Sourcing eventually run into. Someone asks a seemingly simple question about the past: why did this happen, how often has that occurred, what would have been the case if things had gone differently. You open the event store, expecting the answer to be right there, because that's the promise, after all. The full history, nothing lost, everything reconstructible. And then you realize the event you'd need was never written. The information existed once, for a brief moment, and slipped away before anyone thought to catch it.

It's tempting to treat this as a checklist problem. Just write down the events you tend to forget, keep the list handy, refer to it during the next Event Storming. But that approach misses something important. The problem isn't that teams are lazy or careless. It's that the way we're taught to think about events quietly steers us away from certain kinds of events in the first place. If you want to stop forgetting them, it helps to understand why they vanish from the model to begin with.
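What catching such an event looks like is simple once you see it. The sketch below uses an invented seat-reservation domain with made-up event names; the point is that the rejection is recorded as a first-class event instead of evaporating as an error response.

```typescript
// Sketch with an invented domain: record the rejection itself as an
// event, instead of letting the information vanish in a thrown error.
type SeatEvent =
  | { type: "SeatReserved"; seat: string; at: Date }
  | { type: "SeatReservationRejected"; seat: string; reason: string; at: Date };

function reserveSeat(seat: string, reservedSeats: Set<string>): SeatEvent {
  if (reservedSeats.has(seat)) {
    // Without this event, "how often do customers try to book an
    // already-taken seat?" becomes unanswerable later.
    return {
      type: "SeatReservationRejected",
      seat,
      reason: "already reserved",
      at: new Date(),
    };
  }
  return { type: "SeatReserved", seat, at: new Date() };
}

// Both outcomes end up in the history, not just the happy path.
const history: SeatEvent[] = [
  reserveSeat("12A", new Set<string>()),
  reserveSeat("12A", new Set(["12A"])),
];
console.log(history.map((event) => event.type));
// [ "SeatReserved", "SeatReservationRejected" ]
```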

Introducing DDD to Your Organization

You have read the books. You have watched the talks. You are convinced that Domain-Driven Design would help your team build better software. The models would be clearer, the communication with stakeholders sharper, the architecture more aligned with the business. There is just one problem: nobody else in your organization knows what DDD is, and nobody asked for it.

Introducing DDD is not a technical challenge. It is a cultural one. You cannot install it like a library or deploy it like a service. It requires changing how people think about software, how they talk about problems, and how they collaborate across disciplines. That takes time, patience, and a strategy that goes beyond "let me show you this cool pattern."

All Models Are Wrong, Some Are Useful

"All models are wrong, but some are useful." The statistician George Box wrote this in 1976, and it remains one of the most underappreciated truths in software engineering. We spend weeks, sometimes months, trying to build the perfect domain model before writing a single line of code. We draw diagrams, debate naming, argue about boundaries. The intention is good: get it right upfront so you don't have to fix it later.

But the pursuit of the perfect model is itself a trap. Models are always incomplete, always a simplification of a reality that is too complex to capture fully. The question was never whether your model is right. It was always whether your model is useful enough to start, and whether you know how to evolve it when reality teaches you what you missed.

It Was Never About the Database

We build a database. We spend our days thinking about storage engines, query languages, and wire formats. We obsess over write throughput, replay performance, and consistency guarantees. This is what we do, and we care deeply about getting the technical details right.

But when we look back at the most impactful conversations we've had with teams adopting Event Sourcing, they were never about technology. They were about how people work together. About how a team that had been talking past each other for months suddenly found a shared vocabulary. About how a business process that had been invisible for years became something everyone could see, discuss, and improve. That is the story we want to tell today.

Consistency Is a Business Decision

You have probably heard of eventual consistency. The short version: in a distributed system, when data changes in one place, other parts of the system might not see that change immediately. For a brief moment, different components have different views of the truth. Eventually, they all catch up. Eventually, they all agree. But not instantly.

This concept makes many developers nervous. "Eventually consistent" sounds like "temporarily wrong." It sounds like a bug waiting to happen. In German, it gets even worse: "eventual consistency" is often translated as "eventuell konsistent," which means "possibly consistent," implying the data might never be correct. No wonder people reach for stronger guarantees.

But "eventual" in English means "ultimately" or "in the end," not "possibly." Eventual consistency means the system will become consistent, given enough time. The question is not whether consistency happens, but when. And here is the uncomfortable truth: your system is already eventually consistent. You just have not admitted it yet.

Three Conversations Worth Having With Your CTO

You have built something good. The architecture is solid, the code is clean, the team knows what they are doing. But when you suggest a foundational change, like rethinking how data is stored, the conversation stalls. "What is the business case?" "What problem does this solve?" Fair questions. Hard to answer in a hallway conversation.

This post is for that conversation. Not the technical one (you already understand that), but the business one. Three problems that cost real money, that your CTO cares about even if they do not know the technical details, and that have solutions your team can implement.

Soft Delete Is a Workaround

Three weeks ago, Alex Buchanan published a thoughtful blog post about soft delete strategies. In "The challenges of soft delete", he describes the problem carefully and offers four creative solutions. His analysis is thorough, his examples concrete, and his engineering instincts sound.

His solutions are well considered, too. But all four strategies have something in common: they work around a problem that, under a different architectural approach, does not exist at all.
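With Event Sourcing, which this blog is built around, deletion is just another event. Here is a minimal sketch with made-up event names: the current state is derived from the history, so nothing ever needs a deleted_at flag or a WHERE clause to hide it.

```typescript
// Minimal Event Sourcing sketch (domain and event names are made up):
// "deletion" is an event, and current state is derived from history.
type ArticleEvent =
  | { type: "ArticlePublished"; id: string; title: string }
  | { type: "ArticleRetracted"; id: string; reason: string };

function currentArticles(history: ArticleEvent[]): Map<string, string> {
  const articles = new Map<string, string>();
  for (const event of history) {
    if (event.type === "ArticlePublished") articles.set(event.id, event.title);
    if (event.type === "ArticleRetracted") articles.delete(event.id);
  }
  return articles;
}

const history: ArticleEvent[] = [
  { type: "ArticlePublished", id: "a1", title: "Hello, world" },
  { type: "ArticleRetracted", id: "a1", reason: "factual error" },
];

// The article is gone from the current state, yet nothing was lost:
// the retraction, and its reason, remain queryable forever.
console.log(currentArticles(history)); // Map(0) {}
```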

What Aviation Teaches Us About Auditing

A plane touches down at a busy airport. In 45 minutes, it will take off again. Between landing and departure lies a precisely orchestrated ballet: refueling, catering, cleaning, technical checks, crew handover, baggage handling, and passenger boarding. All of this happens in parallel, all under time pressure. And all of it is documented. Not as an afterthought, but as the core of the operation. We work with an airport that faces exactly this challenge: ensuring that every action during a turnaround is recorded, traceable, and tamper-proof.