All Models Are Wrong, Some Are Useful
"All models are wrong, but some are useful." The statistician George Box wrote this in 1976, and it remains one of the most underappreciated truths in software engineering. We spend weeks, sometimes months, trying to build the perfect domain model before writing a single line of code. We draw diagrams, debate naming, argue about boundaries. The intention is good: get it right upfront so you don't have to fix it later.
But the pursuit of the perfect model is itself a trap. Models are always incomplete, always a simplification of a reality that is too complex to capture fully. The question was never whether your model is right. It was always whether your model is useful enough to start, and whether you know how to evolve it when reality teaches you what you missed.
The Perfection Trap
We see this pattern in nearly every team that takes modeling seriously. The first workshop goes well. Events are identified, boundaries are drawn, everyone is excited. Then the doubts creep in. "What about this edge case?" "Should this be one aggregate or two?" "Are we sure this is the right event name?" The team returns to the whiteboard. And again. And again.
The longer you model in isolation, the more wrong you will be. Not because you are getting worse at modeling, but because you are missing the one thing that would actually improve your model: feedback from reality. You cannot discover every edge case by thinking about it. Some only appear when real data flows through a real system. Some only surface when a real user does something nobody anticipated.
In It Was Never About the Database, we talked about the immense value of modeling conversations. That value is real. But conversations need to lead somewhere. At some point, the most useful thing you can do is stop talking and start building, because the system itself will tell you things that no workshop ever could.
There is also a deeper reason why the pursuit of perfection is futile: even if you could create the perfect model today, it would not stay perfect. Requirements change. Business rules evolve. Markets shift. Regulations get introduced. The organization restructures. The context in which your model operates is not static. It moves, and your model must move with it. A model that was exactly right in January may be subtly wrong by June, not because anyone made a mistake, but because the world changed. Accepting this from the start is not defeatism. It is the foundation for building systems that last.
Good Enough to Start
If perfection is unattainable and the world keeps changing, what should you aim for instead? A model that is good enough to support the decisions you need to make right now.
In Event Sourcing, this means you need enough events to handle your first use cases. Not every event for every possible scenario. Not a complete taxonomy of everything that could ever happen in your domain. Just the events that let you process the commands you are building today.
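As a sketch of what "just enough events" means in practice, suppose the first use case is placing and paying an order. All names here (OrderPlaced, OrderPaid, the field shapes) are illustrative, not prescribed:

```typescript
// A deliberately minimal event set: only what the first use case needs.
type OrderPlaced = {
  type: "OrderPlaced";
  data: { orderId: string; items: { sku: string; quantity: number }[] };
};

type OrderPaid = {
  type: "OrderPaid";
  data: { orderId: string; amount: number };
};

// The union grows as new use cases arrive; there is no need to
// enumerate every conceivable event up front.
type OrderEvent = OrderPlaced | OrderPaid;

const firstEvents: OrderEvent[] = [
  { type: "OrderPlaced", data: { orderId: "order-1", items: [{ sku: "book", quantity: 2 }] } },
  { type: "OrderPaid", data: { orderId: "order-1", amount: 42 } }
];
```

Two event types, one use case covered. Everything else can wait until a command actually needs it.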
The aggregate boundaries you draw are not permanent. They are a starting hypothesis about where consistency matters. If you get them wrong, you will notice: either the aggregate grows unwieldy, or you find yourself coordinating across boundaries in ways that feel forced. Both are signals, not failures. As we discussed in Versioning Events Without Breaking Everything, events can evolve. Schemas can change. New event types can be introduced alongside existing ones. The system is designed for this.
The practical test is simple: can you make decisions with this model? Can you determine whether an order should be accepted? Can you tell whether a payment is overdue? Can you enforce the business rules that matter for the feature you are building right now? If the answer is yes, you have enough. Start building. The model will tell you what it needs next.
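That test can be made concrete in code. A minimal sketch, with hypothetical event and state names: fold the history into just enough state to answer one question, and check whether the answer falls out:

```typescript
type OrderEvent =
  | { type: "OrderPlaced"; data: { orderId: string } }
  | { type: "OrderCancelled"; data: { orderId: string } };

interface OrderState {
  placed: boolean;
  cancelled: boolean;
}

// Fold the event history into just enough state to decide.
const replay = (events: OrderEvent[]): OrderState =>
  events.reduce<OrderState>(
    (state, event) =>
      event.type === "OrderPlaced"
        ? { ...state, placed: true }
        : { ...state, cancelled: true },
    { placed: false, cancelled: false }
  );

// The "good enough" test in code: can this decision be made?
const canCancel = (events: OrderEvent[]): boolean => {
  const state = replay(events);
  return state.placed && !state.cancelled;
};
```

If `canCancel` can be written, the model supports the decision at hand, and that is the bar that matters.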
The Model Learns With You
Something interesting happens when you move from modeling to implementation: you start learning faster. The abstract discussions from the workshop turn into concrete questions. "What happens when a customer changes their shipping address after the order is confirmed but before it ships?" In the workshop, this might have been a theoretical edge case. In code, it is a decision you have to make right now.
Real understanding comes from building, not from theorizing. Every event you implement, every command handler you write, every read model you build teaches you something about your domain that pure modeling could not. You discover that "shipping" is actually three distinct processes depending on the product type. You realize that "cancellation" means something different to the warehouse team than to the finance team.
These discoveries are not signs that your initial model was bad. They are signs that you are learning. Each one is an opportunity to refine your model with knowledge you could not have had before you started building. The team that ships a first version after two weeks of modeling and two weeks of building will have a deeper understanding of their domain than the team that spends two months modeling without writing code.
How to Refine Without Rewriting
One of the fears that drives the perfection trap is the belief that changing the model later will be expensive. In traditional systems, this fear is justified. Changing a database schema, migrating data, and updating every query that touches the affected tables is painful work.
Event Sourcing changes this equation fundamentally. Your event history is a record of what actually happened. When you refine your model, you do not have to guess how the system has been used. You can look at real events and ask: does this model still make sense given what we now know?
Need a new read model that reflects your updated understanding? Build it from the existing events. You do not start from zero. As we explored in CQRS Without the Complexity, the same events can feed many different projections. Adding a new one does not require changing the ones that already work. Your old understanding and your new understanding coexist, each derived from the same source of truth.
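A small sketch of that coexistence, with illustrative names: one projection written early, a second one added later, both fed from the same event history:

```typescript
type OrderEvent =
  | { type: "OrderPlaced"; data: { orderId: string; amount: number } }
  | { type: "OrderCancelled"; data: { orderId: string } };

const events: OrderEvent[] = [
  { type: "OrderPlaced", data: { orderId: "order-1", amount: 30 } },
  { type: "OrderPlaced", data: { orderId: "order-2", amount: 20 } },
  { type: "OrderCancelled", data: { orderId: "order-2" } }
];

// The original projection: which orders are currently open?
const openOrders = (history: OrderEvent[]): string[] => {
  const open = new Set<string>();
  for (const event of history) {
    if (event.type === "OrderPlaced") { open.add(event.data.orderId); }
    if (event.type === "OrderCancelled") { open.delete(event.data.orderId); }
  }
  return [...open];
};

// A projection added later, fed by the very same events.
// The existing one is untouched.
const placedVolume = (history: OrderEvent[]): number =>
  history.reduce(
    (sum, event) => event.type === "OrderPlaced" ? sum + event.data.amount : sum,
    0
  );
```

Adding `placedVolume` required no migration and no change to `openOrders`; the refined understanding is just another interpretation of the same history.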
Need to introduce a new event type because you discovered a distinction your model did not capture? Add it. The existing events remain valid. They represent what happened under the old understanding. The new events represent what happens under the refined understanding. Event-sourced systems are built for evolution, not for perfection.
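One way this can look, under assumed names: a generic cancellation event written under the old understanding, and two more specific events introduced once the distinction was discovered. Projections read both generations side by side:

```typescript
// The original event, written under the old understanding.
type OrderCancelled = {
  type: "OrderCancelled";
  data: { orderId: string };
};

// New events capturing a distinction the old model did not have.
// (Names are illustrative.)
type OrderCancelledByCustomer = {
  type: "OrderCancelledByCustomer";
  data: { orderId: string; reason: string };
};
type OrderRejectedByWarehouse = {
  type: "OrderRejectedByWarehouse";
  data: { orderId: string };
};

// The union simply grows; no stored event has to change.
type OrderEvent = OrderCancelled | OrderCancelledByCustomer | OrderRejectedByWarehouse;

// Old and new events are interpreted side by side, so history
// written before the refinement stays valid.
const isClosed = (history: OrderEvent[]): boolean =>
  history.some(event =>
    event.type === "OrderCancelled" ||
    event.type === "OrderCancelledByCustomer" ||
    event.type === "OrderRejectedByWarehouse"
  );
```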
When the Model Tells You It's Wrong
Models do not fail silently. They send signals. Learning to recognize these signals is one of the most valuable skills a team can develop.
Events that are too generic are a common warning sign. If your system is full of OrderUpdated events instead of ShippingAddressChanged, DeliveryDatePostponed, or DiscountApplied, your model is hiding what actually happens. As we discussed in Don't Kill Your Users, generic event names strip your domain of meaning. If you find yourself writing Updated events, pause and ask: what specifically changed, and why?
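The contrast is easy to see in type definitions. A sketch, with hypothetical payloads:

```typescript
// Too generic: the history no longer says what actually happened.
type OrderUpdated = {
  type: "OrderUpdated";
  data: { orderId: string; changes: Record<string, unknown> };
};

// Specific events keep the domain visible in the history itself.
type ShippingAddressChanged = {
  type: "ShippingAddressChanged";
  data: { orderId: string; newAddress: string };
};
type DeliveryDatePostponed = {
  type: "DeliveryDatePostponed";
  data: { orderId: string; newDate: string; reason: string };
};
type DiscountApplied = {
  type: "DiscountApplied";
  data: { orderId: string; percentage: number };
};

type OrderEvent = ShippingAddressChanged | DeliveryDatePostponed | DiscountApplied;

// With specific events, the raw history already reads like the
// business talks about the process.
const timeline = (history: OrderEvent[]): string[] =>
  history.map(event => event.type);
```

A `timeline` over generic `OrderUpdated` events would tell you nothing; over specific events, it is a readable account of what the business did.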
Aggregates that keep growing are another signal. When an aggregate accumulates dozens of event types and its state object has twenty fields, it is probably doing too much. We explored this pattern in Your Aggregate Is Not a Table: the aggregate should be a focused consistency boundary for decisions, not a container for everything related to a concept. If it is growing, it likely needs to be split.
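A sketch of what a split can look like, with all field and type names invented for illustration: the bloated state carries fields for several unrelated decision contexts, and each focused boundary keeps only what its own decisions need:

```typescript
// A bloated aggregate state: one boundary doing the work of several.
// (All field names here are illustrative.)
interface BloatedOrderState {
  orderId: string;
  items: string[];
  paymentStatus: string;
  shippingAddress: string;
  carrier: string;
  trackingNumber: string;
  invoiceNumber: string;
  // ...and many more fields, each touched by unrelated decisions.
}

// Split into focused consistency boundaries: each state carries
// only what its own decisions need.
interface OrderState {
  orderId: string;
  items: string[];
  accepted: boolean;
}

interface ShipmentState {
  orderId: string;
  carrier: string;
  shipped: boolean;
}

// Each boundary now answers its own questions independently.
const canShip = (shipment: ShipmentState): boolean => !shipment.shipped;
```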
Conversations where people talk past each other are perhaps the most telling signal of all. When domain experts use one word and developers use another for the same concept, the model has drifted from the domain. When a product owner describes a process and the developer says "that's not how the system works," the model and the reality have diverged. These are not communication problems. They are modeling problems, and they are best fixed by returning to the events and asking: do our event names still match what the business calls these things?
None of these signals mean you failed. They mean the model is doing its job. A good model makes problems visible early, before they become expensive to fix. The cost of renaming an event is trivial compared to the cost of building features on a misunderstood process.
Start Wrong, Get Better
The best models are not designed once and frozen. They emerge through cycles of modeling, building, and refining. Each cycle deepens your understanding. Each cycle brings the model closer to the domain it represents. And each cycle is only possible because you had the courage to start with something imperfect.
Event Sourcing supports this way of working better than any other approach we know. Events capture what happened, and you can always reinterpret them with better understanding. Read models can be rebuilt. Event schemas can evolve. Aggregate boundaries can shift. The system remembers everything, even when your understanding changes.
George Box was right. All models are wrong. But the ones you build, test against reality, and refine based on what you learn are the most useful models you will ever have. Do not wait for perfection. Start with what you know, build something real, and let the model grow with your understanding.
If you want to explore modeling and Event Sourcing further, cqrs.com is a good starting point. And if you are in the middle of a modeling effort and would like a sparring partner to discuss boundaries, events, and the question of "is this good enough to start," we would love to hear from you at hello@thenativeweb.io.