
Continuous Modeling, or What Happens to the Model on Tuesday?

Friday afternoon. The sticky notes on the wall are the densest they've been all week. Someone has redrawn a Bounded Context for the third time, and this time everyone in the room nods. Two product managers, three engineers, a domain expert, and a coach who's been keeping the conversation honest agree: this is what the system actually is. Phones come out, photos get taken. People shake hands and say things like finally, and now we know. The room empties.

On Tuesday, someone opens a pull request that touches the boundary you spent half of Friday redrawing. The PR description doesn't mention the model. The reviewer doesn't mention the model. The model is in three Miro boards, one of which is now read-only because somebody's account got deactivated, and a Confluence page that hasn't been opened in four weeks. The reviewer approves the change. The model isn't wrong yet. It's just not anywhere the work can see it. Three months later, when a new question forces you back to the wall, no one quite remembers why the boundary was where it was. Did the workshop fail? It didn't. Something else did, and almost every team we talk to keeps making the same quiet mistake.

Why Models Evaporate

Models don't disappear because people are lazy or undisciplined. They disappear because the artifacts they live in are structurally invisible to the work that follows them. A photo of a wall is not a model. A Miro board behind two SSO logins is not a model. A Confluence page that doesn't show up next to your code is not a model. They're souvenirs. They're proof the workshop happened. They're not where the work happens, and so the work doesn't go there.

There are three structural reasons for this, and recognizing them is the difference between blaming the team and fixing the system. The first is that the model has no home in the place where the code lives. It sits on a different platform, behind a different login, with a different review culture, often owned by a different role. Crossing that gap is friction, and friction always loses to deadlines.

The second reason is that there's no trigger that forces the model to be touched when the code is touched. You can refactor a service that sits at the heart of a Bounded Context without ever opening the artifact that defines that context. The system permits it, the tooling permits it, the review process permits it. So you do.

The third reason is that there's no review gate for the model. Pull requests gate code changes. Tests gate behavior. Linters gate style. Nothing gates the model. Drift accumulates in the only place where it isn't checked, and by the time anyone notices, the model and the code disagree, and nobody quite remembers which one was the source of truth.

The Workshop Was Never the Point

Workshops are catalysts, not deliverables. The output of a great workshop is clarity in the moment, the kind that lives in the heads of the people who were there and in the photos they took on the way out. Clarity that isn't tended decays into memory, and memory decays into folklore. None of that is a failure of the workshop. It's the predictable outcome of treating a workshop as the model rather than as the moment a model became visible enough to draw.

This is also why we keep coming back to the idea that strategic work in Domain-Driven Design isn't a phase you complete but a discipline you practice. Two engineers having a thirty-second conversation about whether a Command really belongs in the Order Capture context is strategic work. So is a code review that asks why a new field appeared in a Read Model that already had a stable shape. So is a Refinement where someone says, out loud, this Story crosses a boundary I'm not sure we've drawn.

Modeling is a daily habit, or it's a quarterly ritual that doesn't survive contact with reality. The question isn't whether your team values modeling. Almost every team we work with values modeling. The question is whether modeling shows up on Tuesday. That's a structural question, not a cultural one, and it has structural answers.

What Continuous Modeling Actually Looks Like

Continuous Modeling means three things working together. None of them is dramatic. None of them requires a new ceremony. All of them require that the model be reachable from the work the team is already doing.

The first is the Modeling Moment in Refinement. Five minutes at the start of each Refinement, before any estimation conversation, where the team asks one question: does this Story touch a Bounded Context whose model isn't crisp right now? If the answer is no, the meeting moves on. If the answer is yes, the model gets opened, and the conversation about the Story happens in the context of the model, not after it. Five minutes per Refinement, not five days per quarter. The cumulative effect is that the model is touched dozens of times a year by the people who actually need it, instead of once at a workshop and then never again.

The second is the Model Diff in the Pull Request. When a code change moves a concept, the model moves with it. If a field's meaning shifts, that shift is reflected in the model, in the same PR. If a Bounded Context absorbs a responsibility from another, that absorption is visible in a diff that the reviewer can read alongside the code. This sounds heavy until you do it once. After the first time, it's lighter than not doing it, because the conversations that used to happen weeks later, in confused tones, happen now, in the PR, while everyone still has the context loaded.
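One way to keep that habit from depending on memory is a small gate in CI: if code inside a Bounded Context changes and the model file for that context doesn't, the build says so. Here's a minimal sketch in TypeScript, assuming a hypothetical layout where each context's code lives under src/ and its model under model/. The paths, the mapping, and the file names are ours for illustration, not any particular tool's convention.

```typescript
// checkModelDiff.ts — a deliberately dumb model gate for CI, sketched
// under the assumptions above. It fails when code in a Bounded Context
// changes while the model file describing that context does not.
import { execSync } from "node:child_process";

// Hypothetical mapping from each context's code directory to its model file.
const contexts: Record<string, string> = {
  "src/order-capture/": "model/order-capture.md",
  "src/inventory/": "model/inventory.md"
};

// All files this branch changes, relative to the target branch.
const changedFiles = execSync("git diff --name-only origin/main...HEAD")
  .toString()
  .split("\n")
  .filter((line) => line.length > 0);

const violations = Object.entries(contexts)
  .filter(([ codeDirectory, modelFile ]) =>
    changedFiles.some((file) => file.startsWith(codeDirectory)) &&
    !changedFiles.includes(modelFile))
  .map(([ codeDirectory, modelFile ]) =>
    `Code under ${codeDirectory} changed, but ${modelFile} did not.`);

if (violations.length > 0) {
  for (const violation of violations) {
    console.error(violation);
  }
  process.exit(1);
}
```

A check like this can't tell whether a model change is right. It can only make silently skipping the model impossible, and that's the failure mode worth closing.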

The third is Model Ownership per Bounded Context. Not a committee, not a "domain governance board". One person per context whose name is on the model the way a maintainer's name is on a critical library. They don't have to be right. They have to be attentive. They notice when a PR touches their context, they ask the question that nobody else thought to ask, and they keep the language honest when nobody else has the energy. Ownership without authority is enough, as long as the ownership is visible.
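Visible is the operative word, and it doesn't take new tooling. On GitHub, for instance, a CODEOWNERS file is enough to put a name on each model and have that person automatically requested as a reviewer whenever a PR touches it. The names and paths below are illustrative:

```
# .github/CODEOWNERS
# One visible owner per Bounded Context model. A PR that touches one of
# these files automatically requests a review from the named owner.
/model/order-capture.md  @alice
/model/inventory.md      @bob
```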

The Quiet Test of All Three

There's a question you can ask of any of these practices to tell whether it will actually work in your team. Where does the model live? If the answer is Miro, Confluence, the slide deck Sarah put together for the board, or somewhere in Slack, none of the three practices will hold. Not because the team won't try, but because the practices ask people to reach across a boundary the rest of the workflow never reaches across. Reaching across a boundary on a Tuesday afternoon, to update a Miro board so that a PR can land, is the kind of friction that quietly defeats every well-intentioned attempt at discipline.

The practices work when the model is a file. A file that lives next to the code. A file that shows up in a PR diff when it changes. A file that a reviewer can scan in the same window as the code under review. A file that a new joiner can git clone along with the rest of the project. Once the model is a file, the question of where it lives stops being a question. It lives where everything else lives. And once it lives there, the three practices stop feeling like extra work and start feeling like the work itself.
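Concretely, that can be as simple as a directory next to the code, in the same repository. One possible shape, assuming plain Markdown files and the same hypothetical contexts as above:

```
.
├── model/
│   ├── order-capture.md     # concepts, language, boundaries, decisions
│   └── inventory.md
└── src/
    ├── order-capture/
    └── inventory/
```

The format matters far less than the location. Plain Markdown in the repository already gives you the diff, the review gate, and the git clone.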

The Substrate Matters More Than the Discipline

We've talked about the importance of moderation in breaking out of technical thinking, and about the choice of language in Ubiquitous, But in Which Language? Both of those posts assume the model exists somewhere a team can return to. The discipline they describe is real, but it's downstream of a more basic question: does your model have a home, or doesn't it?

If it doesn't, no amount of moderation will save it. The best workshop in the world is a temporary lift. If the artifact it produces doesn't survive the week, the lift dissipates and the team is back where it started, with the same vocabulary mismatches and the same hidden assumptions, slightly older. Discipline can't fix substrate. Substrate has to come first.

This is also why we built ESDM the way we did. Its model-as-files premise isn't a feature. It's the precondition for everything else we want teams to be able to do, including the practices in this post. We won't dwell on ESDM here, because this post isn't about a tool. It's about an obvious-once-you-see-it observation: a model that doesn't live where the code lives won't survive contact with the code. That's all. Whatever you use to fix that, fix it.

What Changes When the Model Stays Alive

Teams that practice Continuous Modeling describe a handful of effects, and most of them are second-order. The first is that strategic drift becomes visible early. When a Bounded Context starts absorbing responsibilities that don't belong to it, somebody notices in the third or fourth PR, not the thirtieth. The model and the code disagree, and the disagreement is visible in a place where disagreements get resolved.

The second is that onboarding gets faster, and not by a small amount. A new joiner who can read the model alongside the code, in the repository they just cloned, has access to context that used to live in long pairing sessions and folklore. We've seen teams cut weeks off the time it takes a senior engineer to feel confident in a domain they'd never seen before. Not because the model explains everything. Because the model frames everything.

The third is that refactorings get smaller. Teams that update the model in lockstep with the code make fewer big-bang refactorings, because the small ones don't accumulate into a big one. A field rename, a context split, a Read Model that finally outgrows its parent: each becomes a normal-sized PR, instead of an architectural conversation that nobody had time for last quarter and so never happened.

The fourth is harder to measure but easier to feel. The conversations get sharper. When the model is in front of you, the team stops arguing about what something means and starts arguing about what something should be. The first kind of argument is unwinnable. The second kind is the work.

The Real Test

The test of a model isn't whether it looked beautiful at the end of the workshop. It isn't whether the photo of the wall got reposted on LinkedIn. The test of a model is whether, six months later, somebody can ask why is the boundary here? and the model still answers. If it does, modeling worked. If it doesn't, you ran a workshop, and the workshop ran out.

Most teams already know this in the abstract. What they often don't know is that the gap between knowing it and doing it isn't a gap of discipline. It's a gap of substrate. Once the substrate is right, the discipline shows up almost on its own, because the path of least resistance leads through the model instead of around it. When the substrate is wrong, no amount of willpower will route the work through an artifact the work can't see.

If your next Refinement could open the model in the same window as the Stories you're discussing, you're already most of the way there. If it can't, that's the thing to fix first. Everything else in this post is downstream of that single change.

If you'd like to see what model-as-files actually looks like in practice, head over to esdm.io and walk through a small example. It's the most concrete way to feel the difference between a model that lives in the same world as your code and a model that lives somewhere your code can't reach.

And if you'd like to talk to us about how Continuous Modeling could fit into your team's day-to-day, or about anything else in the world of Event Sourcing, CQRS, and Domain-Driven Design, write to us at hello@thenativeweb.io. These are the conversations we like having most.