Blueprinting the System: An Interview on Event Modeling¶
Golo: Adam, you created Event Modeling and have been teaching and refining it for years now. You are also the founder of AdapTech Group and have worked with countless teams on adopting this approach. Before we dive into the method itself, can you tell us a little about what problem you were originally trying to solve? Was there a specific frustration with existing approaches that led you to develop something new?
Adam: The frustration really boiled down to one thing: the design gap.
In almost every other engineering discipline, whether you're building a bridge or a house, you have a blueprint. You have a visual representation that everyone can look at to understand exactly what is being built. But in software, we've spent decades trying to build complex systems using nothing but "piles of tickets." And worse, different tickets for different roles.
I was seeing the same patterns of failure everywhere. Agile gave us a backlog of disconnected tasks, but no cohesive vision of how information actually flows through the system over time, trading the "onerous" design of RUP and UML for a complete lack of design. Meanwhile, our industry kept falling in love with abstractions that are inherently subjective and become a compromise the moment they need to be shared. Specifications for traditional systems ignored the benefits of minimal responsible coupling via an immutable ledger of events, and traditional databases were designed around "current state," showing you what is there but never how it got there. That lack of causation makes it much harder to audit, scale, or explain the system. On top of all that, there was a widespread inability to estimate software development efforts, and I knew I could give better budgets and guarantees to our clients.
Ultimately, I realized that if we could just storyboard the system, like a movie script or a screencast of the future, we could bridge that gap between business and tech by having a more universal and human-friendly notation.
Golo: That sounds like a fascinating challenge. Meanwhile, Event Modeling has gained significant traction in the Event Sourcing and DDD community. When you first introduced it, did you expect it to resonate this broadly, or was it initially something you built for your own teams?
Adam: It was a bit of a "secret weapon" for us at AdapTech before it was a global community thing. I didn't initially set out to create a new "industry standard." I just wanted projects that didn't bleed money and developers who didn't burn out from scope creep and "unforeseen" complexity.
It resonated broadly when I published the initial article in 2018 to differentiate it from other approaches, since I was using the notation to explain event sourcing and other concepts in different tech communities. Hacker News readers upvoted it to the top story of the day.
It turns out that everyone, regardless of their tech stack, is tired of the "telephone game" requirements. People want to see the whole picture. They want to see the "movie script" of their system before they spend millions of dollars filming it.
Event Modeling in a Nutshell¶
Golo: That's quite a broad audience. Let's say someone reads this who has heard the name but never tried Event Modeling. If you had to explain the core idea in two minutes, what would you tell them?
Adam: If I've only got two minutes, I'm going to skip the "tech-speak" and get straight to the point: Event Modeling is the movie storyboard for your system.
Most software projects fail because everyone is looking at a different piece of the puzzle. Developers are looking at code, managers are looking at JIRA tickets, and the CEO is looking at a slide deck. Event Modeling puts everyone in the same room looking at one blueprint that shows how information moves through the system over time using concrete examples.
Think of your system as a timeline. From left to right, we map out exactly what happens, step-by-step. We use four simple building blocks, and that's it:
- The Screen (UI): What does the user actually see? No high-fidelity designs yet, just enough to show the information.
- The Command (Blue): The user's intent. "I want to book this room." It's the trigger for change.
- The Event (Orange): The immutable fact. "Room Booked." This is the history of your system. Once it happens, it never changes.
- The State View (Green): The information the user needs to see to make their next decision or get confirmation that the system did what they wanted it to do.
Instead of arguing about "User Stories" (which are often just vague wishes), we storyboard the entire journey. If you can't draw the flow from a screen to a command, to an event, and then back to a view on another screen, you don't have a requirement. You have a guess. By the end of a session, you don't have a pile of "tickets"; you have a blueprint. It's so clear that a developer knows exactly what to build, and the business owner knows exactly what they're paying for. We're moving software from "creative writing" back to engineering.
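The screen-to-command-to-event-to-view loop just described can be sketched in a few lines of code. This is a hypothetical Python illustration only; Event Modeling itself is notation, not code, and the names `BookRoom`, `RoomBooked`, `decide`, and `bookings_view` are invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BookRoom:            # Command (blue): the user's intent
    room_id: str
    guest: str

@dataclass(frozen=True)
class RoomBooked:          # Event (orange): the immutable fact
    room_id: str
    guest: str

def decide(command: BookRoom, history: list) -> list:
    """Command handler: checks the history, emits new facts, never mutates."""
    already_booked = any(
        isinstance(e, RoomBooked) and e.room_id == command.room_id
        for e in history
    )
    if already_booked:
        return []                      # reject: room already taken
    return [RoomBooked(command.room_id, command.guest)]

def bookings_view(history: list) -> dict:
    """State view (green): projected from events for one specific screen."""
    return {e.room_id: e.guest for e in history if isinstance(e, RoomBooked)}

history: list = []
history += decide(BookRoom("101", "Ada"), history)
history += decide(BookRoom("101", "Grace"), history)   # rejected, no event
print(bookings_view(history))   # {'101': 'Ada'}
```

The point of the sketch is the shape of the flow: intent comes in as a command, the only thing ever stored is the resulting fact, and the screen reads a view derived from those facts.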
Golo: I like that image of a blueprint versus a pile of tickets, because it makes the whole idea much more tangible. To me, that visual timeline connecting commands, events, and views has always been the standout feature of Event Modeling. What makes this format so effective compared to, say, a list of user stories or a traditional requirements document?
Adam: Context. When you use a list of user stories, you're looking at a "pile of tickets." It's like being handed 500 random frames from a movie and being asked to tell the story. You might understand what happens in Frame 42, but you have no idea how we got there or where we're going next.
The blueprint changes the game because it respects the one thing software cannot escape: Time.
It kills the "telephone game" that plagues traditional requirements. A business analyst writes a "shall" statement, a developer interprets it into a class diagram, and a tester tries to guess what the original intent was. With the blueprint, we have a shared language. We're all looking at the same timeline. If a stakeholder sees a "Room Booked" event but realizes there's no "Payment Processed" event before it, they can point at the gap. You can't do that with a 40-page Word doc or a Jira backlog.
It also exposes the "magic." User stories are famous for hand-waving. A story might say: "As a user, I want to receive a personalized recommendation." Great. How? In an Event Model, you have to show the work. If you can't draw the line from the data to the screen, the feature doesn't exist. It forces us to deal with reality early, rather than discovering "missing pieces" two weeks before launch.
The blueprint is also "grok-able" at scale. You can walk up to a 20-foot Event Model on a wall (or a massive Miro board) and understand a complex banking system in ten minutes. You just follow the arrows. Try doing that with a Jira backlog. You'll be clicking "Next Page" for three hours and you still won't know how the "Interest Calculation" actually affects the "Monthly Statement." The blueprint gives you spatial memory: you remember that the "checkout logic" is "over there on the right," which makes navigation and mental modeling effortless.
Finally, it makes integration visible. We build the "Login" story, then the "Profile" story. But software is about how those things integrate. The blueprint shows the integration points as first-class citizens. We see exactly how an event in the "Shipping" slice affects a view in the "Customer Support" slice.
The Language That Bridges the Gap¶
Golo: We recently wrote about how Event Sourcing creates a shared language between technical and non-technical people. In your experience, how does Event Modeling change the dynamic in a room when developers and domain experts sit down together? Is there a moment where you typically see the "click" happen?
Adam: The "click" is usually audible. It's that moment in a workshop when the tension in the room just… evaporates.
Before Event Modeling, you'd have the "business" side on one side of the table and "tech" on the other. The business people are frustrated because they feel like they're shouting into a void, and the developers are frustrated because they're being asked to build something that hasn't been defined yet. It's a game of "Guess what I'm thinking."
The click usually happens during the Storyboarding phase. We've done the messy brainstorm of orange sticky notes (Events), and now we're starting to align them on the timeline. The moment of realization comes when a domain expert, maybe a shipping manager or an accountant, points at the screen and says:
"Wait, if that event 'Order Shipped' happens there, how does the customer know their tracking number? We don't have a screen for that."
That is the click. For the first time, the business person isn't just a "guest" in a technical meeting; they are the lead architect. They realize that the orange sticky notes are the reality of their business, and the blueprint is the first time they've ever seen their own company's logic laid out in a way that actually makes sense.
The dynamic shifts fundamentally from that point on. Developers stop saying "don't worry about how it works" and start saying "let's look at the timeline." The conversation moves from abstract trust to empirical evidence. Since we're all looking at the same 2D map of information moving through time, there's no room for "interpretation." If a step is missing, it's a physical hole on the wall. I've seen CEOs who haven't looked at a line of code in 20 years get excited because they can finally "read" the system. They realize they can audit the logic themselves without needing a translator.
When everyone in the room realizes that Events are the ultimate source of truth, they stop arguing about "classes" and "databases" and start talking about the journey. That's when you stop being a "dev shop" and start being an engineering team.
Golo: That sounds impressive, and I can imagine very well how that changes the conversation. Getting non-technical stakeholders genuinely involved in system design is one of the hardest problems in our industry, and what you're describing sounds very promising. Can you tell us a bit more about how that plays out in practice? Do you have an example where Event Modeling brought people into the conversation who would normally stay out?
Adam: It's a classic problem: we walk into a room and start talking about "Encapsulation," "Microservices," and "Aggregates," and the business stakeholders, the people who actually know how the money is made, just tune out. They feel like they're being lectured in a language they didn't sign up for.
Event Modeling fixes this by using a lexicon of four things, namely the screens, commands, events, and state views we already talked about. That's it. If you can understand a sticky note, a timeline, and a screen, you're an architect. We stop talking about "classes" and start talking about Events (facts that happened). We stop talking about "API endpoints" and start talking about Commands (what the user wants to do). When you lower the barrier to entry like that, you aren't "dumbing it down," you're actually increasing the rigor because the people with the most knowledge can finally participate.
I often see this with Accountants or Compliance Officers. These are people who are traditionally "bored to death" by tech meetings. I remember one session for a large logistics firm where we were modeling the "Refund" process. The developers had a complex diagram showing "State Machines" and "Status Flags." The CFO was in the room, looking confused.
We switched to an Event Model and drew the timeline: Payment Received (Event), then Refund Requested (Command), then Refund Issued (Event). The CFO suddenly stood up, pointed at the gap between the request and the issuance, and said: "Wait. In our world, you can't just 'Issue a Refund.' You need an 'Audit Log Entry' and a 'Tax Reversal' event first, or we're breaking the law."
The developers hadn't even thought of that. Because we weren't hidden behind technical jargon, the person who understood the legal reality of the business could see the "hole" in the timeline. That's the power of the blueprint: it turns the domain expert into the lead validator.
Humans are built for storytelling. We've been telling stories around fires for 50,000 years; we've been writing Java for about 30. When you show a timeline, you're tapping into the way the human brain naturally stores information. It moves the conversation from "How do we code this?" to "Is this story correct?"
Pragmatic Modeling and the Flat Cost Curve¶
Golo: That raises an interesting question about correctness. We've been writing about the idea that all models are wrong but some are useful, and that chasing the perfect model can become a trap. How does Event Modeling deal with this tension? Is the blueprint meant to be a finished artifact, or does it evolve?
Adam: That famous quote by George Box, "All models are wrong, but some are useful," is practically the unofficial motto of Event Modeling.
The "trap" people are stuck in is trying to find the One True Model. In traditional systems, we've been conditioned to build a single "Canonical Model," one massive Customer class or one perfectly normalized database table that has to serve the entire system. The problem is that a "Customer" means something very different to the "purchase product" slice than it does to the "ship items" slice. When you try to force them to use one model, you end up with a model that can't be the best for both.
In Event Modeling, we stop chasing the "perfect" model and start building useful slices. Instead of one giant model, we have hundreds of tiny, purpose-built ones. In the blueprint, every State View (the green boxes) is specifically designed for one specific screen or one specific decision. You might have a "Header View" state view that contains exactly two fields: UserId and FirstName. It's built (projected) from the event stream specifically for that one job. Because these models are "disposable" and derived from the events, they don't have to be perfect forever. If the UI changes, you don't migrate a massive database; you just throw away the old state view and project a new one from the events.
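The "disposable, purpose-built view" idea can be made concrete with a small sketch: two different models of the same "Customer," each projected from the same immutable event stream. The event and field names here are invented for illustration, not part of any prescribed format:

```python
# One shared history of facts; every view below is derived from it.
events = [
    {"type": "UserRegistered", "user_id": "u1", "first_name": "Ada",
     "email": "ada@example.com"},
    {"type": "EmailChanged", "user_id": "u1", "email": "ada@acme.io"},
]

def header_view(stream):
    """Tiny model for one screen: exactly the two fields that screen needs."""
    view = {}
    for e in stream:
        if e["type"] == "UserRegistered":
            view = {"user_id": e["user_id"], "first_name": e["first_name"]}
    return view

def contact_view(stream):
    """A different 'Customer' model, built for a different slice."""
    view = {}
    for e in stream:
        if e["type"] == "UserRegistered":
            view = {"user_id": e["user_id"], "email": e["email"]}
        elif e["type"] == "EmailChanged" and view.get("user_id") == e["user_id"]:
            view["email"] = e["email"]
    return view

print(header_view(events))   # {'user_id': 'u1', 'first_name': 'Ada'}
print(contact_view(events))  # {'user_id': 'u1', 'email': 'ada@acme.io'}
```

If the header screen changes tomorrow, you delete `header_view` and write a new projection over the same events; `contact_view` and the history itself are untouched.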
Is the blueprint a finished artifact? Never. We call it a Living Blueprint. Think of it like a movie script. You might have the "Final Script" before you start filming, but during production, you realize a scene doesn't work, so you rewrite it. The difference is that in software, we usually don't have a script at all, we just start filming and hope a story emerges.
The blueprint evolves, but it does so in slices. Because each slice is independent, you can change one part of the system without the "ripple effect" that kills traditional projects, or at least see exactly how far necessary changes propagate. When you stop trying to build a single model that "rules them all," the complexity of your system stays flat. You aren't managing a web of interconnected classes; you're managing a chronological flow of information. We don't want a "perfect" representation of reality; we want a complete representation of the information we need right now. If we find out later we need more info, the events are already there in the history. We just build a new "wrong for other purposes but useful for this one" model to handle it.
Golo: One of the unique claims of Event Modeling is that it enables a "flat cost curve" for development, meaning that adding the tenth feature should cost roughly the same as adding the first. That's a bold claim. Can you explain how that works and whether (or how) it holds up in practice?
Adam: In traditional development, you pay a "complexity tax." Feature #1 is cheap because the codebase is a blank slate. By feature #100, you're spending 90% of your time just trying not to break features #1 through #99. This happens because of hidden coupling. Your User class is connected to your Billing logic, which is connected to your Email service, and so on. It's a spiderweb.
We achieve a flat curve by moving from "spaghetti" to Vertical Slices. In an Event Model, a "feature" is a vertical slice of the blueprint. Because we use Event Sourcing, these slices don't share internal state. They only share a common history (the Event Store). Adding a new slice is like adding a new chapter to a book; it doesn't require you to rewrite the first five chapters.
Because the "View" for feature #10 is projected directly from the events, it doesn't care about the internal logic of feature #1. It only cares about the facts (Events) that happened. Since each slice takes a predictable average amount of effort to implement, and since that effort doesn't increase as the system grows, we can give you a price and a date and actually hit it. We've done this for years across banking, logistics, and retail. The "death march" only happens when your cost curve is a hockey stick. When it's a flat line, you just finish the work.
You also don't need to worry about the experience level of the developers: the client pays the same whether a slice takes a senior developer one day or an intermediate developer two. We can even include bug fixes for free because of this. That's practically impossible with traditional systems.
Event Storming, Domain Storytelling, and Event Modeling¶
Golo: Many of our readers will be familiar with Event Storming, created by Alberto Brandolini. How do you see the relationship between Event Storming and Event Modeling? Are they complementary, or do you see Event Modeling as a replacement?
Adam: Alberto is a friend, and Event Storming was a massive leap forward for the industry. It broke people out of the "data modeling" mindset and got them thinking about business processes.
In my practice and at AdapTech, Event Modeling is a replacement. We don't use Event Storming at all. We find that we can go from a blank wall to a full engineering blueprint using just the Event Modeling patterns. However, I recognize that many teams in the community use them together, as Event Storming predates Event Modeling and was heavily promoted by Thoughtworks.
For many teams, the transition looks like this: they use the "big picture" Event Storming approach first to find the "hotspots," the friction points, and the general flow of events in a chaotic, unstructured way. It's great for getting the "mess" out of people's heads. Once they have a general idea of the events, they switch to Event Modeling to add the rigor. They align those events on a timeline, add the Screens (UI), the Commands, and the Read Models. This turns the "brainstorm" into a specification that a developer can actually build from.
The reason I prefer to stay within Event Modeling from the start is that Event Modeling has a brainstorming stage that is sufficient to replace Event Storming, as the later steps make up the difference. Event Modeling is designed to be an engineering discipline. When we start with the timeline, we are building the "movie script" immediately.
Event Storming also often lacks the "User Experience" component. In Event Modeling, the Screen is a first-class citizen. If you don't know what the user sees, you don't know why they are issuing a Command. And because we have the timeline and the Read Models, we can check for "Information Completeness" very early. We don't just ask "What happened?" (Events); we ask "What did the user need to see to make that happen?" (Views). Is there a source for each data point?
If your team is struggling to even talk to each other, an Event Storming session is a great icebreaker. But if you want to build software with a predictable cost and a clear design, you need the blueprint. Whether you start with Storming or go straight to Modeling, the end goal has to be that organized timeline where every piece of information is accounted for.
Golo: Domain Storytelling is another approach that focuses on shared understanding through visual modeling. Where do you see the overlaps and differences with Event Modeling?
Adam: We're fighting the same war against "The Great Wall of Requirements Documents." The overlap is obvious: we both believe that storyboarding is the best way to extract knowledge from domain experts. We both want to see the "Who," the "What," and the "In what order." But once you get past the surface, the two approaches are optimized for very different outcomes.
Both methods use a visual language to bridge the gap between business and tech. If you look at a Domain Storytelling board and an Event Model, they both tell a story from left to right (usually). They both prioritize the business process over technical abstractions like "databases" or "classes."
Domain Storytelling is fantastic for the "Discovery" phase, understanding the mess of a manual process. But it often stops where the "Engineering" begins. In Event Modeling, we aren't just showing that "The Clerk sends the Invoice." We are showing the screen the clerk used, the command (intent) sent to the system, the event (fact) recorded in the history, and the read model (view) that updates so the customer can see their balance. Because Event Modeling tracks the state of information with such rigor, it's much harder for a developer to misinterpret it. Domain Storytelling tells you what the story is; Event Modeling gives you the script, the lighting cues, and the camera angles so you can actually film the movie.
If you're trying to figure out how three different departments currently interact to ship a package, Domain Storytelling is a great way to map that "human" layer. But the moment you start talking about building the software to automate it, you should switch to Event Modeling.
Lessons From the Field¶
Golo: When teams adopt Event Modeling for the first time, what's the most common mistake you see them make? Is there a pattern of things people get wrong initially?
Adam: The most common mistake? It's what I call "The Gravity of the Status Quo." People have been trained for decades to think in terms of tables and "piles of tickets," and those old habits are hard to break.
The biggest hurdle is thinking in "CRUD," not "Facts." Developers will try to name an event UserUpdated or SaveInvoice. That's not an event; that's a technical operation. An event is a business fact, named in the past tense, like InvoiceIssued or PaymentReceived.
Then there's what I call the "Invisible Screen" trap. I see technical teams try to model the "backend" logic without drawing the screens. They think the UI is "fluff" or "someone else's job." But the Screen is the anchor. It provides the context for why a user is doing something. Without the screen, you don't know what information the user had when they issued a command. If you skip the UI, you lose the "why," and you'll inevitably miss requirements. Even fully automated systems need screens that serve as dashboards for operators to monitor the system.
Another common mistake is implementation leakage. Teams start arguing about whether to use Kafka, RabbitMQ, or a specific SQL schema halfway through the modeling session. The rule should be: no implementation talk. The moment you start talking about "Microservices" or "API endpoints," you've lost the domain expert. Event Modeling is about the movement of information, not the pipes it travels through.
Finally, teams often ignore "Information Completeness." A team will have a Command (e.g., ShipOrder) but they haven't modeled the View that gives the user the ShippingAddress. The test is simple: you must be able to trace the data. If a Command requires X, but there is no previous Event or UI that provides X, your model is "broken." Most teams realize this two months into coding; with Event Modeling, you should realize it in two minutes.
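That traceability test can be automated mechanically. Here is a hypothetical sketch of an "Information Completeness" check: every field a command requires must be provided by some earlier event or screen in the model. The model format and all names are invented for the example:

```python
# Invented toy model: which elements provide data, which commands need it.
model = {
    "sources": {                       # fields each prior element provides
        "OrderPlaced (event)": {"order_id", "items"},
        "Checkout screen": {"order_id"},
    },
    "commands": {
        "ShipOrder": {"order_id", "shipping_address"},   # fields required
    },
}

def completeness_gaps(model):
    """Return, per command, the required fields that no source provides."""
    provided = set().union(*model["sources"].values())
    return {
        cmd: needed - provided
        for cmd, needed in model["commands"].items()
        if needed - provided
    }

print(completeness_gaps(model))
# {'ShipOrder': {'shipping_address'}} -> nothing upstream supplies the address
```

The two-minute version of the two-month discovery: `ShipOrder` needs a `shipping_address`, and nothing in the model ever captured one, so a screen or event to collect it is missing.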
Golo: You've worked with teams across many industries and sizes. Is there a project or moment where Event Modeling made a particularly dramatic difference, something you still think about?
Adam: One project that stands out, and it's quite far from the world of typical software, is the Woodsmith Mine near Whitby in England. It's one of the deepest mines in Europe, designed to extract high-grade polyhalite fertilizer from shafts nearly 1,600 meters deep.
What made this particularly dramatic wasn't just the depth, but the constraints. Because it's located in the North York Moors National Park, the entire project had to be essentially "invisible." The centerpiece is a record-breaking 37-kilometer (23-mile) underground tunnel housing a single-belt conveyor system (the Mineral Transport System) to move ore directly to Teesside, avoiding thousands of truck trips on the surface.
It's rare to see Event Modeling applied to a physical mine, but it fit perfectly for a few key reasons. Traditional mining relies on Gantt charts, which tell you when things happen. Event Modeling shows why they happen. In a 37km tunnel, an "event" like a sensor detecting heat on a roller must trigger a specific chain of commands, such as slowing the belt and alerting maintenance, across miles of infrastructure. To build and run this, they needed a digital twin. Event Modeling allowed engineers to map out the "state" of the mine at any point on a timeline. This was crucial for the interlocking systems of the tunnel boring machine (TBM), the shaft-sinking roadheaders, and the conveyor installation.
Mining usually has a massive wall between the "Software/IT" and the "Mechanical/OT." Event Modeling provided a common language that both the software engineers writing control systems and the mechanical engineers building the belt could actually understand. The collaborative nature of the model allowed people from completely different areas of expertise to review a high-risk project and mitigate those risks before a single stone was turned.
What Comes Next¶
Golo: That's a remarkable example, especially because it's so far from typical software projects. Where is Event Modeling heading from here? Are there aspects of the method you're still actively developing or rethinking?
Adam: Yes, Event Modeling is constantly evolving. But unlike most other concepts, Event Modeling is not likely to add components. It's more about refinement, aiming for the simplest thing that could possibly work, and making it more globally accessible.
One example is how we've standardized the specifications. In the early days, the Given-When-Then (GWT) specifications for Command slices and the Given-Then specifications for state view slices were asymmetric. This was due to the origin of Event Modeling in Event Sourcing. We now treat both as GWT, with the state view getting a "When" for the query filtering that occurs to extract exactly what's needed. This makes it much easier to apply the same testing regimen to both types of slices.
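The symmetry Adam describes can be sketched as two specs that share one Given-When-Then shape. This is a hypothetical Python illustration; the helper names and the inline `decide`/`project` functions are invented, and real tooling varies by team:

```python
def run_command_spec(given_events, when_command, decide):
    """Given past events, When a command arrives, Then new events result."""
    return decide(when_command, given_events)

def run_view_spec(given_events, when_filter, project):
    """Given past events, When a query filter applies, Then a view results."""
    return project(e for e in given_events if when_filter(e))

given = [{"type": "RoomBooked", "room": "101"}]

# Command slice: Given RoomBooked, When BookRoom for the same room, Then nothing.
decide = lambda cmd, hist: (
    [] if any(e["room"] == cmd["room"] for e in hist)
    else [{"type": "RoomBooked", "room": cmd["room"]}]
)
assert run_command_spec(given, {"room": "101"}, decide) == []

# State-view slice: Given bookings, When filtering to room 101, Then one row.
project = lambda events: [e["room"] for e in events]
assert run_view_spec(given, lambda e: e["room"] == "101", project) == ["101"]
print("both specs pass")
```

The payoff is exactly what the interview claims: one testing regimen covers both slice types, because the state-view "When" is just the query filtering step.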
The other major evolution is from an external force: AI. The industry is currently being reshaped by agents taking over programming work. Just as Event Modeling slices were perfect requirements for developers, they are also excellent specifications for AI agents, which are now shifting toward Specification Driven Development. And, of course, AI is incredibly good at creating the Event Models themselves.
Ultimately, I see the future of Event Modeling as a new "programming language." We're making another jump in the level of abstraction in programming, much like we moved from assembler to compilers. We are moving toward a world where the model is the code.
Golo: If someone reads this and wants to try Event Modeling for the first time tomorrow, what's the single best starting point you'd recommend?
Adam: It all started with the original Event Modeling article, which is still in its original form on eventmodeling.org. From there, searching for "Event Modeling" on YouTube will lead to various presentations, as well as the Event Modeling and Event Sourcing podcast.
Golo: Adam, thanks so much for taking the time. I'm looking forward to seeing where Event Modeling goes from here.
Adam Dymitruk is the founder and CEO of AdapTech Group, a Canadian consulting company focused on event-driven systems. For more information or to get in touch, visit the AdapTech Group website.