Read-Model Consistency and Lag

This guide explains how to manage the consistency and freshness of read models in event-sourced systems built on EventSourcingDB. While event sourcing allows for clean separation between write operations and derived state, it also introduces the challenge of keeping projections in sync with the event log — especially in distributed environments where reads and writes are decoupled.

Read models are views on top of the event stream. They provide optimized, queryable representations of domain state and enable applications to respond quickly to user interactions or API requests. However, because they are built asynchronously, there is always a potential for delay between an event being written and its corresponding change appearing in the read model. This delay is known as read-model lag.

Why Lag Happens

Read models are typically updated by event handlers or background processes that consume events from EventSourcingDB using its observe endpoint. These processes read the event stream, apply transformations, and persist the resulting state in a separate storage system, such as a database or cache.
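As a rough illustration, such a handler might look like the sketch below. The event shape is simplified, and observeEvents and readModelStore are placeholders for your own client and storage code, not part of EventSourcingDB's API.

```typescript
// Minimal sketch of a projection handler (names are illustrative, not an official API).

interface StoredEvent {
  id: string;          // position of the event in the stream
  subject: string;     // e.g. "/users/42"
  type: string;        // e.g. "user-registered"
  time: string;        // timestamp assigned on the write side
  data: Record<string, unknown>;
}

// Placeholder for a client that streams events from the observe endpoint.
declare function observeEvents(subject: string): AsyncIterable<StoredEvent>;

// Placeholder for the separate storage system that holds the read model.
declare const readModelStore: {
  upsertUser(subject: string, fields: Record<string, unknown>): Promise<void>;
  setCheckpoint(eventId: string): Promise<void>;
};

async function runUserProjection(): Promise<void> {
  for await (const event of observeEvents('/users')) {
    // Transform the event into the read model's shape...
    if (event.type === 'user-registered') {
      await readModelStore.upsertUser(event.subject, event.data);
    }
    // ...and remember how far the projection has come.
    await readModelStore.setCheckpoint(event.id);
  }
}
```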

Lag can occur for several reasons: the handler may fall behind because processing takes time, it may have failed temporarily, or the update pipeline may include batching, retries, or network latency. In systems with high throughput, even a short processing delay can cause the read model to be out of sync with the latest events.

The gap between the write model and the read model is not an error — it is an expected consequence of asynchronous processing. But depending on the application, it may or may not be acceptable.

Measuring Lag

To reason about consistency, you need to measure lag. This can be done by tracking the latest event ID or timestamp that the read model has applied, and comparing it with the latest event in the stream.

If your application exposes the current head of the read model, you can monitor how far behind it is relative to the latest event in the database. This difference can be measured in time (e.g. seconds of delay) or in number of events (e.g. items not yet processed). Both are useful depending on the context.
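A minimal sketch of such a measurement, assuming numeric, monotonically increasing event IDs and two hypothetical lookups (getStreamHead and getReadModelCheckpoint) that you would implement against your own setup:

```typescript
// Sketch: measuring lag as "events behind" and "seconds behind".

interface Position {
  eventId: number;     // assumed numeric, monotonically increasing
  time: Date;          // time the event was written
}

declare function getStreamHead(): Promise<Position>;          // latest event in the database
declare function getReadModelCheckpoint(): Promise<Position>; // latest event applied to the read model

async function measureLag(): Promise<{ eventsBehind: number; secondsBehind: number }> {
  const head = await getStreamHead();
  const applied = await getReadModelCheckpoint();

  return {
    eventsBehind: head.eventId - applied.eventId,
    secondsBehind: (head.time.getTime() - applied.time.getTime()) / 1000,
  };
}
```

Either number can feed a dashboard or an alert, depending on which dimension of lag matters for the use case.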

For example, in financial systems, even a few seconds of delay may be unacceptable. In analytics or reporting, several minutes of lag might be fine.

Designing for Eventual Consistency

In most cases, read models are eventually consistent. That means they will converge to the correct state given enough time, but they are not guaranteed to be fully up to date at any given moment.

This trade-off is fundamental to the architecture of event sourcing and CQRS. It enables scalability, decoupling, and independent evolution of system components. But it also requires careful design.

Applications that rely on read models must be aware of this consistency model. They should not assume that a just-written event will be immediately reflected in the view. Instead, they should either tolerate the delay or provide mechanisms for active refreshing, feedback, or fallback.

Avoiding Stale Reads in Critical Flows

When consistency is critical — for example, when confirming a transaction or validating a business rule — relying solely on a possibly stale read model is risky. In such cases, there are several alternatives:

One option is to perform a write-side check using the event stream itself. For example, instead of querying a read model to see if a user has already registered, check whether an event of type user-registered exists for the corresponding subject.
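A sketch of what such a check might look like, with readEvents standing in for whatever way you query the events of a subject:

```typescript
// Sketch: a write-side existence check against the event stream itself.

declare function readEvents(subject: string): AsyncIterable<{ type: string }>;

async function isUserRegistered(userSubject: string): Promise<boolean> {
  for await (const event of readEvents(userSubject)) {
    if (event.type === 'user-registered') {
      return true; // the fact is established by the event log, not a possibly stale view
    }
  }
  return false;
}
```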

Another option is to implement synchronous validation before committing the write. While this goes against the principle of complete decoupling, it may be acceptable in cases where correctness is more important than throughput.

Alternatively, you can expose the result of a write operation directly, instead of relying on the read model to reflect the change. For instance, after submitting a form that emits an event, return a success response based on the event's acceptance, not on the read model update.
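For example, a handler along these lines confirms the request as soon as the write is accepted; writeEvent is a placeholder for your actual write call:

```typescript
// Sketch: answering the caller based on the write being accepted,
// not on the read model having caught up.

declare function writeEvent(event: {
  subject: string;
  type: string;
  data: Record<string, unknown>;
}): Promise<{ id: string }>;

async function handleRegistration(form: { userId: string; email: string }) {
  // If this call succeeds, the event is durably stored in the event log.
  const written = await writeEvent({
    subject: `/users/${form.userId}`,
    type: 'user-registered',
    data: { email: form.email },
  });

  // Confirm based on acceptance of the write; the read model may still lag behind.
  return { status: 'accepted', eventId: written.id };
}
```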

Handling Client Expectations

From a user perspective, lag can be confusing. A user clicks a button, the system confirms the action, but the screen still shows the old state for a few seconds. This gap undermines trust and usability, even if the backend is working correctly.

To bridge this gap, many systems implement optimistic updates in the frontend — they show the expected state immediately, even before the read model catches up. This improves responsiveness, but introduces new complexity, especially when dealing with errors or race conditions.
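A minimal sketch of the pattern on the client side, with renderTodo and submitCompleteTodo as stand-ins for your own UI and API code:

```typescript
// Sketch: an optimistic update. The UI flips to the expected state
// immediately and rolls back if the request fails.

declare function renderTodo(id: string, state: { completed: boolean }): void;
declare function submitCompleteTodo(id: string): Promise<void>;

async function completeTodoOptimistically(id: string): Promise<void> {
  renderTodo(id, { completed: true });      // show the expected state right away

  try {
    await submitCompleteTodo(id);           // the read model may still lag behind
  } catch {
    renderTodo(id, { completed: false });   // roll back on failure
  }
}
```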

Another pattern is progressive confirmation. The system acknowledges that the request was received and is being processed, and then updates the UI once the read model has caught up. This makes the delay explicit and more understandable.
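One way to sketch this is to acknowledge the request immediately and then poll the read model until it has applied the event that was just written; all names below are placeholders:

```typescript
// Sketch: progressive confirmation. Show "processing" first, then poll
// the read model until it reflects the accepted event.

declare function showProcessing(): void;
declare function showConfirmed(): void;
declare function getReadModelLastAppliedId(): Promise<number>;

async function confirmWhenVisible(writtenEventId: number, pollMs = 250): Promise<void> {
  showProcessing(); // acknowledge that the request was received

  // Wait until the read model has applied at least the event we just wrote.
  while ((await getReadModelLastAppliedId()) < writtenEventId) {
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }

  showConfirmed();
}
```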

Keeping Lag Acceptable

While eliminating lag completely is not realistic, minimizing it is often possible. Use efficient, incremental event processing. Avoid expensive operations in your event handlers. Design read models that can be updated quickly and in isolation. Track progress with checkpoints, so that reprocessing after failures is fast and bounded.

In EventSourcingDB, you can use the lowerBound parameter when observing events to resume processing from the last known position. This enables fault-tolerant, resumable handlers that avoid reprocessing already handled events.
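A resumable handler might be sketched as follows. Only the idea of a lower bound for observing comes from EventSourcingDB itself; observeEvents, loadCheckpoint, saveCheckpoint, and applyToReadModel are placeholders for your own code:

```typescript
// Sketch: a resumable projection. On startup it loads the last checkpoint and
// passes it as the lower bound when observing, so already handled events are skipped.

interface StoredEvent {
  id: string;
  type: string;
  data: Record<string, unknown>;
}

declare function observeEvents(
  subject: string,
  options: { lowerBound?: string },
): AsyncIterable<StoredEvent>;
declare function loadCheckpoint(): Promise<string | undefined>;
declare function saveCheckpoint(eventId: string): Promise<void>;
declare function applyToReadModel(event: StoredEvent): Promise<void>;

async function runResumableProjection(): Promise<void> {
  const lowerBound = await loadCheckpoint(); // last event ID the read model has applied

  for await (const event of observeEvents('/users', { lowerBound })) {
    await applyToReadModel(event);
    await saveCheckpoint(event.id); // bounds reprocessing after a crash
  }
}
```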

If necessary, parallelize processing across independent subjects or event types. This allows the system to keep up even under load. Also consider health checks or monitoring dashboards that alert you when lag exceeds acceptable limits.

Designing With Lag in Mind

Read-model lag is not a flaw — it is a design trade-off. Understanding it, measuring it, and communicating about it are essential for building reliable systems. Some domains require strong consistency; others can tolerate a few seconds of delay. The architecture should reflect those needs.

Design your system so that business-critical flows are resilient to delay, and user-facing interfaces remain responsive even when the read model is a little behind. By embracing eventual consistency and designing read models that degrade gracefully, you make your application robust, scalable, and transparent.

EventSourcingDB gives you the tools to build these patterns — the rest is up to your application.