Designing Aggregates

This guide explains how to design aggregates in event-sourced systems that use EventSourcingDB. It covers the conceptual role of aggregates, how they relate to subjects and events, and how to apply consistent modeling principles that ensure correctness, isolation, and long-term maintainability.

Aggregates are a core pattern in event sourcing. They define boundaries of consistency and control how and when events are emitted. In EventSourcingDB, aggregates are not explicitly stored — they emerge from how subjects and events are organized. Understanding how to model aggregates effectively helps you avoid common pitfalls and design systems that behave predictably under concurrency, scale, and change.

Aggregates Represent Consistency Boundaries

An aggregate is a cluster of domain objects that are treated as a single unit for consistency. It enforces business rules, validates commands, and produces events that represent meaningful outcomes. All events that belong to an aggregate are part of the same stream, and they are processed in a well-defined order.

In EventSourcingDB, each aggregate corresponds to a single subject. All events with the same subject belong to the same stream and are guaranteed to be ordered. This provides a natural and efficient way to represent aggregates: one stream per aggregate instance.

For example:

  • The subject /books/42 might represent a Book aggregate
  • The subject /users/17/orders/9001 might represent an Order aggregate

This one-to-one mapping between aggregates and subjects allows you to reason clearly about where decisions happen, where invariants are enforced, and how state evolves over time.

Aggregates Decide, Events Record

Aggregates are responsible for deciding what should happen. They interpret incoming commands, check business rules, and produce one or more events that reflect successful changes. These events are then recorded in EventSourcingDB — where they become immutable facts.

It is important to separate this decision logic from the act of writing events. In most systems, aggregates exist in memory or in application code. They are loaded by replaying past events, and then used to decide whether a command is valid and what new events should result.

This separation has two key benefits:

  • It keeps business logic pure and testable
  • It ensures that only validated events are stored

EventSourcingDB does not store aggregates as stateful objects — it stores only events. The aggregate is reconstructed by replaying the event stream for its subject. This means that designing good aggregates is not about persistence — it is about modeling decisions and transitions clearly.
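To make this concrete, here is a minimal TypeScript sketch of rebuilding an aggregate by folding over its events. The BookEvent and BookState shapes are hypothetical and exist only for illustration; they are not defined by EventSourcingDB.

```typescript
// Hypothetical event and state shapes for a Book aggregate; they are
// illustrative only and not defined by EventSourcingDB.
type BookEvent =
  | { type: "book-acquired"; data: { title: string } }
  | { type: "book-borrowed"; data: { readerId: string } }
  | { type: "book-returned"; data: { readerId: string } };

interface BookState {
  exists: boolean;
  borrowedBy: string | null;
}

const initialState: BookState = { exists: false, borrowedBy: null };

// Pure state transition: given the current state and one recorded event,
// return the next state. No I/O, no side effects.
function apply(state: BookState, event: BookEvent): BookState {
  switch (event.type) {
    case "book-acquired":
      return { exists: true, borrowedBy: null };
    case "book-borrowed":
      return { ...state, borrowedBy: event.data.readerId };
    case "book-returned":
      return { ...state, borrowedBy: null };
  }
}

// Rehydrate the aggregate by folding apply over the ordered events read
// for its subject, e.g. /books/42.
function rehydrate(events: BookEvent[]): BookState {
  return events.reduce(apply, initialState);
}
```

Because apply and rehydrate are pure, the decision logic built on top of them can be tested without any database involved.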

Aggregate Boundaries Must Be Clear and Stable

A well-designed aggregate should encapsulate a meaningful unit of decision-making. Its boundaries define what can be done atomically and what must be coordinated across multiple entities.

For example, a Book aggregate might emit events such as book-acquired, book-borrowed, and book-returned. It enforces rules such as "a book cannot be borrowed if it is already borrowed." These rules are enforced within the aggregate — and since all events for a given subject are ordered and isolated, the logic can be implemented deterministically.
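A decision function for such a rule might look like the sketch below. It assumes the same hypothetical Book shapes as above plus a made-up BorrowBook command; the point is simply that the rule is checked against state rebuilt from the subject's ordered events and, if satisfied, new events are returned.

```typescript
// Hypothetical command, state, and result shapes; illustrative only.
interface BorrowBook {
  readerId: string;
}

interface BookState {
  exists: boolean;
  borrowedBy: string | null;
}

type Decision =
  | { ok: true; events: { type: "book-borrowed"; data: { readerId: string } }[] }
  | { ok: false; reason: string };

// Enforce "a book cannot be borrowed if it is already borrowed".
// The state passed in is the result of replaying the subject's events.
function decide(state: BookState, command: BorrowBook): Decision {
  if (!state.exists) {
    return { ok: false, reason: "The book has not been acquired yet." };
  }
  if (state.borrowedBy !== null) {
    return { ok: false, reason: "The book is already borrowed." };
  }
  return {
    ok: true,
    events: [{ type: "book-borrowed", data: { readerId: command.readerId } }],
  };
}
```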

Avoid designing aggregates that span too much of the domain. Aggregates should be small, focused, and aligned with transactional consistency. If two entities change together often, they might belong in the same aggregate. If they can change independently, they should be separate — even if they are related in the domain.

In EventSourcingDB, this means assigning them different subjects and, where needed, coordinating between them explicitly.

Use One Subject per Aggregate Instance

Each aggregate instance should use a single subject, and that subject should never be reused for a different instance. For example:

/books/42         good – stable and unique for one book
/books/latest     bad – unstable and ambiguous
/orders/17/line   bad – unclear if this is a full order or a part of it

The subject acts as the identifier for the aggregate. It must be unique, predictable, and long-lived. Avoid reusing the same subject for multiple entities over time — doing so breaks the event history and makes reasoning about behavior difficult.

Because EventSourcingDB stores events indefinitely, subjects must remain meaningful even years later. Choose subject structures that are aligned with your domain model, not with transient UI or API conventions.
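One way to keep subjects stable is to derive them from a domain identifier in exactly one place. The helper below is a hypothetical convention, not something EventSourcingDB prescribes.

```typescript
// Derive a stable, unique subject from a domain identifier. The path
// layout (/books/<id>) is a modeling choice, not mandated by EventSourcingDB.
function bookSubject(bookId: string): string {
  if (!/^[A-Za-z0-9-]+$/.test(bookId)) {
    throw new Error(`Invalid book id: ${bookId}`);
  }
  return `/books/${bookId}`;
}

// The subject stays the same for the whole lifetime of this aggregate instance.
const subject = bookSubject("42"); // "/books/42"
```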

Enforcing Consistency with Preconditions

EventSourcingDB supports preconditions to help enforce consistency at the aggregate level. Several preconditions are especially relevant for aggregates:

  • isSubjectPristine ensures that the subject has no events yet — useful for initialization
  • isSubjectPopulated ensures that the subject has at least one event — useful for update operations
  • isSubjectOnEventId ensures that no events have been added since a given point — useful for optimistic concurrency

These allow you to safely implement patterns like:

  • "Only create this book if it doesn't exist yet"
  • "Only borrow this book if it has been acquired first"
  • "Only apply this change if nothing has changed since the last read"

Used correctly, preconditions make your aggregates concurrency-safe, without requiring locks or centralized coordination. They are the event-sourced equivalent of conditional updates or version checks — and they are essential for correctness in distributed systems.
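The sketch below shows roughly where such preconditions attach. The client interface, function name, and payload shapes are assumptions made for illustration; only the precondition names come from EventSourcingDB itself, so check the client SDK documentation for the exact API.

```typescript
// Hypothetical client interface; the real SDK's names and payload shapes may differ.
interface Precondition {
  type: "isSubjectPristine" | "isSubjectPopulated" | "isSubjectOnEventId";
  payload: Record<string, unknown>;
}

interface EventCandidate {
  source: string;
  subject: string;
  type: string;
  data: Record<string, unknown>;
}

interface Client {
  writeEvents(events: EventCandidate[], preconditions?: Precondition[]): Promise<void>;
}

// "Only create this book if it doesn't exist yet."
async function acquireBook(client: Client): Promise<void> {
  await client.writeEvents(
    [
      {
        source: "https://library.example",
        subject: "/books/42",
        type: "book-acquired",
        data: { title: "The Hitchhiker's Guide to the Galaxy" },
      },
    ],
    [{ type: "isSubjectPristine", payload: { subject: "/books/42" } }],
  );
}

// "Only apply this change if nothing has changed since the last read":
// pass the id of the last event you observed for optimistic concurrency.
async function borrowBook(client: Client, lastObservedEventId: string): Promise<void> {
  await client.writeEvents(
    [
      {
        source: "https://library.example",
        subject: "/books/42",
        type: "book-borrowed",
        data: { readerId: "17" },
      },
    ],
    [
      {
        type: "isSubjectOnEventId",
        payload: { subject: "/books/42", eventId: lastObservedEventId },
      },
    ],
  );
}
```

If a precondition fails, the write is rejected as a whole; the caller can then re-read the stream, rebuild state, and retry the decision.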

Aggregates Should Not Cross Streams

One common mistake is to let an aggregate operate on multiple subjects — for example, by trying to produce events for different entities in one step. While EventSourcingDB allows writing multiple events to different subjects atomically, this should be done only when truly necessary.

As a rule, each aggregate should be responsible for only one stream. Cross-aggregate coordination should happen through policies, sagas, or eventual consistency — not by merging responsibilities. This keeps your model clean, testable, and resilient to change.

If multiple aggregates need to interact, model them separately and coordinate via events. For example:

  • A Book aggregate emits book-borrowed
  • A Reader aggregate reacts by emitting reader-notified

This separation of concerns aligns with the principles of CQRS and DDD — and it helps your system remain loosely coupled and easier to evolve.
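Such a reaction can be sketched as a pure function: it takes the book-borrowed event and derives a command for the Reader aggregate, which then decides whether to emit reader-notified on its own stream. All type names below are hypothetical.

```typescript
// Hypothetical shapes; illustrative only.
interface BookBorrowed {
  type: "book-borrowed";
  subject: string; // e.g. /books/42
  data: { readerId: string };
}

interface NotifyReader {
  type: "notify-reader";
  subject: string; // the Reader aggregate's own subject, e.g. /readers/17
  data: { bookSubject: string };
}

// A policy reacts to an event from one aggregate by issuing a command
// to another aggregate. The Reader aggregate then decides whether to
// emit reader-notified on its own stream.
function onBookBorrowed(event: BookBorrowed): NotifyReader {
  return {
    type: "notify-reader",
    subject: `/readers/${event.data.readerId}`,
    data: { bookSubject: event.subject },
  };
}
```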

Aggregates and Snapshots

When aggregates accumulate many events over time, replaying the entire history can become expensive. In EventSourcingDB, you can use snapshots to store intermediate states and resume playback from a known point.

Snapshots are just special events — written to the same stream and tagged accordingly. They do not change the structure of the aggregate, but they improve performance when loading it.

Design your aggregates to support snapshots, especially for entities with high event volume or frequent access. This is not mandatory — but it is a practical way to keep your system fast and scalable.
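On the loading side, this can be sketched as follows, assuming a hypothetical book-snapshot event type whose data carries the full state: start from the most recent snapshot, then apply only the events recorded after it.

```typescript
// Hypothetical event shapes; the snapshot type name and payload are a modeling
// choice, not something EventSourcingDB prescribes.
interface BookState {
  exists: boolean;
  borrowedBy: string | null;
}

type StoredEvent =
  | { type: "book-snapshot"; data: BookState }
  | { type: "book-acquired"; data: { title: string } }
  | { type: "book-borrowed"; data: { readerId: string } }
  | { type: "book-returned"; data: { readerId: string } };

const initialState: BookState = { exists: false, borrowedBy: null };

function apply(state: BookState, event: StoredEvent): BookState {
  switch (event.type) {
    case "book-snapshot":
      // A snapshot replaces the accumulated state wholesale.
      return event.data;
    case "book-acquired":
      return { exists: true, borrowedBy: null };
    case "book-borrowed":
      return { ...state, borrowedBy: event.data.readerId };
    case "book-returned":
      return { ...state, borrowedBy: null };
  }
}

// Start from the latest snapshot (if any) and replay only the events after it.
function loadFromSnapshot(events: StoredEvent[]): BookState {
  const lastSnapshotIndex = events.map((e) => e.type).lastIndexOf("book-snapshot");
  const relevant = lastSnapshotIndex === -1 ? events : events.slice(lastSnapshotIndex);
  return relevant.reduce(apply, initialState);
}
```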

Aggregates Are Behavioral Units, Not Data Structures

Finally, it is important to remember that aggregates are not just containers of data. They are behavioral units — responsible for interpreting commands and deciding what happens next. Their value lies not in the state they hold, but in the rules they enforce.

This mindset helps avoid treating aggregates like database tables. Instead, think of them as domain guardians: they know the rules, they track what has happened, and they emit the facts that describe new outcomes.

In EventSourcingDB, this means that every event written to a stream reflects a decision made by the corresponding aggregate. If your aggregates are well-designed, the event log becomes a reliable source of business history — not just system activity.

When Aggregates Become a Limiting Factor

While aggregates are a powerful tool for modeling consistency boundaries, they are not always the best fit – especially when consistency rules:

  • Depend on multiple subjects,
  • Change based on event content, or
  • Evolve over time.

In such cases, it may be beneficial to shift from static structure to dynamic conditions. EventSourcingDB supports this through Dynamic Consistency Boundaries, which allow you to define consistency declaratively at runtime using EventQL and preconditions like isEventQlQueryTrue.

This approach complements traditional aggregates and provides an additional tool for modeling complex or evolving consistency requirements.
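As a rough sketch only: a dynamic consistency boundary is expressed by attaching an isEventQlQueryTrue precondition to a write. The payload layout below is an assumption, and the query string is a placeholder rather than verified EventQL syntax; see the EventQL and preconditions documentation for the exact form.

```typescript
// Hypothetical shape of an EventQL-based precondition; the payload layout is an
// assumption, and the query string is a placeholder, not verified EventQL syntax.
interface EventQlPrecondition {
  type: "isEventQlQueryTrue";
  payload: { query: string };
}

// The write is only accepted if the query evaluates to true at write time,
// regardless of how many subjects the rule touches.
const dynamicBoundary: EventQlPrecondition = {
  type: "isEventQlQueryTrue",
  payload: {
    query: "/* an EventQL query expressing the cross-subject consistency rule */",
  },
};

// dynamicBoundary would then be passed along with the events to write, in the
// same way as the subject-based preconditions shown earlier.
```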