
Building Event Handlers

This guide explains how to build event handlers in systems that use EventSourcingDB. It focuses on how applications can observe events, react to them reliably, and maintain processing state across interruptions. EventSourcingDB itself does not include a handler mechanism — it simply provides an API to observe event streams. It is the responsibility of the application to define, execute, and coordinate event handling logic.

Event handlers are essential in event-sourced architectures. They consume events and turn them into side effects: updating read models, invoking APIs, sending notifications, or emitting further events. While EventSourcingDB guarantees consistency for writing and storing events, the handling of those events is delegated to external clients.

Event Observation Instead of Subscriptions

EventSourcingDB does not use a push-based subscription model. Instead, it offers a pull-based interface via the /api/v1/observe-events endpoint. Clients connect to this endpoint and receive both historical and future events for a given subject or subject hierarchy. The connection remains open, and new events are streamed to the client as they occur.

This model gives clients full control over event handling:

  • They decide what to observe
  • They determine how and when to reconnect
  • They manage offsets, delivery guarantees, and retries

This flexibility allows clients to implement exactly the reliability and performance behavior they need — without being constrained by server-side assumptions.
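A client therefore reads the open connection line by line and parses each line as it arrives. The following sketch shows the shape of that loop in Python; the exact message envelope (here assumed to be {"type": ..., "payload": ...}) is an illustrative assumption — consult the API reference for the precise schema returned by /api/v1/observe-events.

```python
import json

def parse_observe_stream(lines):
    """Parse newline-delimited JSON as streamed by the observe endpoint.

    The envelope shape handled here is an assumption for illustration;
    the real response may also carry other message types (such as
    heartbeats), which this sketch simply skips.
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue
        message = json.loads(line)
        if message.get("type") == "event":
            yield message["payload"]

# Simulated response body; a real client would read these lines from
# the long-lived HTTP connection to /api/v1/observe-events.
body = (
    '{"type": "event", "payload": {"id": "1", "subject": "/books/42"}}\n'
    '{"type": "event", "payload": {"id": "2", "subject": "/books/42"}}\n'
)
events = list(parse_observe_stream(body.splitlines()))
print([e["id"] for e in events])  # prints ['1', '2']
```

Because the loop is a generator, the handler processes events as they arrive rather than buffering the whole stream.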

Remembering the Last Processed Event

To resume processing after a network failure, restart, or deployment, a handler must know which events have already been processed. This is done by remembering the ID of the last successfully handled event. EventSourcingDB uses globally ordered, gap-free numeric event IDs, which makes resumption straightforward.

Clients can reconnect to the observe endpoint and specify a lowerBound. This tells the server to resume event delivery starting from that exact point. Alternatively, clients can start from the beginning of the stream — for example, when rebuilding a projection from scratch.

This mechanism gives clients precise control over delivery, replay, and failure recovery. It also enables different handlers to track their own positions independently.
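A minimal way to track that position is a small checkpoint store that durably records the last handled event ID. The sketch below uses a file with an atomic rename; any durable store works. Whether lowerBound is inclusive or exclusive of the given ID is not specified here — this example assumes it is inclusive, so it resumes at last ID + 1 to avoid reprocessing.

```python
import os
import tempfile

class Checkpoint:
    """Persist the last successfully handled event ID so a handler can
    resume after a restart. File-based storage is just one option."""

    def __init__(self, path):
        self.path = path

    def load(self):
        if not os.path.exists(self.path):
            return None  # no checkpoint yet: start from the beginning
        with open(self.path) as f:
            return int(f.read().strip())

    def save(self, event_id):
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            f.write(str(event_id))
        os.replace(tmp, self.path)  # atomic replace, no torn writes

cp = Checkpoint(os.path.join(tempfile.gettempdir(), "handler.checkpoint"))
cp.save(41)

# Assuming an inclusive lowerBound: resume one past the checkpoint.
last = cp.load()
lower_bound = 0 if last is None else last + 1
print(lower_bound)  # prints 42
```

Because each handler owns its own checkpoint file (or row), different handlers can track their positions independently, as described above.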

Idempotency and Deduplication Are Client Responsibilities

EventSourcingDB delivers events reliably and in order. However, it does not enforce any delivery guarantees beyond that. There are no internal acknowledgments or retries. If a client disconnects and reconnects, it may receive duplicate events — especially if it resumes from a previously processed ID.

This means that event handlers must be idempotent: applying the same event twice should not lead to incorrect state. Alternatively, handlers may implement deduplication by keeping track of which event IDs have already been handled.

Whether to aim for at-least-once or at-most-once semantics is entirely up to the client:

  • At-least-once: Resume from a known ID, tolerate duplicates, ensure idempotency.
  • At-most-once: Track progress internally, avoid duplicates, accept the risk of missed events.

The choice depends on the handler's use case, the importance of delivery guarantees, and the nature of side effects. For example, updating a read model may tolerate multiple applications of the same event. Triggering an email or external transaction typically requires stricter guarantees.
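The deduplication variant of an at-least-once consumer can be sketched as follows: redelivered event IDs are remembered and skipped, so the side effect runs at most once per event. The handler class and its counter side effect are hypothetical names for illustration; in production the seen-ID set would be persisted together with the handler's state.

```python
class DedupHandler:
    """At-least-once consumer made safe via deduplication: event IDs
    that were already handled are skipped on redelivery."""

    def __init__(self):
        self.seen = set()  # in production: persist alongside the state
        self.counter = 0   # stand-in for a real side effect

    def handle(self, event):
        if event["id"] in self.seen:
            return  # duplicate delivery after a reconnect: ignore
        self.seen.add(event["id"])
        self.counter += 1

h = DedupHandler()
for event in [{"id": "7"}, {"id": "8"}, {"id": "7"}]:  # "7" redelivered
    h.handle(event)
print(h.counter)  # prints 2
```

For naturally idempotent side effects (such as an upsert into a read model keyed by event ID), the seen-ID set is unnecessary — applying the event twice already yields the same state.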

Partitioning and Parallelization

Event handlers can observe events from individual subjects, from a subject hierarchy, or even from the root subject / with recursion enabled. This allows for flexible partitioning strategies:

  • Observe per aggregate: useful for fine-grained, parallel handling
  • Observe per domain area: useful for broader projections or integrations
  • Observe everything: useful for analytics or global event buses

Each handler can maintain its own offset and process events independently. This allows for high scalability and separation of concerns.

EventSourcingDB guarantees that events are delivered in order within a stream. If strict ordering is required across subjects, handlers must observe from a common ancestor subject. Otherwise, streams can be processed in parallel.
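Independent offsets per partition can be as simple as a map from the observed subject hierarchy to the highest event ID handled there. This is a minimal sketch with hypothetical subject names; each partition would normally be driven by its own connection to the observe endpoint.

```python
# One offset per partition (keyed by the subject hierarchy it observes),
# so partitions progress independently and can be handled in parallel.
offsets = {}

def record_progress(partition, event_id):
    """Advance a partition's offset, never moving it backwards."""
    offsets[partition] = max(offsets.get(partition, -1), event_id)

record_progress("/books", 10)
record_progress("/users", 3)
record_progress("/books", 11)
print(offsets)  # prints {'/books': 11, '/users': 3}
```

A handler that needs cross-subject ordering would instead observe a single common ancestor (in the extreme case the root subject /) and keep one offset for that combined stream.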

Keeping Event Handlers Simple and Reliable

While it is possible to build complex workflows with chaining, branching, or reactive behavior, most event handlers benefit from being simple and deterministic. A good event handler does the following:

  • Connects to one or more subjects
  • Starts from a known event ID or from the beginning
  • Processes each event effectively once (via idempotency or deduplication)
  • Tracks progress persistently
  • Reconnects automatically on failure
  • Writes to logs or metrics for observability

These principles help ensure that handlers are reliable under load, restart safely, and remain understandable over time. By keeping the event handling logic focused and isolated, systems become easier to maintain and evolve.
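The principles above can be combined into one small loop: resume from the checkpoint, process each event, record progress, and reconnect on failure. In this sketch, observe(lower_bound) is a placeholder for a call to the observe endpoint — here it is simulated by a generator that drops the connection once mid-stream to show the resume path.

```python
def run_handler(observe, handle, checkpoint, max_attempts=5):
    """Minimal handler loop: resume from the checkpoint, process events,
    persist progress, and reconnect on failure."""
    attempts = 0
    while attempts < max_attempts:
        try:
            lower = checkpoint.get("last")
            for event in observe(lower):
                handle(event)                     # idempotent side effect
                checkpoint["last"] = event["id"]  # persist in production
            return
        except ConnectionError:
            attempts += 1  # in production: back off before reconnecting

# Fake stream of event IDs 0..4 that fails once after delivering ID 2.
state = {"calls": 0}

def flaky_observe(lower):
    state["calls"] += 1
    start = 0 if lower is None else lower + 1
    for i in range(start, 5):
        if state["calls"] == 1 and i == 3:
            raise ConnectionError("stream dropped")
        yield {"id": i}

handled = []
checkpoint = {"last": None}
run_handler(flaky_observe, handled.append, checkpoint)
print([e["id"] for e in handled])  # prints [0, 1, 2, 3, 4]
```

Note that because progress is checkpointed after each event, the reconnect resumes at ID 3 and no event is handled twice here; with coarser checkpointing, the duplicates would be absorbed by the handler's idempotency instead.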

Event Handlers Are Not Part of the Database

It is important to note that EventSourcingDB does not include or manage any event handling infrastructure. It provides a reliable and efficient way to observe events — nothing more. Applications are responsible for implementing handlers, tracking state, and deciding how to respond to incoming events.

This separation of concerns keeps the database simple, composable, and open to different integration strategies. Whether you use Node.js, Go, Java, Python, or another platform — the logic for handling events lives in your codebase, not in the database.

EventSourcingDB enables these handlers by exposing structured, ordered, and durable events — and by providing a low-latency stream interface to consume them.