Building Event-Driven Applications¶
This guide explains how to design and implement event-driven applications using EventSourcingDB. It covers the core principles of event-driven architecture, the role of the database as a durable event source, and strategies for integrating event handling, processing, and downstream systems. While EventSourcingDB does not define how applications should be structured, it provides a solid foundation for building systems that react to change rather than poll for it.
Event-driven systems are fundamentally different from traditional request-driven architectures. Instead of pushing actions through a chain of services, applications observe events and respond when something of interest happens. This approach decouples producers and consumers, improves scalability, and aligns closely with how real-world processes work.
Events as the Source of Truth¶
At the heart of any event-driven application is the idea that events represent what has happened — not just within a system, but in the domain. These events are the single source of truth. They describe past facts, not instructions or intentions. Applications built around this concept derive state by interpreting these facts in context.
EventSourcingDB acts as the persistent store for these events. It does not trigger reactions or run workflows, but it guarantees that events are stored durably, ordered correctly, and made observable to consumers. This separation of concerns allows applications to define their own logic while relying on the database to provide a reliable event history.
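Deriving state from past facts amounts to folding the event history into a current value. The following is a minimal sketch of that idea; the event shapes and types are hypothetical and not an EventSourcingDB API:

```python
from functools import reduce

# Hypothetical domain events: past facts, not commands or intentions.
events = [
    {"type": "cart-item-added", "data": {"item": "book", "price": 25}},
    {"type": "cart-item-added", "data": {"item": "pen", "price": 3}},
    {"type": "cart-item-removed", "data": {"item": "pen", "price": 3}},
]

def apply(state, event):
    """Interpret one past fact in the context of the current state."""
    if event["type"] == "cart-item-added":
        return {"items": state["items"] + [event["data"]["item"]],
                "total": state["total"] + event["data"]["price"]}
    if event["type"] == "cart-item-removed":
        return {"items": [i for i in state["items"] if i != event["data"]["item"]],
                "total": state["total"] - event["data"]["price"]}
    return state  # facts the fold does not understand leave state unchanged

state = reduce(apply, events, {"items": [], "total": 0})
print(state)  # {'items': ['book'], 'total': 25}
```

Because the fold only reads events, the same history always produces the same state, which is what makes replay and rebuilding possible later on.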
Observing Events to Trigger Reactions¶
Applications interact with EventSourcingDB primarily by observing event streams. The /api/v1/observe-events API allows clients to connect to a stream, receive past events, and stay connected to receive new ones. This enables a reactive model where event handlers or services are triggered as soon as a relevant event occurs.
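The consumer side of this model can be sketched as a loop over a long-lived stream. Here the HTTP connection to `/api/v1/observe-events` is simulated with an in-memory list of lines; the newline-delimited JSON format and the event fields are assumptions for illustration:

```python
import json

def observe_events(stream_lines):
    """Stand-in for a long-lived /api/v1/observe-events connection:
    yields one decoded event per line of the (simulated) response."""
    for line in stream_lines:
        yield json.loads(line)

# Simulated server response: past events first, then new ones as they occur.
simulated_stream = [
    '{"subject": "/orders/42", "type": "order-placed", "id": "0"}',
    '{"subject": "/orders/42", "type": "order-shipped", "id": "1"}',
]

handled = []
for event in observe_events(simulated_stream):
    # React as soon as a relevant event arrives; ignore the rest.
    if event["type"] == "order-placed":
        handled.append(f"confirmation for {event['subject']}")

print(handled)  # ['confirmation for /orders/42']
```

In a real consumer, the generator would wrap a streaming HTTP response and the loop would stay open, reacting to new events as the database delivers them.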
Different components in an application can observe different streams:
- Read model updaters can observe events related to a specific domain
- Integration services can observe cross-cutting events
- Monitoring tools can observe all events to detect anomalies or trends
Each consumer is responsible for tracking its own progress and for resuming observation after a disconnect. This keeps delivery semantics under the application's control and makes consumers resilient to failures and restarts.
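One way to track progress is to persist the id of the last processed event so the consumer can resume after a crash. This is an application-level sketch, not a database feature; the file-based storage and the `last_id` field are assumptions:

```python
import json
import os
import tempfile

class DurableOffset:
    """Persists the last processed event id so a consumer can resume
    observation after a disconnect or restart."""
    def __init__(self, path):
        self.path = path

    def load(self):
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)["last_id"]
        return -1  # nothing processed yet

    def save(self, event_id):
        with open(self.path, "w") as f:
            json.dump({"last_id": event_id}, f)

events = [{"id": 0}, {"id": 1}, {"id": 2}]
offset = DurableOffset(os.path.join(tempfile.mkdtemp(), "offset.json"))

# First run crashes after two events; progress survives on disk.
for event in events[:2]:
    offset.save(event["id"])

# On restart, resume strictly after the last persisted id.
remaining = [e for e in events if e["id"] > offset.load()]
print(remaining)  # [{'id': 2}]
```

Saving the offset after the side effect gives at-least-once processing; saving it before gives at-most-once. Which one is right depends on the handler, which is exactly why the decision belongs in the application.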
Decoupling and Scalability¶
Event-driven applications promote a high degree of decoupling. Producers do not need to know who consumes the events. Consumers do not interfere with producers. This makes it easier to add new features, integrations, or processing logic without modifying existing components.
Scalability emerges naturally from this model. Different consumers can scale independently, process events in parallel, and apply backpressure based on their own capacity. The database remains a central hub — durable, observable, and consistent — but does not coordinate or constrain the flow of logic.
This pattern is especially powerful when combined with horizontal partitioning. Consumers can observe specific subject hierarchies, run in parallel, and manage their own offset state. EventSourcingDB guarantees ordering within streams, so consumers can make safe assumptions even in distributed environments.
Integrating with External Systems¶
One common use case for event-driven architecture is integration with external systems: payment providers, shipping services, messaging platforms, and more. In these cases, events written to EventSourcingDB can trigger outbound actions.
Since EventSourcingDB does not push events or manage delivery guarantees, the application must implement this logic. Observing events and invoking side effects is the responsibility of the consumer. This also includes handling retries, failures, idempotency, and acknowledgments where needed.
Patterns such as outbox processing, circuit breakers, and dead-letter queues can be applied at the application level to increase robustness. EventSourcingDB remains neutral and focused on event storage.
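A consumer-side dispatcher combining these patterns might look as follows. Everything here is an illustrative sketch: the class, the retry policy, and the in-memory bookkeeping (which a real system would persist) are assumptions:

```python
class IdempotentDispatcher:
    """Invokes an outbound side effect at most once per event id,
    retries transient failures, and parks exhausted events."""
    def __init__(self, send, max_retries=3):
        self.send = send
        self.max_retries = max_retries
        self.processed = set()   # in production: durable storage
        self.dead_letter = []    # events that exhausted their retries

    def handle(self, event):
        if event["id"] in self.processed:
            return  # duplicate delivery: the side effect already ran
        for _ in range(self.max_retries):
            try:
                self.send(event)
                self.processed.add(event["id"])
                return
            except ConnectionError:
                continue  # transient failure: retry
        self.dead_letter.append(event)  # give up, keep for inspection

calls = []
attempts = {"n": 0}

def flaky_send(event):
    """Simulated external system that fails on the first call."""
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise ConnectionError("provider unreachable")
    calls.append(event["id"])

dispatcher = IdempotentDispatcher(flaky_send)
dispatcher.handle({"id": "a"})  # fails once, succeeds on retry
dispatcher.handle({"id": "a"})  # duplicate: ignored
print(calls)  # ['a']
```

The same skeleton extends naturally to circuit breakers (stop calling a failing provider for a while) and to draining the dead-letter list through a separate recovery path.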
Building Read Models¶
A typical event-driven system includes read models — specialized data representations that are optimized for querying, visualization, or user interfaces. These read models are built by consuming events and projecting them into a different form.
The separation of read and write models is often referred to as CQRS (Command-Query Responsibility Segregation), but it can be applied without adopting the full CQRS pattern. In EventSourcingDB, read models are updated outside the database, based on observed events. The system does not provide built-in projections or transformation logic.
Applications are free to build their read models in whatever storage or format is appropriate: relational databases, search indexes, key-value stores, or even in-memory caches. The only requirement is that the projection logic is deterministic and replayable. This allows read models to be rebuilt at any time by replaying historical events.
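The deterministic-and-replayable requirement can be made concrete with a small projection. The event types and the dictionary standing in for a read-model store are hypothetical:

```python
def project(events):
    """Deterministic projection: the same events in the same order
    always produce the same read model."""
    read_model = {}  # stand-in for any store keyed by subject
    for event in events:
        if event["type"] == "user-registered":
            read_model[event["subject"]] = {"name": event["data"]["name"],
                                            "logins": 0}
        elif event["type"] == "user-logged-in":
            read_model[event["subject"]]["logins"] += 1
        # unknown event types are skipped, so new types don't break replay
    return read_model

history = [
    {"subject": "/users/1", "type": "user-registered", "data": {"name": "Ada"}},
    {"subject": "/users/1", "type": "user-logged-in", "data": {}},
]

# Rebuilding from scratch by replaying history yields the same model.
assert project(history) == project(history)
print(project(history))  # {'/users/1': {'name': 'Ada', 'logins': 1}}
```

Keeping the projection free of clocks, random values, and external lookups is what makes it safe to drop a read model and rebuild it from the event history at any time.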
Designing for Evolution¶
Event-driven applications must be able to evolve. New features, event types, or consumers should not break existing behavior. To support this, applications should:
- Use versioned event types when making breaking changes
- Design consumers to ignore unknown or irrelevant events
- Ensure projections can be rebuilt from scratch if needed
- Track processing offsets explicitly and persistently
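The first two points can be sketched together: route events through a handler table keyed by versioned type names, and treat a missing handler as "ignore". The `.v1`/`.v2` naming scheme and the event shapes are assumptions, not a convention EventSourcingDB prescribes:

```python
HANDLERS = {
    # A breaking change gets a new version suffix instead of
    # silently changing the meaning of the old type.
    "invoice-created.v1": lambda d: {"amount": d["amount"], "currency": "EUR"},
    "invoice-created.v2": lambda d: {"amount": d["amount"], "currency": d["currency"]},
}

def handle(event):
    handler = HANDLERS.get(event["type"])
    if handler is None:
        return None  # unknown or irrelevant event types are ignored
    return handler(event["data"])

old = handle({"type": "invoice-created.v1", "data": {"amount": 100}})
new = handle({"type": "invoice-created.v2",
              "data": {"amount": 100, "currency": "USD"}})
# A type introduced after this consumer was deployed is simply skipped.
future = handle({"type": "invoice-voided.v1", "data": {}})
print(old, new, future)
```

Old events keep their original handlers forever, so replaying the full history still works after the schema has moved on, and deploying a consumer that predates a new event type degrades gracefully instead of crashing.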
EventSourcingDB supports these patterns by enabling filtering, replay, and deterministic access to the event history. It does not restrict how evolution is managed — this is the responsibility of the application architecture.
Application Logic Outside the Database¶
Unlike some systems that blend storage and execution, EventSourcingDB draws a clear line: it stores and delivers events, but it does not run code, invoke handlers, or maintain workflows. This clarity keeps the system focused, testable, and easy to integrate.
All application logic — whether command handling, validation, business rules, or projections — lives in the application itself. This gives developers full flexibility, allows for transparent testing, and avoids hidden coupling between components.
By combining a minimal, reliable event store with well-structured application logic, teams can build systems that are both robust and adaptable — ready to respond to change, growth, and complexity over time.