Testing Event-Sourced Systems

This guide explains how to test systems that use EventSourcingDB. It covers unit tests for application logic, integration tests involving the database, and practical strategies for managing test data and verifying event-driven behavior. Since EventSourcingDB is focused solely on storing and retrieving events, testing primarily concerns the application layer — and how it interacts with the database.

Event-sourced systems require a shift in mindset when it comes to testing. Instead of verifying the result of a state mutation, the focus is on verifying that the correct events were emitted — or that a given stream of events leads to a specific outcome. This allows for a clear separation between decision logic and persistence.

Unit Testing with Given-When-Then

At the core of an event-sourced system is application logic that processes events and emits new ones. This logic can be tested without involving any infrastructure. The most common pattern is Given-When-Then:

  • Given a history of past events
  • When a command or action is applied
  • Then expect a specific set of resulting events

This approach makes tests readable, focused, and deterministic. It also reflects how event-sourced systems operate in production: decisions are based on history, not on mutable state.

For example, given a stream of events that shows a book was acquired and borrowed, attempting to borrow it again might raise an error — or emit a rejection event. The test would specify the initial events, apply the action, and assert that the result matches the expected domain behavior.
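For instance, such a test could be written with Node's built-in test runner, as in the following sketch. The decideBorrowBook function, the event shapes, and the rejection event are hypothetical application code, used only to illustrate the pattern; no database is involved.

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical domain types and decision function (application code, not EventSourcingDB).
interface DomainEvent {
  type: string;
  data: Record<string, unknown>;
}

// Derives the minimal state needed for the decision from the event history.
const decideBorrowBook = (history: DomainEvent[], memberId: string): DomainEvent[] => {
  const isBorrowed = history.reduce((borrowed, event) => {
    if (event.type === 'book-borrowed') return true;
    if (event.type === 'book-returned') return false;
    return borrowed;
  }, false);

  if (isBorrowed) {
    // Alternatively, this could throw a domain error instead of emitting a rejection event.
    return [{ type: 'book-borrowing-rejected', data: { memberId, reason: 'already borrowed' } }];
  }

  return [{ type: 'book-borrowed', data: { memberId } }];
};

test('rejects borrowing a book that is already borrowed', () => {
  // Given: a history of past events.
  const given: DomainEvent[] = [
    { type: 'book-acquired', data: { title: 'Domain-Driven Design' } },
    { type: 'book-borrowed', data: { memberId: 'member-1' } },
  ];

  // When: the command is applied.
  const result = decideBorrowBook(given, 'member-2');

  // Then: the expected events are emitted.
  assert.deepEqual(result, [
    { type: 'book-borrowing-rejected', data: { memberId: 'member-2', reason: 'already borrowed' } },
  ]);
});
```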

Because EventSourcingDB does not evaluate business logic, these tests are entirely independent of the database. They focus on the correctness of aggregates, command handlers, or domain services, and are typically fast and isolated.

Integration Testing with EventSourcingDB

While unit tests validate decision logic, integration tests verify the interaction between the application and the database. This includes:

  • Writing events to EventSourcingDB
  • Reading them back to verify persistence
  • Observing events through the streaming API
  • Asserting on ordering, structure, and schema compliance

Thanks to its short startup and shutdown times, EventSourcingDB is well suited for automated tests. It can be started and stopped before and after each test, ensuring isolation and reproducibility.

There are two typical options:

  • Using Docker: Launch EventSourcingDB in a container for each test or test suite, then remove it afterwards.
  • Using the pre-built binary: Start the database as a subprocess with the --data-directory-temporary flag, which creates an isolated in-memory or temporary file store.

Both approaches allow for a clean database state in each test, without leftover data or side effects. Since EventSourcingDB requires no external dependencies or brokers, it integrates well into test environments of any scale.
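As a sketch of the second option, a test setup might spawn the pre-built binary as a subprocess and wait until it accepts requests. Apart from --data-directory-temporary, the binary invocation, the other flags, the port, and the ping endpoint used below are assumptions and need to be checked against the actual CLI and API documentation.

```typescript
import { spawn, type ChildProcess } from 'node:child_process';
import { setTimeout as delay } from 'node:timers/promises';

// Assumed CLI invocation: everything except --data-directory-temporary
// must be verified against the actual EventSourcingDB CLI.
const startEventSourcingDb = async (): Promise<ChildProcess> => {
  const child = spawn('eventsourcingdb', [
    'run',
    '--data-directory-temporary', // isolated, throwaway storage for this test run
    '--api-token', 'secret',      // assumed flag for configuring the API token
    '--http-enabled',             // assumed flag: serve plain HTTP for local tests
    '--https-enabled=false',      // assumed flag
  ], { stdio: 'inherit' });

  // Poll an assumed ping endpoint (and assumed default port) until the server is ready.
  for (let attempt = 0; attempt < 50; attempt++) {
    try {
      const response = await fetch('http://localhost:3000/api/v1/ping');
      if (response.ok) return child;
    } catch {
      // Server not up yet; retry.
    }
    await delay(100);
  }

  child.kill();
  throw new Error('EventSourcingDB did not become ready in time.');
};

// In a test suite, call startEventSourcingDb() in a before hook and kill() the
// returned process in an after hook.
```

The Docker-based option follows the same shape: start the container in a before hook, wait for readiness, and remove it in an after hook, for example using a container-management library of your choice.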

Tests can interact with the API via HTTP or using one of the official SDKs. They can verify that written events are persisted correctly, that schema validation is enforced, and that event observation works as expected — including reconnection and continuation behavior.
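For example, a test talking to the HTTP API directly might look roughly like the following sketch, assuming an instance is already running locally. The endpoint paths, request shapes, CloudEvents-style fields, and bearer-token header are assumptions to be verified against the API reference; one of the official SDKs would replace the raw fetch calls.

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Assumed base URL, API token, and endpoint paths.
const baseUrl = 'http://localhost:3000';
const headers = {
  authorization: 'Bearer secret',
  'content-type': 'application/json',
};

test('persists written events and reads them back', async () => {
  const subject = '/books/42';

  // Write a single event (assumed request shape with CloudEvents-style fields).
  const writeResponse = await fetch(`${baseUrl}/api/v1/write-events`, {
    method: 'POST',
    headers,
    body: JSON.stringify({
      events: [{
        source: 'https://library.example',
        subject,
        type: 'io.example.library.book-acquired',
        data: { title: 'Domain-Driven Design' },
      }],
    }),
  });
  assert.equal(writeResponse.ok, true);

  // Read the events back for the same subject (assumed endpoint and request shape).
  const readResponse = await fetch(`${baseUrl}/api/v1/read-events`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ subject, options: { recursive: false } }),
  });
  assert.equal(readResponse.ok, true);

  // The response is expected to contain the written event; the exact wire format
  // (e.g. NDJSON lines) is an assumption.
  const body = await readResponse.text();
  assert.match(body, /book-acquired/);
});
```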

Verifying Replays and Projections

One of the strengths of event sourcing is the ability to replay events. Tests can take advantage of this by verifying that a projection or read model can be rebuilt from scratch.

This typically involves the following steps:

  1. Write a series of domain events
  2. Run the projection logic against them
  3. Assert that the resulting state matches expectations
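Such a test can be written directly against the projection function; the event history can be a hard-coded list in a unit test or be read back from EventSourcingDB in an integration test. The projectBook function and its read model below are hypothetical.

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';

interface DomainEvent {
  type: string;
  data: Record<string, unknown>;
}

interface BookReadModel {
  title?: string;
  isBorrowed: boolean;
}

// Hypothetical projection: folds the event history into a read model from scratch.
const projectBook = (events: DomainEvent[]): BookReadModel =>
  events.reduce<BookReadModel>((state, event) => {
    switch (event.type) {
      case 'book-acquired':
        return { ...state, title: event.data.title as string };
      case 'book-borrowed':
        return { ...state, isBorrowed: true };
      case 'book-returned':
        return { ...state, isBorrowed: false };
      default:
        return state; // Unknown event types are ignored, which keeps replays forward-compatible.
    }
  }, { isBorrowed: false });

test('rebuilds the read model from the full event history', () => {
  // 1. A series of domain events (could also be read from EventSourcingDB).
  const history: DomainEvent[] = [
    { type: 'book-acquired', data: { title: 'Domain-Driven Design' } },
    { type: 'book-borrowed', data: { memberId: 'member-1' } },
    { type: 'book-returned', data: { memberId: 'member-1' } },
  ];

  // 2. Run the projection logic against them.
  const state = projectBook(history);

  // 3. Assert that the resulting state matches expectations.
  assert.deepEqual(state, { title: 'Domain-Driven Design', isBorrowed: false });
});
```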

These tests are useful for ensuring that projections remain consistent with event history, and that they continue to work correctly as new event types or versions are introduced.

Replay-based tests can also be used to simulate system upgrades, perform migrations, or validate backward compatibility.

Ensuring Idempotency and Safe Reprocessing

Event handlers that consume events from EventSourcingDB should be idempotent — meaning they can safely process the same event more than once. Integration tests can verify this behavior by applying the same event twice and asserting that the side effect is not duplicated.
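A minimal sketch of such a test, assuming a hypothetical notification handler that deduplicates by event ID:

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';

interface StoredEvent {
  id: string;
  type: string;
  data: Record<string, unknown>;
}

// Hypothetical handler: remembers which event IDs it has processed and
// triggers the side effect at most once per event.
const createNotificationHandler = (sendEmail: (to: string) => void) => {
  const processedIds = new Set<string>();

  return (event: StoredEvent): void => {
    if (processedIds.has(event.id)) return; // Deduplicate by event ID.
    processedIds.add(event.id);

    if (event.type === 'book-overdue') {
      sendEmail(event.data.memberId as string);
    }
  };
};

test('processing the same event twice does not duplicate the side effect', () => {
  const sentEmails: string[] = [];
  const handle = createNotificationHandler(to => sentEmails.push(to));

  const event: StoredEvent = {
    id: 'event-1',
    type: 'book-overdue',
    data: { memberId: 'member-1' },
  };

  handle(event);
  handle(event); // Re-delivery of the same event, e.g. after a reconnect.

  assert.equal(sentEmails.length, 1);
});
```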

Depending on the system's architecture, tests may also verify:

  • Deduplication based on event ID
  • Persistence of last-processed event ID
  • Correct resumption after a simulated crash or disconnect

These tests help ensure that event handling remains reliable and consistent under real-world conditions.

Managing Test Data

Because events are immutable and event stores accumulate history, test data management is especially important. Each test should operate on a clean database instance or use unique subject paths to avoid collisions.

Strategies include:

  • Generating random subject identifiers per test
  • Using isolated event types for test data
  • Cleaning up after tests by deleting the temporary data directory
  • Relying on process isolation to ensure clean state
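The first of these strategies can be as simple as a small helper that derives a fresh subject path for every test:

```typescript
import { randomUUID } from 'node:crypto';

// Each test works under its own subject path, so concurrent or repeated runs
// against the same database instance cannot collide.
const uniqueSubject = (prefix: string): string => `/${prefix}/${randomUUID()}`;

// Example: '/books/4f8b9c2e-...' — every event written in this test stays under this path.
const subject = uniqueSubject('books');
```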

EventSourcingDB does not require complex schema setup, migrations, or seed data — which simplifies test preparation. Test cases can define all necessary context through events alone.

Testing in Event-Sourced Systems Is Self-Describing

Unlike traditional systems where state must be inspected or reverse-engineered, event-sourced systems offer transparency by design. Every action is recorded, and every outcome can be traced to its cause. This makes tests easier to write, reason about, and maintain.

Instead of verifying implementation details, tests focus on behavior and outcomes. The event log becomes both the test input and the assertion target. Combined with a reliable store like EventSourcingDB, this leads to systems that are testable by default, rather than only through extra effort.