One Database to Rule Them All

"Should we use one EventSourcingDB for all our services, or should each service have its own?" This question comes up in almost every conversation about service-based architectures. Teams want to keep things simple. One database sounds easier to manage than five. The appeal is understandable.

But here's the thing: this isn't a question specific to EventSourcingDB. It's one of the most debated topics in software architecture, regardless of database technology. Whether you're using PostgreSQL, MongoDB, or an event store, the fundamental question remains the same: should multiple services share a database, or should each service have its own?

If you've read Tolkien, you know that "One Ring to Rule Them All" didn't end well. The same is true for shared databases. What looks like elegant simplicity becomes a source of coupling, conflict, and constraint. Let's explore why.

Why Shared Databases Look Attractive

The appeal of a shared database is easy to understand. One database means one place to manage. One backup strategy. One set of credentials. One monitoring dashboard. One operational burden. For small teams with limited resources, this sounds like the pragmatic choice.

And in the beginning, it works. Your first two services read from and write to the same database. They share data effortlessly. Need customer information in the order service? Just query the customers table. No need for APIs, no need for synchronization, no need for data duplication. Fast. Convenient. Simple.

There's also the data consistency argument. When everything lives in one database, you can use transactions to ensure atomicity. Update the order and the inventory in one transaction, and either both succeed or both fail. No distributed transactions, no eventual consistency, no complexity.

The problems don't show up on day one. They show up six months later, when your system has grown, when multiple teams are working on different services, and when everyone is stepping on each other's toes.

The Problems with Shared Databases

Schema changes become coordination nightmares. When multiple services depend on the same tables, every change requires alignment across teams. Want to rename a column? Check with the order team, the billing team, and the analytics team first. Want to add a required field? Make sure every service handles it correctly before deployment. Want to remove deprecated data? Good luck figuring out who still depends on it.

What should be independent development becomes a choreographed dance. Releases slow down. Teams wait for each other. The promise of autonomous services dissolves into shared dependencies and synchronized deployments. A "simple" schema migration becomes a multi-team project that takes weeks to coordinate.

The database becomes a hidden API without a specification. Services communicate not through well-defined interfaces, but through shared tables. Service A writes a row. Service B reads it. There's no contract. No versioning. No clear ownership. No documentation of what consumers expect.

It looks like the services are decoupled because they don't call each other directly. But they're deeply coupled through the data structures they share. Change the format of a column, and consumers break silently. Add a new status value, and readers don't understand it. Remove a column you think is unused, and something fails in production. The coupling is invisible until something goes wrong.

Business rules get bypassed or duplicated. When Service A can write directly to tables that conceptually belong to Service B, it can easily violate invariants that Service B would have enforced. Or both services implement the same validation logic, slightly differently, leading to subtle inconsistencies.

The database can't enforce business rules. It stores data. It doesn't understand that orders can only be shipped after payment is confirmed, or that customer addresses must be validated before use, or that discount codes can only be applied once per customer. Only the owning service can ensure that business rules are followed. When multiple services write to the same tables, those rules get scattered, duplicated, or ignored entirely.
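To make this concrete, here is a minimal sketch in Python of the "ships only after payment" rule living inside the owning service. All names are hypothetical; the point is that the invariant sits in code that every write must pass through, which a shared table cannot guarantee.

```python
from dataclasses import dataclass, field


@dataclass
class Order:
    """Hypothetical order aggregate: the owning service is the only writer."""
    order_id: str
    paid: bool = False
    shipped: bool = False
    events: list = field(default_factory=list)

    def confirm_payment(self) -> None:
        self.paid = True
        self.events.append({"type": "PaymentReceived", "orderId": self.order_id})

    def ship(self) -> None:
        # The invariant lives here, not in the database: a shared table
        # could not stop another service from flipping `shipped` directly.
        if not self.paid:
            raise ValueError("orders can only be shipped after payment is confirmed")
        self.shipped = True
        self.events.append({"type": "ShipmentDispatched", "orderId": self.order_id})


order = Order(order_id="o-1")
try:
    order.ship()  # rejected: payment not yet confirmed
except ValueError:
    pass
order.confirm_payment()
order.ship()  # now allowed
```

A service that bypasses this class and writes rows directly bypasses the rule with it. That is exactly the backdoor that database-per-service closes.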

Ownership becomes unclear. When multiple services write to the same tables, who is the authority? When data is inconsistent, who is responsible for fixing it? When a bug causes data corruption, who investigates? When performance degrades, who optimizes?

Clear ownership is the foundation of service autonomy. A shared database erodes that foundation. Boundaries blur. Responsibilities overlap. What should be independent services become entangled contributors to a shared mess. Nobody feels responsible for the whole, and problems fall through the cracks.

Runtime interference creates unexpected failures. Services that share a database share its resources. A long-running analytical query from the reporting service can lock tables that the order service needs. A sudden spike in traffic to one service can exhaust connection pools for all others. A poorly optimized query can bring down the entire system.

You wanted isolation. You got a single point of contention. The Shared Database anti-pattern, as microservices.io calls it, creates exactly these problems.

One Database per Service

The solution is conceptually simple: each service gets its own database. A service owns its data. It stores that data however it sees fit. And it exposes that data to other services only through well-defined APIs.

This principle has a name: the database is an implementation detail. No other service should know or care what database technology you use, what your schema looks like, or how you structure your data internally. If Service B needs data from Service A, it asks Service A through an API. The database remains invisible to the outside world.

This approach has profound benefits:

  • Business rules are enforced in one place. The service that owns the data is the only one that can modify it. Every write goes through its logic, its validation, its constraints. No backdoor access. No circumvention. The rules are implemented once and applied consistently.
  • Schema changes are local. When you change your database schema, you change your service. Other services don't notice because they never saw the schema in the first place. You can refactor freely, optimize for new access patterns, or restructure entirely.
  • Technology choices are free. Each service can use the storage technology that fits its needs. The order service can use a relational database. The search service can use Elasticsearch. The cache can use Redis. No compromises, no lowest common denominator.
  • Teams move independently. No coordination required for internal changes. Each team owns their data and their release schedule. Autonomy becomes real, not just theoretical.
  • Failures are isolated. When one service's database has problems, other services continue to operate. You can scale, maintain, and troubleshoot each database independently.

The cost is that you need mechanisms for services to communicate and share information when necessary. APIs. Messages. Events. These require more upfront design than just reading from a shared table. But they make the coupling explicit, versioned, and manageable. You know exactly what other services depend on, and you can evolve those contracts deliberately.

Yes, you lose cross-service transactions. But in practice, most systems that think they need distributed transactions can be redesigned to work with eventual consistency and compensating actions. The flexibility gained is worth the trade-off. This is the Database per Service pattern, and it's fundamental to building autonomous services.
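A compensating action can be sketched in a few lines. This is an illustrative toy, not a real integration: the services are simulated as in-process objects, and the names are made up. What matters is the shape: do the local step, call the other service, and undo the local step if the call fails, instead of spanning both in one transaction.

```python
# A minimal sketch of a compensating action replacing a cross-service
# transaction. All names are hypothetical; real services would talk via
# APIs or messages rather than in-process calls.

class Inventory:
    def __init__(self, stock: int):
        self.stock = stock
        self.reserved = 0

    def reserve(self, qty: int) -> None:
        self.stock -= qty
        self.reserved += qty

    def release(self, qty: int) -> None:
        # Compensating action: undo the reservation instead of rolling
        # back a distributed transaction.
        self.stock += qty
        self.reserved -= qty


def place_order(inventory: Inventory, qty: int, charge) -> bool:
    inventory.reserve(qty)
    if not charge():  # the payment service, reached through its API
        inventory.release(qty)  # compensate; the system converges again
        return False
    return True


inventory = Inventory(stock=10)
ok = place_order(inventory, 3, charge=lambda: False)  # payment fails
# inventory.stock is back to 10: the reservation was compensated
```

Between the reservation and the compensation the system is briefly inconsistent. That window is the price of eventual consistency, and for most order flows it is entirely acceptable.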

What This Means for EventSourcingDB

Everything we've discussed applies to event stores just as much as to relational databases. Perhaps even more so, because the temptation to share is even stronger.

When you use EventSourcingDB, you store events that represent what happened in your domain. These events are rich with business meaning. They capture decisions, state transitions, and domain-specific facts. OrderPlaced, PaymentReceived, ShipmentDispatched. It's tempting to think: if all services could just observe these events directly, we'd have a beautifully integrated system with no need for additional communication infrastructure.

This is a trap.

The events in your EventSourcingDB are not just data. They are domain knowledge. They encode the internal workings of your service, the structure of your aggregates, the granularity of your state changes. They reflect how you've chosen to model your domain, which might change as your understanding evolves.

When another service observes your events directly, it couples itself to all of these internal details. Every field name. Every event type. Every structural decision. Every quirk of your domain model.

Consider what happens when you need to refactor. You want to split one event into two for better granularity. You want to rename a field to match evolved domain language. You want to restructure your aggregate boundaries. If other services are observing your events directly, every internal change becomes a breaking change. You've traded one form of coupling (shared tables) for another (shared event streams). The problems are the same, just dressed in different clothes.

The EventSourcingDB is an implementation detail of your service. It should not leak to the outside world. Not through direct database access. Not through the observe endpoint. Allowing external services to observe your internal events couples them to the shape of those events, to your internal domain model, to decisions that should be yours alone to change.

This doesn't mean the observe endpoint is bad. It's essential. But it's meant for components within your service, not for external consumers.

Domain Events vs. Integration Events

This brings us to a crucial distinction: not all events are the same.

Domain events are the events you store in your EventSourcingDB. They capture what happened within your service's bounded context. They're optimized for your internal needs: rebuilding aggregate state, updating read models, triggering internal workflows. They might be fine-grained, technically detailed, or structured in ways that only make sense within your service.

Your domain events might include OrderLineItemAdded, OrderLineItemRemoved, OrderLineItemQuantityAdjusted, and a dozen other granular facts that help you reconstruct the complete state of an order. This level of detail is valuable internally. It lets you understand exactly how an order evolved over time. It lets you build projections that answer any question about order history. It lets you replay and debug.
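Rebuilding state from such fine-grained events is just a fold. Here is a sketch using the event names above; the payload fields are assumptions for illustration.

```python
# Sketch: rebuilding an order's line items by folding its fine-grained
# domain events. Event names follow the examples in the text; the exact
# payloads are assumptions for illustration.

def apply(state: dict, event: dict) -> dict:
    items = dict(state)
    kind, sku = event["type"], event["sku"]
    if kind == "OrderLineItemAdded":
        items[sku] = event["quantity"]
    elif kind == "OrderLineItemRemoved":
        items.pop(sku, None)
    elif kind == "OrderLineItemQuantityAdjusted":
        items[sku] = event["quantity"]
    return items


domain_events = [
    {"type": "OrderLineItemAdded", "sku": "book", "quantity": 1},
    {"type": "OrderLineItemAdded", "sku": "pen", "quantity": 2},
    {"type": "OrderLineItemQuantityAdjusted", "sku": "pen", "quantity": 5},
    {"type": "OrderLineItemRemoved", "sku": "book"},
]

state: dict = {}
for event in domain_events:
    state = apply(state, event)

# state == {"pen": 5}
```

Every intermediate step is preserved in the events, so the same stream can answer "what does the order contain now?" and "how did it get that way?".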

But other services don't need this granularity. They don't care about individual line item adjustments. They care that an order was placed and is ready to be processed.

Integration events are what you publish to the outside world. They represent facts that other services need to know about, but in a form designed for external consumption. Integration events have stable structures, explicit versioning, and clear contracts. They're your public announcements, carefully crafted for your audience.

An integration event might be OrderConfirmed, summarizing everything other services need to know: the order ID, the customer, the total amount, the shipping address. It's a deliberate, curated view of what happened, designed for consumers who don't share your internal context.
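A sketch of such a curated event, again in Python. The field names and the version envelope are assumptions; the point is that the public shape is chosen deliberately and stays stable even when the internal domain events are refactored.

```python
# Hypothetical translation from internal order state to the public
# OrderConfirmed integration event. Only curated fields cross the
# service boundary; internal details are deliberately omitted.

def to_order_confirmed(order: dict) -> dict:
    return {
        "type": "OrderConfirmed",
        "version": 1,  # explicit versioning: part of the public contract
        "orderId": order["id"],
        "customerId": order["customer_id"],
        "totalAmount": sum(
            item["quantity"] * item["unit_price"] for item in order["items"]
        ),
        "shippingAddress": order["shipping_address"],
        # Not exposed: line item history, internal status codes, or
        # anything else that belongs to the service's private model.
    }


internal_order = {
    "id": "o-42",
    "customer_id": "c-7",
    "items": [{"quantity": 2, "unit_price": 10.0}],
    "shipping_address": "221B Baker Street",
}
event = to_order_confirmed(internal_order)
# event["totalAmount"] == 20.0
```

Because consumers only ever see this translated shape, you can split, rename, or restructure the domain events behind it without breaking anyone.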

The EventSourcingDB serves the inside of your service. It's the foundation for your write model and your read model. Your projections observe it. Your event handlers react to it. Components within your service use the observe endpoint to stay in sync. This is exactly what EventSourcingDB is designed for.

But when you need to communicate with other services, you don't expose your EventSourcingDB. Instead, you publish integration events through an explicit channel: a message broker, an API, a dedicated event bus. You decide what to publish, when to publish it, and in what format. You control the contract. You can change your internal domain events freely, as long as the integration events you publish remain stable.

This separation gives you the best of both worlds. Internally, you have the full power of event sourcing: complete history, replay capability, flexible projections. Externally, you have clean contracts that you can version and evolve independently of your internal implementation. The Building Event-Driven Applications guide explores this approach in depth.

Your domain events are your private journal. Your integration events are your public announcements. Keep them separate.

One EventSourcingDB per Service

So when the question comes up ("Can we just use one EventSourcingDB for everything?"), you now understand why the answer is no.

A shared EventSourcingDB reintroduces all the problems we discussed for shared databases. Services would observe each other's domain events directly. Internal event changes would affect multiple consumers. The boundaries between services would blur. Domain knowledge would leak across service boundaries. Business rules could be bypassed. Ownership would become unclear.

Each service gets its own EventSourcingDB. Each service owns its domain events. Each service uses those events internally for its write model, its read models, and its projections. And each service publishes integration events explicitly when other services need to know that something happened.

This is not a limitation. It's what makes service-based architectures work. It's what keeps teams autonomous and systems evolvable. The EventSourcingDB is a powerful foundation for each individual service. But like any database, it belongs to that service alone.

Where to Go From Here

If you're building a service-based architecture with EventSourcingDB, start by drawing clear boundaries. Identify which service owns which domain. Design the integration events that flow between services. Keep each EventSourcingDB invisible to the outside world.

If you're new to EventSourcingDB, the Getting Started guide will help you set up your first event store in minutes.

One database to rule them all? No. One database per service, with integration events to connect them. That's how you build systems that last.