Local Development Setup: EventSourcingDB in 5 Minutes

Want to try EventSourcingDB right now? Here's all you need:

docker run -it -p 3000:3000 \
  thenativeweb/eventsourcingdb run \
  --api-token=secret \
  --data-directory-temporary \
  --http-enabled \
  --https-enabled=false \
  --with-ui

Done! EventSourcingDB is running at http://localhost:3000.

For everyone who wants to know what's actually happening and how to write their first event, read on.

Why Try EventSourcingDB?

Curious about Event Sourcing? Maybe you've heard about the benefits – complete audit trails, time travel debugging, or deriving multiple read models from a single source of truth – but setting up an event store has always seemed like a lot of work.

That's where EventSourcingDB comes in. It's a purpose-built database designed specifically for event sourcing. No adapting general-purpose databases, no complex configurations, no hours of setup. Just one Docker command and you're running.

In this post, you'll start EventSourcingDB locally in seconds, understand what each configuration flag does, write a simple Node.js program that stores and retrieves events, and experience event sourcing hands-on. Let's get started.

Prerequisites

Before we begin, make sure you have Docker installed (any recent version will do) and a current version of Node.js, such as the latest LTS. At the time of writing, that's Node.js 24. Optionally, having a basic understanding of event sourcing helps, though it's not required. You can check out the official docs if you're new to the concept. Oh, and set aside 5 minutes of your time.

Understanding the Docker Command

Let's break down that one-liner from the beginning and understand what each part does:

docker run -it -p 3000:3000 \
  thenativeweb/eventsourcingdb run \
  --api-token=secret \
  --data-directory-temporary \
  --http-enabled \
  --https-enabled=false \
  --with-ui

The command starts with standard Docker options. docker run creates and starts a new container, while -it runs it in interactive mode with a terminal attached so you can stop it with Ctrl+C. The -p 3000:3000 flag maps port 3000 on your host machine to port 3000 inside the container.

Next comes thenativeweb/eventsourcingdb, which is the official Docker image. Since we're not specifying a version tag, it pulls latest. For production use, you should always pin a specific version like :1.0.0, but for local experimentation, latest is fine.

The run command tells EventSourcingDB to start in server mode. Now for the interesting part – the configuration flags.

The --api-token=secret flag sets the Bearer token for API authentication. Every request to EventSourcingDB needs to include this token in the Authorization header. For local development, secret is fine and easy to remember. For production, you'd use a strong, randomly generated token.

The --data-directory-temporary flag creates a temporary directory for data storage. When you shut down the database, this directory is automatically deleted. This makes it perfect for testing and evaluation, local development, and experimenting without worrying about cleanup. Everything disappears when you stop the container, with no leftover files and no manual cleanup needed.

If you want your events to survive container restarts, use --data-directory with a permanent path and mount a volume instead:

docker run -it -p 3000:3000 \
  -v $(pwd)/data:/var/lib/esdb \
  thenativeweb/eventsourcingdb run \
  --api-token=secret \
  --data-directory=/var/lib/esdb \
  --http-enabled \
  --https-enabled=false \
  --with-ui

The --http-enabled flag enables plain HTTP access, which simplifies local development since you don't need to deal with certificates. Note that you should only use this for local development – production environments should always use HTTPS.

Similarly, --https-enabled=false explicitly disables HTTPS to keep the local setup simple. HTTPS is enabled by default in EventSourcingDB, so this flag turns it off for development purposes. Again, in production you should always use HTTPS, which is the default.

Finally, --with-ui starts the management UI alongside the database. You can access it at http://localhost:3000 and use it to browse event streams visually, inspect individual events, monitor database health, and explore your data without writing code.

Verify It's Running

Before we write any code, let's make sure everything works. Open a terminal and run:

curl -i http://localhost:3000/api/v1/ping \
  -H "authorization: Bearer secret"

You should see an HTTP 200 response whose body looks like this:

{
  "specversion": "1.0",
  "id": "0",
  "time": "2025-11-03T14:30:00.000Z",
  "source": "https://www.eventsourcingdb.io",
  "subject": "/api/v1/ping",
  "type": "io.eventsourcingdb.api.ping-received",
  "datacontenttype": "application/json",
  "data": {
    "message": "Oh my God, it's full of stars."
  }
}

If you see this, congratulations! EventSourcingDB is up and running. You can also open http://localhost:3000 in your browser to see the EventSourcingDB management dashboard. Right now it's empty, but soon we'll add some events and see them appear here.

Set Up the Node.js Client

Now let's write some code. We'll create a simple Node.js program that connects to EventSourcingDB, writes an event, and reads it back. Start by creating a new project directory:

mkdir eventsourcingdb-demo
cd eventsourcingdb-demo

Next, create a file called package.json with the following content:

{
  "name": "eventsourcingdb-demo",
  "version": "1.0.0",
  "type": "module"
}

Then install the EventSourcingDB client:

npm install eventsourcingdb

The "type": "module" setting is important because it enables ES modules in Node.js, allowing us to use modern import syntax and top-level await.
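You can confirm the module setup works before touching the database. This sketch uses only Node's built-in timers module, nothing EventSourcingDB-specific:

```javascript
// verify.js – confirms ES modules and top-level await are enabled
import { setTimeout as delay } from 'node:timers/promises';

// Without "type": "module", both the import above and this
// top-level await would be syntax errors in a plain .js file.
await delay(10);
console.log('ES modules and top-level await are working');
```

If this runs without a SyntaxError, your project is set up correctly.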

Write Your First Event

Create a file called index.js with the following code:

import { Client } from 'eventsourcingdb';

const url = new URL('http://localhost:3000');
const apiToken = 'secret';

const client = new Client(url, apiToken);

// Connecting
console.log('🔌 Testing connection...');
await client.ping();
console.log('✅ EventSourcingDB is reachable!');

// Writing events
console.log('✍️  Writing first event...');
await client.writeEvents([{
  source: 'https://library.eventsourcingdb.io',
  subject: '/books/42',
  type: 'io.eventsourcingdb.library.book-acquired',
  data: {
    title: '2001 – A Space Odyssey',
    author: 'Arthur C. Clarke',
    isbn: '978-0756906788'
  }
}]);
console.log('✅ Event written!');

// Reading events
console.log('📖 Reading events from /books/42...');
for await (const event of client.readEvents('/books/42', {
  recursive: false
})) {
  console.log(`📦 Event: ${event.type} |`, event.data);
}

Now run it with node index.js. You should see output like this:

🔌 Testing connection...
✅ EventSourcingDB is reachable!

✍️  Writing first event...
✅ Event written!

📖 Reading events from /books/42...
📦 Event: io.eventsourcingdb.library.book-acquired | { title: '2001 – A Space Odyssey', ... }

So what just happened? The program connected to EventSourcingDB using the client SDK with the URL and API token, tested the connection with a ping to make sure the database is reachable, wrote an event using writeEvents(), and read the event back using readEvents(), which returns an async iterator that streams events. EventSourcingDB automatically added metadata like id, time, specversion, and more.

Notice how clean the code is? Thanks to Node.js 24's top-level await support, we don't need wrapper functions or .catch() handlers at the module level. The code is straightforward and reads naturally from top to bottom.

Understanding the Key Concepts

Now that you've written your first event, let's understand the core concepts. The subject is like a stream name or aggregate ID that groups related events together. Think of it as a namespace for events that belong to the same entity. Examples include /books/42 for all events for book 42, /orders/23 for all events for order 23, or /users/0d78d0d9-78e6-4b3b-aca2-aaddb278a0ef for all events for a specific user. Subjects can be hierarchical, like /books/42/pages/15, which helps organize events logically.
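To make the hierarchy concrete, here's a tiny sketch in plain JavaScript (no SDK involved) of how a recursive read conceptually selects subjects: a subject matches a parent path if it equals it or lives beneath it. The actual filtering happens server-side in EventSourcingDB; this just illustrates the semantics:

```javascript
// Sketch: which subjects a recursive read of a parent path would cover.
function isUnderSubject(parent, subject) {
  const prefix = parent.endsWith('/') ? parent : parent + '/';
  return subject === parent || subject.startsWith(prefix);
}

console.log(isUnderSubject('/books/42', '/books/42'));          // true
console.log(isUnderSubject('/books/42', '/books/42/pages/15')); // true
console.log(isUnderSubject('/books/42', '/books/423'));         // false – not a child
```

This is why readEvents('/books/42', { recursive: false }) in the earlier example returned only events written directly to /books/42, while recursive: true would also include sub-subjects like /books/42/pages/15.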

The type describes what happened, using reverse domain notation. This ensures uniqueness and makes the event's purpose immediately clear. The format is a reversed domain followed by the event name, like io.eventsourcingdb.library.book-acquired or com.example.user-registered. Always use past tense (registered, not register), be specific and meaningful, and avoid generic names like changed or updated.
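As a sanity check during development, you could enforce the structural part of these conventions with a small helper. This is purely illustrative and not part of the SDK (past tense, being semantic, can't be checked mechanically):

```javascript
// Hypothetical helper: checks that an event type follows the
// reverse-domain convention, e.g. io.eventsourcingdb.library.book-acquired.
function isValidEventType(type) {
  // at least three lowercase, dot-separated segments;
  // hyphens allowed inside segments (book-acquired)
  return /^[a-z0-9]+(\.[a-z0-9][a-z0-9-]*){2,}$/.test(type);
}

console.log(isValidEventType('io.eventsourcingdb.library.book-acquired')); // true
console.log(isValidEventType('com.example.user-registered'));              // true
console.log(isValidEventType('BookAcquired'));                             // false
```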

The data field contains your business information as JSON. This is where you put everything that describes what happened. Keep it focused on what changed, use meaningful property names, and remember that any valid JSON structure works. Don't include technical metadata because EventSourcingDB handles that automatically.

EventSourcingDB follows the CloudEvents 1.0 specification, which means you get a consistent structure across all events, interoperability with other CloudEvents-compatible systems, standard metadata fields automatically added, and no need to reinvent event formats. Fields like id, time, specversion, and datacontenttype are automatically generated and managed by EventSourcingDB.
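Conceptually, the stored event is just your payload wrapped in a CloudEvents envelope. Here's a sketch of what that combination looks like for the book event from earlier; the id and time values shown are illustrative, since the server generates them:

```javascript
// Sketch of a stored event: caller-provided fields plus
// server-generated CloudEvents metadata (values illustrative).
const storedEvent = {
  // added automatically by EventSourcingDB
  specversion: '1.0',
  id: '1',
  time: '2025-11-03T14:30:00.000Z',
  datacontenttype: 'application/json',
  // provided by the caller in writeEvents()
  source: 'https://library.eventsourcingdb.io',
  subject: '/books/42',
  type: 'io.eventsourcingdb.library.book-acquired',
  data: {
    title: '2001 – A Space Odyssey',
    author: 'Arthur C. Clarke',
    isbn: '978-0756906788'
  }
};

console.log(Object.keys(storedEvent).join(', '));
```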

Next Steps

Now that you've got the basics down, here are some features to explore next.

You can write multiple events atomically by passing an array to writeEvents(). EventSourcingDB treats all events in a single call as an atomic transaction, meaning either all events are written or none are. This is perfect for maintaining consistency:

await client.writeEvents([
  { source: '...', subject: '/books/42', type: 'io.eventsourcingdb.library.book-acquired', data: {...} },
  { source: '...', subject: '/books/42', type: 'io.eventsourcingdb.library.book-added-to-catalog', data: {...} }
]);

For handling concurrent operations safely, use preconditions. The isSubjectPristine precondition ensures that no events exist for a subject yet. If events already exist, the write fails with a 409 Conflict response:

await client.writeEvents([
  // ...
], [
  isSubjectPristine('/books/42')
]);

The observeEvents() method is powerful for real-time processing. First it replays all existing events, then keeps the connection open so new events are streamed to you as they arrive. This is perfect for building real-time projections or triggering side effects:

for await (const event of client.observeEvents('/', {
  recursive: true
})) {
  console.log(`📦 Event: ${event.type} |`, event.data);
}

EventSourcingDB also includes EventQL, a SQL-like query language designed specifically for events. You can filter, project, and aggregate event data efficiently. And don't forget about the Management UI at http://localhost:3000, where you can browse all event streams, inspect individual events, see metadata and hashes, monitor system health, and visualize your event architecture.

For more details, check out the Official Documentation and the JavaScript SDK on npm.

Troubleshooting Common Issues

If port 3000 is already taken, use a different port by changing -p 3000:3000 to -p 3001:3000 in the Docker command, then update your client code to use new URL('http://localhost:3001').

If you get authentication errors, make sure the --api-token value in your Docker command matches the apiToken in your client code. The client automatically adds it as a Bearer token in the Authorization header.

If the container stops immediately, check the logs with docker ps -a to find your container ID, then docker logs <container-id>. A common issue is another process using port 3000. The solution is to either change the port or stop the conflicting process.

If events disappear after restart, that's expected behavior with --data-directory-temporary. The temporary directory is deleted when the container stops. For persistent storage, use --data-directory with a volume mount as shown earlier.

If you get connection refused errors, check whether the container is running with docker ps, whether Docker Desktop is running (on macOS/Windows), and whether your firewall is blocking port 3000.

Conclusion

In just 5 minutes, you started EventSourcingDB with a single Docker command, understood what each configuration flag does, wrote a simple Node.js program using modern JavaScript, wrote your first event to the database, read events back and saw the CloudEvents structure, and experienced event sourcing hands-on.

EventSourcingDB stands out because it's purpose-built for event sourcing. Unlike general-purpose databases adapted to store events, EventSourcingDB was designed from the ground up for this purpose, which means better performance, clearer semantics, and features that actually match how event-sourced systems work. It provides a clean HTTP API with no special protocols or custom drivers – just HTTP with JSON. You can use any programming language and any HTTP client. The JavaScript SDK we used is a thin wrapper around the HTTP API.

Full compatibility with the CloudEvents specification means your events are portable and interoperable with other systems. The API is small and focused, with no unnecessary complexity. You can be productive in minutes, not days. And with up to 25,000 events free (including commercial use), it's perfect for trying it out, prototyping, or building small applications.

Once you're comfortable with the basics, explore production setup with HTTPS and persistent storage, clustering for high availability, integration patterns for microservices, advanced querying with EventQL, snapshots for performance optimization, and event versioning strategies.

Happy Event Sourcing! 🚀