The worker service is responsible for consuming commands and external events, applying business logic, persisting state, and publishing events that represent domain outcomes. It typically runs as a background service and does not expose HTTP endpoints. The worker project template sets up a robust foundation for building message-driven services using MassTransit with RabbitMQ for messaging, MongoDB-compatible storage, and OpenTelemetry for observability.
In practice, the worker is where you put the "real work":
- Consumes commands from the bus and treats them as the unit of work. Commands should be actionable and explicit, like `CreateOrder` instead of `OrderChanged`.
- Handles external events from other services and translates them into local commands when needed, so domain logic stays behind your own contracts.
- Applies domain rules and produces internal domain events as the outcome, meaning facts that happened, instead of leaking persistence concerns into handlers.
- Persists state behind a repository abstraction using MongoDB-compatible storage by default.
- Publishes shared events back onto the bus so other services (and the API) can react asynchronously.
- Is idempotent by design, because messages can be delivered more than once.
- Is observable: traces, metrics, and logs are emitted via OpenTelemetry so you can answer "what happened to message X?".
- Is resilient: transient failures should be retryable and visible. MassTransit and RabbitMQ give you the building blocks, and you decide the policy.
- Background jobs belong here too. This template shows a hosted service for cache invalidation.
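A background job of that kind can be sketched as a standard .NET hosted service. This is a minimal illustration only, assuming a simple periodic sweep; the template's actual cache-invalidation service will differ in naming and policy:

```csharp
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// Hypothetical periodic cache-invalidation job, not the template's exact type.
public sealed class CacheInvalidationService : BackgroundService
{
    private readonly ILogger<CacheInvalidationService> _logger;

    public CacheInvalidationService(ILogger<CacheInvalidationService> logger)
        => _logger = logger;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        using var timer = new PeriodicTimer(TimeSpan.FromMinutes(5));
        while (await timer.WaitForNextTickAsync(stoppingToken))
        {
            // Evict stale entries here; log failures instead of rethrowing
            // so the worker process keeps running.
            _logger.LogInformation("Cache invalidation sweep completed");
        }
    }
}
```

Registered with `AddHostedService<CacheInvalidationService>()`, it runs alongside the message consumers in the same process.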
The worker is where "command accepted" turns into "domain result happened".
That distinction matters across the whole stack:
- the web sends a command and gets an immediate HTTP response
- the API puts the command on the bus and returns
- the worker later consumes that command, performs the real work, and publishes the resulting event
This is the part of the flow that makes the message-driven architecture real. If you want the browser-side view of the same loop, see Web status service and domain feedback. If you want the bridge layer that sits in front of and behind the worker, see API async command loop.
In these docs, "domain event" means the internal result produced by the application/domain layer. Once that result is mapped to a shared contract and published on the bus, it becomes a published event that the API and web app can react to.
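The distinction can be made concrete with two record types. These names are hypothetical, not the template's actual contracts; the point is only where each type lives:

```csharp
// Internal domain event: a fact produced by the domain layer,
// defined inside the worker project and never referenced outside it.
public sealed record ExampleCreatedDomainEvent(Guid ExampleId, string Name);

// Shared event contract: lives in a contracts package referenced by the
// API and web app, and is what actually gets published on the bus.
public sealed record ExampleCreatedEvent(Guid CorrelationId, Guid ExampleId, string Name);
```

Keeping the internal type separate means the domain layer can evolve freely while the published contract stays stable for other services.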
In the template, the worker side is split into two layers:
- the command handler is the messaging boundary
- the application/domain service applies business rules and persists state
The usual flow looks like this:
- a command arrives from RabbitMQ
- `ExampleCommandHandler` passes it to the application service
- the application service loads or creates the aggregate and applies domain rules
- the updated state is persisted
- a domain event is returned from the application layer
- the handler maps that domain event to a shared event contract
- the handler republishes it onto the bus, preserving the original correlation id
That republished event is what the API later consumes to invalidate caches and notify the web app.
On failures, the worker side is just as important. If command handling throws, the command has still been accepted into the asynchronous pipeline, but the business operation did not complete. That failure can then travel back through the fault path so the API can surface a DomainFault to the browser.
The worker project is deliberately simple in how it works. Because it communicates only via messages, its entry point is a set of message handlers that respond to commands and events. Each handler delegates the actual business logic to a domain service, which encapsulates the core functionality of the service. The domain work produces internal domain events, which the originating message handler then maps and publishes, closing a clean loop.
The ExampleCommandHandler.cs file contains the implementation of command handlers that process incoming commands. Each command handler is responsible for executing specific business logic when a command is received. This handler is designated to process Example domain commands. If you have more sub-domains, you should create additional command handlers following the same pattern instead of adding all command handlers to this single file.
The command handler usually follows these steps:
- Receive Command: The handler listens for specific commands from the message bus.
- Pass Command to Domain Service: The handler invokes the appropriate method on the domain service, passing along the entire command object.
- Map Domain Events: The domain service processes the command and returns a domain event object. The handler maps that domain event into a shared event contract suitable for publishing.
- Publish Events: After processing the command, the handler may publish the mapped event if the returned domain event object is not null.
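The four steps above can be sketched as a MassTransit consumer. The command, event, and service types here are illustrative assumptions, not the template's exact names:

```csharp
using MassTransit;

// Hypothetical contracts and domain types; the template's actual names differ.
public sealed record CreateExample(Guid CorrelationId, string Name);
public sealed record ExampleCreated(Guid CorrelationId, Guid ExampleId);
public sealed record ExampleCreatedDomainEvent(Guid ExampleId);

public interface IExampleService
{
    Task<ExampleCreatedDomainEvent?> CreateAsync(CreateExample command, CancellationToken ct);
}

public sealed class ExampleCommandHandler : IConsumer<CreateExample>
{
    private readonly IExampleService _service;

    public ExampleCommandHandler(IExampleService service) => _service = service;

    public async Task Consume(ConsumeContext<CreateExample> context)
    {
        // 1-2. Receive the command and hand it to the application service.
        var domainEvent = await _service.CreateAsync(context.Message, context.CancellationToken);

        // 3-4. Map the internal domain event to a shared contract and publish it,
        // carrying the correlation id forward so the API can match the result.
        if (domainEvent is not null)
        {
            await context.Publish(new ExampleCreated(
                context.Message.CorrelationId,
                domainEvent.ExampleId));
        }
    }
}
```

If `Consume` throws, MassTransit's retry and fault machinery takes over, which is exactly the failure path described above.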
Notice what the handler does not do: it does not talk to the web directly and it does not complete an HTTP request. Its responsibility is to turn bus messages into durable domain outcomes and then publish the resulting facts back out.
The ExampleService.cs file contains the core business logic for the Example domain. This service is responsible for processing commands delivered by its CommandHandler and generating internal domain events based on the business rules.
The domain service typically follows these steps:
- Receive Command: The service receives the command object from the command handler.
- Create or Hydrate Aggregate: The service either creates a new aggregate instance or retrieves an existing one from the repository, depending on the command.
- Apply Business Logic: The service invokes methods on the aggregate to apply business rules and modify its state.
- Persist Changes: The service saves the updated aggregate back to the repository.
- Return Domain Events: The service returns any domain events that were generated as a result of processing the command.
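The five steps above could look roughly like this, with a hypothetical command, aggregate, and repository abstraction (all names are illustrative):

```csharp
// Hypothetical command and domain event; not the template's exact types.
public sealed record UpdateExample(Guid ExampleId, string Name);
public sealed record ExampleUpdatedDomainEvent(Guid ExampleId, string Name);

public sealed class Example
{
    public Guid Id { get; private set; }
    public string Name { get; private set; } = string.Empty;

    public static Example Create(Guid id) => new() { Id = id };

    // The aggregate enforces its invariants and reports what happened.
    public ExampleUpdatedDomainEvent? Update(string name)
    {
        if (name == Name) return null; // nothing changed, no event
        Name = name;
        return new ExampleUpdatedDomainEvent(Id, Name);
    }
}

public interface IExampleRepository
{
    Task<Example?> GetAsync(Guid id, CancellationToken ct);
    Task SaveAsync(Example aggregate, CancellationToken ct);
}

public sealed class ExampleService
{
    private readonly IExampleRepository _repository;

    public ExampleService(IExampleRepository repository) => _repository = repository;

    public async Task<ExampleUpdatedDomainEvent?> HandleAsync(UpdateExample command, CancellationToken ct)
    {
        // 2. Create or hydrate the aggregate.
        var aggregate = await _repository.GetAsync(command.ExampleId, ct)
                        ?? Example.Create(command.ExampleId);

        // 3. Apply business rules on the aggregate itself.
        var domainEvent = aggregate.Update(command.Name);

        // 4. Persist the updated state.
        await _repository.SaveAsync(aggregate, ct);

        // 5. Return the domain event for the handler to map and publish.
        return domainEvent;
    }
}
```

Note that the service never touches the bus: returning the event (step 5) is what lets the command handler own publishing.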
Returning the domain event instead of pushing it directly from deep inside the domain layer keeps the responsibilities tidy: domain code decides what happened, while the outer consumer decides how that fact is published to the rest of the system.
Note that the default persistence mechanism uses MongoDB-compatible storage; in the generated Kubernetes setup that typically means FerretDB, but you can replace it with any other database by implementing the repository interface accordingly. For event-sourcing scenarios, you can use my MongoEventStore library as a starting point. In this case you would store the domain event object returned by the aggregate instead of persisting the aggregate state directly.
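An implementation of such a repository against the official MongoDB .NET driver (which works against FerretDB as well) might look like this. The interface shape and document type are assumptions for illustration:

```csharp
using MongoDB.Driver;

// Hypothetical persistence model and repository contract.
public sealed record ExampleDocument(Guid Id, string Name);

public interface IExampleRepository
{
    Task<ExampleDocument?> GetAsync(Guid id, CancellationToken ct);
    Task SaveAsync(ExampleDocument document, CancellationToken ct);
}

public sealed class MongoExampleRepository : IExampleRepository
{
    private readonly IMongoCollection<ExampleDocument> _collection;

    public MongoExampleRepository(IMongoDatabase database)
        => _collection = database.GetCollection<ExampleDocument>("examples");

    public async Task<ExampleDocument?> GetAsync(Guid id, CancellationToken ct)
        => await _collection.Find(d => d.Id == id).FirstOrDefaultAsync(ct);

    public Task SaveAsync(ExampleDocument document, CancellationToken ct)
        => _collection.ReplaceOneAsync(
            d => d.Id == document.Id,
            document,
            new ReplaceOptions { IsUpsert = true }, // insert if missing, replace otherwise
            ct);
}
```

Because the domain service only depends on the interface, swapping this for another database (or an event store) is a local change.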
The ExternalEventHandler.cs file contains the implementation of event handlers that process incoming external events from other domains. Each event handler is responsible for executing specific business logic when an external event is received. This handler is designated to process external messages from the Other.Worker.Contracts.Commands namespace. An example would be handling a UserCreatedEvent from an identity service to create a corresponding local user profile; in this template, the handler captures a remote code and links it to the Example aggregate.
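Translating a foreign event into a local command could be sketched like this. The external contract and the local command are hypothetical names, standing in for whatever the other service actually publishes:

```csharp
using MassTransit;

// Hypothetical external contract from another service's contracts package.
public sealed record UserCreatedEvent(Guid CorrelationId, Guid UserId, string Email);

// Hypothetical local command kept behind this service's own contracts.
public sealed record CreateUserProfile(Guid CorrelationId, Guid UserId, string Email);

public sealed class ExternalEventHandler : IConsumer<UserCreatedEvent>
{
    public Task Consume(ConsumeContext<UserCreatedEvent> context)
        // Translate the foreign event into a local command so the domain
        // logic only ever depends on this service's own contracts.
        => context.Publish(new CreateUserProfile(
            context.Message.CorrelationId,
            context.Message.UserId,
            context.Message.Email));
}
```

The handler stays thin on purpose: the translation is the whole job, and the local command then flows through the same command-handler loop described above.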