This document provides a more detailed overview of the SmartHotel Task project structure, setup, and development workflows.
- Project Structure
- Prerequisites
- Installation
- Running the Application
- Event Processing
- Testing
- Environment Variables
- Linting and Formatting
- Commit Conventions
This project is a monorepo managed with npm workspaces (implicitly).
```
/
├── apps/
│   ├── api/               # NestJS API application
│   └── worker/            # NestJS Worker application
├── dist/                  # Compiled output
├── docker/                # Docker configuration files (MongoDB init, RabbitMQ config)
├── docs/                  # Project documentation (this file)
├── node_modules/          # Dependencies
├── .vscode/               # VSCode settings
├── .env.example           # Example environment variables
├── .gitignore
├── .nvmrc                 # Node.js version specification
├── .prettierrc            # Prettier configuration
├── Dockerfile-api         # Dockerfile for the API app (example, adjust if needed for production)
├── docker-compose.yml     # Docker Compose configuration for external services
├── entrypoint-api.sh      # Example entrypoint script for the API Docker container
├── eslint.config.mjs      # ESLint configuration
├── nest-cli.json          # NestJS CLI configuration
├── package.json           # Root package configuration and scripts
├── package-lock.json
├── README.md              # Root README (Quick Start)
├── tsconfig.build.json    # TypeScript build configuration
└── tsconfig.json          # Base TypeScript configuration
```
- Node.js: Ensure you have Node.js installed. It's recommended to use a version manager like `nvm` and install the version specified in the `.nvmrc` file (`nvm use` or `nvm install`). Currently: v22.
- npm: Comes bundled with Node.js.
- Docker: Required for running external services like MongoDB and RabbitMQ. Download from Docker's website.
- Docker Compose: Usually included with Docker Desktop.
Clone the repository and install the dependencies using npm from the project root:
```bash
git clone <repository-url>
cd mysmarthotel-task
npm install
```

The project relies on MongoDB and RabbitMQ, which are managed via Docker Compose.
Start Services:
```bash
docker compose up -d
```

This command will:

- Pull the necessary images (Mongo, RabbitMQ).
- Create and start containers for `mongodb` and `rmq` in detached mode (`-d`).
- Run an initialization script (`mongo-init`) to configure the MongoDB replica set after the `mongodb` service is healthy.
- Expose the following ports:
  - MongoDB: `27017`
  - RabbitMQ AMQP: `5672`
  - RabbitMQ Management UI: `15672` (access via http://localhost:15672, default user/pass: `guest`/`guest`)
- Mount volumes in `.volumes/` to persist data between runs.
RabbitMQ Configuration:
The project uses RabbitMQ for asynchronous message processing with the following exchange and queue setup:
- Exchanges:
  - `x.events` (fanout): Main event exchange for broadcasting events
  - `x.worker` (topic): Worker-specific exchange for task routing
  - `x.dlq` (topic): Dead Letter Queue exchange for handling failed messages
- Exchange Bindings:
  - `x.events` → `x.worker` (pattern: `#.event`): Routes all events to the worker exchange
  - `x.dlq` → `x.worker` (pattern: `dlq-publish`): Routes delayed/retried messages back to the worker exchange
- Queues:
  - `q.worker.task`:
    - Bound to: `x.worker`
    - Routing keys: `task.event`, `dlq-publish`
    - DLQ config: Failed messages are sent to `x.dlq` with routing key `dlq-delay`
  - `q.dlq.worker-task` (delay queue):
    - Bound to: `x.dlq`
    - Routing key: `dlq-delay`
    - TTL: 2 minutes
    - After TTL: Messages are routed back to `x.dlq` with key `dlq-publish`
This setup implements a delayed retry mechanism:
- Failed messages from `q.worker.task` go to the DLQ
- Messages wait in the delay queue for 2 minutes
- After the delay, messages are automatically retried
Note: Implement retry count checking in your consumers to prevent infinite retry loops.
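The retry-count check mentioned above can be sketched as a small helper that inspects RabbitMQ's `x-death` header (this is a sketch, not project code; the header shape follows RabbitMQ's dead-lettering conventions, and `MAX_RETRIES` is an assumed limit):

```typescript
// Sketch: decide whether a redelivered message should be retried or parked.
// RabbitMQ adds/updates the "x-death" header each time a message is
// dead-lettered; the "count" field tracks how often it passed through a queue.

interface XDeathEntry {
  queue?: string;
  count?: number;
}

type Headers = Record<string, unknown>;

const MAX_RETRIES = 3; // assumed limit, not taken from the project config

function getRetryCount(headers: Headers): number {
  const deaths = headers["x-death"] as XDeathEntry[] | undefined;
  if (!Array.isArray(deaths) || deaths.length === 0) return 0;
  // The first entry corresponds to the most recent dead-lettering queue
  // and carries the accumulated count for it.
  return deaths[0]?.count ?? 0;
}

function shouldRetry(headers: Headers): boolean {
  return getRetryCount(headers) < MAX_RETRIES;
}
```

A consumer would call `shouldRetry(msg.properties.headers)` before dead-lettering again, and mark the task `FAILED` once the limit is reached instead of re-queueing.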
LocalStack (AWS Emulator):
The `docker-compose.yml` includes a service definition for LocalStack, which emulates various AWS services locally (currently configured for S3). This is used for development and testing without needing actual AWS resources.
- Starting: LocalStack starts automatically along with MongoDB and RabbitMQ when you run:

  ```bash
  docker compose up -d
  ```

- Configuration:
  - By default, only the S3 service is enabled via the `SERVICES=s3` environment variable in `docker-compose.yml`. You can modify this variable to enable other services (e.g., `SERVICES=s3,sqs`).
  - The main edge port `4566` is exposed. Applications within the Docker network should use `http://localstack:4566` as the endpoint URL.
- Environment Variables: The `.env.example` file includes the necessary environment variables for connecting to LocalStack from your applications:
  - `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`: Use the default `test`/`test`.
  - `AWS_REGION`: A default region like `us-east-1`.
  - `AWS_ENDPOINT_URL`: Set to `http://localstack:4566`.
  - `S3_BUCKET_NAME`: The name of the S3 bucket your application will use.

  Remember to copy `.env.example` to `.env` for local development.
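Putting those variables together, a local `.env` might look like this (the bucket name below is a placeholder; use whatever `.env.example` specifies):

```ini
# Local development values for LocalStack (bucket name is a placeholder)
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test
AWS_REGION=us-east-1
AWS_ENDPOINT_URL=http://localstack:4566
S3_BUCKET_NAME=example-bucket
```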
Stop Services:
```bash
docker compose down
```

Stop and Remove Volumes (Clean Slate):

```bash
docker compose down -v
```

For development, run the API and Worker applications with hot-reloading enabled. Open two separate terminals in the project root.
Terminal 1: API
```bash
npm run start:dev:api
```

This starts the NestJS API (`apps/api`), typically listening on port 3000 (check the console output). Changes to files in `apps/api/src` will trigger a rebuild and restart.
Terminal 2: Worker
```bash
npm run start:dev:worker
```

This starts the NestJS Worker (`apps/worker`). It will connect to RabbitMQ and start consuming messages based on its configuration. Changes to files in `apps/worker/src` will trigger a rebuild and restart.
To run the applications in production mode, you first need to build them:
```bash
npm run build:api
npm run build:worker
```

Then, start the compiled applications:

```bash
# Start API
npm run start:prod:api

# Start Worker (in a separate terminal)
npm run start:prod:worker
```

Note: For actual production deployments, consider containerizing the applications using Dockerfiles (like the example `Dockerfile-api`) and managing them with an orchestrator or platform.
The `apps/worker` application is responsible for handling background tasks triggered by events from the API. Currently, its primary function is processing reservation data uploaded via XLSX files.
Workflow:
- Consume Event: The worker listens to the `q.worker.task` queue on RabbitMQ for `task.created.event` messages.
- Claim Task: It attempts to claim the task by atomically setting its status to `IN_PROGRESS`.
- Download File: It retrieves the associated XLSX file path from the task payload and downloads the file from the configured S3 storage (using LocalStack for local development).
- Parse XLSX: The file is parsed using the `xlsx` library. It expects columns like `reservation_id`, `guest_name`, `check_in_date`, `check_out_date`, and `status`.
- Process Reservations: Each row in the file is processed according to the following rules:
  - Validation: Basic validation checks for required fields, valid dates (check-out after check-in), and valid `ReservationStatus` enum values. Duplicate `reservation_id`s within the file are flagged as errors.
  - Database Interaction:
    - If a reservation from the file has a status of `COMPLETED` or `CANCELED`:
      - If the reservation exists in the database, its status is updated (if different).
      - If the reservation does not exist, it is not added.
    - If a reservation from the file has any other status (e.g., `PENDING`):
      - The reservation is added to the database if it's new, or updated if it already exists (upsert operation).
  - Row-level errors are recorded but do not stop the processing of subsequent rows.
- Update Task Status: After processing all rows:
  - If any errors were recorded (including file-level errors like parsing issues or validation errors), the task status is set to `FAILED`, and the errors are stored in the task document.
  - If no errors occurred, the task status is set to `COMPLETED`.
- Update Event Status: The corresponding `task.created.event` document is marked as `PROCESSED`.
- Error Handling: RabbitMQ's DLQ mechanism is used to retry transient errors during message handling or transaction execution. Non-retryable errors result in the task being marked `FAILED`.
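The database-interaction rules above can be condensed into a pure decision function. This is an illustrative sketch, not the project's actual code; the `ReservationStatus` values come from the rules above, and the action names are made up for clarity:

```typescript
// Sketch of the per-row upsert rules described above.
enum ReservationStatus {
  PENDING = "PENDING",
  COMPLETED = "COMPLETED",
  CANCELED = "CANCELED",
}

type RowAction = "update" | "skip" | "upsert";

// Terminal statuses (COMPLETED/CANCELED) only update existing reservations;
// any other status is upserted (insert if new, update if it already exists).
function decideRowAction(
  status: ReservationStatus,
  existsInDb: boolean,
): RowAction {
  const isTerminal =
    status === ReservationStatus.COMPLETED ||
    status === ReservationStatus.CANCELED;
  if (isTerminal) {
    return existsInDb ? "update" : "skip";
  }
  return "upsert";
}
```

Keeping this decision as a pure function makes the row-processing rules easy to unit-test independently of MongoDB.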
The API uses a simple API key authentication scheme for this project:
- Root API Key: A single root-level API key that grants full access to all endpoints.
  - Set via environment variables:
    - `API_ROOT_API_KEY`: The API key identifier
    - `API_ROOT_API_KEY_SECRET`: The secret value for the API key
  - Must be included in requests using the `X-API-Key` header:

    ```bash
    curl -H "X-API-Key: your_api_key_here" http://localhost:3000/api/endpoint
    ```

  - For simplicity in this project, a single root key is used. In a production environment, you would typically implement more granular API key management with different access levels and organization-specific keys.
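The core of the header check can be sketched as follows (illustrative only; the actual project presumably implements this as a NestJS guard, and this sketch assumes the secret is the value carried in the header):

```typescript
// Sketch: validate the X-API-Key header against the configured root secret.
// In the real app, expectedSecret would come from API_ROOT_API_KEY_SECRET
// via @nestjs/config.

function isAuthorized(
  headers: Record<string, string | undefined>,
  expectedSecret: string,
): boolean {
  // Node.js lower-cases incoming header names.
  const provided = headers["x-api-key"];
  return provided !== undefined && provided === expectedSecret;
}
```

A production guard should prefer a constant-time comparison (e.g. `crypto.timingSafeEqual`) over `===` to avoid leaking key material through timing differences.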
The project implements the Outbox Pattern for asynchronous event processing. This pattern ensures that events are processed in a reliable and consistent manner, even in the event of system failures.
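A minimal in-memory sketch of the idea (illustrative, not the project's actual implementation): the business entity and its outbox event are written in the same transaction, and a relay later publishes pending events and marks them processed.

```typescript
// Minimal outbox sketch: entity + event are persisted atomically,
// then a relay publishes pending events and marks them PROCESSED.

interface OutboxEvent {
  type: string;
  payload: unknown;
  status: "PENDING" | "PROCESSED";
}

class InMemoryStore {
  tasks: { id: string }[] = [];
  outbox: OutboxEvent[] = [];

  // Stand-in for a MongoDB transaction: both writes succeed or neither does.
  createTaskWithEvent(taskId: string): void {
    this.tasks.push({ id: taskId });
    this.outbox.push({
      type: "task.created.event",
      payload: { taskId },
      status: "PENDING",
    });
  }
}

// Relay: publish each pending event (here just a callback; in the real
// system this would go to RabbitMQ's x.events), then mark it processed.
function relayOutbox(
  store: InMemoryStore,
  publish: (e: OutboxEvent) => void,
): number {
  const pending = store.outbox.filter((e) => e.status === "PENDING");
  for (const e of pending) {
    publish(e);
    e.status = "PROCESSED";
  }
  return pending.length;
}
```

Because the event is committed alongside the entity, a crash between the database write and the broker publish cannot lose the event; the relay simply picks it up on the next pass.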
The project uses Jest for testing, integrated with Testcontainers for E2E tests requiring external services.
Run all unit tests (`*.spec.ts`) across both applications:

```bash
npm test
```

Watch Mode:

```bash
npm run test:watch
```

Coverage Report:

```bash
npm run test:cov
```

(Report generated in the `coverage/` directory)
E2E tests (`*.e2e-spec.ts`) typically live within the respective application's `test/` directory and use specific Jest configurations.
Run API E2E Tests:
```bash
npm run test:e2e:api
```

(Uses configuration from `apps/api/test/jest-e2e.json`)
These tests cover the full flow from API request through event publishing to worker processing and final task status updates.
Configuration is primarily managed through environment variables.
- A `.env.example` file exists in the root directory, showing the required variables. These typically include settings for the API and Worker applications (host, port, logging), connection URLs for external services (MongoDB, RabbitMQ), and configurations for AWS services (using LocalStack for local development).
- For local development, copy `.env.example` to `.env` and fill in the necessary values (or use the defaults if suitable).
- The applications (using `@nestjs/config`) load variables from the `.env` file.
- Never commit your `.env` file to version control. Ensure it's listed in `.gitignore`.
- Formatting: Prettier is used for code formatting. Run `npm run format` to format all relevant files.
- Linting: ESLint is used for code linting. Run `npm run lint` to check for and fix linting errors.
It's recommended to configure your IDE to use ESLint and Prettier for automatic formatting and linting on save.
This project follows the Conventional Commits specification. Please format your commit messages accordingly. This helps maintain a clean commit history and enables potential automation (e.g., changelog generation).
Example:
```
feat(api): add endpoint for retrieving user bookings
fix(worker): resolve issue with message deserialization
```