Commit 23d02a6

Merge branch 'develop'
2 parents 8641fd5 + 13ce184

23 files changed: 462 additions & 191 deletions

CHANGELOG.md

Lines changed: 1 addition & 3 deletions
@@ -10,11 +10,9 @@ The script does not create tags locally.
 - Range: `v0.36.4..HEAD`
 - Included commits: 13
 
-### Features (4)
+### Features (2)
 
 - [atlas-dashboard] rename AI layer and add n8n card (`619c080`)
-- [atlas-dashboard] add Plane Penpot and Nextcloud AIO core cards (`1f7e6be`)
-- [core] add Plane Penpot and Nextcloud AIO services (`d370dc7`)
 - [core] bootstrap Plane and Penpot default admins (`676a690`)
 
 ### Fix (6)

README.md

Lines changed: 16 additions & 10 deletions
@@ -7,11 +7,11 @@
 ![CLI](https://img.shields.io/badge/CLI-Node.js%20%2B%20TypeScript-3C873A?logo=nodedotjs&logoColor=white)
 ![Dashboard](https://img.shields.io/badge/UI-Atlas%20Dashboard-0F172A?logo=antdesign&logoColor=white)
 ![Security](https://img.shields.io/badge/Ingress-HTTPS%20Only-0F766E)
-![Profiles](https://img.shields.io/badge/Layers-core%20%7C%20ai--agents%20%7C%20ai--llm%20%7C%20ai--image%20%7C%20ai--video%20%7C%20workbench-7C3AED)
+![Profiles](https://img.shields.io/badge/Layers-core%20%7C%20ai--llm%20%7C%20workbench-7C3AED)
 ![Persistence](https://img.shields.io/badge/Persistence-Docker%20Volumes-CA8A04)
 
 > 🧭 **Atlas Lab** is a localhost-first self-hosted platform made of a Node.js/TypeScript CLI, a layered Docker Compose stack, and an operational React dashboard served by the gateway.
-> It is designed to provide Git hosting, optional automation agents, optional local AI LLM services, optional AI image and video generation, browser-based development workbenches, and structured image/volume backup workflows on a single machine.
+> It is designed to provide Git hosting, optional local AI services with Open WebUI, Ollama, and n8n, browser-based development workbenches, and structured image/volume backup workflows on a single machine.
 
 ---
 
@@ -22,8 +22,8 @@ Atlas Lab is built for a practical goal: run a repeatable local engineering plat
 ### What it gives you
 
 - 🧱 An always-on **core layer** with Gitea, the gateway, and Atlas Dashboard
-- 🧠 An optional **AI LLM layer** with Open WebUI and Ollama
-- 🛠️ An optional **workbench layer** with browser-based Node, Python, AI, and C++ environments plus shared PostgreSQL
+- 🧠 An optional **AI LLM layer** with Open WebUI, Ollama, and n8n
+- 🛠️ An optional **workbench layer** with browser-based Node and Python environments plus shared PostgreSQL
 - 🔐 HTTPS-only ingress on `localhost`
 - 📦 A self-contained npm package that can run without a local repository checkout
 - 💾 Persistent state stored in named Docker volumes
@@ -67,7 +67,7 @@ Atlas Lab is split into **three explicit layers**:
 | Layer | Status | Includes | Purpose |
 | --- | --- | --- | --- |
 | `core` | always on | gateway, Atlas Dashboard, Gitea, Gitea DB | baseline platform |
-| `ai-llm` | optional | Open WebUI, Ollama, AI LLM gateway | local LLM workflows |
+| `ai-llm` | optional | Open WebUI, Ollama, n8n, AI gateway | local AI workflows and automation |
 | `workbench` | optional | Node Forge, Python Grid, shared PostgreSQL, workbench gateway | browser-based development |
 
 ### Why the current topology
@@ -95,6 +95,7 @@ The CLI:
 - runs host preflight checks
 - reconciles runtime state
 - bootstraps Gitea
+- aligns the n8n owner bootstrap account when the AI LLM layer is enabled
 - reconciles Ollama only when the AI LLM layer is enabled
 - cleans up legacy runtime artifacts
 
@@ -111,6 +112,7 @@ The only host-level TCP service exposed directly is PostgreSQL from the workbenc
 | Gitea | `core` | `https://localhost:8444/` | Git forge, issues, reviews |
 | Open WebUI | `ai-llm` | `https://localhost:8446/` | only with `--with-ai-llm` |
 | Ollama | `ai-llm` | `https://localhost:8447/` | HTTPS API |
+| n8n | `ai-llm` | `https://localhost:8453/` | workflow automation and agent orchestration |
 | Node Forge | `workbench` | `https://localhost:8450/` | Node / TypeScript workspace |
 | Python Grid | `workbench` | `https://localhost:8451/` | Python workspace |
 | PostgreSQL | `workbench` | `localhost:15432` | host-side desktop access |
@@ -129,7 +131,7 @@ The only host-level TCP service exposed directly is PostgreSQL from the workbenc
 | --- | --- | --- |
 | `edge-net` | exposed | published ingress ports |
 | `apps-net` | internal | Gitea and shared browser-facing services |
-| `ai-llm-net` | internal | Open WebUI and Ollama |
+| `ai-llm-net` | internal | Open WebUI, Ollama, and n8n |
 | `data-net` | internal | data services and infrastructure databases |
 | `workbench-net` | internal | workbenches and PostgreSQL |
 | `workbench-host-net` | bridge | host-side PostgreSQL bind |
@@ -157,6 +159,7 @@ Key volumes include:
 - `gitea-data`
 - `gitea-db`
 - `ollama-data`
+- `n8n-data`
 - `open-webui-data`
 - `postgres-dev-data`
 - workbench home/workspace volumes for Node, Python, AI, and C++
@@ -175,7 +178,7 @@ Recreating containers does not wipe state. Removing the volumes does.
 
 ### AI requirements
 
-The AI LLM and AI image layers require:
+The AI LLM layer requires:
 
 - an `NVIDIA` GPU
 - a working `nvidia-smi` on the host
@@ -194,6 +197,7 @@ The AI LLM and AI image layers require:
 - `8444`
 - `8446`
 - `8447`
+- `8453`
 - `8450`
 - `8451`
 - `15432` when `workbench` is enabled
@@ -231,12 +235,13 @@ Key variables include:
 
 - `APP_VERSION`
 - `LAB_HTTPS_PORT`, `GITEA_HTTPS_PORT`
-- `OPENWEBUI_HTTPS_PORT`, `OLLAMA_HTTPS_PORT`
+- `OPENWEBUI_HTTPS_PORT`, `OLLAMA_HTTPS_PORT`, `N8N_HTTPS_PORT`
 - `NODE_DEV_HTTPS_PORT`, `PYTHON_DEV_HTTPS_PORT`
 - `POSTGRES_DEV_HOST_PORT`
 - `OLLAMA_CHAT_MODEL`, `OLLAMA_EMBEDDING_MODEL`, `OLLAMA_RUNTIME_MODELS`
 - `GITEA_ROOT_USERNAME`, `GITEA_ROOT_PASSWORD`
 - `OPENWEBUI_ROOT_EMAIL`, `OPENWEBUI_ROOT_PASSWORD`
+- `N8N_ROOT_EMAIL`, `N8N_ROOT_PASSWORD`
 
 Rule of thumb:
 
@@ -332,7 +337,7 @@ npm run dev -- down
 | `atlas-lab up --with-workbench` | adds the workbench layer |
 | `atlas-lab up --with-ai-llm --with-workbench` | starts the full lab |
 | `atlas-lab bootstrap` | reruns core bootstrap |
-| `atlas-lab bootstrap --with-ai-llm` | reruns bootstrap and Ollama reconciliation |
+| `atlas-lab bootstrap --with-ai-llm` | reruns bootstrap, n8n owner alignment, and Ollama reconciliation |
 | `atlas-lab doctor` | runs host and configuration checks |
 | `atlas-lab doctor --smoke` | adds smoke tests for the core layer |
 | `atlas-lab doctor --with-ai-llm --smoke` | adds smoke tests for the AI LLM layer |
@@ -419,7 +424,7 @@ npm run dev -- save-volumes --with-ai-llm --with-workbench
 npm run dev -- restore-volumes --input .\backups\volumes\atlas-lab-volumes.tar.gz
 ```
 
-Bootstrap is idempotent and reconciles Gitea plus Ollama when `ai-llm` is enabled.
+Bootstrap is idempotent and reconciles Gitea, n8n, and Ollama when `ai-llm` is enabled.
 
 ---
 
@@ -433,6 +438,7 @@ Bootstrap is idempotent and reconciles Gitea plus Ollama when `ai-llm` is enable
 | Gitea | `https://localhost:8444/` | `root / RootGitea!2026` |
 | Open WebUI | `https://localhost:8446/` | `root@openwebui.local / RootOpenWebUI!2026` |
 | Ollama | `https://localhost:8447/` | gateway basic auth `root / RootOllama!2026` |
+| n8n | `https://localhost:8453/` | owner bootstrap `root@n8n.local / RootN8NApp!2026` |
 | PostgreSQL host-side | `localhost:15432` | `postgres / RootPostgresDev!2026` |
 
 For DBeaver and other desktop PostgreSQL clients:
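The README diff above pins each browser-facing service to a dedicated HTTPS port on `localhost`. As a minimal sketch of that mapping, the TypeScript below derives a service URL from the port table; the `SERVICE_PORTS` map and `serviceUrl` helper are hypothetical illustrations, not part of the Atlas Lab codebase, and only the port numbers come from the diff:

```typescript
// Hypothetical helper for illustration: the HTTPS ports are taken from the
// README service table in this commit; the names below are assumptions.
const SERVICE_PORTS: Record<string, number> = {
  gitea: 8444,      // core: Git forge
  openWebui: 8446,  // ai-llm: chat UI
  ollama: 8447,     // ai-llm: HTTPS API
  n8n: 8453,        // ai-llm: workflow automation (added in this commit)
  nodeForge: 8450,  // workbench: Node / TypeScript workspace
  pythonGrid: 8451, // workbench: Python workspace
};

function serviceUrl(name: string): string | undefined {
  const port = SERVICE_PORTS[name];
  if (port === undefined) return undefined;
  // All browser ingresses are HTTPS on localhost, per the README.
  return `https://localhost:${port}/`;
}

console.log(serviceUrl("n8n")); // → "https://localhost:8453/"
```

PostgreSQL is deliberately absent from this map: per the README it is the one host-level TCP service (`localhost:15432`), not an HTTPS ingress.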

apps/atlas-dashboard/src/locales/en.json

Lines changed: 8 additions & 17 deletions
@@ -111,7 +111,6 @@
 "rootName": "root name",
 "rootPassword": "root password",
 "rootUser": "root user",
-"setupUrl": "setup URL",
 "superuser": "superuser",
 "wanModel": "wan model",
 "usage": "usage"
@@ -123,11 +122,9 @@
 "designCollaboration": "design collaboration",
 "directAppOnboarding": "direct app onboarding",
 "directAppLogin": "direct app login",
-"guidedSetup": "guided setup",
 "imageStudio": "image studio",
 "localInferenceApi": "local inference API",
 "password": "password",
-"privateCloud": "private cloud",
 "projectHub": "project hub",
 "protectedApi": "protected API",
 "sharedDatabase": "shared database",
@@ -136,10 +133,10 @@
 "alwaysOnForge": "always-on forge"
 },
 "dashboard": {
-"coreLayerSummary": "Gitea, Plane, Penpot, and Nextcloud AIO form the always-on core plane of the lab.",
+"coreLayerSummary": "Gitea, Plane, and Penpot form the always-on core plane of the lab.",
 "accessNotes": {
 "aiDisabled": "The AI layer no longer starts by default: the deck marks it as optional instead of pretending that it is online.",
-"aiEnabled": "Open WebUI and Ollama are really online and reachable on the AI gateway ports.",
+"aiEnabled": "Open WebUI, Ollama, and n8n are online and reachable on the dedicated AI gateway ports.",
 "credentials": "Operational credentials are exposed here and remain aligned with the lab bootstrap.",
 "https": "All browser ingresses use localhost with dedicated HTTPS, without custom DNS or hosts-file edits.",
 "workbenchDisabled": "Workbench and Postgres stay separated from the core operating plane until you enable the dedicated layer.",
@@ -148,18 +145,19 @@
 "aiLayer": {
 "capabilities": {
 "llmModels": "GPU-backed LLM models",
+"n8n": "Local n8n automation",
 "ollama": "Protected Ollama API",
 "openWebUi": "Local Open WebUI"
 },
-"description": "Optional AI layer for local conversational workflows and GPU-backed LLM inference. The deck enables it only when you explicitly request it.",
+"description": "Optional AI layer for local conversational workflows, workflow orchestration, and GPU-backed LLM inference. The deck enables it only when you explicitly request it.",
 "summaryDisabled": "The AI layer is off. No AI service is started or exposed until you enable the dedicated flag.",
-"summaryEnabled": "Open WebUI and Ollama are active and served through the AI gateway.",
+"summaryEnabled": "Open WebUI, Ollama, and n8n are active and served through the AI gateway.",
 "title": "AI"
 },
 "aiServices": {
 "n8n": {
 "action": "Open n8n",
-"description": "Workflow automation platform for orchestrating integrations, agents, and AI flows, even outside the lab's local runtime.",
+"description": "Local workflow automation platform for orchestrating integrations, agents, and AI flows, with the bootstrap owner account aligned to the lab runtime.",
 "title": "n8n"
 },
 "ollama": {
@@ -187,7 +185,7 @@
 "label": "segmentation"
 },
 "usage": {
-"body": "Gitea, Plane, Penpot, and Nextcloud AIO stay on as the core plane; AI, AI image, AI video, and workbench layers are enabled only when they are actually needed.",
+"body": "Gitea, Plane, and Penpot stay on as the core plane; the AI and workbench layers are enabled only when they are actually needed.",
 "label": "usage"
 }
 },
@@ -209,7 +207,7 @@
 "networkMapDescription": "Read the lab topology and the published network planes.",
 "networkMapLabel": "Network map"
 },
-"summary": "Unified control room for repository work, project coordination, design collaboration, private cloud access, optional AI tooling, and development environments. Browser ports stay on HTTPS over localhost, while Postgres from the workbench layer also exposes a host-side TCP port.",
+"summary": "Unified control room for repository work, project coordination, design collaboration, optional AI tooling, and development environments. Browser ports stay on HTTPS over localhost, while Postgres from the workbench layer also exposes a host-side TCP port.",
 "titleLines": {
 "first": "LAB",
 "second": "ATLAS"
@@ -255,13 +253,6 @@
 "description": "Internal Git forge for repositories, issues, review, and the lab's technical collaboration flow.",
 "title": "Gitea Forge"
 },
-"nextcloudAio": {
-"action": "Open setup UI",
-"description": "Self-hosted Nextcloud All-in-One stack routed through the lab gateway. It exposes the application on its dedicated URL once the guided AIO setup has completed.",
-"note": "Use the setup UI first. When the Nextcloud application asks for the initial admin account, use the credentials listed here. After AIO provisions the application containers, the main app URL becomes available on the dedicated gateway port.",
-"title": "Nextcloud All-in-One",
-"usage": "self-hosted deployment"
-},
 "penpot": {
 "action": "Open Penpot",
 "description": "Self-hosted Penpot workspace for product design, shared libraries, and collaboration across design and code, with a bootstrap root profile aligned to the lab.",
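The locale files diffed above are nested JSON trees that UI code typically addresses with dotted paths such as `dashboard.aiLayer.summaryEnabled` (the i18next-style convention). As a sketch of that lookup, assuming a dotted-path resolver: `resolveKey` is a hypothetical helper written for illustration, not the dashboard's actual i18n code, and only the sample keys and strings are copied from the diff.

```typescript
// Hypothetical dotted-path resolver over a nested locale tree.
type LocaleTree = { [key: string]: string | LocaleTree };

// Fragment of en.json after this commit (keys and values from the diff above).
const en: LocaleTree = {
  dashboard: {
    aiLayer: {
      capabilities: { n8n: "Local n8n automation" },
      summaryEnabled: "Open WebUI, Ollama, and n8n are active and served through the AI gateway.",
      title: "AI",
    },
  },
};

function resolveKey(tree: LocaleTree, dotted: string): string | undefined {
  let node: string | LocaleTree = tree;
  for (const part of dotted.split(".")) {
    if (typeof node === "string") return undefined; // path descends past a leaf
    const next: string | LocaleTree | undefined = node[part];
    if (next === undefined) return undefined; // missing key (e.g. removed nextcloudAio block)
    node = next;
  }
  return typeof node === "string" ? node : undefined;
}

console.log(resolveKey(en, "dashboard.aiLayer.title")); // → "AI"
```

Resolving a key this commit removed, such as `dashboard.services.nextcloudAio.title`, would now return `undefined`, which is why the dashboard code dropping the Nextcloud AIO card must land in the same commit as the locale cleanup.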

apps/atlas-dashboard/src/locales/it.json

Lines changed: 8 additions & 17 deletions
@@ -111,7 +111,6 @@
 "rootName": "nome root",
 "rootPassword": "password root",
 "rootUser": "utente root",
-"setupUrl": "URL setup",
 "superuser": "superuser",
 "wanModel": "modello wan",
 "usage": "uso"
@@ -123,11 +122,9 @@
 "designCollaboration": "collaborazione design",
 "directAppOnboarding": "onboarding diretto in app",
 "directAppLogin": "login diretto applicazione",
-"guidedSetup": "setup guidato",
 "imageStudio": "studio immagini",
 "localInferenceApi": "API di inference locale",
 "password": "password",
-"privateCloud": "cloud privato",
 "projectHub": "hub progetti",
 "protectedApi": "API protetta",
 "sharedDatabase": "database condiviso",
@@ -136,10 +133,10 @@
 "alwaysOnForge": "forge sempre accesa"
 },
 "dashboard": {
-"coreLayerSummary": "Gitea, Plane, Penpot e Nextcloud AIO formano il piano core sempre acceso del lab.",
+"coreLayerSummary": "Gitea, Plane e Penpot formano il piano core sempre acceso del lab.",
 "accessNotes": {
 "aiDisabled": "Il layer AI non viene piu acceso di default: il deck lo marca come opzionale invece di fingere che sia online.",
-"aiEnabled": "Open WebUI e Ollama sono realmente online e raggiungibili sulle porte AI del gateway.",
+"aiEnabled": "Open WebUI, Ollama e n8n sono online e raggiungibili sulle porte dedicate del gateway AI.",
 "credentials": "Le credenziali operative sono esposte qui e restano allineate al bootstrap del lab.",
 "https": "Tutti gli ingressi browser usano localhost con HTTPS dedicato, senza DNS custom o file hosts.",
 "workbenchDisabled": "Workbench e Postgres restano separati dal core operativo finche non abiliti il layer dedicato.",
@@ -148,18 +145,19 @@
 "aiLayer": {
 "capabilities": {
 "llmModels": "modelli LLM GPU-backed",
+"n8n": "automazione n8n locale",
 "ollama": "API Ollama protetta",
 "openWebUi": "Open WebUI locale"
 },
-"description": "Layer AI opzionale per console conversazionale locale e inference LLM GPU-backed. Il deck lo attiva solo quando lo chiedi esplicitamente.",
+"description": "Layer AI opzionale per console conversazionale locale, orchestrazione workflow e inference LLM GPU-backed. Il deck lo attiva solo quando lo chiedi esplicitamente.",
 "summaryDisabled": "Layer AI spento. Nessun servizio AI viene avviato o esposto finche non abiliti il flag dedicato.",
-"summaryEnabled": "Open WebUI e Ollama sono attivi e serviti dal gateway AI.",
+"summaryEnabled": "Open WebUI, Ollama e n8n sono attivi e serviti dal gateway AI.",
 "title": "AI"
 },
 "aiServices": {
 "n8n": {
 "action": "Apri n8n",
-"description": "Piattaforma di automazione workflow utile per orchestrare integrazioni, agenti e flussi AI anche fuori dal runtime locale del lab.",
+"description": "Piattaforma locale di automazione workflow per orchestrare integrazioni, agenti e flussi AI, con owner bootstrap allineato al runtime del lab.",
 "title": "n8n"
 },
 "ollama": {
@@ -187,7 +185,7 @@
 "label": "segmentazione"
 },
 "usage": {
-"body": "Gitea, Plane, Penpot e Nextcloud AIO restano sempre attivi nel piano core; AI, AI image, AI video e workbench vengono abilitati a layer solo quando servono davvero.",
+"body": "Gitea, Plane e Penpot restano sempre attivi nel piano core; i layer AI e workbench vengono abilitati solo quando servono davvero.",
 "label": "uso"
 }
 },
@@ -209,7 +207,7 @@
 "networkMapDescription": "Leggi la topologia del lab e i piani di rete pubblicati.",
 "networkMapLabel": "Network map"
 },
-"summary": "Control room unificata per repository, coordinamento progetti, collaborazione design, cloud privato, strumenti AI opzionali e ambienti di sviluppo. Le porte browser restano HTTPS su localhost, mentre Postgres del layer workbench espone anche una porta TCP host-side.",
+"summary": "Control room unificata per repository, coordinamento progetti, collaborazione design, strumenti AI opzionali e ambienti di sviluppo. Le porte browser restano HTTPS su localhost, mentre Postgres del layer workbench espone anche una porta TCP host-side.",
 "titleLines": {
 "first": "LAB",
 "second": "ATLAS"
@@ -255,13 +253,6 @@
 "description": "Forge Git interna per repository, issue, review e flusso di collaborazione tecnica del lab.",
 "title": "Gitea Forge"
 },
-"nextcloudAio": {
-"action": "Apri setup UI",
-"description": "Stack Nextcloud All-in-One self-hosted instradato dal gateway del lab. Espone l'applicazione sul suo URL dedicato dopo il completamento del setup guidato AIO.",
-"note": "Apri prima la setup UI. Quando l'applicazione Nextcloud chiede l'account admin iniziale, usa le credenziali mostrate qui. Dopo che AIO ha predisposto i container applicativi, l'URL principale risponde sulla porta gateway dedicata.",
-"title": "Nextcloud All-in-One",
-"usage": "deployment self-hosted"
-},
 "penpot": {
 "action": "Apri Penpot",
 "description": "Workspace Penpot self-hosted per product design, librerie condivise e collaborazione tra design e codice, con profilo root allineato al bootstrap del lab.",
