> 🧭 **Atlas Lab** is a localhost-first self-hosted platform made of a Node.js/TypeScript CLI, a layered Docker Compose stack, and an operational React dashboard served by the gateway.
- > It is designed to provide Git hosting, optional automation agents, optional local AI LLM services, optional AI image and video generation, browser-based development workbenches, and structured image/volume backup workflows on a single machine.
+ > It is designed to provide Git hosting, optional local AI services with Open WebUI, Ollama, and n8n, browser-based development workbenches, and structured image/volume backup workflows on a single machine.

---
@@ -22,8 +22,8 @@ Atlas Lab is built for a practical goal: run a repeatable local engineering plat
### What it gives you

- 🧱 An always-on **core layer** with Gitea, the gateway, and Atlas Dashboard
- - 🧠 An optional **AI LLM layer** with Open WebUI and Ollama
- - 🛠️ An optional **workbench layer** with browser-based Node, Python, AI, and C++ environments plus shared PostgreSQL
+ - 🧠 An optional **AI LLM layer** with Open WebUI, Ollama, and n8n
+ - 🛠️ An optional **workbench layer** with browser-based Node and Python environments plus shared PostgreSQL
- 🔐 HTTPS-only ingress on `localhost`
- 📦 A self-contained npm package that can run without a local repository checkout
- 💾 Persistent state stored in named Docker volumes
@@ -67,7 +67,7 @@ Atlas Lab is split into **three explicit layers**:
| Layer | Status | Includes | Purpose |
| --- | --- | --- | --- |
| `core` | always on | gateway, Atlas Dashboard, Gitea, Gitea DB | baseline platform |
- | `ai-llm` | optional | Open WebUI, Ollama, AI LLM gateway | local LLM workflows |
+ | `ai-llm` | optional | Open WebUI, Ollama, n8n, AI gateway | local AI workflows and automation |
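The always-on/optional split above maps naturally onto Docker Compose profiles: core services carry no profile and always start, while optional layers start only when their profile is requested. A minimal sketch — service names, images, and the `ai-llm` profile name here are illustrative assumptions, not the project's actual compose files:

```yaml
# docker-compose.yml — hypothetical sketch of a layered stack.
# Services without a profile belong to the core layer and always start.
services:
  gateway:
    image: nginx:alpine          # core: HTTPS-only ingress on localhost
    ports:
      - "127.0.0.1:443:443"      # bound to localhost only
  gitea:
    image: gitea/gitea:latest    # core: Git forge
    volumes:
      - gitea-data:/data         # persistent state in a named volume
  ollama:
    image: ollama/ollama:latest
    profiles: ["ai-llm"]         # optional layer: started only on request
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    profiles: ["ai-llm"]

volumes:
  gitea-data:                    # survives container recreation
```

With this layout, `docker compose up -d` brings up only the core services, and `docker compose --profile ai-llm up -d` additionally starts the optional AI layer.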
`apps/atlas-dashboard/src/locales/en.json` — 8 additions & 17 deletions
@@ -111,7 +111,6 @@
"rootName": "root name",
"rootPassword": "root password",
"rootUser": "root user",
- "setupUrl": "setup URL",
"superuser": "superuser",
"wanModel": "wan model",
"usage": "usage"
@@ -123,11 +122,9 @@
"designCollaboration": "design collaboration",
"directAppOnboarding": "direct app onboarding",
"directAppLogin": "direct app login",
- "guidedSetup": "guided setup",
"imageStudio": "image studio",
"localInferenceApi": "local inference API",
"password": "password",
- "privateCloud": "private cloud",
"projectHub": "project hub",
"protectedApi": "protected API",
"sharedDatabase": "shared database",
@@ -136,10 +133,10 @@
"alwaysOnForge": "always-on forge"
},
"dashboard": {
- "coreLayerSummary": "Gitea, Plane, Penpot, and Nextcloud AIO form the always-on core plane of the lab.",
+ "coreLayerSummary": "Gitea, Plane, and Penpot form the always-on core plane of the lab.",
"accessNotes": {
"aiDisabled": "The AI layer no longer starts by default: the deck marks it as optional instead of pretending that it is online.",
- "aiEnabled": "Open WebUI and Ollama are really online and reachable on the AI gateway ports.",
+ "aiEnabled": "Open WebUI, Ollama, and n8n are online and reachable on the dedicated AI gateway ports.",
"credentials": "Operational credentials are exposed here and remain aligned with the lab bootstrap.",
"https": "All browser ingresses use localhost with dedicated HTTPS, without custom DNS or hosts-file edits.",
"workbenchDisabled": "Workbench and Postgres stay separated from the core operating plane until you enable the dedicated layer.",
@@ -148,18 +145,19 @@
"aiLayer": {
"capabilities": {
"llmModels": "GPU-backed LLM models",
+ "n8n": "Local n8n automation",
"ollama": "Protected Ollama API",
"openWebUi": "Local Open WebUI"
},
- "description": "Optional AI layer for local conversational workflows and GPU-backed LLM inference. The deck enables it only when you explicitly request it.",
+ "description": "Optional AI layer for local conversational workflows, workflow orchestration, and GPU-backed LLM inference. The deck enables it only when you explicitly request it.",
"summaryDisabled": "The AI layer is off. No AI service is started or exposed until you enable the dedicated flag.",
- "summaryEnabled": "Open WebUI and Ollama are active and served through the AI gateway.",
+ "summaryEnabled": "Open WebUI, Ollama, and n8n are active and served through the AI gateway.",
"title": "AI"
},
"aiServices": {
"n8n": {
"action": "Open n8n",
- "description": "Workflow automation platform for orchestrating integrations, agents, and AI flows, even outside the lab's local runtime.",
+ "description": "Local workflow automation platform for orchestrating integrations, agents, and AI flows, with the bootstrap owner account aligned to the lab runtime.",
"title": "n8n"
},
"ollama": {
@@ -187,7 +185,7 @@
"label": "segmentation"
},
"usage": {
- "body": "Gitea, Plane, Penpot, and Nextcloud AIO stay on as the core plane; AI, AI image, AI video, and workbench layers are enabled only when they are actually needed.",
+ "body": "Gitea, Plane, and Penpot stay on as the core plane; the AI and workbench layers are enabled only when they are actually needed.",
"label": "usage"
}
},
@@ -209,7 +207,7 @@
"networkMapDescription": "Read the lab topology and the published network planes.",
"networkMapLabel": "Network map"
},
- "summary": "Unified control room for repository work, project coordination, design collaboration, private cloud access, optional AI tooling, and development environments. Browser ports stay on HTTPS over localhost, while Postgres from the workbench layer also exposes a host-side TCP port.",
+ "summary": "Unified control room for repository work, project coordination, design collaboration, optional AI tooling, and development environments. Browser ports stay on HTTPS over localhost, while Postgres from the workbench layer also exposes a host-side TCP port.",
"titleLines": {
"first": "LAB",
"second": "ATLAS"
@@ -255,13 +253,6 @@
"description": "Internal Git forge for repositories, issues, review, and the lab's technical collaboration flow.",
"title": "Gitea Forge"
},
- "nextcloudAio": {
- "action": "Open setup UI",
- "description": "Self-hosted Nextcloud All-in-One stack routed through the lab gateway. It exposes the application on its dedicated URL once the guided AIO setup has completed.",
- "note": "Use the setup UI first. When the Nextcloud application asks for the initial admin account, use the credentials listed here. After AIO provisions the application containers, the main app URL becomes available on the dedicated gateway port.",
- "title": "Nextcloud All-in-One",
- "usage": "self-hosted deployment"
- },
"penpot": {
"action": "Open Penpot",
"description": "Self-hosted Penpot workspace for product design, shared libraries, and collaboration across design and code, with a bootstrap root profile aligned to the lab.",
`apps/atlas-dashboard/src/locales/it.json` — 8 additions & 17 deletions
@@ -111,7 +111,6 @@
"rootName": "nome root",
"rootPassword": "password root",
"rootUser": "utente root",
- "setupUrl": "URL setup",
"superuser": "superuser",
"wanModel": "modello wan",
"usage": "uso"
@@ -123,11 +122,9 @@
"designCollaboration": "collaborazione design",
"directAppOnboarding": "onboarding diretto in app",
"directAppLogin": "login diretto applicazione",
- "guidedSetup": "setup guidato",
"imageStudio": "studio immagini",
"localInferenceApi": "API di inference locale",
"password": "password",
- "privateCloud": "cloud privato",
"projectHub": "hub progetti",
"protectedApi": "API protetta",
"sharedDatabase": "database condiviso",
@@ -136,10 +133,10 @@
"alwaysOnForge": "forge sempre accesa"
},
"dashboard": {
- "coreLayerSummary": "Gitea, Plane, Penpot e Nextcloud AIO formano il piano core sempre acceso del lab.",
+ "coreLayerSummary": "Gitea, Plane e Penpot formano il piano core sempre acceso del lab.",
"accessNotes": {
"aiDisabled": "Il layer AI non viene piu acceso di default: il deck lo marca come opzionale invece di fingere che sia online.",
- "aiEnabled": "Open WebUI e Ollama sono realmente online e raggiungibili sulle porte AI del gateway.",
+ "aiEnabled": "Open WebUI, Ollama e n8n sono online e raggiungibili sulle porte dedicate del gateway AI.",
"credentials": "Le credenziali operative sono esposte qui e restano allineate al bootstrap del lab.",
"https": "Tutti gli ingressi browser usano localhost con HTTPS dedicato, senza DNS custom o file hosts.",
"workbenchDisabled": "Workbench e Postgres restano separati dal core operativo finche non abiliti il layer dedicato.",
@@ -148,18 +145,19 @@
"aiLayer": {
"capabilities": {
"llmModels": "modelli LLM GPU-backed",
+ "n8n": "automazione n8n locale",
"ollama": "API Ollama protetta",
"openWebUi": "Open WebUI locale"
},
- "description": "Layer AI opzionale per console conversazionale locale e inference LLM GPU-backed. Il deck lo attiva solo quando lo chiedi esplicitamente.",
+ "description": "Layer AI opzionale per console conversazionale locale, orchestrazione workflow e inference LLM GPU-backed. Il deck lo attiva solo quando lo chiedi esplicitamente.",
"summaryDisabled": "Layer AI spento. Nessun servizio AI viene avviato o esposto finche non abiliti il flag dedicato.",
- "summaryEnabled": "Open WebUI e Ollama sono attivi e serviti dal gateway AI.",
+ "summaryEnabled": "Open WebUI, Ollama e n8n sono attivi e serviti dal gateway AI.",
"title": "AI"
},
"aiServices": {
"n8n": {
"action": "Apri n8n",
- "description": "Piattaforma di automazione workflow utile per orchestrare integrazioni, agenti e flussi AI anche fuori dal runtime locale del lab.",
+ "description": "Piattaforma locale di automazione workflow per orchestrare integrazioni, agenti e flussi AI, con owner bootstrap allineato al runtime del lab.",
"title": "n8n"
},
"ollama": {
@@ -187,7 +185,7 @@
"label": "segmentazione"
},
"usage": {
- "body": "Gitea, Plane, Penpot e Nextcloud AIO restano sempre attivi nel piano core; AI, AI image, AI video e workbench vengono abilitati a layer solo quando servono davvero.",
+ "body": "Gitea, Plane e Penpot restano sempre attivi nel piano core; i layer AI e workbench vengono abilitati solo quando servono davvero.",
"label": "uso"
}
},
@@ -209,7 +207,7 @@
"networkMapDescription": "Leggi la topologia del lab e i piani di rete pubblicati.",
"networkMapLabel": "Network map"
},
- "summary": "Control room unificata per repository, coordinamento progetti, collaborazione design, cloud privato, strumenti AI opzionali e ambienti di sviluppo. Le porte browser restano HTTPS su localhost, mentre Postgres del layer workbench espone anche una porta TCP host-side.",
+ "summary": "Control room unificata per repository, coordinamento progetti, collaborazione design, strumenti AI opzionali e ambienti di sviluppo. Le porte browser restano HTTPS su localhost, mentre Postgres del layer workbench espone anche una porta TCP host-side.",
"titleLines": {
"first": "LAB",
"second": "ATLAS"
@@ -255,13 +253,6 @@
"description": "Forge Git interna per repository, issue, review e flusso di collaborazione tecnica del lab.",
"title": "Gitea Forge"
},
- "nextcloudAio": {
- "action": "Apri setup UI",
- "description": "Stack Nextcloud All-in-One self-hosted instradato dal gateway del lab. Espone l'applicazione sul suo URL dedicato dopo il completamento del setup guidato AIO.",
- "note": "Apri prima la setup UI. Quando l'applicazione Nextcloud chiede l'account admin iniziale, usa le credenziali mostrate qui. Dopo che AIO ha predisposto i container applicativi, l'URL principale risponde sulla porta gateway dedicata.",
- "title": "Nextcloud All-in-One",
- "usage": "deployment self-hosted"
- },
"penpot": {
"action": "Apri Penpot",
"description": "Workspace Penpot self-hosted per product design, librerie condivise e collaborazione tra design e codice, con profilo root allineato al bootstrap del lab.",