Commit baf8606

Authored by lsoldado, Davidson Gomes, and claude
fix(plugins+triggers): private-repo update flow + ClickUp webhook compat + DetachedInstanceError (#51)
* refactor(telegram): centralize notifications in routines, remove from skills

  Move Telegram reply() out of skill SKILL.md files into the routine .py
  callers via notify_telegram=True on run_skill(). This guarantees exactly
  one send per execution — the instruction is appended at the end of the
  prompt after all skill steps, so the agent cannot send it early.

  - runner.py: add notify_telegram param to run_skill() — reads chat_id from
    TELEGRAM_CHAT_ID env, appends explicit one-shot instruction
  - Skills cleaned: prod-end-of-day, prod-good-morning, pulse-faq-sync,
    pulse-daily (custom files gitignored, updated locally)
  - Routines updated: end_of_day, good_morning (custom routines gitignored)

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(plugins): support auth_token on update/preview and update endpoints for private repos

  The install endpoint (POST /api/plugins/install and /api/plugins/preview)
  already accepts `auth_token` in the JSON body and forwards it to
  `PluginInstaller.resolve_source`, enabling installs from private GitHub
  repos. However, the update flow had no path for the token:

  - `GET /api/plugins/<slug>/update/preview` has no request body, and the
    `_compute_preview()` helper called `resolve_source(source_url)` without
    a token — any private-repo candidate failed with HTTP 500 `fetch_failed`.
  - `POST /api/plugins/<slug>/update` accepted a JSON body but read only
    `source_url`, ignoring any `auth_token` the caller might have supplied.

  This meant once a plugin was installed from a private repo, the user could
  not preview or apply updates without uninstalling and reinstalling — the
  UI offered no way out.

  Changes:

  **Backend** (`dashboard/backend/routes/plugins.py`)

  - `_compute_preview(slug, source_url, auth_token=None)` — new optional
    param, forwarded to `PluginInstaller.resolve_source(...)`.
  - `preview_plugin_update` (GET) — reads `X-Plugin-Auth-Token` header
    (header rather than query param to keep the PAT out of access logs) and
    passes it down.
    Cache key extended to `(slug, source_url, bool(auth_token))` so
    private-repo previews are not served to unauthenticated callers.
  - `update_plugin` (POST) — reads `auth_token` from the JSON body (same
    shape as `/api/plugins/install`), falling back to the
    `X-Plugin-Auth-Token` header for callers that reuse the header from
    preview.

  **Frontend** (`UpdatePreviewModal.tsx`, `lib/api.ts`)

  - `api.get(path, extraHeaders?)` — optional `extraHeaders` parameter.
  - `UpdatePreviewModal` — collapsible "🔒 Private repository? (optional
    GitHub PAT)" section, rendered when `sourceUrl` starts with `github:`.
    Auto-opens when the initial preview fails with 401/404/fetch_failed. A
    password-type input + Retry button re-runs the preview with the token
    as the `X-Plugin-Auth-Token` header. On apply, the token is added to
    the POST body as `auth_token`.
  - Token lives only in component state — never persisted, never logged.

  No behavior change for public-repo plugins. Fully backwards compatible.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(plugins): editable source override + clearer ref-not-found errors

  Two related improvements to the update flow surfaced while validating the
  auth_token fix:

  1. Editable source override in UpdatePreviewModal

     The modal previously used the install-time `source_url` verbatim. When
     a plugin was installed pinned to a specific tag (e.g. `@v0.1.0`), the
     preview always fetched that exact tag — newer releases were invisible
     without uninstalling/reinstalling or editing the DB by hand.

     Adds a collapsible "📦 Source <current>" section with an editable
     input and a Preview button. The user can point at `@main`, a newer
     tag, or a different branch and re-run the diff in place. The override
     is also forwarded to POST /api/plugins/<slug>/update so apply uses the
     same resolved source. Override lives in component state only.

  2.
     Unified branch-and-tag ref-not-found error in plugin_loader

     `resolve_source` and `resolve_source_with_sha` already try
     `refs/heads/<ref>` first, falling back to `refs/tags/<ref>`. When both
     fail, the previous behavior surfaced only the second (tag) error — so
     a typo in a branch name produced a confusing "refs/tags/...: 404"
     message.

     Catches both `RuntimeError`s and raises a unified message:

         ref '<ref>' not found in <owner>/<repo> (tried branches and tags).
         Branch: <branch_err>. Tag: <tag_err>.

     This also disambiguates ref-not-found from private-repo auth failures
     (which now have a working path via the auth_token header from the
     previous commit in this PR).

  Verified end-to-end on 0.32.x: a bad ref produces the new unified error,
  a good ref still resolves identically, and the source override box lets
  the modal upgrade a `@v0.1.1`-pinned install to `@main` without touching
  the DB.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(triggers): accept X-Signature header + capture IDs before thread handoff

  Two issues surfaced while wiring a ClickUp webhook to a `source: custom`
  trigger end-to-end:

  1. ClickUp signs requests with `X-Signature` (HMAC-SHA256, hex), not
     `X-Webhook-Signature`. `_validate_webhook_signature` in the `else`
     branch only checked the latter, so every ClickUp POST was silently
     rejected and the action never ran. The receiver still returned 200
     (uniform response, F6), so it looked like webhooks were being
     delivered while in reality nothing fired.

     Fix: try `X-Webhook-Signature` first, fall back to `X-Signature`. Both
     carry the same hex HMAC-SHA256 of the raw body (with optional
     `sha256=` prefix that gets stripped). This unblocks ClickUp without
     adding a dedicated source enum value.

  2. DetachedInstanceError on every successful webhook + every test run.
     Both `webhook_receiver` and `test_trigger` did:

         db.session.commit()
         def _run():
             with app.app_context():
                 _execute_trigger(trigger.id, execution.id, ...)
         Thread(target=_run).start()

     The `trigger.id` / `execution.id` accesses run inside the worker
     thread, AFTER the request scope tears down and the SQLAlchemy session
     closes — triggering DetachedInstanceError. The thread crashed before
     `_execute_trigger` could even mark status='running', leaving rows in
     `pending` forever.

     Fix: snapshot the IDs (plain ints) before starting the thread, pass
     those to `_execute_trigger`. Same pattern already used elsewhere; the
     fix is local to the two call sites.

  Verified end-to-end on 0.32.x with a real ClickUp webhook firing
  `taskAssigneeUpdated`: handler runs, posts a comment back to the task,
  adds `jarvis-working` tag. trigger_executions.status transitions
  pending → running → completed instead of remaining pending.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(terminal): proxy terminal-server through Flask + drop direct cross-port fetch

  Browsers connecting to the dashboard from any host other than the one
  running the Node terminal-server hit three different walls before
  reaching its random port (default 32352):

  1. CSP — the dashboard sets `connect-src 'self'`. Even when the page is
     served from `localhost:8080` and the terminal-server is on the same
     machine at `localhost:32352`, the browser refuses the cross-port fetch
     with: "Refused to connect because it violates the document's Content
     Security Policy."

  2. CORS preflight — different ports are different origins; the
     terminal-server doesn't currently emit the headers needed to satisfy
     a preflight from a non-trivial fetch.

  3. Reachability — SSH tunnels (`ssh -L 8080:localhost:8080`) and reverse
     proxies (Tailscale Funnel, nginx in front of a private host) typically
     only expose the dashboard port. The dynamic terminal-server port is
     unreachable from the browser entirely.

  The fix is to mount an HTTP+WebSocket proxy on the dashboard's Flask app
  at `/terminal/*` and route all consumers through it. Same origin → CSP
  passes, no preflight, single port to expose.
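The ID-snapshot pattern from the triggers fix earlier in this commit message can be sketched standalone. This is a hedged illustration with hypothetical names: `Row` stands in for a SQLAlchemy model whose session closes when the request ends.

```python
import threading

results = []

class Row:
    """Stand-in for an ORM row; raises once its session has 'closed'."""
    def __init__(self, id_):
        self._id = id_
        self.closed = False

    @property
    def id(self):
        if self.closed:
            raise RuntimeError("DetachedInstanceError (simulated)")
        return self._id

trigger, execution = Row(7), Row(42)

# Snapshot plain ints BEFORE starting the thread; the worker must never
# touch the ORM objects after the request scope tears down.
trigger_id, execution_id = trigger.id, execution.id

def _run(tid: int, eid: int) -> None:
    results.append((tid, eid))  # worker only ever sees plain ints

t = threading.Thread(target=_run, args=(trigger_id, execution_id))
trigger.closed = execution.closed = True  # request scope tears down
t.start()
t.join()
# results == [(7, 42)]
```

Had `_run` closed over `trigger` and read `trigger.id` inside the thread, it would raise after `closed` flips, which is the simulated analogue of the DetachedInstanceError described above.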
  Backend (`dashboard/backend/routes/terminal_proxy.py`)

  - New blueprint with HTTP catch-all that forwards method, headers (minus
    hop-by-hop), query string, and streamed body to
    `http://127.0.0.1:32352/<path>` via `requests`. Auth gated by
    `@login_required` — only logged-in users reach the upstream.
  - `register_websocket_proxy(sock)` registers `/terminal/ws` on the
    flask-sock instance, opening one upstream WS per client and pumping
    bytes both directions on a daemon thread.
  - Upstream host/port overridable via `TERMINAL_SERVER_HOST` /
    `TERMINAL_SERVER_PORT` env vars. Adds `websocket-client` dep.

  Backend (`dashboard/backend/app.py`)

  - Imports the new blueprint.
  - After `register_blueprint(triggers_bp)`: registers `terminal_proxy_bp`,
    instantiates a `Sock(app)`, and calls `register_websocket_proxy()`.
    Swallows ImportError so the dashboard still boots if `websocket-client`
    or `flask-sock` is missing — terminal features just stay
    direct-connect.

  Frontend (`UpdatePreviewModal.tsx`, `lib/api.ts`, `lib/terminal-url.ts`,
  `components/AgentTerminal.tsx`, `pages/AgentDetail.tsx`)

  - Three places had their own `isLocal ? "<host>:32352" : "/terminal"`
    ternary. Each one drove a different code path that bypassed the proxy
    on private/loopback hostnames — the exact case CSP blocks. Collapsed
    all three to the same rule: production builds always go through
    `/terminal`. Vite dev (`npm run dev`, no proxy) keeps the direct path
    as before.
  - Renamed the boolean from `isLocal` to `isViteDev` to make the intent
    unambiguous.

  Verified end-to-end on 0.32.x: dashboard at `http://localhost:8080`
  loads, navigates to an agent, and opens a terminal session over the
  proxy. CSP no longer fires, no DevTools console errors, and the WS proxy
  tunnels keystrokes / output bidirectionally. Same setup also works behind
  a Tailscale Funnel public URL.
  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(terminal): require auth on /terminal/ws to prevent unauthenticated PTY access

  The dashboard's `auth_middleware` only gates `/api/*` and `/ws/*` paths,
  so the new terminal proxy mounted at `/terminal/ws` was reachable without
  authentication. Anyone able to hit the dashboard (LAN, Tailscale Funnel,
  Cloudflare Tunnel, public VPS — exactly the scenarios this PR intended to
  support) could open a PTY on the host.

  Add an explicit `current_user.is_authenticated` guard inside `proxy_ws`
  mirroring the `@login_required` decorator already on the HTTP path.
  flask-login reads the session cookie from the WebSocket upgrade request,
  so the check works the same way it does for normal routes.

  Also drop two unused imports (`urlparse`, `WebSocketApp`) flagged during
  review.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Davidson Gomes <davidsongviolao@gmail.com>
Co-authored-by: Davidson Gomes <davidson.gomes@etus.com.br>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
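The `X-Signature` fallback described in the triggers commit can be sketched in isolation. Hedged: `validate_webhook_signature` below is an illustrative stand-in for the project's `_validate_webhook_signature`, not its actual code.

```python
import hashlib
import hmac

def validate_webhook_signature(headers: dict, raw_body: bytes, secret: str) -> bool:
    """Prefer X-Webhook-Signature, fall back to ClickUp's X-Signature.
    Both carry a hex HMAC-SHA256 of the raw body, optionally prefixed
    with 'sha256='. compare_digest gives a constant-time comparison."""
    provided = headers.get("X-Webhook-Signature") or headers.get("X-Signature")
    if not provided:
        return False
    if provided.startswith("sha256="):
        provided = provided[len("sha256="):]
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(provided, expected)
```

Validating against the raw request body (not re-serialized JSON) matters: any reformatting of the payload changes the digest.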
1 parent 27ef039 · commit baf8606

10 files changed: 501 additions & 58 deletions

File tree

dashboard/backend/app.py

Lines changed: 20 additions & 0 deletions

```diff
@@ -834,6 +834,7 @@ def auth_middleware():
     from routes.mempalace import bp as mempalace_bp
     from routes.tasks import bp as tasks_bp
     from routes.triggers import bp as triggers_bp
+    from routes.terminal_proxy import bp as terminal_proxy_bp, register_websocket_proxy as _register_terminal_ws
     from routes.backups import bp as backups_bp
     from routes.providers import bp as providers_bp
     from routes.settings import bp as settings_bp
@@ -887,6 +888,25 @@ def auth_middleware():
     app.register_blueprint(mempalace_bp)
     app.register_blueprint(tasks_bp)
     app.register_blueprint(triggers_bp)
+    app.register_blueprint(terminal_proxy_bp)
+
+    # Mount the terminal-server WebSocket proxy on the same Sock instance the
+    # rest of the app uses. Done after the blueprint is registered so route
+    # names are unique. Without this, browsers connecting from a host other
+    # than the one running the Node terminal-server (LAN, Tailscale Funnel,
+    # SSH tunnel without the dynamic port forwarded) cannot reach it directly
+    # due to CORS preflight + private-network-access policies.
+    try:
+        from flask_sock import Sock as _Sock
+        _terminal_sock = _Sock(app)
+        _register_terminal_ws(_terminal_sock)
+    except Exception as _exc:
+        import logging as _logging
+        _logging.getLogger(__name__).warning(
+            "terminal_proxy: failed to mount WebSocket proxy: %s — terminal "
+            "interactions will require direct access to the terminal-server port.",
+            _exc,
+        )
     app.register_blueprint(backups_bp)
     app.register_blueprint(providers_bp)
     app.register_blueprint(settings_bp)
```

dashboard/backend/plugin_loader.py

Lines changed: 32 additions & 8 deletions

```diff
@@ -290,10 +290,23 @@ def resolve_source(source_url: str, auth_token: str | None = None) -> Path:
        staging_slug = f"{owner}-{repo}-{ref}".replace("/", "-")
        try:
            return PluginInstaller.fetch_from_tarball(tar_url, staging_slug, auth_token=auth_token)
-        except RuntimeError:
-            # fallback: try as tag ref
+        except RuntimeError as branch_err:
+            # fallback: try as tag ref. If that also fails, surface a
+            # unified error so the caller knows both branch and tag
+            # namespaces were tried — otherwise the bare
+            # "refs/tags/<ref>: 404" message is misleading when the ref
+            # was actually a branch name.
            tar_url_tag = f"https://codeload.github.com/{owner}/{repo}/tar.gz/refs/tags/{ref}"
-            return PluginInstaller.fetch_from_tarball(tar_url_tag, staging_slug, auth_token=auth_token)
+            try:
+                return PluginInstaller.fetch_from_tarball(
+                    tar_url_tag, staging_slug, auth_token=auth_token
+                )
+            except RuntimeError as tag_err:
+                raise RuntimeError(
+                    f"ref '{ref}' not found in {owner}/{repo} "
+                    f"(tried branches and tags). "
+                    f"Branch: {branch_err}. Tag: {tag_err}."
+                ) from tag_err

    if s.startswith("https://"):
        # Use a safe staging slug derived from the URL
@@ -477,12 +490,23 @@ def resolve_source_with_sha(
                tar_url, staging_slug, auth_token=auth_token
            )
            return path, sha
-        except RuntimeError:
+        except RuntimeError as branch_err:
            tar_url_tag = f"https://codeload.github.com/{owner}/{repo}/tar.gz/refs/tags/{ref}"
-            path, sha = PluginInstaller.fetch_from_tarball_with_sha(
-                tar_url_tag, staging_slug, auth_token=auth_token
-            )
-            return path, sha
+            try:
+                path, sha = PluginInstaller.fetch_from_tarball_with_sha(
+                    tar_url_tag, staging_slug, auth_token=auth_token
+                )
+                return path, sha
+            except RuntimeError as tag_err:
+                # Both branch and tag namespaces exhausted — raise a
+                # unified error rather than the bare tag 404 so callers
+                # can distinguish ref-not-found from private-repo auth
+                # failures. Kept in sync with ``resolve_source`` above.
+                raise RuntimeError(
+                    f"ref '{ref}' not found in {owner}/{repo} "
+                    f"(tried branches and tags). "
+                    f"Branch: {branch_err}. Tag: {tag_err}."
+                ) from tag_err

    # Plain https:// tarball
    staging_slug = re.sub(r"[^a-zA-Z0-9_-]+", "-", s)[-80:]
```
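Reduced to a standalone sketch, the fallback-then-unify shape added above looks like this. The names are hypothetical: `fetch` stands in for `PluginInstaller.fetch_from_tarball`, and `resolve_ref` is not a function in the codebase.

```python
def resolve_ref(owner: str, repo: str, ref: str, fetch) -> str:
    """Try the ref as a branch, then as a tag; if both fail, raise one
    unified error naming both attempts instead of only the tag 404."""
    try:
        return fetch(f"refs/heads/{ref}")
    except RuntimeError as branch_err:
        try:
            return fetch(f"refs/tags/{ref}")
        except RuntimeError as tag_err:
            raise RuntimeError(
                f"ref '{ref}' not found in {owner}/{repo} "
                f"(tried branches and tags). "
                f"Branch: {branch_err}. Tag: {tag_err}."
            ) from tag_err
```

Chaining with `from tag_err` preserves the original tracebacks for debugging while callers see a single, unambiguous message.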

dashboard/backend/routes/plugins.py

Lines changed: 38 additions & 6 deletions

```diff
@@ -2158,7 +2158,10 @@ def serve_widget(slug: str, subpath: str):
 # ---------------------------------------------------------------------------

 # Module-level preview cache: key=(slug, source_url), value=(fetched_at_float, result_dict)
-_PREVIEW_CACHE: dict[tuple[str, str], tuple[float, dict]] = {}
+# Cache key: (slug, source_url, has_auth_token). has_auth_token flag prevents
+# leaking private-repo previews to unauthenticated callers — the value of the
+# token is deliberately not part of the key (no secrets in memory indices).
+_PREVIEW_CACHE: dict[tuple[str, str, bool], tuple[float, dict]] = {}
 _PREVIEW_CACHE_LOCK = threading.Lock()
 _PREVIEW_CACHE_TTL = 300  # seconds
@@ -2264,7 +2267,9 @@ def _extract_ids_and_entries(manifest: dict) -> dict[str, dict[str, Any]]:
     return added, removed, modified


-def _compute_preview(slug: str, source_url: str) -> dict:
+def _compute_preview(
+    slug: str, source_url: str, auth_token: str | None = None
+) -> dict:
     """Fetch candidate manifest and compute diff against installed manifest.

     Pure read-only: no DB writes, no file writes to plugins/{slug}/.
@@ -2273,6 +2278,12 @@ def _compute_preview(slug: str, source_url: str) -> dict:
     tarball_sha7 derivation: SHA256 of sorted manifest.files SHAs concatenated.
     If manifest.files is empty, falls back to SHA256 of serialized manifest JSON.
     First 7 chars of the hex digest are returned (deterministic, no re-download needed).
+
+    Args:
+        slug: Installed plugin slug.
+        source_url: Candidate source (github:..., https://...).
+        auth_token: Optional GitHub PAT — required for private repos. Sent as
+            ``Authorization: token <pat>`` header by ``resolve_source``.
     """
     from plugin_schema import load_plugin_manifest
     from plugin_loader import PluginInstaller, _parse_version
@@ -2316,7 +2327,7 @@ def _compute_preview(slug: str, source_url: str) -> dict:

     # Resolve candidate source (may hit network / tmp — outside the lock)
     try:
-        new_plugin_dir = PluginInstaller.resolve_source(source_url)
+        new_plugin_dir = PluginInstaller.resolve_source(source_url, auth_token=auth_token)
     except ValueError as exc:
         raise ValueError(f"invalid_source: {exc}") from exc
     except RuntimeError as exc:
@@ -2422,6 +2433,11 @@ def preview_plugin_update(slug: str):
     """Read-only diff preview before applying an update.

     Query param: ?source=<url> (defaults to installed source_url when omitted)
+    Optional header ``X-Plugin-Auth-Token``: GitHub PAT required to fetch
+    candidate from a private repository (same semantics as ``auth_token`` on
+    ``POST /api/plugins/preview``). Kept out of the query string so it does
+    not leak into access logs.
+
     Returns 200 with diff JSON (or up_to_date: true).
     Never writes to disk or DB.
     """
@@ -2440,7 +2456,12 @@ def preview_plugin_update(slug: str):
     if not source_url:
         return jsonify({"error": "invalid_source", "message": "No source URL provided and none stored"}), 400

-    cache_key = (slug, source_url)
+    # Optional GitHub PAT for private repos (header only — never logged)
+    auth_token = request.headers.get("X-Plugin-Auth-Token") or None
+
+    # Cache key includes auth_token presence (not value) to avoid sharing
+    # private-repo results across unauthenticated requests.
+    cache_key = (slug, source_url, bool(auth_token))

     # Cache read — lock only around dict access, not network I/O
     with _PREVIEW_CACHE_LOCK:
@@ -2452,7 +2473,7 @@ def preview_plugin_update(slug: str):

     # Cache miss — compute outside lock
     try:
-        result = _compute_preview(slug, source_url)
+        result = _compute_preview(slug, source_url, auth_token=auth_token)
     except ValueError as exc:
         msg = str(exc)
         if msg.startswith("invalid_source:"):
@@ -2543,10 +2564,21 @@ def update_plugin(slug: str):
     data = request.get_json(force=True, silent=True) or {}
     source_url = data.get("source_url", installed_source)

+    # Optional GitHub PAT for private repos. Mirrors /api/plugins/install —
+    # body field first, falling back to ``X-Plugin-Auth-Token`` header for
+    # callers that reuse the same header they sent to update/preview.
+    auth_token = (
+        data.get("auth_token")
+        or request.headers.get("X-Plugin-Auth-Token")
+        or None
+    )
+
     # 3. Resolve new plugin source (accepts local path, github:..., https://...)
     from plugin_loader import PluginInstaller
     try:
-        new_plugin_dir = PluginInstaller.resolve_source(source_url)
+        new_plugin_dir = PluginInstaller.resolve_source(
+            source_url, auth_token=auth_token
+        )
     except ValueError as exc:
         return jsonify({"error": "invalid_source", "message": str(exc)}), 400
     except RuntimeError as exc:
```
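The cache behavior in the `preview_plugin_update` hunk can be sketched as a standalone wrapper. `cached_preview` is an illustrative helper, not the route's actual code; the point is that the key carries only the token's presence, never its value.

```python
import threading
import time

# Mirrors the diff above: value=(fetched_at, result), TTL-bounded,
# keyed on (slug, source_url, has_auth_token).
_PREVIEW_CACHE: dict[tuple[str, str, bool], tuple[float, dict]] = {}
_PREVIEW_CACHE_LOCK = threading.Lock()
_PREVIEW_CACHE_TTL = 300  # seconds

def cached_preview(slug: str, source_url: str, auth_token, compute):
    key = (slug, source_url, bool(auth_token))
    now = time.time()
    with _PREVIEW_CACHE_LOCK:
        hit = _PREVIEW_CACHE.get(key)
        if hit and now - hit[0] < _PREVIEW_CACHE_TTL:
            return hit[1]
    # Network I/O happens outside the lock, as in the route.
    result = compute(slug, source_url, auth_token)
    with _PREVIEW_CACHE_LOCK:
        _PREVIEW_CACHE[key] = (now, result)
    return result
```

A consequence of keying on `bool(auth_token)` is that authenticated previews with different tokens share one cache slot; the trade-off accepted in the commit is that token values never appear in a memory index.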
dashboard/backend/routes/terminal_proxy.py (new file)

Lines changed: 204 additions & 0 deletions

```python
"""Proxy HTTP and WebSocket traffic to the local terminal-server.

The terminal-server (Node, dashboard/terminal-server/bin/server.js) binds to
a random port (commonly 32352) on 0.0.0.0. Browsers connecting to the
dashboard from a different host than `localhost` historically had to hit
that port directly, which fails in three common scenarios:

1. Browsing via SSH tunnel (`ssh -L 8080:localhost:8080`) — only port 8080
   is forwarded, the random terminal-server port is not.
2. Browsing via a public tunnel (Tailscale Funnel, Cloudflare Tunnel, an
   nginx reverse proxy on a VPS) — only the dashboard port is exposed.
3. Browsing on a LAN where macOS Application Firewall hasn't whitelisted
   the random port for the Node binary.

Mounting a proxy on the same Flask app the user is already authenticated
to fixes all three: the terminal-server is reachable wherever the
dashboard is reachable, on the same origin, with no extra ports to
expose.

This module is intentionally minimal — it forwards bytes both ways for
HTTP and WebSocket; it does not inspect or rewrite payloads.
"""

from __future__ import annotations

import logging
import os
import threading

import requests
from flask import Blueprint, Response, request, stream_with_context
from flask_login import current_user, login_required

log = logging.getLogger(__name__)

bp = Blueprint("terminal_proxy", __name__)

# Where the local terminal-server lives. The Node script defaults to 32352
# but can be overridden — keep this in sync via env var.
TERMINAL_HOST = os.environ.get("TERMINAL_SERVER_HOST", "127.0.0.1")
TERMINAL_PORT = int(os.environ.get("TERMINAL_SERVER_PORT", "32352"))
TERMINAL_HTTP_BASE = f"http://{TERMINAL_HOST}:{TERMINAL_PORT}"
TERMINAL_WS_BASE = f"ws://{TERMINAL_HOST}:{TERMINAL_PORT}"

# Hop-by-hop headers that must not be forwarded (RFC 7230 §6.1).
_HOP_BY_HOP = frozenset(
    {
        "connection",
        "keep-alive",
        "proxy-authenticate",
        "proxy-authorization",
        "te",
        "trailers",
        "transfer-encoding",
        "upgrade",
        "host",
        "content-length",  # let requests/Flask compute it
    }
)


def _forward_headers(src: dict[str, str]) -> dict[str, str]:
    """Strip hop-by-hop headers before forwarding either direction."""
    return {k: v for k, v in src.items() if k.lower() not in _HOP_BY_HOP}


# ---------------------------------------------------------------------------
# HTTP proxy — covers /api/health, /api/sessions/*, /api/notifications/*, etc.
# ---------------------------------------------------------------------------


@bp.route(
    "/terminal/<path:subpath>",
    methods=["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"],
)
@bp.route("/terminal", methods=["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"])
@login_required
def proxy_http(subpath: str = ""):
    """Forward HTTP traffic to the local terminal-server."""
    target = f"{TERMINAL_HTTP_BASE}/{subpath}"
    if request.query_string:
        target = f"{target}?{request.query_string.decode('latin-1')}"

    try:
        upstream = requests.request(
            method=request.method,
            url=target,
            headers=_forward_headers(dict(request.headers)),
            data=request.get_data(),
            allow_redirects=False,
            stream=True,
            timeout=30,
        )
    except requests.exceptions.ConnectionError:
        return (
            "Terminal-server is not running. Start it via `make terminal-server` "
            "or `node dashboard/terminal-server/bin/server.js --dev`.",
            503,
        )
    except requests.exceptions.Timeout:
        return "Terminal-server timed out.", 504

    # Pass through status, body, headers (minus hop-by-hop).
    response = Response(
        stream_with_context(upstream.iter_content(chunk_size=8192)),
        status=upstream.status_code,
    )
    for key, value in upstream.headers.items():
        if key.lower() not in _HOP_BY_HOP:
            response.headers[key] = value
    return response


# ---------------------------------------------------------------------------
# WebSocket proxy — terminal stream + notifications
# ---------------------------------------------------------------------------
# Registered at app-creation time via `register_websocket_proxy(sock)` so
# we can use the shared `flask_sock.Sock` instance the rest of the app uses.


def register_websocket_proxy(sock) -> None:
    """Register the /terminal/ws WebSocket proxy on the given Sock instance.

    Why not a plain `@bp.route` decorator: flask-sock requires its own
    `@sock.route(...)` decorator, and the Sock instance is created in
    ``app.py``. Calling this from there keeps the dependency one-way.
    """
    try:
        from websocket import create_connection  # type: ignore
    except ImportError:
        log.warning(
            "terminal_proxy.register_websocket_proxy: websocket-client not "
            "installed; WebSocket proxy disabled. Add `websocket-client` "
            "to dependencies."
        )
        return

    @sock.route("/terminal/ws")
    def proxy_ws(client_ws):
        """Bidirectional bridge: browser <-> Flask <-> terminal-server.

        Auth: the global ``auth_middleware`` only gates ``/api/*`` and
        ``/ws/*`` paths, so a request to ``/terminal/ws`` is *not* checked
        upstream. Without the explicit ``current_user.is_authenticated``
        guard below, anyone able to reach the dashboard (LAN, Tailscale
        Funnel, Cloudflare Tunnel, public VPS) could open a PTY on the
        host. flask-login's session cookie is read from the WS upgrade
        request, so this works the same as ``@login_required`` on HTTP
        routes.
        """
        if not current_user.is_authenticated:
            try:
                client_ws.close(reason="auth required")
            except Exception:
                pass
            return

        target = f"{TERMINAL_WS_BASE}/ws"
        try:
            upstream = create_connection(target, timeout=10)
        except Exception as exc:
            log.warning("terminal_proxy: upstream WS connect failed: %s", exc)
            try:
                client_ws.close(reason=f"upstream unreachable: {exc}")
            except Exception:
                pass
            return

        stop = threading.Event()

        def _pump_upstream_to_client():
            try:
                while not stop.is_set():
                    msg = upstream.recv()
                    if msg is None or msg == b"":
                        break
                    if isinstance(msg, bytes):
                        client_ws.send(msg)
                    else:
                        client_ws.send(msg)
            except Exception:
                pass
            finally:
                stop.set()
                try:
                    client_ws.close()
                except Exception:
                    pass

        t = threading.Thread(target=_pump_upstream_to_client, daemon=True)
        t.start()

        try:
            while not stop.is_set():
                msg = client_ws.receive(timeout=30)
                if msg is None:
                    break
                upstream.send(msg)
        except Exception:
            pass
        finally:
            stop.set()
            try:
                upstream.close()
            except Exception:
                pass
```
