You’re a researcher or developer. Your data and code live on an HPC cluster, your WSL setup, or your lab macOS machine — but you’re not always sitting at your workstation.
Sometimes you’re travelling, commuting, or just away from your laptop. You still want to run a quick analysis, check results, or ask an AI agent to sanity-check something. Or you want an AI agent to start planning and implementing a long analysis while you rest.
**OpencodeClaw lets you do that from your phone.**
Just message your workstation or HPC through Telegram.
You authenticate **once**. Everything else is automatic.
| Feature | How |
|---|---|
| **No VPN on phone** | Relay sits inside the authenticated network |
| **AI session memory** | Parses & re-injects `sessionID` -- full context from cold start |
| **File transfer** | Download files from the target machine to Telegram (`/send <path>`) |
| **Voice notes / audio** | Telegram voice notes can be received and processed; optional local `faster-whisper` pre-transcription for low-latency speech-to-text |
| **File upload** | Upload files from the target machine to the cloud (`/upload <path>`) |
| Command | Description | Example |
|---|---|---|
| `/upload <path>` | Upload a file from the target machine to the cloud | `/upload ~/data/input.csv` |
| `/scheduled` | Open the scheduled tasks manager (edit/delete interactively) | `/scheduled` |
| voice / audio / video note | Upload from Telegram and analyze with the agent; for best latency, enable optional local `faster-whisper` transcription | send a voice note |
Generate an expression heatmap with random data.
The AI executes on the target machine and streams the response back -- formatted, chunked, and readable.
### Optional: Fast local speech-to-text (recommended)
Telegram voice notes usually arrive as **OGG/Opus**. OpencodeClaw can forward audio directly to the agent, but this is often slower than pre-transcribing locally.
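If you ever need to pre-process the audio yourself, a typical conversion step with ffmpeg might look like the following sketch (the filenames are placeholders, not part of OpencodeClaw):

```shell
# Convert Telegram's OGG/Opus voice note to 16 kHz mono WAV,
# the input format most speech-to-text models expect.
# "voice_note.ogg" / "voice_note.wav" are placeholder filenames.
ffmpeg -i voice_note.ogg -ar 16000 -ac 1 voice_note.wav
```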
For lower latency, install **ffmpeg** + **faster-whisper** and let the relay transcribe pure voice/audio messages first, then send the transcript to the agent.
- **Python package** lives in your shared relay venv (recommended: `~/.venvs/OpencodeClaw`)
- **Model weights** live in the user cache, not inside the venv itself
- If local Whisper is not installed, the relay can still fall back to agent-side audio analysis (slower)
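The transcribe-locally-or-fall-back behavior described above can be sketched roughly as follows (`transcribe_or_passthrough` is a hypothetical helper name, not OpencodeClaw's actual API; assumes the relay hands the audio path to a function like this):

```python
def transcribe_or_passthrough(audio_path, have_whisper=None):
    """Return a local transcript via faster-whisper, or None to signal
    that the raw audio should be forwarded to the agent (slower path).

    have_whisper can be forced for testing; by default we probe the import.
    """
    if have_whisper is None:
        try:
            import faster_whisper  # noqa: F401
            have_whisper = True
        except ImportError:
            have_whisper = False
    if not have_whisper:
        # Fall back to agent-side audio analysis (slower).
        return None

    from faster_whisper import WhisperModel
    # Small CPU-friendly model; weights download to the user cache on first use.
    model = WhisperModel("base", device="cpu", compute_type="int8")
    segments, _info = model.transcribe(audio_path)
    return " ".join(seg.text.strip() for seg in segments)
```

With this shape, the relay can call one function for every voice message and only pay the model-loading cost when faster-whisper is actually installed.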
---
## Chat History Viewer
`python3 tools/chat_viewer.py`
## Supported Models
OpencodeClaw works with [OpenCode](https://opencode.ai), which routes to every major AI provider. You can also swap in **Claude Code**, **Aider**, or any headless CLI agent.
| Provider | Models | Notes |
|---|---|---|
## HPC Best Practices
Many HPC clusters **prohibit heavy computation on login nodes**. OpencodeClaw is designed with this in mind:
### Recommended Workflow
Then:
| OK on login node | Offload to compute nodes |
|---|---|
| `rclone sync` (lightweight) | Multi-core jobs |
| AI agent (ephemeral, short-lived) | Long-running processes |
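In practice that split might look like the following sketch, assuming a SLURM cluster (`analyze.py`, the resource flags, and the rclone remote are placeholders, not part of OpencodeClaw):

```shell
# Lightweight on the login node: sync inputs (placeholder remote/paths)
rclone sync remote:project/data ./data

# Heavy work goes to a compute node via the scheduler (SLURM assumed)
sbatch --cpus-per-task=8 --time=02:00:00 --wrap="python3 analyze.py ./data"
```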
> OpencodeClaw's daemonless architecture means the AI agent spins up, answers, and terminates -- no idle processes consuming shared resources.
<meta property="og:title" content="OpencodeClaw -- AI on Your HPC, WSL, or macOS, From Your Phone">
<meta property="og:description"
content="Control AI coding agents on HPC clusters, WSL setups, or lab macOS machines via Telegram. Transfer files, switch models, manage sessions. No VPN needed.">
<p class="section-sub reveal">Everything you can do from Telegram. Supports per-chat workspace/session isolation via env config, plus optional low-latency local voice transcription with faster-whisper.</p>
<div class="cmd-grid">
<div class="cmd-card reveal">
<div class="cmd-label">AI Prompt</div>
<div class="cmd-syntax">just type naturally</div>
<div class="cmd-desc">Any text message is sent as a prompt to the AI agent on HPC. Voice/audio messages can also be received and, optionally, pre-transcribed locally with faster-whisper before reaching the agent.</div>
<div class="cmd-desc">Download all matching files. Supports glob patterns.</div>
</div>
<div class="cmd-card reveal">
<div class="cmd-label">Voice Note / Audio</div>
<div class="cmd-syntax">send a Telegram voice note</div>
<div class="cmd-desc">Receive Telegram voice/audio messages. For best latency, enable optional local <strong>faster-whisper</strong> transcription so the relay converts speech to text before passing it to the agent.</div>