
Commit ebb6656

feat: add zensical docs site with GitHub Pages deployment
Set up a zensical-powered documentation site mirroring the bodhi-docs pattern, with GHA workflow for automatic deployment to GitHub Pages.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent 3129ad6 commit ebb6656

9 files changed

Lines changed: 413 additions & 0 deletions


.github/workflows/deploy.yml

Lines changed: 34 additions & 0 deletions
```yaml
name: Deploy docs

on:
  push:
    branches: [main]
  workflow_dispatch:

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: prefix-dev/setup-pixi@v0.8.1
        with:
          pixi-version: v0.45.0

      - name: Build
        run: pixi run deploy

      - uses: actions/upload-pages-artifact@v4
        with:
          path: site

      - uses: actions/deploy-pages@v4
        id: deployment
```

.gitignore

Lines changed: 10 additions & 0 deletions
```diff
@@ -1,3 +1,13 @@
 logs/
 slurm-*.out
 slurm-*.err
+
+# Pixi
+.pixi/
+
+# Zensical build output
+site/
+
+# Python
+__pycache__/
+*.pyc
```

docs/configuration.md

Lines changed: 45 additions & 0 deletions
# Configuration

Resources are configured via SLURM directives in `positron-remote.sh`.

## Default Settings

| Setting | Alpine | amc-bodhi |
|---------|--------|-----------|
| `--time` | 8 hours | 8 hours |
| `--mem` | 24 GB | 24 GB |
| `--partition` | `amilan` | `positron` |
| `--qos` | `normal` | `positron` |
| `--cpus-per-task` | 4 | 8 |

## Customizing Resources

Edit `positron-remote.sh` and modify the `get_cluster_config()` function to change the defaults for your cluster. The relevant variables are:

- `PARTITION` — SLURM partition name
- `QOS` — Quality of service tier
- `MEM` — Memory allocation
- `CPUS` — Number of CPU cores

The SLURM header directives (`#SBATCH`) set the base defaults, and the cluster-specific overrides are passed via `sbatch` CLI arguments at submission time.
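As a rough illustration of how these pieces fit together, a function of this shape plus submission-time overrides might look like the sketch below; the variable names and values come from the table above, but the function body and the `sbatch` invocation are assumptions rather than the script's actual code.

```bash
# Hypothetical sketch of get_cluster_config() and the submission-time overrides.
# Only the variable names and values are taken from the docs; the real
# positron-remote.sh may structure this differently.
get_cluster_config() {
  case "$1" in
    alpine)
      PARTITION=amilan; QOS=normal; MEM=24G; CPUS=4 ;;
    bodhi)
      PARTITION=positron; QOS=positron; MEM=24G; CPUS=8 ;;
    *)
      echo "unknown cluster: $1" >&2; return 1 ;;
  esac
}

get_cluster_config "alpine"

# The #SBATCH headers in the script set the base defaults; the cluster-specific
# values are passed as CLI flags, which take precedence at submission time.
sbatch --partition="$PARTITION" --qos="$QOS" --mem="$MEM" \
       --cpus-per-task="$CPUS" positron-remote.sh
```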
## Installation

You can install the script to `~/.local/bin` for easy access:

```bash
make install
```

To uninstall:

```bash
make uninstall
```

## Resources

- [Alpine Documentation](https://curc.readthedocs.io/en/latest/compute/alpine.html)
- [Positron Documentation](https://github.com/posit-dev/positron)
- [Positron Remote SSH Documentation](https://positron.posit.co/remote-ssh.html)
- [CU Research Computing](https://curc.readthedocs.io/)

docs/index.md

Lines changed: 40 additions & 0 deletions
# Positron Remote SSH

Launch [Positron](https://github.com/posit-dev/positron) on **Alpine** (CU Boulder) or **amc-bodhi** (CU Anschutz) HPC clusters with a single command.

---

## How It Works

The script allocates a compute node on your HPC cluster via SLURM and provides SSH connection instructions for remote development with Positron. It uses a **ProxyJump** SSH pattern to connect through the login node to your allocated compute node.

```
┌──────────────┐        ┌──────────────┐        ┌──────────────┐
│ Your Machine │──SSH──▶│  Login Node  │──SSH──▶│ Compute Node │
│  (Positron)  │        │  (gateway)   │        │ (workspace)  │
└──────────────┘        └──────────────┘        └──────────────┘
```
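In practice the ProxyJump hop is equivalent to the following manual command (placeholders shown here; the real compute-node name comes from the generated log file):

```bash
# Equivalent manual ProxyJump connection (placeholders, not literal values).
# -J routes the SSH session through the login node to the compute node.
ssh -J <your-username>@login-ci.rc.colorado.edu <your-username>@<compute-node>
```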
The workflow is three steps:

1. **Setup** — copy SSH keys and configure scratch storage (once per cluster)
2. **Submit** — run the script on the cluster to allocate a compute node
3. **Connect** — paste the SSH config into Positron and connect

## Supported Clusters

| Cluster | Institution | Partition | Memory | CPUs | VPN Required |
|---------|-------------|-----------|--------|------|--------------|
| **Alpine** | CU Boulder Research Computing | `amilan` | 24 GB | 4 | No |
| **amc-bodhi** | CU Anschutz Medical Campus | `positron` | 24 GB | 8 | Yes |

## Prerequisites

- Access to Alpine or amc-bodhi HPC cluster
- [Positron](https://github.com/posit-dev/positron) installed on your local machine
- SSH key configured for cluster access
- **amc-bodhi only**: Connected to AMC VPN

## Get Started

Head to the [Quick Start](quickstart.md) guide to get up and running, or run [Setup](setup.md) if this is your first time.

docs/quickstart.md

Lines changed: 85 additions & 0 deletions
# Quick Start

!!! tip "First time?"
    Run [Setup](setup.md) before your first use.

## 1. Submit the Job

SSH into the cluster and run:

=== "Alpine"

    ```bash
    ./positron-remote.sh alpine
    ```

=== "amc-bodhi"

    ```bash
    ./positron-remote.sh bodhi
    ```

!!! note "amc-bodhi"
    You must be connected to the **AMC VPN** before submitting.

## 2. Check Job Status

```bash
squeue -u $USER
```

Wait until your job is in the **R** (running) state.
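If you would rather not re-run `squeue` by hand, a small polling loop works too (a sketch that assumes the positron job is your only queued job):

```bash
# Poll every 10 seconds until the job state column shows R (running).
# Assumes a single queued job for this user; otherwise filter by job ID.
while [ "$(squeue -u "$USER" -h -o '%t' | tail -n 1)" != "R" ]; do
  sleep 10
done
echo "Job is running."
```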
## 3. View Connection Instructions

```bash
cat logs/positron-<JOB_ID>.out
```

Replace `<JOB_ID>` with your actual job ID from `squeue`.
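When only one job is queued, the job ID can be filled in directly from `squeue` (a convenience one-liner, not part of the script):

```bash
# Grab the most recent job ID from squeue and open its log in one step.
# Assumes a single active job for this user.
cat "logs/positron-$(squeue -u "$USER" -h -o '%i' | tail -n 1).out"
```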
## 4. Connect from Positron

1. Open Positron on your **local machine**
2. Press ++cmd+shift+p++ (Mac) or ++ctrl+shift+p++ (Windows/Linux)
3. Select **Remote-SSH: Open SSH Configuration File**
4. Paste the SSH config from the log file and save:

    === "Alpine"

        ```
        Host positron-alpine-<JOB_ID>
            HostName <compute-node>
            User <your-username>
            ProxyJump <your-username>@login-ci.rc.colorado.edu
            ForwardAgent yes
            ServerAliveInterval 60
            ServerAliveCountMax 3
        ```

    === "amc-bodhi"

        ```
        Host positron-bodhi-<JOB_ID>
            HostName <compute-node>
            User <your-username>
            ProxyJump <your-username>@amc-bodhi.ucdenver.pvt
            ForwardAgent yes
            ServerAliveInterval 60
            ServerAliveCountMax 3
        ```

5. Select **Remote-SSH: Connect to Host**
6. Choose your `positron-{alpine|bodhi}-<JOB_ID>` entry
7. Positron will install its server components on the remote node automatically

## 5. When Finished

Always cancel your job to free resources:

```bash
scancel <JOB_ID>
```

!!! warning
    The compute node allocation runs for the full time requested (8 hours by default) or until you cancel it. Always `scancel` when done.
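If you are unsure how much of the allocation remains, `squeue`'s time-left field shows it (a small convenience, not part of the script):

```bash
# %i prints the job ID, %L the remaining wall time for each of your jobs.
squeue -u "$USER" -o '%i %L'
```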

docs/setup.md

Lines changed: 51 additions & 0 deletions
# Setup

One-time setup to configure SSH access and scratch storage.

## Run Setup

From your **local machine**, run:

=== "Alpine"

    ```bash
    ./positron-remote.sh setup alpine
    ```

=== "amc-bodhi"

    ```bash
    ./positron-remote.sh setup bodhi
    ```

!!! note "amc-bodhi"
    You must be connected to the **AMC VPN** before running setup for amc-bodhi.

Setup will:

1. Copy your local SSH public key to the cluster (via `ssh-copy-id`)
2. Create a Positron Server symlink on scratch storage (Alpine only)
3. Print recommended Positron settings
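Step 1 is roughly equivalent to running the following yourself, shown here for Alpine with a placeholder username and only as a sketch of what the script automates:

```bash
# Manual equivalent of setup step 1 for Alpine (placeholder username).
ssh-copy-id <your-username>@login-ci.rc.colorado.edu
```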
## Recommended Positron Settings

By default, R and Python sessions terminate when Positron disconnects. On HPC, brief network interruptions are common, and you don't want to lose your session inside a running SLURM allocation.

Add this to your Positron `settings.json` (on your **local machine**):

```json
{
    "kernelSupervisor.shutdownTimeout": "never"
}
```

This keeps R/Python sessions alive on the remote host so you can reconnect without losing your work.

## Alpine Scratch Storage

!!! warning "Scratch purge policy"
    Files on `/scratch/alpine` that have not been accessed for **90 days** are purged. If the directory is purged, Positron will automatically reinstall the server when you next connect. You may need to re-run `./positron-remote.sh setup alpine` to recreate the symlink.

The setup command creates a symlink from `~/.positron-server` to `/scratch/alpine/$USER/.positron-server` because Alpine home directories have limited space.
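After a purge (or to confirm setup worked), the symlink can be checked and recreated by hand; this is a sketch of the same arrangement the setup command produces, not its exact code:

```bash
# Verify where ~/.positron-server points (it should resolve to scratch, not home).
ls -l ~/.positron-server

# Recreate the scratch directory and symlink after a purge (sketch of the setup step).
mkdir -p "/scratch/alpine/$USER/.positron-server"
ln -sfn "/scratch/alpine/$USER/.positron-server" "$HOME/.positron-server"
```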
For more details on how Positron Remote-SSH works, see the [Positron Remote SSH documentation](https://positron.posit.co/remote-ssh.html#how-it-works-troubleshooting).

docs/troubleshooting.md

Lines changed: 74 additions & 0 deletions
# Troubleshooting

## Job Won't Start

- Check queue status: `squeue -u $USER`
- Check available resources: `sinfo`
- Verify your account has hours: `curc-quota`

## Can't Connect via SSH

- Ensure SSH config was added to your **local** `~/.ssh/config` (not on the cluster)
- Verify the job is running: `squeue -u $USER`
- Check the log file for the correct hostname and job ID
- Verify your local SSH public key is on the cluster (re-run setup)
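If those checks pass and the connection still fails, running SSH verbosely from a local terminal against the generated host entry usually shows which hop breaks (the alias below is a placeholder for the one in your config):

```bash
# Test the generated host entry directly and watch the verbose output for the failing hop.
# Replace the alias with the entry you pasted into ~/.ssh/config.
ssh -vv positron-alpine-<JOB_ID> hostname
```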
## Connection Drops

SSH connections may time out if idle. The job itself continues running — just reconnect using the same SSH host entry.

The SSH config generated by the script includes `ServerAliveInterval` and `ServerAliveCountMax` to reduce idle timeouts.

## Connection Fails After Updating Positron

The Positron client and server must be **exactly the same version**. If you update Positron on your local machine, the remote `~/.positron-server` may have an old version.

Delete the remote server and reconnect:

=== "Alpine"

    ```bash
    rm -rf /scratch/alpine/${USER}/.positron-server
    ```

=== "amc-bodhi"

    ```bash
    rm -rf ~/.positron-server
    ```

Positron will automatically reinstall the correct server version on reconnect.

## Extensions Missing in Remote Session

Extensions installed on your local machine are not automatically available on the remote host. After connecting, install any needed extensions from the Extensions panel — they will be installed on the remote server.

## R Interpreter Not Discovered

R interpreter discovery can be unreliable on remote systems. If you don't see R under "Start Session" (even though Python interpreters appear):

!!! info
    If you find you are able to use R versions available in the modules system, please let me know.

1. **Install R through mamba/conda** on the remote system:

    ```bash
    mamba install -c conda-forge r-base
    ```

2. **Enable conda discovery** in Positron settings (on your local machine):

    Search for "Positron R Interpreters Conda Discovery" and enable it, or add to `settings.json`:

    ```json
    {
        "positron.r.interpreters.condaDiscovery": true
    }
    ```

3. **Manually trigger interpreter discovery**:

    Press ++cmd+shift+p++ (Mac) or ++ctrl+shift+p++ (Windows/Linux) and select **Interpreter: Discover all interpreters**.

!!! note
    There are multiple discussions in the [Positron repository](https://github.com/posit-dev/positron/discussions) about interpreter discovery issues. Solutions may vary by system configuration.

pixi.toml

Lines changed: 15 additions & 0 deletions
```toml
[workspace]
name = "remote-ssh-positron"
version = "0.1.0"
description = "Documentation site for Positron Remote SSH"
channels = ["conda-forge"]
platforms = ["linux-64", "osx-arm64"]

[dependencies]
python = ">=3.11"
zensical = "*"

[tasks]
docs = "zensical serve"
build = "zensical build"
deploy = "zensical build --clean"
```
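With these tasks defined, the docs can be previewed and built locally with `pixi run` (a usage sketch; these invoke the tasks declared above):

```bash
# Live-reload preview of the docs
pixi run docs

# Clean production build (the same task the GitHub Actions workflow runs);
# output lands in site/, which the workflow uploads to Pages
pixi run deploy
```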
