- Replace restaurants/Python-script narrative with real bookings/listings + monitor web app
- Document the four monitor-app tabs (Topology, Bookings, Vector Search, Load)
- Replace 'big red button' with the actual Promote to primary + Rebuild replica flow
- Drop references to non-existent scripts (query_examples.py, vector_restaurants_demo.py, generate_restaurants.py)
- Update repo structure to match what's actually in the tree
- Document start.ps1 as the one-shot launcher
- Add load-data.sh remote-seed usage
- Link to demo/01-local..06-multicloud runbooks
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- [Docker Desktop](https://www.docker.com/) — local stack
- [Visual Studio Code](https://code.visualstudio.com/) + [DocumentDB for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-documentdb)
This sets the env vars, launches the Node server on `http://localhost:5174`,
and opens the monitor UI plus both Grafana dashboards. Flags:
`-NoGrafana` skips the Grafana tabs, `-NoBrowser` skips opening tabs entirely.

The UI has four tabs:

| Tab | What it does |
| --- | --- |
| **Topology** | Live status of every cluster + per-replica WAL lag from `pg_stat_replication`. **Promote to primary** button triggers a one-click cross-cloud failover via `kubectl documentdb promote`. **Rebuild replica** wipes the replica's PVCs and rebuilds via `pg_basebackup` — with a background watcher that auto-recovers from a known cert-mismatch bug in the operator. |
| **Bookings** | Direct Mongo reads/writes against the **current primary** for `bookingsdb.bookings`. Each insert is followed by a wait-for-replication check so you can see the row appear on the replica with a measured lag. |
| **Vector Search** | `$vectorSearch` queries over `bookingsdb.listings` against the **local Docker** instance, using the HNSW index. Try queries like *"cozy mountain cabin with hot tub"*. |
| **Load** | A travel-booking workload generator: 80% browse / 15% detail / 4% insert / 1% confirm against the current primary. RPS slider with presets (idle / morning / peak / Black Friday) and an in-flight semaphore to keep client and server pools healthy. |
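The Vector Search tab issues `$vectorSearch` aggregations against `bookingsdb.listings`. A minimal sketch of building such a pipeline, assuming an Atlas-style operator shape and hypothetical index/field names (`listings_hnsw`, `embedding`); the monitor app's actual query may differ:

```javascript
// Build a $vectorSearch pipeline for the listings collection.
// Index name, field name, and option layout are assumptions for
// illustration; see app/monitor-app for the query the UI actually runs.
function buildVectorSearchPipeline(queryVector, k = 5) {
  return [
    {
      $vectorSearch: {
        index: "listings_hnsw", // hypothetical HNSW index name
        path: "embedding",      // hypothetical vector field on each listing
        queryVector,            // embedding of e.g. "cozy mountain cabin with hot tub"
        limit: k,
      },
    },
    // Keep only the fields the UI renders.
    { $project: { name: 1, description: 1, price: 1 } },
  ];
}

// With a real driver this would be passed to:
//   db.collection("listings").aggregate(pipeline)
const pipeline = buildVectorSearchPipeline([0.12, -0.03, 0.44], 3);
```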

First-time setup of the monitor:

```powershell
cd app\monitor-app
npm install
```
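The Load tab's 80/15/4/1 mix and in-flight cap can be sketched as a weighted sampler plus a small counting semaphore. This is a hypothetical illustration, not the monitor app's actual code:

```javascript
// Hypothetical sketch of the Load tab's traffic mix -- not the app's code.
const MIX = [
  ["browse", 0.80],
  ["detail", 0.15],
  ["insert", 0.04],
  ["confirm", 0.01],
];

// Pick an operation according to the weighted mix.
function pickOp(rand = Math.random()) {
  let acc = 0;
  for (const [op, weight] of MIX) {
    acc += weight;
    if (rand < acc) return op;
  }
  return MIX[MIX.length - 1][0]; // guard against float rounding
}

// Counting semaphore that caps concurrent requests so the client and
// server connection pools stay healthy under high RPS presets.
class Semaphore {
  constructor(max) { this.max = max; this.inFlight = 0; this.waiters = []; }
  async acquire() {
    if (this.inFlight < this.max) { this.inFlight++; return; }
    await new Promise((resolve) => this.waiters.push(resolve));
    this.inFlight++;
  }
  release() {
    this.inFlight--;
    const next = this.waiters.shift();
    if (next) next();
  }
}
```

With this shape, a burst at the "Black Friday" preset queues in the client behind the semaphore instead of exhausting the server's pool.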

## Multi-cloud deployment

> **Shell on Windows:** run the deploy/cleanup scripts from **Git Bash**.
> WSL has DNS issues with `login.microsoftonline.com`. PowerShell is fine for
> `docker compose`, `npm`, and the CLI tools above.

The talk uses the upstream `documentdb-playground/multi-cloud-deployment`
setup (vendored into [`infra/multi-cloud/`](infra/multi-cloud/README.md)),
which gives **real cross-cloud replication** instead of two unrelated
clusters:

- **AKS Fleet hub** in eastus2 (KubeFleet control plane for the DocumentDB CR)
- **AKS member** in eastus2 (DocumentDB primary by default)
- **EKS member** in us-west-2 (WAL replica)
- **Istio multi-cluster mesh** with shared root CA + east-west gateways for
  cross-cloud service discovery and mTLS-encrypted WAL replication
- **DocumentDB operator** on top of CloudNativePG, deployed to all members
- **kube-prometheus-stack** on each member, exposing Grafana with a shared