## README.md (10 additions, 0 deletions)

Many enterprise resources live in private networks and are not reachable from serverless compute by default. `dbx-proxy` provides a controlled entry point for [private connectivity to resources in your VPC/VNet](https://docs.databricks.com/aws/en/security/network/serverless-network-security/pl-to-internal-network).

[architecture diagram]

Connectivity to your custom resources can be configured via a dedicated Private Endpoint that is connected to a Network Load Balancer (AWS) in your network. From there you can route traffic to your targets. However, this approach comes with certain limitations for routing network traffic, owing to the constraints of cloud-provider offerings: an NLB on AWS operates only on Layer 4 of the TCP/IP stack, so traffic can be routed by IP/port only. `dbx-proxy` solves this problem by introducing an additional component that receives all traffic from your NLB and takes over the routing logic for individual targets based on your configuration. It operates on Layers 4 & 7, providing greater flexibility for reaching your targets from Databricks Serverless compute.

### What you get
- **Forwarding of L4 & L7 network traffic** based on your configuration
  - L4 (TCP): forwarding of plain TCP traffic, e.g. for databases
  - L7 (HTTP): forwarding of HTTP(S) traffic with **SNI-based routing**, e.g. for applications/APIs (see the sketch after this list)
- **Terraform module** ready to use (currently **AWS only**)
- No TLS termination, only passthrough!
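
To make the SNI-based routing point concrete, here is a minimal, hypothetical listener entry; the attribute names are illustrative assumptions, and the authoritative schema lives in the [Terraform module README](terraform/README.md):

```hcl
# Hypothetical sketch, not the module's confirmed schema: two hostnames
# arriving on the same port are routed to different internal targets by SNI.
dbx_proxy_listener = [
  {
    port = 443
    mode = "http" # L7 with SNI inspection; TLS is passed through, not terminated
    routes = [
      { sni = "app.internal.example.com", destination = { host = "10.0.1.10", port = 443 } },
      { sni = "api.internal.example.com", destination = { host = "10.0.2.20", port = 443 } },
    ]
  }
]
```
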
### High availability (overview)
`dbx-proxy` is placed behind an AWS Network Load Balancer, which spreads connections across the instances in the Auto Scaling Group. Availability depends on how many instances you run and whether your subnets span multiple AZs. See the Terraform module details for configuration and behavior: [High availability (AWS)](terraform/README.md#high-availability-aws).

## terraform/README.md (23 additions, 6 deletions)

This repository provides a **Terraform module** for deploying `dbx-proxy` across …

- Optional IGW + public subnet + NAT gateway (internet connectivity is needed to pull images, etc.)
- Route tables + associations

[Architecture diagram (AWS)]

---

…

These variables define what the proxy should do (listeners, health port, image tag, …):

| Variable | Type | Default | Description |
|---|---:|---:|---|
|`dbx_proxy_image_version`|`string`|`"0.1.1"`| Docker image tag/version of `dbx-proxy` to deploy. |
|`dbx_proxy_health_port`|`number`|`8080`| Health port exposed by `dbx-proxy` (HTTP `GET /status`). Also used for NLB target group health checks. |
|`dbx_proxy_max_connections`|`number`|`null`| Optional HAProxy `maxconn` override. If unset, the AWS module derives a value from the vCPU and memory of the selected instance type. |
|`dbx_proxy_listener`|`list(object)`|`[]`| Listener configuration (ports/modes/routes/destinations). See **Listener configuration** below. |
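
For orientation, a hedged sketch of how these deployment variables might be set on the module; the `source` path is a placeholder and only the variables documented above are shown:

```hcl
# Hypothetical usage sketch; the source path is a placeholder, and the
# variable names follow the table above.
module "dbx_proxy" {
  source = "./terraform" # placeholder path

  dbx_proxy_image_version   = "0.1.1" # Docker image tag to deploy
  dbx_proxy_health_port     = 8080    # HTTP GET /status, also used by NLB health checks
  dbx_proxy_max_connections = null    # let the module derive maxconn from the instance type

  dbx_proxy_listener = [] # see "Listener configuration (deep dive)" below
}
```
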

…

---
### High availability (AWS)
High availability is driven by the **Auto Scaling Group (ASG)** size and the **subnets/AZs** you provide.
The module **does not pin instances to a single AZ**; AWS spreads instances across the subnets you pass in `subnet_ids`.

Key behaviors:

- **Multi-instance support**: set `min_capacity` / `max_capacity` to >1 to allow more than one proxy instance.
- **AZ distribution**: the ASG uses the subnets in `subnet_ids`. If those subnets span multiple AZs, instances are spread across them.
- **Single-AZ risk**: if `subnet_ids` are all in one AZ, all instances will stay in that AZ.
- **Bootstrap mode**: when bootstrapping networking, the module creates two private subnets from `subnet_cidrs`; ensure these map to different AZs in your region.

Deployment variables that affect HA:

- `min_capacity`, `max_capacity` (ASG size)
- `subnet_ids` (which AZs are eligible)
- `subnet_cidrs` (in `bootstrap` mode, controls how many subnets are created)
- `enable_nat_gateway` (bootstrap only; affects outbound access, not AZ spread)

If you need strict multi-AZ placement guarantees, provide **at least one subnet per AZ** you want to cover and run **>= 2 instances**.
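
Expressed as configuration, a hedged sketch of such a multi-AZ setup; the subnet IDs are placeholders and the `source` path is assumed:

```hcl
# Hypothetical multi-AZ setup: one subnet per AZ to cover, plus >= 2 instances.
# Variable names follow the lists above; all values are illustrative.
module "dbx_proxy" {
  source = "./terraform" # placeholder path

  # Subnets in two different AZs make both AZs eligible for instances.
  subnet_ids = [
    "subnet-0aaa…", # e.g. in eu-central-1a
    "subnet-0bbb…", # e.g. in eu-central-1b
  ]

  # At least two instances, so the NLB can spread connections across AZs.
  min_capacity = 2
  max_capacity = 4
}
```
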
---
### Listener configuration (deep dive)
`dbx_proxy_listener` is a list of listener objects:
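
The full object schema is not reproduced in this excerpt; as a sketch of the shape suggested by the variable table (ports/modes/routes/destinations), with attribute names that are assumptions rather than the confirmed schema:

```hcl
# Hypothetical listener list combining an L4 and an L7 entry; attribute
# names are assumed from the variable description, not a confirmed schema.
dbx_proxy_listener = [
  {
    # L4 (TCP): plain passthrough, e.g. a database
    port = 5432
    mode = "tcp"
    destinations = [
      { host = "db.internal.example.com", port = 5432 }
    ]
  },
  {
    # L7 (HTTP): SNI-based routing; TLS is passed through, never terminated
    port = 443
    mode = "http"
    routes = [
      { sni = "api.internal.example.com", destination = { host = "10.0.2.20", port = 443 } }
    ]
  }
]
```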