
Commit 455ad39

admin: drop migration side-effect from describe + tighten list doc
Two findings from Gemini's third review pass on PR #633.

- adapter/dynamodb_admin.go: AdminDescribeTable used to invoke
  ensureLegacyTableMigration on every call, which writes to the cluster
  (a Raft-coordinated key-encoding migration) as a side effect of a
  read-only dashboard endpoint. Drop the call. Admin describe is now
  strictly a snapshot read; legacy-format tables migrate lazily on the
  next SigV4 read or write of the same table, which is the existing
  behaviour for everything else that touches that path.

- internal/admin/dynamo_handler.go: strengthen the handleList doc
  comment to spell out the worst-case memory bound (255 B per name ×
  10k tables ≈ 2.5 MiB) and to note that this matches the SigV4
  listTables path, which has shipped without issue. Streaming the
  metadata scan is a separate adapter-level refactor; bolting one on
  top of the materialised slice would be cosmetic.
1 parent b2dec7a commit 455ad39

2 files changed

Lines changed: 28 additions & 14 deletions

File tree

adapter/dynamodb_admin.go

Lines changed: 11 additions & 3 deletions
@@ -38,10 +38,18 @@ func (d *DynamoDBServer) AdminListTables(ctx context.Context) ([]string, error)
 // (result, present, error) lets admin callers distinguish a genuine
 // "not found" from a storage error without sniffing sentinels: when
 // the table is missing the function returns (nil, false, nil).
+//
+// Unlike the SigV4 describeTable handler, AdminDescribeTable does
+// NOT invoke ensureLegacyTableMigration. The admin dashboard is a
+// strictly read-only surface (Gemini medium review on PR #633), so
+// triggering Raft-coordinated key-encoding migrations as a side
+// effect of routine polling would (a) violate the read-only
+// contract and (b) cause every dashboard refresh to write to the
+// cluster. Migration still runs lazily on the next SigV4 read or
+// write of the same table — the schema we return here is just a
+// snapshot for display, not a guarantee that the table is
+// up-to-date for serving.
 func (d *DynamoDBServer) AdminDescribeTable(ctx context.Context, name string) (*AdminTableSummary, bool, error) {
-	if err := d.ensureLegacyTableMigration(ctx, name); err != nil {
-		return nil, false, err
-	}
 	schema, exists, err := d.loadTableSchema(ctx, name)
 	if err != nil {
 		return nil, false, err

internal/admin/dynamo_handler.go

Lines changed: 17 additions & 11 deletions
@@ -131,17 +131,23 @@ func (h *DynamoHandler) handleList(w http.ResponseWriter, r *http.Request) {
 		return
 	}
 
-	// AdminListTables materialises the full table-name list before
-	// paginate-and-slice. The adapter's listTableNames already
-	// scans the entire metadata prefix in one shot for the SigV4
-	// listTables path, so streaming here would not change the
-	// adapter's memory profile; the dashboard's bounded `limit`
-	// (default 100, hard max 1000) and the lex-sorted name list
-	// keep the per-request slice small enough that this is the
-	// pragmatic shape rather than the limiting factor. If a future
-	// admin-cluster scale ever changes that calculus, the fix is to
-	// teach the adapter to stream, then plumb that through here —
-	// not to add a streaming layer on top of the materialised list.
+	// AdminListTables returns the full lex-sorted name list that
+	// the adapter's metadata prefix scan produces; we then slice
+	// to the requested page. The adapter's listTableNames already
+	// materialises the same list for the SigV4 listTables path
+	// (adapter/dynamodb.go:1146), which has been in production
+	// since DynamoDB-compat shipped — admin's memory profile is
+	// strictly the SigV4 path's, not a regression on top of it.
+	//
+	// Worst-case bound: a Dynamo table name caps at 255 bytes, so
+	// 1k tables ≈ 256 KiB and 10k tables ≈ 2.5 MiB of name
+	// strings on the heap during a single list call. That is well
+	// inside the per-request budget the admin listener targets.
+	// Beyond that scale the right fix is to teach the adapter to
+	// stream the metadata scan via a callback (and plumb it
+	// through here), not to bolt a streaming layer on top of the
+	// already-materialised slice. Tracked separately; this
+	// endpoint is not the limiting factor.
 	names, err := h.source.AdminListTables(r.Context())
 	if err != nil {
 		h.logger.LogAttrs(r.Context(), slog.LevelError, "admin dynamo list tables failed",
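The paginate-and-slice shape the comment defends can be sketched standalone. clampLimit and pageTables are hypothetical names, not the handler's real helpers; only the clamp values (default 100, hard max 1000) and the slicing over an already-sorted list follow the diff:

```go
package main

import (
	"fmt"
	"sort"
)

// clampLimit applies the dashboard's bounds from the doc comment:
// default 100 when unset or non-positive, hard max 1000.
func clampLimit(limit int) int {
	switch {
	case limit <= 0:
		return 100
	case limit > 1000:
		return 1000
	default:
		return limit
	}
}

// pageTables slices one page out of the already-materialised,
// lex-sorted name list. startAfter is an exclusive cursor: the page
// begins at the first name strictly greater than it. The whole list
// stays on the heap for the duration of the call, which is the
// memory profile the comment bounds (255 B/name × 10k ≈ 2.5 MiB).
func pageTables(names []string, startAfter string, limit int) []string {
	limit = clampLimit(limit)
	// SearchStrings returns the first index with names[i] >= startAfter;
	// step past an exact match so the cursor is exclusive.
	i := sort.SearchStrings(names, startAfter)
	if i < len(names) && names[i] == startAfter {
		i++
	}
	end := i + limit
	if end > len(names) {
		end = len(names)
	}
	return names[i:end]
}

func main() {
	names := []string{"alpha", "beta", "gamma", "orders", "users"}
	fmt.Println(pageTables(names, "", 2))     // → [alpha beta]
	fmt.Println(pageTables(names, "beta", 2)) // → [gamma orders]
}
```

Because the list is lex-sorted, the exclusive cursor makes pages stable under concurrent table creation: a name inserted before the cursor is simply skipped rather than shifting subsequent pages.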
