Pull request overview
Adds a new sample application (go-memory-load-mysql) that provides a MySQL-backed HTTP API (customers/products/orders/analytics/large-payload endpoints) plus a k6 scenario to load test it.
Changes:
- Introduces a MySQL-backed store layer and HTTP API server for order-style workflows and large payload operations.
- Adds containerization via Dockerfile + docker-compose to run MySQL, the API, and k6 together.
- Adds a k6 load test scenario and a Keploy config file for recording/replay.
Reviewed changes
Copilot reviewed 13 out of 14 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| go-memory-load-mysql/loadtest/scenario.js | k6 scenarios, thresholds, and workload generation for mixed API + large payload cycles |
| go-memory-load-mysql/keploy.yml | Keploy config for the sample app |
| go-memory-load-mysql/internal/store/store.go | MySQL-backed business operations (customers/products/orders/search/top-products/large-payloads) |
| go-memory-load-mysql/internal/store/models.go | Request/response models for the API |
| go-memory-load-mysql/internal/httpapi/server.go | HTTP routing, handlers, JSON helpers, and logging/recovery middleware |
| go-memory-load-mysql/internal/database/mysql.go | MySQL connection helper and runtime schema creation |
| go-memory-load-mysql/internal/config/config.go | Environment-based runtime configuration |
| go-memory-load-mysql/go.mod | New Go module definition and dependencies |
| go-memory-load-mysql/go.sum | Dependency checksums |
| go-memory-load-mysql/docker-compose.yml | Compose setup for MySQL + API + k6 loadtest profile |
| go-memory-load-mysql/cmd/api/main.go | API server entrypoint wiring config, DB, schema, and HTTP server |
| go-memory-load-mysql/Dockerfile | Multi-stage build for the API container |
| go-memory-load-mysql/.env.example | Example env vars for local runs |
| go-memory-load-mysql/.dockerignore | Docker build context exclusions |
```js
function parseFloatEnv(name, fallback) {
  const value = parseFloat(__ENV[name] || '');
  return Number.isFinite(value) && value > 0 ? value : fallback;
}
```
parseFloatEnv only accepts values > 0, which prevents setting thresholds like THRESHOLD_HTTP_FAILED_RATE=0 (a valid value). Consider allowing 0 (e.g., value >= 0) and possibly capping rates to <= 1 for failed-rate thresholds.
Suggested change:

```diff
-function parseFloatEnv(name, fallback) {
-  const value = parseFloat(__ENV[name] || '');
-  return Number.isFinite(value) && value > 0 ? value : fallback;
+function parseFloatEnv(name, fallback, maxValue = Number.POSITIVE_INFINITY) {
+  const value = parseFloat(__ENV[name] || '');
+  return Number.isFinite(value) && value >= 0 && value <= maxValue ? value : fallback;
```
```go
func newID() string {
	// Generate a UUID v4-style string using random bytes from crypto/sha256 as a seed
	// fallback: use time + rand; for a load-test, a simple unique ID is sufficient.
	raw := sha256.Sum256([]byte(fmt.Sprintf("%d", time.Now().UnixNano())))
	b := raw[:]
	// Format as UUID v4
	b[6] = (b[6] & 0x0f) | 0x40
	b[8] = (b[8] & 0x3f) | 0x80
	return fmt.Sprintf("%08x-%04x-%04x-%04x-%12x",
		b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}
```
newID() claims to generate a UUID v4-style ID, but it derives bytes from sha256(time.Now().UnixNano()), which is deterministic and can collide under high concurrency (leading to duplicate key errors). Prefer generating real random bytes (e.g., crypto/rand) or using a UUID library to ensure uniqueness.
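For reference, a minimal sketch of the suggested fix using `crypto/rand` from the standard library. The function name `newID` matches the sample; everything else (the panic-on-failure policy in particular) is an assumption, not the sample's code:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newID returns a random UUID v4 string built from crypto/rand,
// avoiding the time-derived collisions described above.
func newID() string {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		// crypto/rand failing is effectively unrecoverable.
		panic(fmt.Sprintf("crypto/rand: %v", err))
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("%08x-%04x-%04x-%04x-%12x",
		b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

func main() {
	fmt.Println(newID())
}
```

A UUID library (e.g. google/uuid) would work equally well and saves the formatting code.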
```go
rowsAffected, _ := result.RowsAffected()
if rowsAffected == 0 {
	// Either product not found or insufficient inventory.
	var exists int
	checkErr := tx.QueryRowContext(ctx, `SELECT 1 FROM products WHERE id = ? LIMIT 1`, input.ProductID).Scan(&exists)
	if errors.Is(checkErr, sql.ErrNoRows) {
		return Order{}, fmt.Errorf("%w: product %s", ErrNotFound, input.ProductID)
	}
	return Order{}, fmt.Errorf("%w: product %s", ErrInsufficientInventory, input.ProductID)
}
```
RowsAffected() errors are ignored (rowsAffected, _ := ...). Also, if the follow-up SELECT 1 ... fails with an error other than sql.ErrNoRows, the code currently returns ErrInsufficientInventory, masking the real DB error. Handle the RowsAffected() error and propagate unexpected checkErr values instead of treating them as insufficient inventory.
```go
// Open creates a *sql.DB, verifies connectivity with retries, and applies the
// runtime schema. It returns the open DB handle; the caller must call db.Close().
```
The doc comment for Open says it "applies the runtime schema", but the function only opens the DB and waits for it to be ready (schema is applied later via EnsureRuntimeSchema). Update the comment to match behavior to avoid misleading callers.
Suggested change:

```diff
-// Open creates a *sql.DB, verifies connectivity with retries, and applies the
-// runtime schema. It returns the open DB handle; the caller must call db.Close().
+// Open creates a *sql.DB, configures it, and verifies connectivity with retries.
+// It returns the open DB handle; the caller must call db.Close(). Apply the
+// runtime schema separately with EnsureRuntimeSchema.
```
```go
// Retry loop — MySQL can take a few seconds to become ready.
const maxAttempts = 20
for attempt := 1; attempt <= maxAttempts; attempt++ {
	if pingErr := db.PingContext(ctx); pingErr == nil {
		break
	} else if attempt == maxAttempts {
		db.Close()
		return nil, fmt.Errorf("mysql did not become ready after %d attempts: %w", maxAttempts, pingErr)
	}
	select {
	case <-ctx.Done():
		db.Close()
		return nil, ctx.Err()
	case <-time.After(2 * time.Second):
	}
}
```
Open() calls db.PingContext(ctx) directly in the retry loop without a per-attempt timeout; if a ping blocks, a single attempt can stall until the parent context is canceled. Consider using a short context.WithTimeout per attempt (similar to the Postgres sample) and canceling it each loop.
```js
const bootstrapOrders = [];
for (let i = 0; i < 40; i += 1) {
  const customer = randomItem(bootstrapCustomers);
  const order = createOrder(customer.id, bootstrapProducts);
  if (order) {
```
setup() assumes at least one customer/product was created successfully. If all bootstrap creates fail, randomItem(bootstrapCustomers) (and later randomItem(bootstrapProducts)) can return undefined and customer.id will throw. Add a guard to ensure required bootstrap data exists (or retry until a minimum is met / abort the test with a clear error).
```go
payloadSizeBytes := len([]byte(req.Payload))
if payloadSizeBytes > maxLargePayloadBytes {
	return LargePayloadRecord{}, fmt.Errorf(
		"%w: payload exceeds %d bytes (%d MiB) limit",
		ErrValidation, maxLargePayloadBytes, maxLargePayloadBytes/(1024*1024),
	)
}

checksum := sha256.Sum256([]byte(req.Payload))
record := LargePayloadRecord{
	ID:               newID(),
	Name:             req.Name,
	ContentType:      req.ContentType,
	PayloadSizeBytes: payloadSizeBytes,
	SHA256:           hex.EncodeToString(checksum[:]),
	CreatedAt:        time.Now().UTC(),
```
CreateLargePayload converts req.Payload to []byte twice (once for len([]byte(...)) and again for sha256.Sum256([]byte(...))), which duplicates allocations for multi-MiB payloads. Convert once and reuse the byte slice for both length and checksum.
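A small sketch of the convert-once pattern. The `sizeAndChecksum` helper, the `maxLargePayloadBytes` value, and the error text are assumptions standing in for the sample's fields:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

const maxLargePayloadBytes = 8 * 1024 * 1024

// sizeAndChecksum converts the payload to []byte once and reuses that
// slice for both the size check and the SHA-256 checksum.
func sizeAndChecksum(payload string) (int, string, error) {
	data := []byte(payload) // single conversion, reused below
	if len(data) > maxLargePayloadBytes {
		return 0, "", fmt.Errorf("payload exceeds %d bytes", maxLargePayloadBytes)
	}
	sum := sha256.Sum256(data)
	return len(data), hex.EncodeToString(sum[:]), nil
}

func main() {
	n, sum, err := sizeAndChecksum("hello")
	fmt.Println(n, sum, err)
}
```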
```go
	db.Close()
	return nil, fmt.Errorf("mysql did not become ready after %d attempts: %w", maxAttempts, pingErr)
}
select {
case <-ctx.Done():
	db.Close()
```
Several db.Close() calls ignore the returned error. With errcheck enabled in this repo, prefer defer db.Close() //nolint:errcheck or _ = db.Close() (and similarly in the retry failure paths) to avoid lint failures and make intent explicit.
Suggested change:

```diff
-	db.Close()
-	return nil, fmt.Errorf("mysql did not become ready after %d attempts: %w", maxAttempts, pingErr)
-}
-select {
-case <-ctx.Done():
-	db.Close()
+	_ = db.Close()
+	return nil, fmt.Errorf("mysql did not become ready after %d attempts: %w", maxAttempts, pingErr)
+}
+select {
+case <-ctx.Done():
+	_ = db.Close()
```
```go
	logger.Error("connect mysql", "error", err)
	os.Exit(1)
}
defer db.Close()
```
defer db.Close() ignores the returned error. Other samples in this repo annotate this with //nolint:errcheck (or use _ = db.Close()), which avoids errcheck lint failures and makes the choice explicit.
Suggested change:

```diff
-defer db.Close()
+defer func() {
+	_ = db.Close()
+}()
```
```go
	s.logger.Error("request failed", "error", err)
}
```
The error log request failed doesn’t provide any actionable next step or request context. Consider logging method/path (and optionally a correlation id) and adding guidance like "check DB connectivity" / "check request payload" depending on the error class.
A sample MySQL app for k6 load testing.