
Commit ac26759

eeldaly and friedrichg authored
Convert heavy queries from 5xx to 4xx (#7374)
* Convert heavy queries from 5xx to 4xx
* Fix 504 default return, disable retry on querier timeout
* Add tests
* make check doc
* fix QueryStats new max stats
* docs config schema
* fix non-constant format string call
* fix pb.go correct version
* check protos
* lint
* Fix scheduler time track
* Add verification to check query stats enabled when this is enabled
* fix query end time qfe querier inconsistency
* Improve query shard logs on timeout
* Update default times to match cortex's 2min timeout
* Add docs/guide
* gofmt
* update changelog
* Clarify only works on instant/ranged queries

Signed-off-by: Essam Eldaly <eeldaly@amazon.com>
Signed-off-by: Essam Eldaly <60357054+eeldaly@users.noreply.github.com>
Signed-off-by: Friedrich Gonzalez <1517449+friedrichg@users.noreply.github.com>
Co-authored-by: Friedrich Gonzalez <1517449+friedrichg@users.noreply.github.com>
1 parent f69c583 commit ac26759

24 files changed: +1166 -72 lines

CHANGELOG.md

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@
 ## master / unreleased
 * [FEATURE] Distributor: Add experimental `-distributor.enable-start-timestamp` flag for Prometheus Remote Write 2.0. When enabled, `StartTimestamp (ST)` is ingested. #7371
 * [FEATURE] Memberlist: Add `-memberlist.cluster-label` and `-memberlist.cluster-label-verification-disabled` to prevent accidental cross-cluster gossip joins and support rolling label rollout. #7385
+* [FEATURE] Querier: Add timeout classification to classify query timeouts as 4XX (user error) or 5XX (system error) based on phase timing. When enabled, queries that spend most of their time in PromQL evaluation return `422 Unprocessable Entity` instead of `503 Service Unavailable`. #7374
 * [ENHANCEMENT] Ingester: Add WAL record metrics to help evaluate the effectiveness of WAL compression type (e.g. snappy, zstd): `cortex_ingester_tsdb_wal_record_part_writes_total`, `cortex_ingester_tsdb_wal_record_parts_bytes_written_total`, and `cortex_ingester_tsdb_wal_record_bytes_saved_total`. #7420
 * [ENHANCEMENT] Distributor: Introduce dynamic `Symbols` slice capacity pooling. #7398 #7401
 * [ENHANCEMENT] Metrics Helper: Add native histogram support for aggregating and merging, including dual-format histogram handling that exposes both native and classic bucket formats. #7359

docs/blocks-storage/querier.md

Lines changed: 14 additions & 0 deletions
@@ -299,6 +299,20 @@ querier:
   # mixed block types (parquet and non-parquet) and not querying ingesters.
   # CLI flag: -querier.honor-projection-hints
   [honor_projection_hints: <boolean> | default = false]
+
+  # If true, classify query timeouts as 4XX (user error) or 5XX (system error)
+  # based on phase timing.
+  # CLI flag: -querier.timeout-classification-enabled
+  [timeout_classification_enabled: <boolean> | default = false]
+
+  # The total time before the querier proactively cancels a query for timeout
+  # classification. Set this a few seconds less than the querier timeout.
+  # CLI flag: -querier.timeout-classification-deadline
+  [timeout_classification_deadline: <duration> | default = 1m59s]
+
+  # Eval time threshold above which a timeout is classified as user error (4XX).
+  # CLI flag: -querier.timeout-classification-eval-threshold
+  [timeout_classification_eval_threshold: <duration> | default = 1m30s]
 ```

 ### `blocks_storage_config`

docs/configuration/config-file-reference.md

Lines changed: 14 additions & 0 deletions
@@ -4975,6 +4975,20 @@ thanos_engine:
 # types (parquet and non-parquet) and not querying ingesters.
 # CLI flag: -querier.honor-projection-hints
 [honor_projection_hints: <boolean> | default = false]
+
+# If true, classify query timeouts as 4XX (user error) or 5XX (system error)
+# based on phase timing.
+# CLI flag: -querier.timeout-classification-enabled
+[timeout_classification_enabled: <boolean> | default = false]
+
+# The total time before the querier proactively cancels a query for timeout
+# classification. Set this a few seconds less than the querier timeout.
+# CLI flag: -querier.timeout-classification-deadline
+[timeout_classification_deadline: <duration> | default = 1m59s]
+
+# Eval time threshold above which a timeout is classified as user error (4XX).
+# CLI flag: -querier.timeout-classification-eval-threshold
+[timeout_classification_eval_threshold: <duration> | default = 1m30s]
 ```

 ### `query_frontend_config`
Lines changed: 57 additions & 0 deletions
@@ -0,0 +1,57 @@
# Timeout Classification

Timeout classification lets Cortex distinguish between query timeouts caused by expensive user queries (4XX) and those caused by system issues (5XX). When enabled, queries that spend most of their time in PromQL evaluation are returned as `422 Unprocessable Entity` instead of `503 Service Unavailable`, giving callers a clear signal to simplify the query rather than retry.

## How It Works

When an instant or range query arrives at the querier (other APIs are unchanged), the feature:

1. Subtracts any time the query spent waiting in the scheduler queue from the configured deadline.
2. Sets a proactive context timeout using the remaining budget, so the querier cancels the query slightly before the PromQL engine's own timeout fires.
3. On timeout, inspects phase timings (storage fetch time vs. total time) to compute eval time.
4. If eval time exceeds the configured threshold, the timeout is classified as a user error (4XX). Otherwise it remains a system error (5XX).

This means expensive queries that burn their budget in PromQL evaluation get a `422`, while other timed-out queries keep returning a `5XX`.

Note that because query shards do not all return at the same time, the first timed-out shard to come back decides whether the query is converted to 4XX.
## Configuration

Enable the feature and set the three related flags:

```yaml
querier:
  timeout_classification_enabled: true
  timeout_classification_deadline: 1m59s
  timeout_classification_eval_threshold: 1m30s
```
| Flag | Default | Description |
|---|---|---|
| `timeout_classification_enabled` | `false` | Enable 5XX-to-4XX conversion based on phase timing. |
| `timeout_classification_deadline` | `1m59s` | Proactive cancellation deadline. Set this a few seconds less than the querier timeout. |
| `timeout_classification_eval_threshold` | `1m30s` | Eval time above which a timeout is classified as user error (4XX). Must be ≤ the deadline. |
### Constraints

- `timeout_classification_deadline` must be positive and strictly less than `querier.timeout`.
- `timeout_classification_eval_threshold` must be positive and ≤ `timeout_classification_deadline`.
- Query stats must be enabled (`query_stats_enabled: true` on the frontend handler) for classification to work.
## Tuning

- The deadline should be close to, but below, the querier timeout so the proactive cancellation fires first. A gap of 1–2 seconds is typical.
- The eval threshold controls sensitivity. A lower threshold classifies more timeouts as user errors; a higher threshold is more conservative. Start with the default and adjust based on your workload.
- Monitor the `decision` field in the timeout classification log line (`query shard timed out with classification`) to see how queries are being classified before enabling the conversion.
## Observability

When a query times out and query stats are active, the querier emits a warning-level log line containing:

- `queue_wait_time` — time spent in the scheduler queue
- `query_storage_wall_time` — time spent fetching data from storage
- `eval_time` — computed as `total_time - query_storage_wall_time`
- `decision` — `0` for 5XX (system), `1` for 4XX (user)
- `conversion_enabled` — whether the status code conversion is active

These fields are logged regardless of whether conversion is enabled, so you can observe classification behavior in dry-run mode by setting `timeout_classification_enabled: false` and reviewing the logs.

pkg/api/handlers.go

Lines changed: 5 additions & 1 deletion
@@ -286,7 +286,11 @@ func NewQuerierHandler(
     legacyPromRouter := route.New().WithPrefix(path.Join(legacyPrefix, "/api/v1"))
     api.Register(legacyPromRouter)

-    queryAPI := queryapi.NewQueryAPI(engine, translateSampleAndChunkQueryable, statsRenderer, logger, codecs, corsOrigin)
+    queryAPI := queryapi.NewQueryAPI(engine, translateSampleAndChunkQueryable, statsRenderer, logger, codecs, corsOrigin, stats.PhaseTrackerConfig{
+        TotalTimeout:      querierCfg.TimeoutClassificationDeadline,
+        EvalTimeThreshold: querierCfg.TimeoutClassificationEvalThreshold,
+        Enabled:           querierCfg.TimeoutClassificationEnabled,
+    })

     requestTracker := request_tracker.NewRequestTracker(querierCfg.ActiveQueryTrackerDir, "apis.active", querierCfg.MaxConcurrent, util_log.GoKitLogToSlog(logger))
     var apiHandler http.Handler

pkg/api/queryapi/query_api.go

Lines changed: 144 additions & 14 deletions
@@ -21,18 +21,21 @@ import (
     "github.com/cortexproject/cortex/pkg/distributed_execution"
     "github.com/cortexproject/cortex/pkg/engine"
     "github.com/cortexproject/cortex/pkg/querier"
+    "github.com/cortexproject/cortex/pkg/querier/stats"
     "github.com/cortexproject/cortex/pkg/util"
     "github.com/cortexproject/cortex/pkg/util/api"
+    "github.com/cortexproject/cortex/pkg/util/requestmeta"
 )

 type QueryAPI struct {
-    queryable     storage.SampleAndChunkQueryable
-    queryEngine   engine.QueryEngine
-    now           func() time.Time
-    statsRenderer v1.StatsRenderer
-    logger        log.Logger
-    codecs        []v1.Codec
-    CORSOrigin    *regexp.Regexp
+    queryable             storage.SampleAndChunkQueryable
+    queryEngine           engine.QueryEngine
+    now                   func() time.Time
+    statsRenderer         v1.StatsRenderer
+    logger                log.Logger
+    codecs                []v1.Codec
+    CORSOrigin            *regexp.Regexp
+    timeoutClassification stats.PhaseTrackerConfig
 }

 func NewQueryAPI(
@@ -42,15 +45,17 @@ func NewQueryAPI(
     logger log.Logger,
     codecs []v1.Codec,
     CORSOrigin *regexp.Regexp,
+    timeoutClassification stats.PhaseTrackerConfig,
 ) *QueryAPI {
     return &QueryAPI{
-        queryEngine:   qe,
-        queryable:     q,
-        statsRenderer: statsRenderer,
-        logger:        logger,
-        codecs:        codecs,
-        CORSOrigin:    CORSOrigin,
-        now:           time.Now,
+        queryEngine:           qe,
+        queryable:             q,
+        statsRenderer:         statsRenderer,
+        logger:                logger,
+        codecs:                codecs,
+        CORSOrigin:            CORSOrigin,
+        now:                   time.Now,
+        timeoutClassification: timeoutClassification,
     }
 }

@@ -84,6 +89,11 @@ func (q *QueryAPI) RangeQueryHandler(r *http.Request) (result apiFuncResult) {
     }

     ctx := r.Context()
+
+    // Always record query start time for phase tracking, regardless of feature flag.
+    queryStats := stats.FromContext(ctx)
+    queryStats.SetQueryStart(time.Now())
+
     if to := r.FormValue("timeout"); to != "" {
         var cancel context.CancelFunc
         timeout, err := util.ParseDurationMs(to)
@@ -95,6 +105,15 @@ func (q *QueryAPI) RangeQueryHandler(r *http.Request) (result apiFuncResult) {
         defer cancel()
     }

+    cfg := q.timeoutClassification
+    ctx, cancel, earlyResult := applyTimeoutClassification(ctx, queryStats, cfg)
+    if cancel != nil {
+        defer cancel()
+    }
+    if earlyResult != nil {
+        return *earlyResult
+    }
+
     opts, err := extractQueryOpts(r)
     if err != nil {
         return apiFuncResult{nil, &apiError{errorBadData, err}, nil, nil}
@@ -138,6 +157,13 @@ func (q *QueryAPI) RangeQueryHandler(r *http.Request) (result apiFuncResult) {

     res := qry.Exec(ctx)
     if res.Err != nil {
+        // If the context was cancelled/timed out, apply timeout classification.
+        if ctx.Err() != nil {
+            if classified := q.classifyTimeout(ctx, queryStats, cfg, res.Warnings, qry.Close); classified != nil {
+                return *classified
+            }
+        }
+
         return apiFuncResult{nil, returnAPIError(res.Err), res.Warnings, qry.Close}
     }

@@ -159,6 +185,11 @@ func (q *QueryAPI) InstantQueryHandler(r *http.Request) (result apiFuncResult) {
     }

     ctx := r.Context()
+
+    // Always record query start time for phase tracking, regardless of feature flag.
+    queryStats := stats.FromContext(ctx)
+    queryStats.SetQueryStart(time.Now())
+
     if to := r.FormValue("timeout"); to != "" {
         var cancel context.CancelFunc
         timeout, err := util.ParseDurationMs(to)
@@ -170,6 +201,15 @@ func (q *QueryAPI) InstantQueryHandler(r *http.Request) (result apiFuncResult) {
         defer cancel()
     }

+    cfg := q.timeoutClassification
+    ctx, cancel, earlyResult := applyTimeoutClassification(ctx, queryStats, cfg)
+    if cancel != nil {
+        defer cancel()
+    }
+    if earlyResult != nil {
+        return *earlyResult
+    }
+
     opts, err := extractQueryOpts(r)
     if err != nil {
         return apiFuncResult{nil, &apiError{errorBadData, err}, nil, nil}
@@ -211,6 +251,13 @@ func (q *QueryAPI) InstantQueryHandler(r *http.Request) (result apiFuncResult) {

     res := qry.Exec(ctx)
     if res.Err != nil {
+        // If the context was cancelled/timed out, apply timeout classification.
+        if ctx.Err() != nil {
+            if classified := q.classifyTimeout(ctx, queryStats, cfg, res.Warnings, qry.Close); classified != nil {
+                return *classified
+            }
+        }
+
         return apiFuncResult{nil, returnAPIError(res.Err), res.Warnings, qry.Close}
     }

@@ -281,6 +328,89 @@ func (q *QueryAPI) respond(w http.ResponseWriter, req *http.Request, data any, w
     }
 }

+// applyTimeoutClassification creates a proactive context timeout that fires before
+// the PromQL engine's own timeout, adjusted for queue wait time. Returns the
+// (possibly wrapped) context, an optional cancel func, and an optional early-exit
+// result when the entire timeout budget was already consumed in the queue.
+func applyTimeoutClassification(ctx context.Context, queryStats *stats.QueryStats, cfg stats.PhaseTrackerConfig) (context.Context, context.CancelFunc, *apiFuncResult) {
+    if !cfg.Enabled {
+        return ctx, nil, nil
+    }
+    var queueWaitTime time.Duration
+    queueJoin := queryStats.LoadQueueJoinTime()
+    queueLeave := queryStats.LoadQueueLeaveTime()
+    if !queueJoin.IsZero() && !queueLeave.IsZero() {
+        queueWaitTime = queueLeave.Sub(queueJoin)
+    }
+    effectiveTimeout := cfg.TotalTimeout - queueWaitTime
+    if effectiveTimeout <= 0 {
+        return ctx, nil, &apiFuncResult{nil, &apiError{errorTimeout, httpgrpc.Errorf(http.StatusServiceUnavailable,
+            "query timed out: query spent too long in scheduler queue")}, nil, nil}
+    }
+    ctx, cancel := context.WithTimeout(ctx, effectiveTimeout)
+    return ctx, cancel, nil
+}
+
+// classifyTimeout inspects phase timings after a context cancellation/timeout
+// and returns an apiFuncResult if the timeout should be converted to a 4XX user error.
+// Returns nil if no conversion applies and the caller should use the default error path.
+func (q *QueryAPI) classifyTimeout(ctx context.Context, queryStats *stats.QueryStats, cfg stats.PhaseTrackerConfig, warnings annotations.Annotations, closer func()) *apiFuncResult {
+    if !stats.IsEnabled(ctx) {
+        return nil
+    }
+
+    queryStats.SetQueryEnd(time.Now())
+
+    decision := stats.DecideTimeoutResponse(queryStats, cfg)
+
+    fetchTime := queryStats.LoadQueryStorageWallTime()
+    queryEnd := queryStats.LoadQueryEnd()
+    totalTime := queryEnd.Sub(queryStats.LoadQueryStart())
+    evalTime := totalTime - fetchTime
+    var queueWaitTime time.Duration
+    queueJoin := queryStats.LoadQueueJoinTime()
+    queueLeave := queryStats.LoadQueueLeaveTime()
+    if !queueJoin.IsZero() && !queueLeave.IsZero() {
+        queueWaitTime = queueLeave.Sub(queueJoin)
+    }
+    level.Warn(q.logger).Log(
+        "msg", "query shard timed out with classification",
+        "request_id", requestmeta.RequestIdFromContext(ctx),
+        "query_start", queryStats.LoadQueryStart(),
+        "query_end", queryEnd,
+        "queue_wait_time", queueWaitTime,
+        "query_storage_wall_time", fetchTime,
+        "eval_time", evalTime,
+        "total_time", totalTime,
+        "wall_time", queryStats.LoadWallTime(),
+        "response_series", queryStats.LoadResponseSeries(),
+        "fetched_series_count", queryStats.LoadFetchedSeries(),
+        "fetched_chunk_bytes", queryStats.LoadFetchedChunkBytes(),
+        "fetched_data_bytes", queryStats.LoadFetchedDataBytes(),
+        "fetched_samples_count", queryStats.LoadFetchedSamples(),
+        "fetched_chunks_count", queryStats.LoadFetchedChunks(),
+        "split_queries", queryStats.LoadSplitQueries(),
+        "store_gateway_touched_postings_count", queryStats.LoadStoreGatewayTouchedPostings(),
+        "store_gateway_touched_posting_bytes", queryStats.LoadStoreGatewayTouchedPostingBytes(),
+        "scanned_samples", queryStats.LoadScannedSamples(),
+        "peak_samples", queryStats.LoadPeakSamples(),
+        "decision", decision,
+        "conversion_enabled", cfg.Enabled,
+    )
+
+    if cfg.Enabled && decision == stats.UserError4XX {
+        return &apiFuncResult{nil, &apiError{errorExec, httpgrpc.Errorf(http.StatusUnprocessableEntity,
+            "query timed out: query spent too long in evaluation - consider simplifying your query")}, warnings, closer}
+    }
+
+    if cfg.Enabled {
+        return &apiFuncResult{nil, &apiError{errorTimeout, httpgrpc.Errorf(http.StatusGatewayTimeout,
+            "%s", ErrUpstreamRequestTimeout)}, warnings, closer}
+    }
+
+    return nil
+}
+
 func (q *QueryAPI) negotiateCodec(req *http.Request, resp *v1.Response) (v1.Codec, error) {
     for _, clause := range goautoneg.ParseAccept(req.Header.Get("Accept")) {
         for _, codec := range q.codecs {

pkg/api/queryapi/query_api_test.go

Lines changed: 4 additions & 4 deletions
@@ -183,7 +183,7 @@ func Test_CustomAPI(t *testing.T) {

     for _, test := range tests {
         t.Run(test.name, func(t *testing.T) {
-            c := NewQueryAPI(engine, mockQueryable, querier.StatsRenderer, log.NewNopLogger(), []v1.Codec{v1.JSONCodec{}}, regexp.MustCompile(".*"))
+            c := NewQueryAPI(engine, mockQueryable, querier.StatsRenderer, log.NewNopLogger(), []v1.Codec{v1.JSONCodec{}}, regexp.MustCompile(".*"), stats.PhaseTrackerConfig{})

             router := mux.NewRouter()
             router.Path("/api/v1/query").Methods("POST").Handler(c.Wrap(c.InstantQueryHandler))
@@ -244,7 +244,7 @@ func Test_InvalidCodec(t *testing.T) {
         },
     }

-    queryAPI := NewQueryAPI(engine, mockQueryable, querier.StatsRenderer, log.NewNopLogger(), []v1.Codec{&mockCodec{}}, regexp.MustCompile(".*"))
+    queryAPI := NewQueryAPI(engine, mockQueryable, querier.StatsRenderer, log.NewNopLogger(), []v1.Codec{&mockCodec{}}, regexp.MustCompile(".*"), stats.PhaseTrackerConfig{})
     router := mux.NewRouter()
     router.Path("/api/v1/query").Methods("POST").Handler(queryAPI.Wrap(queryAPI.InstantQueryHandler))

@@ -285,7 +285,7 @@ func Test_CustomAPI_StatsRenderer(t *testing.T) {
         },
     }

-    queryAPI := NewQueryAPI(engine, mockQueryable, querier.StatsRenderer, log.NewNopLogger(), []v1.Codec{v1.JSONCodec{}}, regexp.MustCompile(".*"))
+    queryAPI := NewQueryAPI(engine, mockQueryable, querier.StatsRenderer, log.NewNopLogger(), []v1.Codec{v1.JSONCodec{}}, regexp.MustCompile(".*"), stats.PhaseTrackerConfig{})

     router := mux.NewRouter()
     router.Path("/api/v1/query_range").Methods("POST").Handler(queryAPI.Wrap(queryAPI.RangeQueryHandler))
@@ -441,7 +441,7 @@ func Test_Logicalplan_Requests(t *testing.T) {

     for _, tt := range tests {
         t.Run(tt.name, func(t *testing.T) {
-            c := NewQueryAPI(engine, mockQueryable, querier.StatsRenderer, log.NewNopLogger(), []v1.Codec{v1.JSONCodec{}}, regexp.MustCompile(".*"))
+            c := NewQueryAPI(engine, mockQueryable, querier.StatsRenderer, log.NewNopLogger(), []v1.Codec{v1.JSONCodec{}}, regexp.MustCompile(".*"), stats.PhaseTrackerConfig{})
             router := mux.NewRouter()
             router.Path("/api/v1/query").Methods("POST").Handler(c.Wrap(c.InstantQueryHandler))
             router.Path("/api/v1/query_range").Methods("POST").Handler(c.Wrap(c.RangeQueryHandler))

pkg/api/queryapi/util.go

Lines changed: 4 additions & 3 deletions
@@ -16,9 +16,10 @@ import (
 )

 var (
-    ErrEndBeforeStart = httpgrpc.Errorf(http.StatusBadRequest, "%s", "end timestamp must not be before start time")
-    ErrNegativeStep   = httpgrpc.Errorf(http.StatusBadRequest, "%s", "zero or negative query resolution step widths are not accepted. Try a positive integer")
-    ErrStepTooSmall   = httpgrpc.Errorf(http.StatusBadRequest, "%s", "exceeded maximum resolution of 11,000 points per timeseries. Try decreasing the query resolution (?step=XX)")
+    ErrEndBeforeStart         = httpgrpc.Errorf(http.StatusBadRequest, "%s", "end timestamp must not be before start time")
+    ErrNegativeStep           = httpgrpc.Errorf(http.StatusBadRequest, "%s", "zero or negative query resolution step widths are not accepted. Try a positive integer")
+    ErrStepTooSmall           = httpgrpc.Errorf(http.StatusBadRequest, "%s", "exceeded maximum resolution of 11,000 points per timeseries. Try decreasing the query resolution (?step=XX)")
+    ErrUpstreamRequestTimeout = "upstream request timeout"
 )

 func extractQueryOpts(r *http.Request) (promql.QueryOpts, error) {

0 commit comments