Conversation
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 48ca345b97
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Co-authored-by: Codex <noreply@openai.com>
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a650996b96
```rust
let stage_one_context = build_request_context(context.as_ref(), config.as_ref()).await;
let _phase_one_e2e_timer = stage_one_context.start_timer(MEMORY_PHASE_ONE_E2E_MS);

// 1. Claim startup job.
let Some(claimed_candidates) = claim_startup_jobs(context.as_ref(), &config.memories).await
```
Claim jobs before building stage-1 request context
Move `build_request_context` after the startup-job claim/empty checks. The current order performs the model-info lookup and turn-metadata computation even when there are zero candidates, adding avoidable latency and load on every startup path that skips phase-1 work. It also contradicts the function's documented step order.
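A minimal sketch of the suggested ordering, reusing the identifiers quoted above; the surrounding function signature and the exact early-return shape are assumptions, not code from this PR:

```rust
// Hypothetical phase-1 entry point; only the calls quoted in the diff above
// come from the PR, the surrounding scaffolding is illustrative.
async fn run_startup_phase_one(context: Arc<Context>, config: Arc<Config>) {
    // 1. Claim startup jobs first and return before any model-info lookup
    //    or turn-metadata work when there is nothing to process.
    let Some(claimed_candidates) =
        claim_startup_jobs(context.as_ref(), &config.memories).await
    else {
        return;
    };
    if claimed_candidates.is_empty() {
        return;
    }

    // 2. Only now pay for stage-1 request-context construction and timers.
    let stage_one_context = build_request_context(context.as_ref(), config.as_ref()).await;
    let _phase_one_e2e_timer = stage_one_context.start_timer(MEMORY_PHASE_ONE_E2E_MS);

    // ... phase-1 extraction over `claimed_candidates` continues here ...
}
```

The point is only the ordering: the cheap claim/empty check gates the expensive context build.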
```rust
let model_client = ModelClient::new(
    Some(Arc::clone(&self.auth_manager)),
    self.thread_id,
    installation_id,
    config.model_provider.clone(),
    session_source,
    config.model_verbosity,
    config.features.enabled(Feature::EnableRequestCompression),
    config.features.enabled(Feature::RuntimeMetrics),
    /*beta_features_header*/ None,
);
```
Pass beta headers when streaming stage-1 prompts
The phase-1 path reconstructs a fresh `ModelClient` with `beta_features_header` forced to `None`. This drops the configured beta feature headers that the normal session client carries, so memory extraction requests can diverge or fail for providers/models that require those headers.
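A hedged sketch of the fix: thread the configured beta header through instead of hardcoding `None`. The `self.beta_features_header` field is a placeholder for wherever the normal session client sources its value; it is not confirmed by this diff.

```rust
// Reuse whatever beta-features header the regular session client carries
// instead of dropping it. `self.beta_features_header` is a placeholder name.
let model_client = ModelClient::new(
    Some(Arc::clone(&self.auth_manager)),
    self.thread_id,
    installation_id,
    config.model_provider.clone(),
    session_source,
    config.model_verbosity,
    config.features.enabled(Feature::EnableRequestCompression),
    config.features.enabled(Feature::RuntimeMetrics),
    self.beta_features_header.clone(), // was: /*beta_features_header*/ None
);
```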
```rust
    .with_model(model_name, model_name),
reasoning_effort: Some(reasoning_effort),
reasoning_summary,
service_tier: config.service_tier,
```
Preserve resolved service tier in stage-1 request context
Stage-1 now copies `service_tier` directly from raw config. Previously this came from the turn context, which includes runtime tier resolution (for example, fast-mode defaults for eligible plans). Using raw config can put phase-1 requests on the wrong tier, creating a latency/cost mismatch versus normal turns.
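A hedged sketch of what the review suggests: take the tier from the resolved turn context rather than raw config. The `turn_context.resolved_service_tier()` accessor is a placeholder for however the normal turn path exposes the resolved value, not an actual API in this codebase.

```rust
    .with_model(model_name, model_name),
reasoning_effort: Some(reasoning_effort),
reasoning_summary,
// Use the runtime-resolved tier (e.g. fast-mode defaults for eligible plans)
// rather than the raw config value; the accessor name is illustrative only.
service_tier: turn_context.resolved_service_tier(),
```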
Keep extracting memories out of core and move the write trigger into the app-server. This is temporary; it should move to the client level as a follow-up. This makes core fully independent from `codex-memories-write`.