fix(core): handle partial llm_request in BeforeModel hook override #22326
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request addresses a critical bug where BeforeModel hooks returning partial LLM requests (specifically, only a model override without messages) would cause a TypeError. The changes introduce robust guards in the fromHookLLMRequest method to correctly handle these partial requests, ensuring that model overrides are applied as intended and existing conversation contents are preserved. This improves the stability and flexibility of the hook system without altering the public API.
@SandyTao520 This addresses the root cause from #21847, happy to adjust anything before merge.
Code Review
This pull request effectively resolves a crash in fromHookLLMRequest when a BeforeModel hook returns a partial llm_request with only a model override. The changes in hookTranslator.ts are robust, adding guards to handle missing messages by falling back to the base request's contents, and correctly applying the model override using nullish coalescing. The accompanying new unit tests in hookTranslator.test.ts are well-written and specifically target the scenarios that caused the original bug, ensuring the fix is properly validated. The changes are correct and address the issue as described.
When a BeforeModel hook returns only a model override without messages
(e.g. { model: 'gemini-2.5-flash' }), fromHookLLMRequest crashed with
TypeError because it called .map() on undefined messages.
Guard hookRequest.messages before mapping, falling back to
baseRequest?.contents to preserve the conversation. Use nullish
coalescing for model so a partial hook response does not overwrite
with undefined.
Fixes google-gemini#21847
Force-pushed 32f1af3 to e52d34e
Closing this as the linked issue #21847 doesn't have the […]
Thanks for the fix - the crash guard for a partial llm_request is good. However, the model override still doesn't work end-to-end. I verified this manually: even with the hook returning only a model override, requests still go to the originally configured model.
To fully fix the model override scenario, three further changes would also be needed.
The crash fix itself is valid, but the PR description's "How to Validate" scenario (hook routing to a different model) doesn't actually work yet.
We are merging this now and will finish the model override fix ourselves. Thanks for your fix for the crash!
Thanks for approving! I am working on the end-to-end model override fix.
Sure, if you have an e2e fix, please open a PR and we'll review.
Are you talking to me?
@SandyTao520 Follow-up PR is open at #24784; it covers all three changes you identified.
Summary
`fromHookLLMRequest` crashed with `TypeError: Cannot read properties of undefined (reading 'map')` when a `BeforeModel` hook returned a partial `llm_request` containing only a `model` override (no `messages`). The model override never applied because the crash happened first.

Details
Two targeted guards added to `fromHookLLMRequest` in `HookTranslatorGenAIv1`:

- Guard `hookRequest.messages` before calling `.map()`. If absent, fall back to `baseRequest?.contents` so the existing conversation is preserved.
- Use `hookRequest.model ?? baseRequest?.model ?? ''` so a hook returning only `{ model: '...' }` correctly overrides the model without clobbering it with `undefined`.

No changes to the public hook API; hooks that already return full `llm_request` objects are unaffected.

Related Issues
Fixes #21847
How to Validate
- Register a `BeforeModel` hook that returns only `{ hookSpecificOutput: { hookEventName: "BeforeModel", llm_request: { model: "gemini-2.5-flash" } } }`
- Set the active model to `gemini-2.5-flash-lite` via `/model set gemini-2.5-flash-lite`
- Send a request and confirm it is routed to `gemini-2.5-flash`
- Run `npx vitest run packages/core/src/hooks/hookTranslator.test.ts`

Pre-Merge Checklist
- Hooks that already return `messages` are unaffected