
Widen resolve_max_new_tokens parameters to int64_t and rename for clarity (#18917)

Merged
meta-codesync[bot] merged 1 commit into main from export-D99769848 on Apr 24, 2026

Conversation

@kirklandsign
Contributor

kirklandsign commented Apr 15, 2026

Summary:

The second parameter was named num_prompt_tokens (int32_t) but all
callers (TextLLMRunner, MultimodalRunner) actually pass pos_
(int64_t), which represents the total number of occupied positions in
the context window — not just the current prompt's tokens.

  • Rename num_prompt_tokens → num_tokens_occupied to match actual
    semantics
  • Widen both parameters from int32_t to int64_t to eliminate implicit
    narrowing conversions from int64_t callers
  • Use int64_t internally to avoid truncation during intermediate
    arithmetic
  • Update pybinding arg name, .pyi type stub, tests, and docs

Reviewed By: larryliu0820

Differential Revision: D99769848
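Taken together, the described changes can be sketched as a standalone function. This is a minimal illustration, not the actual ExecuTorch source: the real method is `GenerationConfig::resolve_max_new_tokens` in irunner.h, and the branch ordering for the cases where one or both constraints are unset is inferred from the diff hunks quoted later in this thread.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch of the widened resolution logic. Both inputs that
// callers pass as int64_t (max_context_len and pos_ / num_tokens_occupied)
// are taken as int64_t, and all intermediate arithmetic stays in int64_t.
int32_t resolve_max_new_tokens_sketch(
    int64_t max_context_len,
    int64_t num_tokens_occupied,
    int32_t seq_len,          // -1 means "not specified"
    int32_t max_new_tokens) { // -1 means "not specified"
  int64_t result = 0;
  if (seq_len == -1 && max_new_tokens == -1) {
    // Neither constraint set (assumed branch): fill the remaining window.
    result = max_context_len - num_tokens_occupied;
  } else if (seq_len == -1) {
    // Only max_new_tokens is specified: cap by remaining context.
    result = std::min(
        static_cast<int64_t>(max_new_tokens),
        max_context_len - num_tokens_occupied);
  } else if (max_new_tokens == -1) {
    // Only seq_len is specified.
    result = std::min(static_cast<int64_t>(seq_len), max_context_len) -
        num_tokens_occupied;
  } else {
    // Both are specified.
    result = std::min(
        std::min(static_cast<int64_t>(seq_len), max_context_len) -
            num_tokens_occupied,
        static_cast<int64_t>(max_new_tokens));
  }
  // Clamp negative results (window already full) to zero, then narrow
  // the return as the PR does.
  return static_cast<int32_t>(std::max(static_cast<int64_t>(0), result));
}
```

Note how a caller passing an int64_t position no longer triggers an implicit narrowing conversion at the call site; the only remaining narrowing is the explicit cast on the return value.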

Copilot AI review requested due to automatic review settings April 15, 2026 23:19
@pytorch-bot

pytorch-bot Bot commented Apr 15, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18917

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

⏳ 6 Pending, 2 Unrelated Failures

As of commit 0bd2f52 with merge base c3f3d12:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla Bot added the CLA Signed label Apr 15, 2026
@meta-codesync
Contributor

meta-codesync Bot commented Apr 15, 2026

@kirklandsign has exported this pull request. If you are a Meta employee, you can view the originating Diff in D99769848.

Contributor

larryliu0820 left a comment


Review automatically exported from Phabricator review in Meta.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

Copilot AI left a comment


Pull request overview

Updates GenerationConfig::resolve_max_new_tokens to better match how runners track context usage by renaming the second parameter to reflect “occupied tokens” semantics and widening its parameters to int64_t, with corresponding updates across C++/Python bindings, tests, and docs.

Changes:

  • Rename num_prompt_tokens → num_tokens_occupied and update docstrings/comments to match actual semantics (occupied context positions).
  • Widen resolve_max_new_tokens parameters (and internal arithmetic) to int64_t.
  • Update Python binding arg name, .pyi stub, unit tests, and C++ docs to reflect the new API.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
extension/llm/runner/irunner.h Renames and widens resolve_max_new_tokens parameters; switches intermediate arithmetic to int64_t.
extension/llm/runner/pybindings.cpp Updates pybind keyword argument name for resolve_max_new_tokens.
extension/llm/runner/_llm_runner.pyi Updates type stub signature and docstring to the new parameter name/meaning.
extension/llm/runner/test/test_generation_config.cpp Updates test comments to match the new parameter name/meaning.
extension/llm/runner/test/test_runner_pybindings.py Adds coverage for calling resolve_max_new_tokens using the new keyword argument name.
docs/source/llm/run-with-c-plus-plus.md Updates docs to the new signature and clarified semantics.


Comment on lines +86 to +98

+     result = std::min(
+         static_cast<int64_t>(max_new_tokens),
+         max_context_len - num_tokens_occupied);
    } else if (seq_len != -1 && max_new_tokens == -1) {
      // Only seq_len is specified
-     result = std::min(seq_len, max_context_len) - num_prompt_tokens;
+     result = std::min(static_cast<int64_t>(seq_len), max_context_len) -
+         num_tokens_occupied;
    } else {
      // Both are specified
      result = std::min(
-         std::min(seq_len, max_context_len) - num_prompt_tokens,
-         max_new_tokens);
+         std::min(static_cast<int64_t>(seq_len), max_context_len) -
+             num_tokens_occupied,
+         static_cast<int64_t>(max_new_tokens));
Comment on lines 101 to 103

      // Ensure result is not negative
-     return std::max(0, result);
+     return static_cast<int32_t>(std::max(static_cast<int64_t>(0), result));
    }
Comment on lines 296 to 301

      .def(
          "resolve_max_new_tokens",
          &GenerationConfig::resolve_max_new_tokens,
          py::arg("max_context_len"),
-         py::arg("num_prompt_tokens"),
+         py::arg("num_tokens_occupied"),
          "Resolve the maximum number of new tokens to generate based on constraints")
@meta-codesync meta-codesync Bot changed the title Widen resolve_max_new_tokens parameters to int64_t and rename for clarity Widen resolve_max_new_tokens parameters to int64_t and rename for clarity (#18917) Apr 15, 2026
meta-codesync Bot pushed a commit that referenced this pull request Apr 15, 2026
…rity (#18917)

meta-codesync Bot pushed a commit that referenced this pull request Apr 15, 2026
…rity (#18917)

kirklandsign added a commit that referenced this pull request Apr 15, 2026
…rity (#18917)

kirklandsign added a commit that referenced this pull request Apr 15, 2026
…rity (#18917)

@kirklandsign
Contributor Author

@claude review

@claude

claude Bot commented Apr 24, 2026

Claude finished @kirklandsign's task in 1m 12s


Review of PR #18917

  • Read changed files and understand the diff
  • Review irunner.h changes (core logic)
  • Review pybindings and type stubs
  • Review tests
  • Review documentation
  • Post review feedback

Overall this is a clean, well-motivated change. The rename from num_prompt_tokens to num_tokens_occupied correctly reflects what callers actually pass (pos_, which is int64_t and represents all occupied context positions, not just the current prompt). The widening to int64_t eliminates implicit narrowing from the three call sites in text_llm_runner.cpp and multimodal_runner.cpp.

I have two substantive findings and one minor note:

1. Missing #include <algorithm> (agree with Copilot)

irunner.h uses std::min and std::max at lines 86-102 but does not include <algorithm>. This works today only because some other transitively-included header pulls it in, but that's fragile. Adding #include <algorithm> after line 13 (#include <cstdint>) would make the header self-contained.


2. Narrowing int64_t → int32_t return on line 102

The whole point of this PR is to widen to int64_t to avoid truncation, but the return statement still narrows back:

return static_cast<int32_t>(std::max(static_cast<int64_t>(0), result));

In practice, result will not exceed INT32_MAX for any realistic model (context lengths are well under 2 billion tokens today), so this is not a correctness bug right now. However, it partially undermines the widening rationale — if a future model had max_context_len > INT32_MAX, the widened arithmetic would compute correctly but the result would silently truncate on return.

This is a design call: if you want full end-to-end int64_t safety, the return type should change to int64_t too, which would require updating callers (int32_t max_new_tokens = ... in multimodal_runner.cpp:152,310 and text_llm_runner.cpp:172). If the goal is just to fix the narrowing at the call site (which it is, per the PR description), then the current approach is fine for now. Worth a brief note in the PR description or a follow-up comment so future readers understand the intent.
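For illustration, the fully widened alternative discussed here might look like the following. This is a hypothetical sketch showing only the max_new_tokens branch, not code from this PR, and `resolve_max_new_tokens_wide` is an invented name:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical: returning int64_t end-to-end removes the final narrowing
// cast, at the cost of widening the int32_t locals at the call sites
// (multimodal_runner.cpp and text_llm_runner.cpp).
int64_t resolve_max_new_tokens_wide(
    int64_t max_context_len,
    int64_t num_tokens_occupied,
    int64_t max_new_tokens) {
  int64_t result =
      std::min(max_new_tokens, max_context_len - num_tokens_occupied);
  // Clamp to zero when the context window is already full.
  return std::max(static_cast<int64_t>(0), result);
}
```

With this variant, a context length above INT32_MAX would round-trip without truncation, which is exactly the case the current int32_t return would silently break.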

3. Python keyword rename is a breaking change (minor, informational)

Renaming the pybind keyword from num_prompt_tokens to num_tokens_occupied (pybindings.cpp:300) will break any downstream Python code calling resolve_max_new_tokens(num_prompt_tokens=...). Positional callers are unaffected. Given this is a relatively new API and the new name is strictly more accurate, this seems intentional and acceptable — just calling it out for awareness.

Summary

The change is correct and well-tested. The rename and widening achieve the stated goals. The missing <algorithm> include is the only actionable item I'd suggest fixing before merge.


meta-codesync Bot pushed a commit that referenced this pull request Apr 24, 2026
…rity (#18917)

Copilot AI review requested due to automatic review settings April 24, 2026 00:41
@kirklandsign kirklandsign review requested due to automatic review settings April 24, 2026 00:41
kirklandsign added a commit that referenced this pull request Apr 24, 2026
…rity (#18917)

@kirklandsign
Contributor Author

@claude review

@claude

claude Bot commented Apr 24, 2026

Claude finished @kirklandsign's task in 59s


Review of PR #18917

  • Read changed files and understand the diff
  • Review irunner.h changes (core logic)
  • Review pybindings and type stubs
  • Review tests
  • Review documentation
  • Post review feedback

This is a clean, well-motivated change. The rename from num_prompt_tokens to num_tokens_occupied correctly reflects what callers actually pass — pos_ (which is int64_t and represents all occupied context positions, not just the current prompt). The widening to int64_t eliminates implicit narrowing from the call sites in text_llm_runner.cpp:189, multimodal_runner.cpp:152, and multimodal_runner.cpp:310.

I see the previous review's #include <algorithm> suggestion has been addressed — it's now present at irunner.h:13. The tests and docs are updated consistently. One substantive finding and one minor observation:

1. int32_t return type partially undermines the widening (design observation)

At irunner.h:103:

return static_cast<int32_t>(std::max(static_cast<int64_t>(0), result));

The intermediate arithmetic is correctly done in int64_t, but the final static_cast<int32_t> narrows back. If result ever exceeded INT32_MAX, this would silently truncate. In practice this won't happen with current models (context lengths are well under 2 billion), so this isn't a correctness bug today.

However, it does mean the callers also store into int32_t (multimodal_runner.cpp:152,310) or int (text_llm_runner.cpp:188), perpetuating the narrow type through the generation loop. If a future follow-up widens seq_len and max_new_tokens members to int64_t as well, changing the return type here at the same time would be natural. For this PR's stated goal of fixing the input narrowing, the current approach is fine.

2. Caller uses int instead of int32_t (pre-existing, minor)

At text_llm_runner.cpp:188:

int max_new_tokens =
    config.resolve_max_new_tokens(max_context_len, effective_pos);

This uses int rather than int32_t. While int is 32-bit on all ExecuTorch target platforms, using int32_t would be more consistent with the function's declared return type and the rest of the codebase. Not introduced by this PR, but worth noting as a future cleanup.

Summary

The change is correct and well-tested. The rename accurately reflects semantics, the widening eliminates the narrowing at call sites, <algorithm> is properly included, and all six touched files are consistent. LGTM.


@meta-codesync meta-codesync Bot merged commit eef7921 into main Apr 24, 2026
175 of 179 checks passed
@meta-codesync meta-codesync Bot deleted the export-D99769848 branch April 24, 2026 04:29
zeel2104 pushed a commit to zeel2104/executorch that referenced this pull request May 5, 2026
…rity (pytorch#18917)

Differential Revision: D99769848

Pull Request resolved: pytorch#18917

Labels

CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed), fb-exported, meta-exported


3 participants