
Fix UnboundLocalError in invert_attention_mask by adding proper shape validation #45247

Open
gagandhakrey wants to merge 2 commits into huggingface:main from gagandhakrey:fix/invert-attention-mask-unboundlocalerror

Conversation

@gagandhakrey


What does this PR do?

Problem: The invert_attention_mask function in src/transformers/modeling_utils.py crashed with an UnboundLocalError when the encoder_attention_mask was not exactly 2D or 3D. Because the function checked for the 2D and 3D cases with two independent if statements, with no elif or else fallback, the encoder_extended_attention_mask variable could remain unassigned when the subsequent .to() cast was attempted. Models passing 1D or 4D masks therefore failed with a confusing UnboundLocalError stack trace rather than a clear shape error.
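A minimal reproduction of the failure mode described above, sketched as a standalone function (the real code is a method on ModuleUtilsMixin and uses self.dtype; the simplified name invert_attention_mask_old and the hardcoded float32 dtype are illustrative assumptions):

```python
import torch

def invert_attention_mask_old(encoder_attention_mask):
    # Two independent `if` checks with no fallback: with a 1D or 4D mask,
    # neither branch runs and the local variable is never assigned.
    if encoder_attention_mask.dim() == 3:
        encoder_extended_attention_mask = encoder_attention_mask[:, None, :, :]
    if encoder_attention_mask.dim() == 2:
        encoder_extended_attention_mask = encoder_attention_mask[:, None, None, :]
    # For unsupported shapes this line raises UnboundLocalError
    # instead of a clear shape error.
    return encoder_extended_attention_mask.to(dtype=torch.float32)

try:
    invert_attention_mask_old(torch.ones(5))  # 1D mask
except UnboundLocalError as e:
    print("UnboundLocalError:", e)
```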

Fix: Changed the second shape check to an elif and added a final else branch. If the tensor is neither 2D nor 3D, the function now raises a ValueError describing the invalid tensor shape, matching the behavior of the nearby get_extended_attention_mask function and eliminating the UnboundLocalError entirely.
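The corrected control flow can be sketched as follows. This is a simplified standalone version under stated assumptions: the real method lives on ModuleUtilsMixin and casts to self.dtype, and the name invert_attention_mask_fixed, the float32 dtype, and the exact error message are illustrative, not the PR's verbatim diff:

```python
import torch

def invert_attention_mask_fixed(encoder_attention_mask: torch.Tensor) -> torch.Tensor:
    """Expand a 2D or 3D mask to 4D, then invert it (1 -> 0, 0 -> large negative)."""
    if encoder_attention_mask.dim() == 3:
        # [batch, from_seq, to_seq] -> [batch, 1, from_seq, to_seq]
        encoder_extended_attention_mask = encoder_attention_mask[:, None, :, :]
    elif encoder_attention_mask.dim() == 2:
        # [batch, seq] -> [batch, 1, 1, seq]
        encoder_extended_attention_mask = encoder_attention_mask[:, None, None, :]
    else:
        # New branch: fail loudly with the offending shape instead of
        # falling through to an UnboundLocalError below.
        raise ValueError(
            f"Wrong shape for encoder_attention_mask "
            f"(shape {tuple(encoder_attention_mask.shape)})"
        )
    dtype = torch.float32  # the real method uses self.dtype
    encoder_extended_attention_mask = encoder_extended_attention_mask.to(dtype=dtype)
    return (1.0 - encoder_extended_attention_mask) * torch.finfo(dtype).min
```

With this change, a 1D or 4D mask produces an immediate, descriptive ValueError at the call site rather than an unrelated UnboundLocalError deeper in the function.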

Fixes # (issue)

Code Agent Policy

The Transformers repo is currently being overwhelmed by a large number of PRs and issue comments written by
code agents. We are currently bottlenecked by our ability to review and respond to them. As a result,
we ask that new users do not submit pure code agent PRs at this time.
You may use code agents in drafting or to help you diagnose issues. We'd also ask autonomous "OpenClaw"-like agents
not to open any PRs or issues for the moment.

PRs that appear to be fully agent-written will probably be closed without review, and we may block users who do this
repeatedly or maliciously.

This is a rapidly-evolving situation that's causing significant shockwaves in the open-source community. As a result,
this policy is likely to be updated regularly in the near future. For more information, please read CONTRIBUTING.md.

  • I confirm that this is not a pure code agent PR.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

