Reject CoreML delegation for unsupported input dtypes #19245

john-rocky wants to merge 2 commits into pytorch:main
Conversation
coremltools' torch->MIL converter only knows the dtypes in `TORCH_DTYPE_TO_MIL_DTYPE`. When a node had any other input dtype (notably `torch.uint8` or `torch.int8`), the partitioner still tagged it for delegation; the failure surfaced later as a `KeyError` raised from deep inside coremltools, masking the underlying cause and crashing the export. Reject such nodes up front in `should_override_support` so they fall back to the portable backend. Also wrap the coremltools support query in a try/except as a defensive measure for any other case where the call might raise instead of returning `False`. Fixes pytorch#11686.
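The guard described above can be sketched in isolation. This is a hypothetical, self-contained illustration, not the PR's actual code: `SUPPORTED_DTYPES` stands in for the keys of coremltools' `TORCH_DTYPE_TO_MIL_DTYPE`, dtypes are modeled as plain strings so the sketch runs without torch or coremltools, and `should_override_support` is assumed to return `True` when delegation must be rejected:

```python
# Hypothetical stand-in for the keys of coremltools'
# TORCH_DTYPE_TO_MIL_DTYPE (the real table maps torch dtypes to
# MIL dtypes; uint8/int8 are absent from it).
SUPPORTED_DTYPES = {"float32", "float16", "int32", "int64", "bool"}


def is_torch_fx_node_supported(node):
    # Stand-in for coremltools' is_torch_fx_node_supported, which
    # only inspects the op name and never looks at input dtypes.
    return node["op"] in {"abs", "add", "mul"}


def should_override_support(node):
    """Return True when delegation must be rejected up front.

    Mirrors the logic the PR describes: reject any node whose input
    dtype falls outside the supported table, and treat an exception
    from the support query as "not supported" rather than letting it
    crash the export.
    """
    if any(d not in SUPPORTED_DTYPES for d in node["input_dtypes"]):
        return True  # unsupported dtype: fall back to portable backend
    try:
        return not is_torch_fx_node_supported(node)
    except Exception:
        # Defensive: a raising support query means "not supported".
        return True


good = {"op": "abs", "input_dtypes": ["float32"]}
bad = {"op": "abs", "input_dtypes": ["uint8"]}
print(should_override_support(good))  # False: delegation allowed
print(should_override_support(bad))   # True: rejected up front
```

The key design point is that the dtype check runs before the coremltools query, so the `KeyError` path is never reached for nodes that were doomed anyway.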
Summary

`CoreMLPartitioner` queries `coremltools.converters.mil.frontend.torch.is_torch_fx_node_supported` to decide whether a node can be delegated, but that function only inspects the op name; it does not validate input dtypes. When a graph contains a node whose inputs use a dtype outside `TORCH_DTYPE_TO_MIL_DTYPE` (e.g. `torch.uint8` / `torch.int8`), the partitioner happily tags it, and the failure only surfaces later, as a `KeyError` from deep inside coremltools that aborts the whole export.

This change adds an explicit dtype check in `should_override_support` so such nodes fall back to the portable backend, and wraps the support query in `try/except` as a defensive measure for any other case where the call might raise instead of returning `False`.

Fixes #11686.
Test plan

Added `test_unsupported_dtype_does_not_crash_partitioner`, which lowers `torch.abs` on a `uint8` input via `to_edge_transform_and_lower` and asserts that the op is not delegated. Verified locally on macOS 15 / Python 3.10 / coremltools 9.0.
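The described test can also be exercised without a full executorch/coremltools install. The sketch below is hypothetical: it tests a stand-in `should_override_support` predicate (plain-string dtypes, assumed to return `True` when delegation must be rejected), not the real partitioner, but it mirrors the shape of the PR's test, where abs on a uint8 input must not be delegated:

```python
import unittest

# Hypothetical stand-in for coremltools' supported-dtype table.
SUPPORTED_DTYPES = {"float32", "float16", "int32", "int64", "bool"}


def should_override_support(op, input_dtypes):
    # Reject delegation for any input dtype outside the supported set.
    return any(d not in SUPPORTED_DTYPES for d in input_dtypes)


class TestUnsupportedDtype(unittest.TestCase):
    def test_uint8_abs_is_not_delegated(self):
        # Mirrors the PR's test intent: abs on a uint8 input must be
        # rejected up front instead of crashing later in coremltools.
        self.assertTrue(should_override_support("abs", ["uint8"]))

    def test_float32_abs_is_delegated(self):
        self.assertFalse(should_override_support("abs", ["float32"]))


if __name__ == "__main__":
    unittest.main()
```

Testing the predicate directly keeps the regression check fast; the end-to-end lowering path is still covered by the real `to_edge_transform_and_lower` test in the PR.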
Authored with Claude.