
feat: Update syntax of custom torch ops #96

Merged
chichun-charlie-liu merged 6 commits into foundation-model-stack:main from andrea-fasoli:custom_ops_syntax
Apr 23, 2025

Conversation

@andrea-fasoli
Collaborator

Description of the change

This PR updates the syntax used to register a custom op with PyTorch. The purpose of registration is to create a custom operation that can be inserted as a custom node in the computational graph without inducing a graph break.

Starting with PyTorch 2.4, new `torch.library` functions are available:

  • `custom_op`, replacing `impl`
  • `register_fake`, replacing `impl_abstract`

These streamline the earlier custom op registration process: with the new syntax, there is no need for an additional op definition via `torch.library.define`. A minimal before/after comparison is sketched below.
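For illustration only (the op name `mylib::my_matmul`, its signature, and the meta shape rule are placeholders against the public `torch.library` API, not the ops touched by this PR):

```python
import torch

# New syntax (PyTorch >= 2.4): custom_op defines and implements the op in one step.
@torch.library.custom_op("mylib::my_matmul", mutates_args=())
def my_matmul(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    return x @ w

# register_fake supplies the shape/dtype propagation rule used during tracing,
# so the op can appear as a graph node without causing a graph break.
@my_matmul.register_fake
def _(x, w):
    return torch.empty(x.shape[0], w.shape[1], dtype=x.dtype, device=x.device)


# Old syntax (pre-2.4, deprecated from 2.6): define, impl, and impl_abstract
# are three separate steps.
torch.library.define("mylib::my_matmul_old", "(Tensor x, Tensor w) -> Tensor")

@torch.library.impl("mylib::my_matmul_old", "default")
def my_matmul_old(x, w):
    return x @ w

@torch.library.impl_abstract("mylib::my_matmul_old")
def my_matmul_old_meta(x, w):
    return torch.empty(x.shape[0], w.shape[1], dtype=x.dtype, device=x.device)
```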

The earlier syntax is deprecated as of PyTorch >= 2.6: `impl_abstract` redirects to `register_fake` and emits a warning.
However, the new functions would raise the lower bound on fms-mo-supported PyTorch versions to >= 2.4, so we may want to hold off on this update until later.

Was the PR tested

  • I have ensured all unit tests pass

Signed-off-by: Andrea Fasoli <andrea.fasoli@ibm.com>
@chichun-charlie-liu
Collaborator

can we use a "version check" approach like the one we used for external kernels here?

andrea-fasoli and others added 3 commits April 18, 2025 21:18
Signed-off-by: Andrea Fasoli <andrea.fasoli@ibm.com>
Signed-off-by: andrea-fasoli <110120121+andrea-fasoli@users.noreply.github.com>
Signed-off-by: Andrea Fasoli <andrea.fasoli@ibm.com>
@andrea-fasoli
Collaborator Author

Addons updated with branching based on PyTorch version.
There's some duplication of the version-dependent decorators between GPTQ and INT8. We can eventually move these to fms_mo/utils/aiu_utils.py and import them from there.
Note that FMS pins torch==2.5.1 at this time, so this version check is only relevant when FMS-MO is used independently of FMS.

Signed-off-by: Andrea Fasoli <andrea.fasoli@ibm.com>
Comment thread on fms_mo/aiu_addons/gptq/gptq_aiu_op.py (Outdated)
```python
torch_version = Version(torch.__version__.split("+", maxsplit=1)[0])


def implement_op_decorator(pt_ver, op_namespace_id):
```
Collaborator


Do we really need to pass pt_ver as an arg to this func? It can access torch_version defined on L27 directly. Unless there is a case where we want to register using a syntax lower than the currently installed PT version?

Collaborator Author


pt_ver is not needed at this time, but I plan to move these decorators somewhere under utils and import them (they are shared between GPTQ and INT8), so I'd prefer for them to be more general and not force users to declare a global torch_version variable, even in future addons.
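For reference, a minimal sketch of what such a shared, version-branching decorator pair could look like (`implement_op_decorator` matches the name in the diff above; `register_fake_decorator`, the `schema` argument, and the `"default"` dispatch key are illustrative assumptions, not necessarily this PR's implementation):

```python
from packaging.version import Version

import torch

torch_version = Version(torch.__version__.split("+", maxsplit=1)[0])


def implement_op_decorator(pt_ver, op_namespace_id, schema=None):
    """Return the op-implementation decorator matching the given PyTorch version."""
    if pt_ver >= Version("2.4"):
        # New-style registration: custom_op defines and implements in one step
        # (the schema is inferred from the decorated function's annotations).
        return torch.library.custom_op(op_namespace_id, mutates_args=())
    # Old-style registration: declare the op first, then implement it.
    torch.library.define(op_namespace_id, schema)
    return torch.library.impl(op_namespace_id, "default")


def register_fake_decorator(pt_ver, op_namespace_id):
    """Return the fake/meta registration decorator for the version in use."""
    if pt_ver >= Version("2.4"):
        return torch.library.register_fake(op_namespace_id)
    return torch.library.impl_abstract(op_namespace_id)
```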

Signed-off-by: Andrea Fasoli <andrea.fasoli@ibm.com>
@chichun-charlie-liu merged commit 3f07692 into foundation-model-stack:main Apr 23, 2025
11 checks passed
@andrea-fasoli deleted the custom_ops_syntax branch April 23, 2025 16:22