feat: Update syntax of custom torch ops #96
chichun-charlie-liu merged 6 commits into foundation-model-stack:main
Conversation
Signed-off-by: Andrea Fasoli <andrea.fasoli@ibm.com>
can we use a "version check" approach like the one we used for external kernels here?
Addons updated with branching based on PyTorch version.
ac534a5 to 5cfc9bf
torch_version = Version(torch.__version__.split("+", maxsplit=1)[0])
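The quoted line strips any local build tag (e.g. `+cu121`) before comparing versions. A minimal stdlib-only sketch of equivalent parsing (a plain int tuple stands in for `packaging.version.Version`, which the PR uses; `parse_torch_version` is a hypothetical helper name):

```python
def parse_torch_version(version_string: str) -> tuple:
    """Strip a local build suffix such as '+cu121' and return an int tuple."""
    public = version_string.split("+", maxsplit=1)[0]
    # Keep only purely numeric components; pre-release tags like 'dev20241001'
    # are ignored in this simplified sketch.
    return tuple(int(p) for p in public.split(".") if p.isdigit())

# Version-gated branching, analogous to the external-kernel checks:
print(parse_torch_version("2.4.0+cu121") >= (2, 4))  # -> True
```

Tuple comparison is lexicographic, so `(2, 4, 0) >= (2, 4)` behaves as expected without pulling in `packaging` for this simple gate.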
def implement_op_decorator(pt_ver, op_namespace_id):
do we really need to pass pt_ver as an arg to this func? it can access torch_version defined on L27 directly. Unless there is a case we want to register using a syntax that is lower than the current installed PT version?
pt_ver is not needed at this time, but I plan to move these decorators somewhere under utils and import them (they are shared between gptq and int8), so I'd prefer for them to be more general and not force users to declare a global torch_version variable, even in future addons.
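A hedged sketch of how such a decorator can stay general by taking the version explicitly (`register_new`/`register_old` are placeholder registration paths for illustration, not the PR's actual torch.library calls):

```python
from typing import Callable


def implement_op_decorator(pt_ver: tuple, op_namespace_id: str) -> Callable:
    """Return a registration decorator chosen by the supplied torch version.

    Passing pt_ver explicitly keeps the helper importable from a shared
    utils module without requiring each addon to define a global
    torch_version variable.
    """
    def register_new(fn):  # stands in for the >= 2.4 custom_op path
        fn.registered_via = "custom_op"
        fn.op_id = op_namespace_id
        return fn

    def register_old(fn):  # stands in for the pre-2.4 impl path
        fn.registered_via = "impl"
        fn.op_id = op_namespace_id
        return fn

    return register_new if pt_ver >= (2, 4) else register_old


@implement_op_decorator((2, 4, 0), "fms_mo::demo_op")
def demo_op(x):
    return x
```

With this shape, the same decorator factory can be reused across addons, and a caller could in principle also register under an older syntax than the installed version if that were ever needed.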
Merged 3f07692 into foundation-model-stack:main
Description of the change
This PR updates the syntax used to register a custom op with PyTorch. The purpose of registration is to create a custom operation that can be inserted as a custom node in the computational graph without inducing a graph break.
From PyTorch 2.4, new `torch.library` functions have been introduced:
- `custom_op`, replacing `impl`
- `register_fake`, replacing `impl_abstract`

They streamline earlier implementations of the custom op registration process. With the new syntax, there is no need for an additional op definition via `torch.library.define`.

The earlier syntax is deprecated from PyTorch >= 2.6: `impl_abstract` redirects to `register_fake` and throws a warning.

However, the new functions impose a hard lower bound on the PyTorch versions fms-mo supports (>= 2.4), so we may want to hold off on this update until later.
Was the PR tested