Does torch.jit.script support custom operators?

I tried to compile a function that contains a custom op defined with torch.autograd.Function:

import torch
from torch.autograd import Function

class mul2(Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, dx):
        return dx * 2

def f(a, b):
    c = a + b
    d = mul2.apply(c)
    e = torch.tanh(d * c)
    return d + (e + e)

print(torch.jit.script(f).code)

and I received

Traceback (most recent call last):
  File "revisble.py", line 21, in <module>
    print(torch.jit.script(f).code)
  File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 1226, in script
    fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
  File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 1075, in _compile_and_register_class
    ast = get_jit_class_def(obj, obj.__name__)
  File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 148, in get_jit_class_def
    self_name=self_name) for method in methods]
  File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 148, in <listcomp>
    self_name=self_name) for method in methods]
  File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 169, in get_jit_def
    return build_def(ctx, py_ast.body[0], type_line, self_name)
  File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 198, in build_def
    param_list = build_param_list(ctx, py_def.args, self_name)
  File "/Users/***/anaconda3/lib/python3.7/site-packages/torch/jit/frontend.py", line 224, in build_param_list
    raise NotSupportedError(ctx_range, _vararg_kwarg_err)
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
at /Users/***/anaconda3/lib/python3.7/site-packages/torch/autograd/function.py:26:25
    def mark_dirty(self, *args):
                         ~~~~~ <--- HERE
        r"""Marks given tensors as modified in an in-place operation.

        **This should be called at most once, only from inside the**
        :func:`forward` **method, and all arguments should be inputs.**

        Every tensor that's been modified in-place in a call to :func:`forward`
        should be given to this function, to ensure correctness of our checks.
        It doesn't matter whether the function is called before or after
        modification.
'mul2' is being compiled since it was called from 'f'
at revisble.py:17:4
def f(a, b):
    c = a + b
    d = mul2.apply(c)
    ~~~~~~~~~~~~~~~~ <--- HERE
    e = torch.tanh(d * c)
    return d + (e + e)

torch.jit.trace doesn’t work either. Does script mode support custom ops? If so, what is the correct way to handle a custom op?

Thanks!

We currently don’t support Python autograd.Function in TorchScript. Right now the workaround is to define the function in C++ and bind it to TorchScript as a custom op. This was done in pytorch/vision to support the Mask R-CNN model; you can see the specific implementation here.
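
For example, the mul2 op from the question could be rewritten along these lines. This is a minimal sketch: the my_ops namespace is a placeholder, and the exact registration API depends on your PyTorch version (newer releases also offer the TORCH_LIBRARY macro).

#include <torch/script.h>

// Hypothetical C++ counterpart of the Python mul2 above. Since it is built
// from differentiable ATen ops, autograd can already differentiate it.
torch::Tensor mul2(const torch::Tensor& x) {
  return x * 2;
}

// Register the function so that, once the compiled library is loaded,
// TorchScript can resolve it as torch.ops.my_ops.mul2.
static auto registry = torch::RegisterOperators("my_ops::mul2", &mul2);

A scripted function can then call torch.ops.my_ops.mul2(c) where the Python version called mul2.apply(c).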


@driazati thank you for your timely answer.
It seems I can only write the forward function with the current TorchScript custom ops, right? What if users would like to customize the backward function, as we do with torch.autograd.Function?

If your C++ op calls a C++ autograd op (i.e. a class that publicly derives from torch::autograd::Function), it will act the same as torch.autograd.Function. In the code below, if you bind my_cool_op and call it from TorchScript, it will use the backward you defined in MyCoolOp.

#include <torch/all.h>
#include <torch/python.h>

class MyCoolOp : public torch::autograd::Function<MyCoolOp> {
 public:
  static torch::autograd::variable_list forward(
      torch::autograd::AutogradContext* ctx,
      torch::autograd::Variable input) {
    // forward calculation
  }

  static torch::autograd::variable_list backward(
      torch::autograd::AutogradContext* ctx,
      torch::autograd::variable_list grad_output) {
    // backward calculation
  }
};

torch::Tensor my_cool_op(const torch::Tensor& input) {
  return MyCoolOp::apply(input);
}
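
Binding it could look roughly like this (a hedged sketch; my_ops is a placeholder namespace, and TORCH_LIBRARY requires a recent PyTorch, while older versions use torch::RegisterOperators as above):

// Hypothetical registration: exposes the wrapper to scripted code as
// torch.ops.my_ops.my_cool_op once the library is loaded.
TORCH_LIBRARY(my_ops, m) {
  m.def("my_cool_op", &my_cool_op);
}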

Hi @driazati, I tried to build my custom op and the compilation worked, but when I load the op I get the error below. The error even occurs when I load the example from the documentation. Could you please help me out? Thank you so much!

print(torch.ops.my_ops.SpikeFunction)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/_ops.py", line 61, in __getattr__
    op = torch._C._jit_get_operation(qualified_op_name)
RuntimeError: No such operator my_ops::SpikeFunction

The code I used to load the module:

torch.utils.cpp_extension.load(
    name="SpikeFunction",
    sources=["spikefunction.cpp"],
    is_python_module=False,
    verbose=True
)
Using /home/guozhang/.cache/torch_extensions as PyTorch extensions root...
Emitting ninja build file /home/guozhang/.cache/torch_extensions/SpikeFunction/build.ninja...
Building extension module SpikeFunction...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF spikefunction.o.d -DTORCH_EXTENSION_NAME=SpikeFunction -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include/TH -isystem /calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/include/THC -isystem /calc/guozhang/anaconda3/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/guozhang/multiplicative_rule/eprop/spikefunction.cpp -o spikefunction.o
[2/2] c++ spikefunction.o -shared -L/calc/guozhang/anaconda3/lib/python3.8/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o SpikeFunction.so
Loading extension module SpikeFunction...

The C++ code I wrote:
#include <torch/all.h>
#include <torch/python.h>

class SpikeFunction : public torch::autograd::Function<SpikeFunction> {
 public:
  static torch::autograd::variable_list forward(
      torch::autograd::AutogradContext* ctx,
      torch::autograd::Variable v_scaled,
      torch::autograd::Variable dampening_factor) {
    // Forward pass: a spike is emitted wherever v_scaled crosses zero.
    ctx->save_for_backward({v_scaled, dampening_factor});
    return {torch::greater(v_scaled, 0.)};
  }

  static torch::autograd::variable_list backward(
      torch::autograd::AutogradContext* ctx,
      torch::autograd::variable_list grad_output) {
    // Backward pass: surrogate gradient, a dampened triangular function of
    // v_scaled; the dampening factor itself receives no gradient.
    auto saved = ctx->get_saved_variables();
    auto v_scaled = saved[0];
    auto dampening_factor = saved[1];
    auto dE_dz = grad_output[0];
    auto dz_dv_scaled =
        torch::maximum(1 - torch::abs(v_scaled), torch::zeros_like(v_scaled)) *
        dampening_factor;
    auto dE_dv_scaled = dE_dz * dz_dv_scaled;
    return {dE_dv_scaled, torch::zeros_like(dampening_factor)};
  }
};

torch::autograd::variable_list SpikeFunction(
    const torch::Tensor& v_scaled,
    const torch::Tensor& dampening_factor) {
  return SpikeFunction::apply(v_scaled, dampening_factor);
}
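
Note that the listing never registers an operator under the my_ops namespace that the failing lookup uses, so torch.ops.my_ops.SpikeFunction has nothing to resolve to. A minimal sketch of the presumably missing registration (assuming a PyTorch version that provides TORCH_LIBRARY):

// Hypothetical registration: after the extension is built and loaded, the op
// should become visible as torch.ops.my_ops.SpikeFunction.
TORCH_LIBRARY(my_ops, m) {
  m.def("SpikeFunction", &SpikeFunction);
}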

Instead of using csrc, I want to wrap a few lines of Python code into a single op, and then later implement it as a new op in another inference engine.

How do I export that?