TorchScript Error: Unknown builtin op

Hi there,

I am trying to run a traced TorchScript module on an iOS device, but I am getting this error:

libc++abi.dylib: terminating with uncaught exception of type torch::jit::ErrorReport: 
Unknown builtin op: aten::_batch_norm_impl_index_backward.
Could not find any similar ops to aten::_batch_norm_impl_index_backward. This op may not exist or may not be currently supported in TorchScript.
:
  File "<string>", line 19

            def backward(grad_output):
                dinput, dweight, dbias = torch._batch_norm_impl_index_backward(
                                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
                    impl_idx, input, grad_output, weight, running_mean, running_var,
                    save1, save2, training, eps, [True, has_weight, has_bias], reserve)

This only occurs when loading the traced script on mobile; running on a desktop platform works fine.
I have read a related issue in the PyTorch repository, but its fix does not seem to apply in this case.
Could anyone help me with this?

Thanks

Hey @DTSED, were you able to figure this out? I'm encountering the same issue and I'm not sure what in my TorchScript model is triggering it.

@xta0, I can replicate the error on iOS using the code below. I'm not sure if this is a regression, because I don't see backward being called anywhere.

Python code:

import torch as th
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, 392)
        self.fc2 = nn.Linear(392, 10)

    def forward(self, x):
        x = self.fc1(x)
        x = nn.functional.relu(x)
        x = self.fc2(x)
        return x

n = Net()
X = th.randn(3, 28 * 28)

simple_net = th.jit.trace(n, X)
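
For completeness, the traced module can be saved and sanity-checked on desktop before deploying it (a minimal sketch; the file name simple_net.pt is illustrative):

simple_net.save("simple_net.pt")  # this is the file that torch::jit::load opens on iOS

# Verify the traced module runs fine on desktop, as noted above
loaded = th.jit.load("simple_net.pt")
print(loaded(X).shape)  # torch.Size([3, 10])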

iOS C++ code:

// Prefer the QNNPACK backend for quantized ops when it is available
auto qengines = at::globalContext().supportedQEngines();
if (std::find(qengines.begin(), qengines.end(), at::QEngine::QNNPACK) != qengines.end()) {
    at::globalContext().setQEngine(at::QEngine::QNNPACK);
}

// Load the traced module and put it in inference mode
_impl = torch::jit::load(filePath.UTF8String);
_impl.eval();

// Wrap the raw input buffer in a float tensor and run the forward pass
std::vector<torch::jit::IValue> modelArgs;
at::Tensor tensor = torch::from_blob(data, trainingDataVectorShape, at::kFloat);
modelArgs.push_back(tensor);

auto result = _impl.forward(modelArgs);

Resulting error:

libc++abi.dylib: terminating with uncaught exception of type torch::jit::ErrorReport: 
Unknown builtin op: aten::_batch_norm_impl_index_backward.
Could not find any similar ops to aten::_batch_norm_impl_index_backward. This op may not exist or may not be currently supported in TorchScript.
:
  File "<string>", line 19

            def backward(grad_output):
                dinput, dweight, dbias = torch._batch_norm_impl_index_backward(
                                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
                    impl_idx, input, grad_output, weight, running_mean, running_var,
                    save1, save2, training, eps, [True, has_weight, has_bias], reserve)

@DTSED @mark_jimenez, can you try adding torch::autograd::AutoGradMode guard(false); before calling torch::jit::load and forward?

    at::AutoNonVariableTypeMode nonVarTypeModeGuard(true); // use inference-only (non-variable) kernels
    torch::autograd::AutoGradMode guard(false);            // disable autograd so no backward ops are required
    auto model = torch::jit::load(path.UTF8String);
    auto input = torch::randn({3, 784});
    auto output = model.forward({input});
    std::cout << output.toTensor().sizes() << std::endl;

Autograd features have been disabled on mobile; the current workaround is to use the RAII guards above. torch::autograd::AutoGradMode guard(false); is similar to with torch.no_grad(): in Python.
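
For comparison, the Python-side equivalent of the C++ guard is just running inference inside the no-grad context (a sketch, reusing simple_net and X from the code above):

with th.no_grad():       # same effect as torch::autograd::AutoGradMode guard(false);
    out = simple_net(X)  # forward pass with gradient tracking disabled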

Works great, thank you again @xta0!

It works for me too; sorry for the delay in responding.
Thanks