Converting a PyTorch model to a TorchScript program via tracing using PyTorch 1.0: not working

I am trying the new tutorial from PyTorch (dev version) which shows how to load a PyTorch model in C++ without any Python dependencies. I am using the tracing method:

import torch
import torchvision

# An instance of your model: a U-Net model from fastai,
# which registers hooks as part of its architecture.
model = ...  # fastai U-Net instance

# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
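For anyone hitting the same error, one quick diagnostic before calling torch.jit.trace is to walk the module tree and list which submodules actually carry hooks (the same check the traceback below performs). This is just a sketch using a toy model with a dummy hook as a stand-in for the fastai U-Net, which I don't reproduce here:

```python
import torch.nn as nn


def modules_with_hooks(model):
    """Return [(name, module)] for submodules that have any hooks registered."""
    found = []
    for name, m in model.named_modules():
        if m._backward_hooks or m._forward_hooks or m._forward_pre_hooks:
            found.append((name, m))
    return found


# Toy stand-in: a tiny model with one no-op forward hook, mimicking how
# a U-Net uses hooks to capture intermediate encoder activations.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model[0].register_forward_hook(lambda mod, inp, out: None)

for name, m in modules_with_hooks(model):
    print(name, type(m).__name__)  # prints: 0 Conv2d
```

If this prints anything, tracing will raise the ValueError shown below until those hooks are removed.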

However, I get a ValueError:

ValueError: Modules that have hooks assigned can't be compiled

   1118         if orig._backward_hooks or orig._forward_hooks or orig._forward_pre_hooks:
-> 1119             raise ValueError("Modules that have hooks assigned can't be compiled")
   1121         for name, submodule in orig._modules.items():

I am using it with a U-Net model from fastai. Has anyone else tried tracing it, and if so, did it work? Also, does anyone know if this new feature is limited to fairly simple models?

For reference, see the fastai notebook with the relevant U-Net definition.


Hi @safi, do you mind opening an issue so that we can triage it to the right folks?

I tried to force the compilation by ignoring all the hooks. It compiled, but inference does not work (pink squares in my output instead of the expected picture).
I don’t have time to dig into the DynamicUnet implementation for now, sadly :confused:
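The "ignore the hooks" workaround can be sketched roughly like below: strip every hook dict in place so the tracer accepts the model. The caveat matches what I saw: if the forward pass relies on those hooks (as DynamicUnet does to collect encoder activations for the skip connections), the traced module will produce garbage. The toy model here is hypothetical, not the fastai U-Net:

```python
import torch
import torch.nn as nn


def strip_hooks(model):
    """Remove all hooks in place so torch.jit.trace will accept the model.

    WARNING: if the model's output depends on its hooks, tracing will
    "succeed" but the traced module will compute the wrong thing.
    """
    for m in model.modules():
        m._backward_hooks.clear()
        m._forward_hooks.clear()
        m._forward_pre_hooks.clear()


# Toy model with a no-op hook; here stripping is harmless.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
model[0].register_forward_hook(lambda mod, inp, out: None)

strip_hooks(model)
example = torch.rand(1, 3, 8, 8)
traced = torch.jit.trace(model, example)
assert torch.allclose(traced(example), model(example))
```

For a model like DynamicUnet, the real fix is probably to rewrite the forward pass so the intermediate activations are returned explicitly instead of captured via hooks.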

Just in case you want to try with different models, here is the SO thread ( )