Simple Conv2d Function cannot be scripted and reports Runtime Error.

Here is my simple Conv2d module, I want to script it using torch.jit.script.

import torch

class Conv2dCell(torch.nn.Module):
  def __init__(self):
    super(Conv2dCell, self).__init__()

  def forward(self, x):
    conv = torch.nn.Conv2d(1, 3, 3, stride=1)
    output = conv(x)
    return output

m = Conv2dCell()
scripted_m = torch.jit.script(m)

Running this piece of code will give the following error message:

Traceback (most recent call last):
File "conv2d.py", line 13, in <module>
scripted_m = torch.jit.script(m)
File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/jit/__init__.py", line 1261, in script
return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/jit/_recursive.py", line 305, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/jit/_recursive.py", line 361, in create_script_module_impl
create_methods_from_stubs(concrete_type, stubs)
File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/jit/_recursive.py", line 279, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/jit/__init__.py", line 1108, in _compile_and_register_class
_jit_script_class_compile(qualified_name, ast, rcb)
RuntimeError:
Arguments for call are not valid.
The following variants are available:

_pair(float[2] x) -> (float):
Expected a value of type 'List[float]' for argument 'x' but instead found type 'Tensor'.

_pair(int[2] x) -> (int):
Expected a value of type 'List[int]' for argument 'x' but instead found type 'Tensor'.

The original call is:
File "/mnt/ssd/maxhy/py36/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 336
padding=0, dilation=1, groups=1,
bias=True, padding_mode='zeros'):
kernel_size = _pair(kernel_size)
~~~~~ <--- HERE
stride = _pair(stride)
padding = _pair(padding)
'Conv2d.__init__' is being compiled since it was called from 'Conv2d'
File "conv2d.py", line 8
def forward(self, x):
conv = torch.nn.Conv2d(1, 3, 3, stride=1)
~~~~~~~~~~~~~~~ <--- HERE
output = conv(x)
return output
'Conv2d' is being compiled since it was called from 'Conv2dCell.forward'
File "conv2d.py", line 8
def forward(self, x):
conv = torch.nn.Conv2d(1, 3, 3, stride=1)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
output = conv(x)
return output

I am using PyTorch 1.5.1 and Python 3.6.13. Could someone help me identify the problem?

You are recreating the conv layer in the forward, which is most likely wrong.
(The usual way is to initialize it in the __init__ method and use it in the forward.)
Note that this would initialize a new layer in each forward pass, so its parameters won't be trained.
If that's intended, could you explain your use case a bit more?
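
For reference, a minimal sketch of that pattern applied to your example (the conv is created once in __init__ and reused in forward); this version should script without the error:

import torch

class Conv2dCell(torch.nn.Module):
  def __init__(self):
    super(Conv2dCell, self).__init__()
    # create the layer once so its parameters are registered on the module
    self.conv = torch.nn.Conv2d(1, 3, 3, stride=1)

  def forward(self, x):
    return self.conv(x)

m = Conv2dCell()
scripted_m = torch.jit.script(m)
print(scripted_m(torch.randn(1, 1, 8, 8)).shape)  # torch.Size([1, 3, 6, 6])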

Hi ptrblck,
No special use case here. I just want to know: is the inability to define a Conv2d layer inside the forward function a restriction in PyTorch? Is there formal documentation for this? I also notice that we can define some simple ops in the forward function, like add or relu; what is the difference between simple ops and more complex ops like Conv2d?

I wouldn’t call it a restriction, but maybe an unwanted usage.
Modules containing parameters have to be initialized before they can be used.
E.g. conv and linear layers contain trainable parameters, so you would create the objects first:

import torch.nn as nn

conv = nn.Conv2d(3, 6, 3, 1, 1)
lin = nn.Linear(10, 10)

Later in your code you would then use these modules and feed an activation to them:

out_conv = conv(in_conv)  # in_conv: e.g. a [N, 3, H, W] input tensor
out_lin = lin(in_lin)     # in_lin: e.g. a [N, 10] input tensor

The common use case is to create these layers in the __init__ method of your custom module and use them in the forward.
However, you could of course create them outside of the __init__ and pass them to it, or even pass them to the forward method.
The issue in your code is that you are recreating the modules in each forward pass, which resets their parameters every time the forward pass is executed.
The nn tutorial might explain it in more detail.
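
As a rough sketch of the "create it outside and pass it in" variant (the module and variable names here are just illustrative):

import torch
import torch.nn as nn

class Wrapper(nn.Module):
  def __init__(self, conv):
    super(Wrapper, self).__init__()
    # the layer is built elsewhere and handed to the module,
    # which still registers it as a submodule with its parameters
    self.conv = conv

  def forward(self, x):
    return self.conv(x)

conv = nn.Conv2d(3, 6, 3, 1, 1)
model = Wrapper(conv)
out = model(torch.randn(1, 3, 24, 24))  # shape: [1, 6, 24, 24]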

Can we put it this way: modules are computation blocks with state, while simple ops like add do not hold state? What you explained also makes sense, because during training we need to keep that state across iterations.

But TorchScript is often used for inference. I don't see why we cannot script or trace a forward function with a submodule defined in it. From my understanding, scripting or tracing a function (or a module) does not save state like weights or biases, right? It just records a pure function path.

Yes, the first explanation of “stateful” modules makes sense.

I'm not sure how TorchScript is related to this. Note that you surely can re-initialize modules in the forward pass if you explicitly don't want to train these layers and want to create new random parameters.
A scripted model should respect this workflow (even if it’s wrong from the point of view of training the model).
