Same code crashes in a different PyTorch version

Hi, when my tensor:

tensor([[[-0.2132, -0.0098, -0.2187,  0.2484, -0.2478, -0.1970],
         [-0.1974, -0.0097, -0.1878,  0.2076, -0.2087, -0.1970],
         [-0.2494, -0.0098, -0.2379,  0.2802, -0.2788, -0.2645],
         [-0.2179, -0.0098, -0.2074,  0.2394, -0.2392, -0.2645]]])

Passes through the following layer:

    self.conv1 = nn.Conv2d(in_channels=1, out_channels=24, kernel_size=2, padding=1)

With Python 3.8 and PyTorch 1.11.0, it executes without problems.

But with Python 3.7.13 and PyTorch 1.10.0, it raises the error:

RuntimeError: Expected 4-dimensional input for 4-dimensional weight [24, 1, 2, 2], but got 3-dimensional input of size [1, 4, 6] instead

It is hard for me to understand, because both the layer and the tensor are the same in both cases. (I need the code to run on the older version.)

EDIT: I’ve checked the documentation of the Conv2d layer for both versions, and everything seems the same.

Hi Pablo!

PyTorch version 1.11 made a flexibility improvement to Conv2d so that it
no longer requires a batch dimension. Quoting from Conv2d’s 1.10 and
1.11 documentation, respectively:


   Input: (N, C_in, H_in, W_in)


   Input: (N, C_in, H_in, W_in) or (C_in, H_in, W_in)
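You can see the difference directly with a minimal sketch (the layer and input shapes below mirror the ones in your post; on 1.11 and later the unbatched call succeeds, while on 1.10 it raises the RuntimeError you quoted):

```python
import torch
import torch.nn as nn

# Same layer as in the original post.
conv1 = nn.Conv2d(in_channels=1, out_channels=24, kernel_size=2, padding=1)

# (C_in, H_in, W_in) = (1, 4, 6) -- no batch dimension,
# like the tensor shown above.
x = torch.randn(1, 4, 6)

try:
    y = conv1(x)  # accepted on PyTorch >= 1.11; output shape (24, 5, 7)
    print("unbatched input accepted; output shape:", tuple(y.shape))
except RuntimeError as err:
    # PyTorch 1.10 and earlier raise the "Expected 4-dimensional input" error
    print("unbatched input rejected:", err)
```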

If it were me, I would try to modify your current code to work naturally with
a batch dimension. If that’s impractical, you could use unsqueeze() /
squeeze() to temporarily add a batch dimension to your tensor:

conv_output = self.conv1(conv_input.unsqueeze(0)).squeeze(0)

Assuming that conv_input lacks a batch dimension, this will work in
version 1.10 while continuing to work in version 1.11.
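For concreteness, here is a self-contained sketch of that workaround (the layer is constructed locally rather than inside a module, and the input shape mirrors the tensor in your post):

```python
import torch
import torch.nn as nn

# Same layer as in the original post.
conv1 = nn.Conv2d(in_channels=1, out_channels=24, kernel_size=2, padding=1)

# Unbatched input of shape (C_in, H_in, W_in) = (1, 4, 6).
conv_input = torch.randn(1, 4, 6)

# unsqueeze(0) adds a batch dimension -> (1, 1, 4, 6), which Conv2d
# accepts in both 1.10 and 1.11; squeeze(0) removes it again afterwards.
conv_output = conv1(conv_input.unsqueeze(0)).squeeze(0)

print(tuple(conv_output.shape))  # (24, 5, 7) with the shapes above
```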


K. Frank

Thanks, mate! That should be it. I can’t try it right now; I just hadn’t noticed that flexibility difference.