Get encoder from trained UNet

Thanks for the notebook.
The DownConv layer returns a tuple in this line of code, which won't work inside an nn.Sequential container or with standard layers.
If you want to accept the tuple in the next layer, you could e.g. write a custom layer that unwraps the tuple internally.
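Something along these lines should work (just a sketch; I'm assuming DownConv returns an (output, skip) tuple and that the attribute name down_convs exists, so adapt it to your model):

    import torch.nn as nn

    class UnwrapTuple(nn.Module):
        # Calls the wrapped module and, if it returns a tuple,
        # keeps only the first element so downstream layers get a plain tensor.
        def __init__(self, module):
            super().__init__()
            self.module = module

        def forward(self, x):
            out = self.module(x)
            return out[0] if isinstance(out, tuple) else out

    # e.g. build the encoder by wrapping each DownConv:
    # encoder = nn.Sequential(*[UnwrapTuple(m) for m in unet.down_convs])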


So, something interesting is happening.

I changed the line you mentioned to this. I hope that's the correct fix.

What’s happening is that the code works on my machine but not on Colab. The torch versions are the same, so why would this happen?

What kind of error are you getting on Colab?

Really sorry about that. I thought I posted the error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-16-621fab09e884> in <module>()
      1 with torch.no_grad():
----> 2     test_modules(img)

6 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
    414                             _pair(0), self.dilation, self.groups)
    415         return F.conv2d(input, weight, self.bias, self.stride,
--> 416                         self.padding, self.dilation, self.groups)
    417 
    418     def forward(self, input: Tensor) -> Tensor:

TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not tuple

Did you import the new model from the branch of the repository containing this change?
Your current Colab notebook still seems to check out the master branch.
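E.g. a Colab cell roughly like this would pull in the branch with the fix (the repository URL and branch name below are just placeholders):

    !git clone https://github.com/<user>/<repo>.git
    %cd <repo>
    !git checkout <branch_with_the_fix>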

I did. I pushed the changed line again and now it works. The only change I had to make was that one line: I deleted the extra line between the if and the return, which didn't make sense.
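For reference, the gist of the change is roughly this (written from memory, so the attribute names may not match the repo exactly):

    def forward(self, x):
        x = self.conv(x)
        if self.pooling:
            x = self.pool(x)
        # used to be: return x, before_pool -- the tuple broke nn.Sequential
        return x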

Why do you think that happened?