Hey,
I created a Unet from segmentation-models-pytorch with a custom encoder. That custom encoder consists of Modules of its own (ported over from another network). Printing the model gives something like:
Unet(
  (encoder): Comma_Encoder()
  (decoder): UnetDecoder(
    (center): Identity()
    (blocks): ModuleList(
      (0): DecoderBlock(
        (conv1): Conv2dReLU(
          (0): Conv2d(131, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention1): Attention(
          (attention): Identity()
        )
        (conv2): Conv2dReLU(
          (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention2): Attention(
          (attention): Identity()
        )
      )
      (1): DecoderBlock(
        (conv1): Conv2dReLU(
          (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention1): Attention(
          (attention): Identity()
        )
        (conv2): Conv2dReLU(
          (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
        )
        (attention2): Attention(
          (attention): Identity()
        )
      )
    )
  )
  (segmentation_head): SegmentationHead(
    (0): Conv2d(32, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ConvTranspose2d(256, 6, kernel_size=(4, 4), stride=(4, 4))
    (2): Activation(
      (activation): Identity()
    )
  )
)
However, when I try to access the layers of the encoder via model.encoder, I get nothing:
next(model.encoder.named_modules())
# ('', Comma_Encoder())
[_ for _ in model.encoder.children()]
# []
Which is pretty weird, since I can forward pass through it easily and train it correctly too, which pretty much implies I just can't access its submodules.
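For illustration, here is a minimal sketch of one pattern that reproduces the same symptom (the encoder name ListEncoder and its layers are hypothetical, not my actual Comma_Encoder): if submodules are kept in a plain Python list rather than assigned as attributes or held in an nn.ModuleList, nn.Module never registers them, so children() comes back empty even though forward() runs fine.

```python
import torch
import torch.nn as nn

class ListEncoder(nn.Module):
    """Hypothetical encoder whose submodules are NOT registered."""
    def __init__(self):
        super().__init__()
        # A plain list is invisible to nn.Module's registration
        # machinery; an nn.ModuleList here would fix that.
        self.layers = [nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()]

    def forward(self, x):
        # Forward still works: Python can call the layers directly.
        for layer in self.layers:
            x = layer(x)
        return x

enc = ListEncoder()
print(list(enc.children()))       # [] -- same as with Comma_Encoder
print(next(enc.named_modules()))  # ('', ListEncoder())
out = enc(torch.randn(1, 3, 16, 16))
print(out.shape)                  # torch.Size([1, 8, 16, 16])
```

I am not certain this is what happens in my case, but the observable behavior (empty children(), working forward pass) matches.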
This may be something with the Unet class, but AFAIK it inherits from nn.Module too (demonstrated here), and I was able to access the layers of the encoder perfectly before plugging it into SMP…
Is there some sort of limit on how deeply nested Modules can be traversed? I wanted to access individual layers to build skip connections, which are vital for segmentation.
Any help is appreciated…