I have model code defined as follows (model_baseline):
        modules = []
        for block in blocks:
            for bottleneck in block:
                modules.append(
                    unit_module(bottleneck.in_channel,
                                bottleneck.depth,
                                bottleneck.stride))
        self.body = Sequential(*modules)
        self._initialize_weights()

    def forward(self, x):
        x = self.input_layer(x)
        x = self.body(x)
        x = self.output_layer(x)
        return x
where blocks is
blocks = [
    get_block(in_channel=64, depth=64, num_units=3),
    get_block(in_channel=64, depth=128, num_units=13),
    get_block(in_channel=128, depth=256, num_units=30),
    get_block(in_channel=256, depth=512, num_units=3)
]
I want to access the output of block 0 in the body during training, i.e. the output of get_block(in_channel=64, depth=64, num_units=3). How can I do it in PyTorch? Note that the network is loaded from a pretrained model.
This is what I tried:
        modules_0 = []
        for block in blocks[0:1]:
            for bottleneck in block:
                modules_0.append(
                    unit_module(bottleneck.in_channel,
                                bottleneck.depth,
                                bottleneck.stride))
        self.body_0 = Sequential(*modules_0)

        modules = []
        for block in blocks[1:]:
            for bottleneck in block:
                modules.append(
                    unit_module(bottleneck.in_channel,
                                bottleneck.depth,
                                bottleneck.stride))
        self.body = Sequential(*modules)
        self._initialize_weights()

    def forward(self, x):
        x = self.input_layer(x)
        x = self.body_0(x)
        x = self.body(x)
        x = self.output_layer(x)
        return x
However, the above method cannot load the weights from the pretrained model trained on the model_baseline architecture, because splitting self.body into self.body_0 and self.body renames the parameters in the state dict (e.g. body.0.* becomes body_0.0.*), so the keys no longer match.
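For context, a minimal runnable sketch of the alternative I am considering: keep the original flat self.body (so the pretrained state dict loads unchanged) and register a forward hook on the last unit of block 0, which is self.body[2] since block 0 has num_units=3. The Baseline class below is a toy stand-in with placeholder conv layers, not the real unit_module.

```python
import torch
from torch import nn

class Baseline(nn.Module):
    # Toy stand-in for model_baseline: self.body stays one flat
    # Sequential, exactly as in the pretrained checkpoint.
    def __init__(self):
        super().__init__()
        self.input_layer = nn.Conv2d(3, 64, 3, padding=1)
        # Pretend the first 3 units are block 0, the rest later blocks.
        self.body = nn.Sequential(*[nn.Conv2d(64, 64, 3, padding=1)
                                    for _ in range(6)])
        self.output_layer = nn.Conv2d(64, 64, 1)

    def forward(self, x):
        x = self.input_layer(x)
        x = self.body(x)
        return self.output_layer(x)

model = Baseline()
captured = {}

def save_block0(module, inputs, output):
    # The hook fires on every forward pass, so this also works
    # during training; the captured tensor keeps its grad history.
    captured["block0"] = output

# Block 0 has num_units=3, so its last unit is self.body[2].
handle = model.body[2].register_forward_hook(save_block0)

x = torch.randn(1, 3, 8, 8)
out = model(x)
print(captured["block0"].shape)  # output of block 0
handle.remove()  # detach the hook when no longer needed
```

Because the module structure is untouched, model.load_state_dict(pretrained_state_dict) would still succeed with the original checkpoint keys.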