Hi, I am trying to replace layers in a predefined model with another type of layer that carries some extra parameters. A minimal example looks like this:
```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ModA(nn.Module):
    def __init__(self, layer):
        super(ModA, self).__init__()
        self.layer = layer
        self.paramA = nn.Parameter(torch.tensor(1.0))

    def forward(self, input):
        return self.layer(self.paramA * input)

net = resnet18(weights="IMAGENET1K_V2")
layer_names = []
layers = []
for n, m in net.named_modules():
    if isinstance(m, nn.Conv2d):
        layer_names.append(n)
        layers.append(ModA(m))
for n, m in zip(layer_names, layers):
    setattr(net, n, m)
net.to('cuda:0')
# gpus: local device index, defined elsewhere in my setup
net = torch.nn.parallel.DistributedDataParallel(net, device_ids=[gpus])
```
It will report some error messages like this.
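For what it's worth, I suspect part of the problem is the replacement loop itself: `setattr` with a dotted name such as `"layer1.0.conv1"` does not seem to reach the nested child module. A toy check I put together (my own example, not from the model above):

```python
import torch.nn as nn

# A nested container, so the inner Conv2d has the dotted name "0.0".
root = nn.Sequential(nn.Sequential(nn.Conv2d(3, 8, 3)))

# Dotted setattr, like the replacement loop above.
setattr(root, "0.0", nn.Identity())

# The nested Conv2d was not replaced; the Identity was merely registered
# under a new (dotted) key instead.
print(type(root[0][0]))
```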
If I do not add the extra parameter self.paramA, it works fine (though it still prints a message like this when I make the module change). It seems to me that the reason is that, although I replaced the modules in the model manually, I have not "inserted" the extra parameter self.paramA into model.parameters(). I wonder what the most appropriate practice is for replacing layers in a predefined model with custom modules. Thanks.