Problems using modified model._modules

I have defined a standard ResNet50, and I have built a custom module (inheriting from nn.Module) that modifies the weights of the convolutional layers. The following code successfully swaps the old modules for the new ones:

def modify_model(model, args):
    for m in model._modules:
        child = model._modules[m]
        if is_leaf(child):
            if isinstance(child, nn.Linear):
                model._modules[m] = modifyLinear(child, args)
            elif isinstance(child, nn.Conv1d):
                model._modules[m] = modifyConv1d(child, args)
        else:
            modify_model(child, args)  # recurse into non-leaf children to convert submodules
    return model
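For reference, here is a minimal, self-contained sketch of the same recursive replacement pattern using `named_children()` and `setattr` (the `is_leaf` helper and the `nn.Identity` stand-in for `modifyConv1d` are placeholders, not the original code):

```python
import torch.nn as nn

def is_leaf(module):
    # a module with no children is a leaf
    return len(list(module.children())) == 0

def replace_conv1d(model):
    # recurse into non-leaf children; swap leaf Conv1d layers in place
    for name, child in model.named_children():
        if is_leaf(child):
            if isinstance(child, nn.Conv1d):
                setattr(model, name, nn.Identity())  # stand-in for modifyConv1d
        else:
            replace_conv1d(child)
    return model

model = nn.Sequential(nn.Conv1d(3, 8, 3), nn.Sequential(nn.Conv1d(8, 8, 3)))
replace_conv1d(model)
print(type(model[0]).__name__)     # Identity
print(type(model[1][0]).__name__)  # Identity
```

Note that the recursion has to go into the *non-leaf* children; recursing into a leaf does nothing, and the nested Conv1d layers would otherwise never be visited.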

And here is a stripped down version of the new module:

class modifying_module(nn.Module):
    def __init__(self, module):
        super(modifying_module, self).__init__()
        bias = module.bias is not None
        self.conv = nn.Conv1d(module.in_channels, module.out_channels, kernel_size=module.kernel_size,
                              stride=module.stride, padding=module.padding, dilation=module.dilation,
                              bias=bias, groups=1)

    def forward(self, x):
        # mask is a method, so it has to be called; bare self.mask is just a bound method
        masked_weight = self.conv.weight * self.mask()
        x = F.conv1d(x, masked_weight, self.conv.bias, self.conv.stride,
                     self.conv.padding, self.conv.dilation, 1)
        return x

    def mask(self):
        # ones_like keeps the mask on the same device/dtype as the weight
        return torch.ones_like(self.conv.weight)
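An alternative worth considering is to register the mask as a buffer instead of computing it in a method, so it moves with `.to(device)`/`.cuda()` and is saved in the `state_dict`. A hypothetical variant along those lines (the class name and the fixed all-ones mask are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv1d(nn.Module):
    """Illustrative variant: store the mask as a buffer, not a method."""
    def __init__(self, module):
        super().__init__()
        self.conv = nn.Conv1d(module.in_channels, module.out_channels,
                              kernel_size=module.kernel_size, stride=module.stride,
                              padding=module.padding, dilation=module.dilation,
                              bias=module.bias is not None, groups=1)
        # buffers follow .to(device) and appear in state_dict, but get no gradients
        self.register_buffer("mask", torch.ones_like(self.conv.weight))

    def forward(self, x):
        masked_weight = self.conv.weight * self.mask
        return F.conv1d(x, masked_weight, self.conv.bias, self.conv.stride,
                        self.conv.padding, self.conv.dilation, 1)

m = MaskedConv1d(nn.Conv1d(3, 8, kernel_size=3, padding=1))
out = m(torch.randn(2, 3, 16))
print(out.shape)  # torch.Size([2, 8, 16])
```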

As you can see, the “modifying_module” for the convolutional layers defines a new nn.Conv1d based on the attributes of the original module. Alternatively, I have also tried the following, with no improvement:

self.conv = copy.deepcopy(module)  # nn.Module has no .deepcopy() method; requires `import copy`

The problem is, the error keeps saying that ‘“modifying_module” has no attribute ‘weight’’ and points to the masked_weight line. I have even tried copying the weight and registering it as a parameter explicitly; then it says that there is no ‘bias’. So I tried registering every parameter and buffer of the convolution manually, and it still raises errors.

I don’t understand why it expects “modifying_module” to have a ‘weight’ attribute separate from the convolution’s weight, when I am explicitly accessing the weight through self.conv.
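(One possible cause, assuming some other part of the training script — e.g. a weight-initialization or pruning loop — accesses `.weight` on the replaced module directly: the wrapper can forward the inner convolution’s attributes via properties. This is a speculative workaround, not a confirmed diagnosis.)

```python
import torch.nn as nn

class Wrapper(nn.Module):
    """Hypothetical wrapper that exposes the inner conv's parameters,
    so code written against nn.Conv1d (e.g. weight init) keeps working."""
    def __init__(self, conv):
        super().__init__()
        self.conv = conv

    @property
    def weight(self):
        return self.conv.weight

    @property
    def bias(self):
        return self.conv.bias

    def forward(self, x):
        return self.conv(x)

w = Wrapper(nn.Conv1d(3, 8, 3))
nn.init.zeros_(w.weight)            # external code can still reach .weight
print(w.weight.abs().sum().item())  # 0.0
```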

Any and all feedback would be greatly appreciated.

Could you post an executable code snippet that reproduces this error, so that we could debug it?
I guess you might be running into an unexpected interaction with the base model, but I would need to run the code to isolate the issue further. :slight_smile:

Thank you for your willingness to help. Today I put together some generic standalone code to try to reproduce the error. I work with a very niche dataset, so I swapped in something more general since I can’t post my data. Unfortunately, the standalone code apparently works. So tomorrow I’ll try to get it (not) running again and see about stripping down the original code so that it still triggers the error.