I have a model like this:
import torch
import torch.nn as nn

class Om_msg(nn.Module):
    def __init__(self):
        super(Om_msg, self).__init__()
        mode_clflen = [3, 128, 128]
        # Plain Python list of tensors -- not registered as parameters
        self.padding_parameters = [torch.empty(t, requires_grad=True) for t in mode_clflen]
        self.padding_parameters = [nn.init.normal_(w) for w in self.padding_parameters]
        self.lin = nn.Linear(3, 4)
It seems that defining trainable parameters manually like this does not register them with the model: model.state_dict() does not contain the padding_parameters, so I cannot save them. nn.ModuleList also did not help; it throws an error because torch.empty returns a tensor, not a module. What is the best way to register the padding_parameters here?
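For reference, here is a minimal sketch of the behavior I am seeing (class and attribute names are just for illustration): a plain Python list of tensors never shows up in state_dict(), whereas wrapping the tensors in nn.Parameter inside an nn.ParameterList does register them.

```python
import torch
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self):
        super().__init__()
        # Plain list of tensors: invisible to state_dict() and parameters()
        self.plain = [torch.empty(3, requires_grad=True)]
        # nn.ParameterList registers each nn.Parameter under 'registered.<i>'
        self.registered = nn.ParameterList(
            [nn.Parameter(torch.empty(3)) for _ in range(2)]
        )

m = Demo()
print(list(m.state_dict().keys()))  # only the 'registered.*' entries appear
```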
Thanks.