nn.Parameter is set to zero

Currently, the weight value is set with nn.Parameter(randn). However, during the actual training stage, the self.weights value stays at 0.
I set param.requires_grad=True in train.py, but I don't know why this keeps happening.
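
What I mean by setting it in train.py is roughly this (a simplified sketch; `model` is just a placeholder for my actual model object):

```python
# Simplified sketch of what I do in train.py ("model" is a placeholder name):
for param in model.parameters():
    param.requires_grad = True
```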

Below is the model code:

```python

import torch
import torch.nn as nn


class SubjectLayers(nn.Module):
    """Per-subject linear layer: each subject gets its own (in_channels, out_channels) weight matrix."""

    def __init__(self, in_channels: int, out_channels: int, n_subjects: int, init_id: bool = False):
        super().__init__()
        self.weights = nn.Parameter(torch.FloatTensor(n_subjects, in_channels, out_channels), requires_grad=True)
        if init_id:
            assert in_channels == out_channels
            self.weights.data[:] = torch.eye(in_channels)[None]

    def forward(self, x, subjects):
        _, C, D = self.weights.shape
        # print("before", self.weights.data)
        # Select each sample's subject-specific weight matrix.
        weights = self.weights.gather(0, subjects.long().view(-1, 1, 1).expand(-1, C, D))
        # print("after", self.weights.data)
        return torch.einsum("bct,bcd->bdt", x, weights)

    def __repr__(self):
        S, C, D = self.weights.shape
        return f"SubjectLayers({C}, {D}, {S})"

```
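
For reference, this is the kind of minimal check I run to look at the values (the sizes here are just placeholders, not my real configuration):

```python
import torch

# Uses the SubjectLayers class above; sizes are placeholders.
layer = SubjectLayers(in_channels=4, out_channels=4, n_subjects=2)
print(layer.weights)                  # this is where I see all zeros

x = torch.randn(3, 4, 10)             # (batch, channels, time)
subjects = torch.tensor([0, 1, 0])    # one subject index per sample
out = layer(x, subjects)
print(out.shape)                      # torch.Size([3, 4, 10])
```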

It is declared in the parent module as follows:

```python
meg_dim = in_channels["meg"]
dim = {"hidden": hidden["meg"], "input": meg_dim}[subject_layers_dim]
self.subject_layers = SubjectLayers(meg_dim, dim, n_subjects, subject_layers_id)
in_channels["meg"] = dim
```
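
For context, the parent's forward uses it roughly like this (a simplified sketch, not the actual code; the names are just illustrative):

```python
# Illustrative sketch of the parent forward path, not the real implementation.
def forward(self, inputs, subjects):
    # inputs["meg"]: (batch, meg_channels, time); subjects: (batch,) subject indices
    x = self.subject_layers(inputs["meg"], subjects)
    # ... rest of the parent forward ...
    return x
```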

I don't quite follow this claim. Describing it as "continues to be 0" implies it was already zero before training. Or is the parameter update moving self.weight towards zero?

It was already zero before training. When I print it inside the forward method, it is 0…

I set requires_grad to True in the code, but I don't know why only the nn.Parameter is 0. When I declared an nn.Embedding and printed its weight, it printed non-zero values as expected.
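
To make the comparison concrete, this is roughly what I tried (simplified, with placeholder sizes):

```python
import torch
import torch.nn as nn

# Placeholder sizes, just to show the comparison I mean.
emb = nn.Embedding(2, 16)                        # printing emb.weight shows non-zero values
par = nn.Parameter(torch.FloatTensor(2, 4, 4))   # this one prints as all zeros for me

print(emb.weight)
print(par)
```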

Is it because the SubjectLayers class is a submodule? I don't understand.