I am new to PyTorch and I have written a custom nn layer. I have two weight parameters, which I declared in the __init__ function as follows.
    self.weight_forward = nn.Parameter(torch.Tensor(self.length, self.config.emsize))
    self.weight_backward = nn.Parameter(torch.Tensor(self.length, self.config.emsize))
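Here is a stripped-down, self-contained sketch of roughly how the layer looks (the sizes and the toy forward pass are placeholders, not my real config; I initialize the tensors explicitly because torch.Tensor() allocates uninitialized memory):

    import torch
    import torch.nn as nn

    class MyLayer(nn.Module):
        def __init__(self, length=10, emsize=300):  # placeholder sizes, not my real config
            super().__init__()
            # wrapping the tensors in nn.Parameter registers them with the module
            self.weight_forward = nn.Parameter(torch.Tensor(length, emsize))
            self.weight_backward = nn.Parameter(torch.Tensor(length, emsize))
            # torch.Tensor() gives uninitialized memory, so initialize explicitly
            nn.init.xavier_uniform_(self.weight_forward)
            nn.init.xavier_uniform_(self.weight_backward)

        def forward(self, x):
            # toy forward pass, just for illustration: (batch, length) -> (batch, emsize)
            return x @ self.weight_forward + x @ self.weight_backward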
Everything is working fine. I just want to know: when I call loss.backward(), do these weight parameters get updated along with the other network parameters?
Please note that this custom layer is part of my full model and works as expected. I just want to make sure that when PyTorch does backpropagation, it includes these weight parameters in the computational graph.
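Would something like the following be a valid way to check it? The idea is that, if the parameters are part of the graph, their .grad fields should be populated after loss.backward(), and optimizer.step() should change their values (this uses the toy layer sketched above, not my real model):

    # rough sketch of the check I have in mind, using the toy layer above
    layer = MyLayer(length=10, emsize=300)
    optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)

    x = torch.randn(4, 10)        # (batch, length)
    loss = layer(x).sum()         # dummy scalar loss
    loss.backward()

    # if the parameters are in the graph, .grad should now be populated
    print(layer.weight_forward.grad is not None)    # expect True
    print(layer.weight_backward.grad is not None)   # expect True

    # and optimizer.step() should actually change their values
    before = layer.weight_forward.detach().clone()
    optimizer.step()
    print(torch.equal(before, layer.weight_forward))  # expect False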