The sample code is below:
import torch
import torch.nn as nn

class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.p = torch.randn(10).requires_grad_(True)

    def forward(self, x):
        return self.p * x
I defined the class Test above; my question is how to get self.p optimized at the loss-backward stage. Although I set requires_grad to True so that autograd can track self.p, its value does not change after every loss.backward() and optimizer.step(). Could any experts help me?
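For context, here is a minimal, runnable sketch of what I am doing. The extra nn.Linear layer is a hypothetical stand-in for the other layers in my real model (without any registered parameter, SGD would reject an empty parameter list), and the optimizer settings and dummy data are placeholders, not my actual setup:

import torch
import torch.nn as nn

class TestRepro(nn.Module):
    def __init__(self):
        super(TestRepro, self).__init__()
        # plain tensor with requires_grad=True, as in my class above
        self.p = torch.randn(10).requires_grad_(True)
        # hypothetical stand-in for the other layers of my real model
        self.fc = nn.Linear(10, 10)

    def forward(self, x):
        return self.fc(self.p * x)

model = TestRepro()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # placeholder optimizer/lr

x = torch.randn(10)
target = torch.zeros(10)  # dummy target

for step in range(3):
    optimizer.zero_grad()
    loss = ((model(x) - target) ** 2).mean()
    loss.backward()
    optimizer.step()
    # self.p.grad is populated after backward, but the printed values never change
    print(model.p[:3])

After this loop, model.fc.weight has changed but model.p has not, which is exactly the behavior I am asking about.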