I have a question about how to optimize a matrix that I defined myself.

The sample code is below:

import torch
import torch.nn as nn

class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        # requires_grad_(True) makes autograd track self.p
        self.p = torch.randn(10).requires_grad_(True)

    def forward(self, x):
        return self.p * x

I defined a class like Test above. My question is how to optimize self.p during the loss backward stage. Although I set requires_grad to True so that self.p is tracked by autograd, its value does not change after every loss.backward() and optimizer.step(). Can any experts help me?
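
To make the symptom concrete, here is a minimal reproduction sketch. Note that I added an nn.Linear layer purely so the optimizer has at least one registered parameter; with only self.p as above, SGD would raise an empty-parameter-list error. The optimizer, loss, and shapes are just examples:

class TestWithLayer(nn.Module):
    def __init__(self):
        super(TestWithLayer, self).__init__()
        self.p = torch.randn(10).requires_grad_(True)  # plain tensor, the one that never updates
        self.fc = nn.Linear(10, 10)                    # properly registered parameters

    def forward(self, x):
        return self.fc(self.p * x)

model = TestWithLayer()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

before = model.p.clone()
loss = model(torch.ones(10)).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(torch.equal(before, model.p))  # True: self.p was never updated by the optimizer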

I know how to solve it.

If you want it to be properly picked up when you call model.parameters(), you need to make it an nn.Parameter:

self.p = nn.Parameter(torch.randn(10))
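
Here is a minimal sketch of the fix in action; the SGD optimizer, learning rate, and toy loss are just for illustration:

import torch
import torch.nn as nn

class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.p = nn.Parameter(torch.randn(10))  # registered with the module

    def forward(self, x):
        return self.p * x

model = Test()
print(len(list(model.parameters())))  # 1: self.p is now picked up

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
before = model.p.detach().clone()
loss = model(torch.ones(10)).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(torch.equal(before, model.p.detach()))  # False: self.p was updated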

Yes, it works, thank you for your help!