Weight doesn't update even though weight.grad is not None

(Js Lee) #1

Hi, I made a simple piece of code to track the updates of nn.Linear's parameters.

import torch
import torch.nn as nn


optimizer = torch.optim.Adam(qwe.parameters(), lr=0.001)
optimizer.zero_grad()

qwe = nn.Linear(1,1)

xx = torch.ones(1, 1, requires_grad=True)
yy = qwe(xx)
zz = yy**2

zz.backward()

print('xx requires grad: ',xx.requires_grad)
print(xx.grad) 
print('yy requires grad : ',yy.requires_grad)
print(yy.grad) 
print("qwe's weight:",qwe.weight.clone())
print("qwe's grad",qwe.weight.grad)
print('zz requires grad :',zz.requires_grad)
print(zz.grad) 
optimizer.step()
print("qwe's weight:",qwe.weight.clone())

The result is:

xx requires grad:  True
tensor([[0.5291]])
yy requires grad :  True
None
qwe's weight: tensor([[0.3074]], grad_fn=<CloneBackward>)
qwe's grad tensor([[1.7211]])
zz requires grad : True
None
qwe's weight: tensor([[0.3074]], grad_fn=<CloneBackward>)

You can see that qwe's grad is not None, but there was no update to qwe's weight.
How can I see the weight update?
Is it possible to catch the weight update?

(Alban D) #2

Hi,

I think you should give qwe.parameters() to the optimizer, not net.parameters().

(Js Lee) #3

You are right…
and I should have put

qwe = nn.Linear(1,1)

before the optimizer line.

Thanks a lot for catching my mistake.
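
For reference, here is a minimal corrected sketch of the same toy example (my own reconstruction, not code from the original post): qwe is created before the optimizer and its own parameters are passed in, so the weight visibly changes after optimizer.step().

import torch
import torch.nn as nn

# Create the module first, then hand ITS parameters to the optimizer.
qwe = nn.Linear(1, 1)
optimizer = torch.optim.Adam(qwe.parameters(), lr=0.001)
optimizer.zero_grad()

xx = torch.ones(1, 1, requires_grad=True)
zz = qwe(xx) ** 2
zz.backward()

before = qwe.weight.clone()   # snapshot before the update
optimizer.step()              # Adam uses qwe.weight.grad to update qwe.weight in place
after = qwe.weight.clone()    # snapshot after the update

print("grad:", qwe.weight.grad)
print("weight changed:", not torch.equal(before, after))

Comparing a clone of the weight taken before optimizer.step() with one taken afterwards is a simple way to catch the update.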

(weiwei) #4

Hi albanD, I have a question about shared memory: can we disable shared-memory usage in PyTorch? Thank you very much.

(Alban D) #5

Could you please open a new topic for unrelated questions?

(weiwei) #6

Hi, I am quite new to the PyTorch forum, so how can I open a new topic for my question? I started a question two days ago but found no reply. Can my problem not be seen by you? I will start a new topic.
The topic link is shown here:

(weiwei) #7

I have opened a new topic for the problem, thanks very much!