JS_Lee
June 7, 2019, 8:06am
1

Hi, I made a simple script to track how an nn.Linear's parameters are updated.

import torch
import torch.nn as nn

optimizer = torch.optim.Adam(qwe.parameters(), lr=0.001)
optimizer.zero_grad()
qwe = nn.Linear(1, 1)
xx = torch.tensor(torch.ones(1, 1), requires_grad=True)
yy = qwe(xx)
zz = yy ** 2
zz.backward()
print('xx requires grad: ', xx.requires_grad)
print(xx.grad)
print('yy requires grad : ', yy.requires_grad)
print(yy.grad)
print("qwe's weight:", qwe.weight.clone())
print("qwe's grad", qwe.weight.grad)
print('zz requires grad :', zz.requires_grad)
print(zz.grad)
optimizer.step()
print("qwe's weight:", qwe.weight.clone())
The result is:
xx requires grad:  True
tensor([[0.5291]])
yy requires grad :  True
None
qwe's weight: tensor([[0.3074]], grad_fn=<CloneBackward>)
qwe's grad tensor([[1.7211]])
zz requires grad : True
None
qwe's weight: tensor([[0.3074]], grad_fn=<CloneBackward>)
You can see that qwe’s grad is not None, but there was no update to qwe’s weight.

albanD
June 7, 2019, 10:01am
2

Hi,
I think you should give qwe.parameters() to the optimizer, not net.parameters().
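
A self-contained sketch of the same point (old_qwe below is a hypothetical stand-in for whichever module the optimizer was actually built from; it is not from the original post): optimizer.step() only updates the tensors that were registered when the optimizer was constructed, so you can check whether qwe.weight is actually among them.

import torch
import torch.nn as nn

old_qwe = nn.Linear(1, 1)                                     # hypothetical: the module the optimizer was built from
optimizer = torch.optim.Adam(old_qwe.parameters(), lr=0.001)
qwe = nn.Linear(1, 1)                                         # the module actually used in the forward pass

# step() only touches the tensors registered in optimizer.param_groups, so if
# qwe.weight is not in there, qwe is never updated no matter what its .grad says.
tracked = [p for group in optimizer.param_groups for p in group['params']]
print(any(p is qwe.weight for p in tracked))                  # prints False here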
             
            
              
            
           
          
            
              
JS_Lee
June 7, 2019, 10:49am
3

You are right… qwe = nn.Linear(1, 1) should come before the optimizer clause. Thanks a lot for picking up my mistake!
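
A minimal sketch of that corrected order, assuming the rest of the snippet stays as in the original post: create qwe first, then build the optimizer from qwe.parameters(), after which optimizer.step() does change the weight.

import torch
import torch.nn as nn

qwe = nn.Linear(1, 1)                                      # create the module first
optimizer = torch.optim.Adam(qwe.parameters(), lr=0.001)   # then hand its parameters to the optimizer
optimizer.zero_grad()

xx = torch.ones(1, 1, requires_grad=True)
zz = qwe(xx) ** 2
zz.backward()

before = qwe.weight.clone()
optimizer.step()
print(torch.equal(before, qwe.weight))                     # False: the weight was updated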
             
            
              
            
           
          
            
              
weiwei
June 7, 2019, 11:03am
4

Hi albanD, I have a question about shared memory: can we disable shared memory usage in PyTorch? Thank you very much.

albanD
June 7, 2019, 11:18am
5

Could you please open a new topic for unrelated questions?

weiwei
June 7, 2019, 11:25am
6

Hi, I am quite new to the PyTorch forum, so how can I open a new topic for my question? I started a question two days ago but found no reply, so can my problem not be seen by you? I will start a new topic:

Since I am not able to adjust the shared memory usage on the remote server, can we disable shared memory usage in PyTorch? The same experiment runs with TensorFlow without the shm-size problem, so I just want to find a solution for this problem.
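
For reference, a minimal sketch of the most common workaround, under the assumption (not confirmed in this thread) that the shared-memory pressure comes from DataLoader worker processes, which pass batches to the main process through /dev/shm: setting num_workers=0 keeps data loading in the main process, so no batches are transferred through shared memory.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset purely for illustration.
dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))

# num_workers=0: no worker processes are spawned, so batches are built in the
# main process and nothing is passed between processes through shared memory.
loader = DataLoader(dataset, batch_size=10, num_workers=0)

for xb, yb in loader:
    pass  # training step would go here

The trade-off is that data loading no longer overlaps with training, so it can be slower.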
   
 
             
            
              
            
           
          
            
              
weiwei
June 7, 2019, 11:26am
7

I have opened a new topic for the problem, thanks very much!