Gradient and tensor initialisation with nn.Parameter

Hi,
there is something I don't quite understand about gradient propagation. If I run the example below, it works and I get gradients:

import torch

if __name__ == '__main__':
    pose_opt_rotation_yaw = torch.nn.Parameter(torch.ones(10, 3))
    image_idx = 1
    yaw = pose_opt_rotation_yaw[image_idx, 0:1]
    euler_angles = torch.stack((torch.zeros(1), torch.zeros(1), yaw))
    euler_angles_ref = torch.tensor([0.0, 0.0, 0.0])
    loss1 = (euler_angles - euler_angles_ref).sum()
    loss1.backward()
    print(pose_opt_rotation_yaw.grad)

but if I replace:
euler_angles = torch.stack((torch.zeros(1),torch.zeros(1),yaw))
with
euler_angles = torch.tensor((torch.zeros(1),torch.zeros(1),yaw), requires_grad=True)

there is no gradient anymore. What is the explanation? What is the proper way to build a tensor from an nn.Parameter and constants?
thanks

Re-creating the tensor detaches it from the computation graph, since you are creating a new leaf variable.
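
A quick way to see the difference (a minimal sketch reusing the names from the question; yaw.item() is used instead of the tensor itself to keep the torch.tensor call version-independent):

import torch

p = torch.nn.Parameter(torch.ones(10, 3))
yaw = p[1, 0:1]

# stack keeps the result connected to the parameter
stacked = torch.stack((torch.zeros(1), torch.zeros(1), yaw))
print(stacked.grad_fn)      # StackBackward0 -> gradients can flow back to p

# torch.tensor builds a brand new leaf from plain numbers, cut off from p
recreated = torch.tensor([0.0, 0.0, yaw.item()], requires_grad=True)
print(recreated.grad_fn)    # None -> no connection to p
print(recreated.is_leaf)    # True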

You should not re-initialize parameters and instead initialize them once in your model’s __init__ method. Using torch.stack/cat is the right approach to combine tensors and parameters in the forward pass.
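
For illustration, a minimal sketch of that pattern (the module name PoseOpt and the num_images argument are placeholders; the parameter and indexing mirror the question):

import torch
import torch.nn as nn

class PoseOpt(nn.Module):
    def __init__(self, num_images=10):
        super().__init__()
        # registered once here, never re-created in forward
        self.pose_opt_rotation_yaw = nn.Parameter(torch.zeros(num_images, 3))

    def forward(self, image_idx):
        yaw = self.pose_opt_rotation_yaw[image_idx, 0:1]
        # stack combines constants with the learnable slice and keeps
        # the result attached to the parameter's computation graph
        return torch.stack((torch.zeros(1), torch.zeros(1), yaw))

model = PoseOpt()
euler_angles = model(1)
loss = (euler_angles - torch.zeros(3, 1)).sum()
loss.backward()
print(model.pose_opt_rotation_yaw.grad)   # non-zero only at [1, 0]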

Thanks @ptrblck,
I have checked: pose_opt_rotation_yaw is still a leaf in both cases, yaw never is, and the difference is in euler_angles.
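
For reference, the check looks like this with the variables from the first snippet (the comments describe the expected output):

print(pose_opt_rotation_yaw.is_leaf)   # True in both variants
print(yaw.is_leaf, yaw.grad_fn)        # False, indexing backward node (a view of the parameter)
print(euler_angles.grad_fn)            # StackBackward0 with torch.stack, None with torch.tensor(...)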

I will avoid creating tensors in the forward pass.
thanks