Hi
There is something I don't understand about gradient propagation. If I run the example below, it works and I get a gradient:
import torch

if __name__ == '__main__':
    # 10 x 3 learnable pose parameters; column 0 holds the yaw of each image
    pose_opt_rotation_yaw = torch.nn.Parameter(torch.ones(10, 3))
    image_idx = 1
    # slice out the yaw of one image, keeping it attached to the parameter
    yaw = pose_opt_rotation_yaw[image_idx, 0:1]
    # build the Euler angles from two constants and the learnable yaw
    euler_angles = torch.stack((torch.zeros(1), torch.zeros(1), yaw))
    euler_angles_ref = torch.tensor([0.0, 0.0, 0.0])
    loss1 = (euler_angles - euler_angles_ref).sum()
    loss1.backward()
    print(pose_opt_rotation_yaw.grad)
But if I replace:
euler_angles = torch.stack((torch.zeros(1),torch.zeros(1),yaw))
with:
euler_angles = torch.tensor((torch.zeros(1),torch.zeros(1),yaw), requires_grad=True)
there is no gradient anymore. What is the explanation? And what is the proper way to initialize a tensor that mixes an nn.Parameter with constants?
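For reference, here is the complete modified snippet I am running (only the euler_angles line changed):

import torch

if __name__ == '__main__':
    pose_opt_rotation_yaw = torch.nn.Parameter(torch.ones(10, 3))
    image_idx = 1
    yaw = pose_opt_rotation_yaw[image_idx, 0:1]
    # only this line changed: torch.tensor() instead of torch.stack()
    euler_angles = torch.tensor((torch.zeros(1), torch.zeros(1), yaw), requires_grad=True)
    euler_angles_ref = torch.tensor([0.0, 0.0, 0.0])
    loss1 = (euler_angles - euler_angles_ref).sum()
    loss1.backward()
    print(pose_opt_rotation_yaw.grad)  # prints None on my machine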
thanks