I haven’t explored the tutorial in detail, but as far as I know `state_action_values` is the output of the model and should therefore already require gradients.
Could you check this via `state_action_values.requires_grad`?
Also, if you re-wrap a tensor, it will lose its associated computation graph, so you are effectively detaching it.
That’s why `.grad` is empty in the example you’ve posted.
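A minimal sketch of both points, assuming a small stand-in model (the `Linear` layer and tensor shapes here are illustrative, not taken from the tutorial):

```python
import torch

# Hypothetical stand-in for the tutorial's model
model = torch.nn.Linear(4, 2)
state_action_values = model(torch.randn(1, 4))

# The model output is part of the computation graph
print(state_action_values.requires_grad)  # True

# Re-wrapping the values in a new tensor detaches them from the graph
rewrapped = torch.tensor(state_action_values.tolist())
print(rewrapped.requires_grad)  # False

# Gradients only flow back through the original, attached output
state_action_values.sum().backward()
print(model.weight.grad is not None)  # True
```

If you had computed the loss from `rewrapped` instead, no gradients would reach the model parameters, which matches the empty `.grad` you are seeing.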