I use some PyTorch functions like this (a simplified sketch of my training step; `mask`, `other`, `criterion`, and `target` stand in for my real variables):

```python
pred = Net(input)                           # forward pass
pred = pred.clone()                         # Tensor.clone()
pred = torch.where(mask, pred, other)       # torch.where()
pred = torch.clamp(pred, min=0.0, max=1.0)  # torch.clamp()
loss = criterion(pred, target)
loss.backward()
optim.step()
```
These functions don't disconnect the backward graph, do they?
So, as long as I don't call detach(), is the autograd graph kept intact?
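A minimal sketch of the difference I have in mind (toy tensor; the variable names are mine):

```python
import torch

x = torch.ones(3, requires_grad=True)
a = x.clone()    # stays in the graph: a.grad_fn is set (CloneBackward0 here)
b = x.detach()   # leaves the graph: b.grad_fn is None
print(a.grad_fn, b.grad_fn)
```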
And as long as tensor.grad_fn returns a non-None value, does that mean all the gradient graphs are connected,
so backpropagation should be performed correctly?
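This is the kind of check I mean (toy example; the exact grad_fn class names may differ between PyTorch versions):

```python
import torch

x = torch.randn(4, requires_grad=True)
y = torch.where(x > 0, x, torch.zeros_like(x))
z = torch.clamp(y, max=0.5)
print(y.grad_fn)  # e.g. <WhereBackward0 ...>
print(z.grad_fn)  # e.g. <ClampBackward1 ...>
```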
I'm not sure when the graph becomes disconnected, because even though I don't use detach(),
when I use where(), clamp(), or clone(), the update sometimes doesn't seem to work correctly.
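For example, here is a minimal self-contained sketch (toy scalar model; all names are mine) of the kind of step where the update seems to do nothing:

```python
import torch

w = torch.tensor([2.0], requires_grad=True)
optim = torch.optim.SGD([w], lr=0.1)

x = torch.tensor([1.0])
pred = torch.clamp(w * x, max=1.0)  # w * x = 2.0, so the clamp saturates
loss = pred.pow(2).sum()
loss.backward()

print(pred.grad_fn)  # a grad_fn is set, so the graph looks connected
print(w.grad)        # tensor([0.]) -- zero gradient in the saturated region
optim.step()         # w does not change
```

Is this a case of the graph actually being disconnected, or is the graph fine and the gradient just zero, because clamp() in its saturated region (and the unselected branch of where()) passes no gradient through?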