autograd
About the autograd category (1)
Is there any good documentation about handling torch::autograd::FunctionTask? (4)
Inspecting gradients of a Tensor's computation graph (8)
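For the graph-inspection topic above, a minimal illustrative sketch (the tensor names are made up, not from the thread) of walking a result's grad_fn chain and hooking a leaf's gradient:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = (x * 2).sum()

# Every non-leaf result carries a grad_fn node; next_functions links
# back to the nodes that produced its inputs.
print(type(y.grad_fn).__name__)                       # e.g. SumBackward0
print(type(y.grad_fn.next_functions[0][0]).__name__)  # e.g. MulBackward0

# A hook on a leaf tensor fires with its gradient during backward().
x.register_hook(lambda g: print("grad of x:", g))
y.backward()
```

Traversing next_functions recursively reaches every node in the recorded graph.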
Does nn.functional.grid_sample's backward propagate gradients to grid? (7)
Non-scalar backward and a self-made mini-batch implementation (6)
torch.Tensor.scatter_add_ error in backward function (3)
Why is the loss inf after adding .log_() to the output? (4)
What's the difference between the two operations? (3)
Compute the Hessian matrix of a network (4)
Slow torch::autograd::CopyBackwards for 5D tensors (3D images) (5)
What does eval() do for BatchNorm at the code level? (3)
Repeated use of an autograd.Function subclass (2)
How to use the weights of a Conv2D to initialize the weights of another Conv2D? (2)
Loss.backward() raises error 'grad can be implicitly created only for scalar outputs' (5)
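The 'scalar outputs' error above is easy to reproduce; a minimal sketch (the tensors are illustrative) showing the error and the two usual fixes:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2  # y is a vector, not a scalar

try:
    y.backward()  # raises: grad can be implicitly created only for scalar outputs
except RuntimeError as e:
    print("error:", e)

# Fix 1: reduce to a scalar before calling backward().
y.sum().backward()
print(x.grad)  # d sum(2*x) / dx = 2 for each element

# Fix 2: pass an explicit gradient for the non-scalar output.
x.grad = None
y = x * 2
y.backward(gradient=torch.ones_like(y))
print(x.grad)
```

backward() with no argument only works when the output is a single scalar, because only then is the implicit seed gradient (1.0) unambiguous.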
What does the function wrapper @once_differentiable do? (5)
With torch.no_grad(): (3)
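A small sketch of what torch.no_grad() does (illustrative tensors, not from the thread): inside the context no graph is recorded, so results do not require grad.

```python
import torch

x = torch.ones(2, requires_grad=True)

# Inside the context, autograd records nothing.
with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False

# Outside, the same operation is tracked again.
z = x * 2
print(z.requires_grad)  # True
```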
Why do the gradients change every time? (9)
Why does gradcheck fail for floats? (2)
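On the gradcheck topic: gradcheck compares analytical gradients against finite differences, so single-precision inputs often fall outside its numerical tolerance; double precision is the documented recommendation. A minimal sketch:

```python
import torch
from torch.autograd import gradcheck

# Double-precision input keeps the finite-difference estimate accurate
# enough for gradcheck's default tolerances.
inp = torch.randn(3, dtype=torch.double, requires_grad=True)
print(gradcheck(torch.sin, (inp,)))  # True
```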
Memory use of a list comprehension (4)
How to detach() rows of a tensor? (3)
Optimizing a function with respect to specific arguments (3)
[nonissue] Autograd fails when using half-precision - overflow on matrix size (4)
What is the difference between forwarding through a Sequential directly and forwarding through its layers separately? (6)
Loss changes with torch.no_grad()?! (6)
How in-place operations affect Autograd (1)
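The in-place topic above (and the "one of the variables needed for gradient computation" topic below) come down to autograd's version counter: modifying a tensor in place after it has been saved for backward invalidates the graph. An illustrative sketch:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2
z = y.sin()  # sin saves its input y for the backward pass (grad = cos(y))

y.add_(1)    # in-place update bumps y's version counter

err = None
try:
    z.sum().backward()  # autograd detects the stale saved tensor
except RuntimeError as e:
    err = e
print("error:", err)
```

The raised message is the familiar "one of the variables needed for gradient computation has been modified by an inplace operation".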
Parameter lists in PyTorch (3)
One of the variables needed for gradient computation (8)
Do model.eval() and torch.set_grad_enabled(is_train) have the same effect on grad history? (5)
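They do not: eval() only switches layer behaviour (e.g. BatchNorm uses running statistics, Dropout is disabled), while set_grad_enabled controls whether autograd records history. A minimal sketch (the module choice is illustrative):

```python
import torch
import torch.nn as nn

model = nn.BatchNorm1d(4)

# eval() changes layer behaviour but autograd still builds the graph.
model.eval()
out_eval = model(torch.randn(2, 4))
print(out_eval.requires_grad)  # True: history is still tracked

# set_grad_enabled(False) is what actually suppresses grad history.
with torch.set_grad_enabled(False):
    out_nograd = model(torch.randn(2, 4))
print(out_nograd.requires_grad)  # False
```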
How to use external libraries like OpenCV, NumPy, and SciPy in the middle of the forward/backward phases (3)
Equivalence between CTC loss gradients (3)