autograd

| Topic | Replies | Activity |
| --- | --- | --- |
| About the autograd category | 1 | May 13, 2017 |
| Runtime error in custom Autograd function | 1 | January 22, 2020 |
| Batch Optimization of "Input Samples" | 2 | January 22, 2020 |
| Tensor.set_ seems not to correctly update metadata if the shape changes | 1 | January 22, 2020 |
| Runtime Error in .backward() | 3 | January 21, 2020 |
| Double Backpropagation in PyTorch | 2 | January 21, 2020 |
| Softmax output has zero grad | 2 | January 21, 2020 |
| Confused about torch.max() and gradient | 10 | January 21, 2020 |
| How to swap tensor elements and keep gradient | 3 | January 21, 2020 |
| Guided backprop - single ReLU module | 2 | January 21, 2020 |
| Computing Hessian for loss function | 4 | January 21, 2020 |
| Training Siamese and triplet networks: stacking vs. multi-pass | 2 | January 20, 2020 |
| What's the current idiomatic way to do SGD stepping without using x.grad.data? | 5 | January 20, 2020 |
| How to apply modified gradient to optimizer | 6 | January 20, 2020 |
| Fine-tune only parameters in optimizer | 4 | January 20, 2020 |
| What is the difference between b and e? | 2 | January 20, 2020 |
| torch.autograd.Function overwrite | 17 | January 19, 2020 |
| Sample selection in transfer learning | 1 | January 19, 2020 |
| Polar coordinates transformation layer | 6 | January 18, 2020 |
| Possible reasons for regression problem with almost the same prediction results | 10 | January 18, 2020 |
| Detach tensors from computation graph | 1 | January 17, 2020 |
| Implementing sech | 3 | January 17, 2020 |
| How to handle BPTT with mini-batches | 1 | January 16, 2020 |
| Does the Loss.backward() function calculate gradients over the mini-batch? | 3 | January 16, 2020 |
| Why does this function break the gradient tree? | 12 | January 14, 2020 |
| Best/clean way to define your own loss function | 2 | January 16, 2020 |
| Efficient computation with multiple grad_outputs in autograd.grad | 8 | January 15, 2020 |
| Random forest through backpropagation | 3 | January 14, 2020 |
| How to find the best solution to a backward error | 8 | January 14, 2020 |
| About using two optimizers | 2 | January 14, 2020 |