autograd


Topic Replies Created
Generating the class activation maps 3 April 18, 2019
How to backward only a subset of neural network parameters? (avoid retain_graph=True) 9 April 17, 2019
Is passing a subset of the parameters to an optimizer equivalent to setting requires_grad=True on only that subset? (sketch below) 2 April 17, 2019
How to add neural network layers dynamically in a memory-efficient way 6 April 15, 2019
Network Parameters Not Being Updated (loss.backward() and optimizer.step() not performing network update) 2 April 16, 2019
Torch autograd slow for a self-created activation function 3 April 17, 2019
Custom LR scheduler per sample instead of one for the whole dataset, how to couple gradients? 2 April 17, 2019
What does the backward() function do? (sketch below) 10 November 14, 2017
One-hot encoding (sketch below) 5 April 16, 2019
About gradient and loss when fine-tuning 1 April 17, 2019
Leaf variable has been moved into the graph interior 12 May 25, 2018
Overriding nn.functional.conv2d 4 April 15, 2019
What do the grad_in and grad_out of nn.Conv2d consist of? 4 April 7, 2019
Implementing Custom nn.functional.conv2d 1 April 15, 2019
About detach() and the .backward() function in PyTorch 5 April 14, 2019
Expectation Maximisation for Variational Inference 1 April 14, 2019
MaxPool1d input gradient shape different from input shape 4 March 24, 2019
Problem in the backward() function of a custom loss function (autograd.Function) 1 April 14, 2019
How to define an information entropy loss? (sketch below) 6 April 13, 2019
Why can't I update the embedding layer? 3 April 13, 2019
Error in loss.backward() - merge_sort: failed to synchronize: unspecified launch failure 4 January 11, 2019
Looking for ways to speed this up 4 July 2, 2017
DataParallel(model) Multi- vs Single-GPU 5 January 27, 2018
Understanding where gradients are stored in backward 7 April 10, 2019
My self-implemented BatchNorm + ReLU gives NaN 2 April 11, 2019
ReLU the weights at the end of an RNN with autograd computing the gradient 6 April 6, 2019
Dynamically Add Parameters to Optimizer (sketch below) 3 December 25, 2017
Training an isolated part of a module 7 April 11, 2019
Persistency during subsequent backward calls 1 April 11, 2019
Training an autoencoder with a weighted loss where the loss weights are trainable and have norm 1 6 April 9, 2019
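
For the two topics above on updating only a subset of parameters (and whether that is the same as passing a subset to the optimizer), here is a minimal sketch of the two approaches being contrasted; the toy model is an assumption, not taken from the threads. Setting requires_grad=False stops gradients from being computed for those parameters at all, while passing only a subset to the optimizer still computes every gradient during backward() but only updates the chosen subset.

```python
import torch
import torch.nn as nn

# Hypothetical toy model: only the last layer should be updated.
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))

# Approach 1: freeze the first layer, so no gradients are computed for it.
for p in model[0].parameters():
    p.requires_grad = False

# Approach 2: give the optimizer only the last layer's parameters; gradients
# are still computed for everything that requires grad, but only these move.
optimizer = torch.optim.SGD(model[2].parameters(), lr=0.1)

x, target = torch.randn(4, 10), torch.randn(4, 2)
loss = nn.functional.mse_loss(model(x), target)
optimizer.zero_grad()
loss.backward()   # with approach 1, model[0] gets no .grad at all
optimizer.step()  # with approach 2, only model[2]'s parameters are updated
```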
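For the topics asking what backward() does and where gradients are stored, a small self-contained example: backward() runs reverse-mode autodiff through the recorded graph and accumulates the result into the .grad attribute of each leaf tensor that has requires_grad=True. The tensors below are illustrative.

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)  # leaf tensor
y = (x ** 2).sum()                                 # y = x0^2 + x1^2

y.backward()       # fills x.grad with dy/dx = 2*x
print(x.grad)      # tensor([4., 6.])

(x ** 2).sum().backward()  # gradients accumulate across backward calls
print(x.grad)              # tensor([8., 12.])

x.grad.zero_()     # which is why training loops call optimizer.zero_grad()
```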
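For the one-hot encoding topic, one common pattern is scatter_ along the class dimension; newer PyTorch releases also ship torch.nn.functional.one_hot, but the scatter_ version below does not depend on it. The label values are made up for illustration.

```python
import torch

labels = torch.tensor([0, 2, 1, 3])  # hypothetical class indices
num_classes = 4

one_hot = torch.zeros(labels.size(0), num_classes)
one_hot.scatter_(1, labels.unsqueeze(1), 1.0)  # write a 1 at each label's column
print(one_hot)
# tensor([[1., 0., 0., 0.],
#         [0., 0., 1., 0.],
#         [0., 1., 0., 0.],
#         [0., 0., 0., 1.]])
```

Note that nn.CrossEntropyLoss takes class indices directly, so an explicit one-hot target is often unnecessary.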
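For the information entropy loss topic, a hedged sketch of one common formulation: the Shannon entropy H(p) = -sum(p * log p) of the softmax output, which is differentiable and can be minimized to sharpen predictions or negated to encourage uncertainty. The function name and the epsilon are assumptions, not the thread's actual answer.

```python
import torch
import torch.nn.functional as F

def entropy_loss(logits, eps=1e-8):
    # Shannon entropy of the predicted distribution, averaged over the batch.
    p = F.softmax(logits, dim=1)
    return -(p * (p + eps).log()).sum(dim=1).mean()

logits = torch.randn(4, 10, requires_grad=True)
loss = entropy_loss(logits)  # minimize to sharpen predictions, negate to smooth
loss.backward()
```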
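For the topic on dynamically adding parameters to an optimizer, the standard mechanism is optimizer.add_param_group, which registers newly created parameters with an already-constructed optimizer; the grown layer below is purely illustrative.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Later, the network grows; register the new parameters with the optimizer.
new_layer = nn.Linear(10, 2)
optimizer.add_param_group({"params": new_layer.parameters(), "lr": 1e-3})

x = torch.randn(4, 10)
loss = new_layer(model(x)).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()  # updates both the original and the newly added parameters
```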