About the autograd category (1)
nn.RNN with ReLU backprops differently under CPU and GPU (4)
Creating a reference to a tensor for gradient backprop without holding onto its data (1)
Memory leak when using forward hook and backward hook simultaneously (4)
Custom loss (weighted angular distance) in PyTorch 0.4.1 (2)
Retain sub-graph (2)
How to call loss.backward in an if statement? (2)
Model parallelism in Multi-GPUs: forward/backward graph (6)
Backpropagation through a module without updating its parameters (1)
Autograd isn't functioning when a network's parameters are taken from other networks (20)
Inplace operation error with distributed gloo backend (1)
Why do gradients change every time? (3)
Export ONNX with static Shape instead of onnx::Shape operator (1)
F.grid_sample non-deterministic backward results (4)
How to get the gradient of parameters for different losses when multiple losses used (4)
Memory is not released even if I cannot access the tensors (1)
Gradients of FloatTensor remain None when tensor is initialized from Numpy array with dtype np.float64 (2)
How to get the gradient of output w.r.t. model parameters? (3)
How to implement multiplication between a parameter and a conv2d layer? (3)
How to make sure the gradient is correctly back-propagated after modifying some intermediate tensor (1)
Why are the forward results of all parallel computations aggregated on GPU1? (1)
Torch.multiprocessing rebuild_cuda_tensor having trouble with bn.num_batches_tracked (2)
Confusion with LR Scheduler get_lr() (3)
Why are the results of my model all the same? (4)
Autograd.grad dimension error (11)
Custom Autograd.Functions to GPU (2)
Manual seed cannot make dropout deterministic on CUDA for the PyTorch 1.0 preview version (12)
How to use a GPU to train (9)
Will the computation graph be overridden when forward() is invoked multiple times before backward()? (2)
How can I find the source code of "torch.cumsum" (2)