Topic | Replies | Views | Last Activity
About the autograd category | 0 | 3957 | May 13, 2017
How pytorch treat with inplace operation in backward | 0 | 18 | June 16, 2025
[BUG] RTX5080: Function 'MmBackward0' returned nan values in its 0th output. | 2 | 13 | June 16, 2025
Broken autograd momentum link | 1 | 22 | June 16, 2025
JVP and checkpointing | 1 | 16 | June 16, 2025
Constant Predictions in Non-Linear Model Despite Training Progress | 2 | 20 | June 15, 2025
Loss.backward(): element 0 of tensors does not require grad and does not have a grad_fn | 6 | 1674 | June 15, 2025
Custom autograd.Function for quantized C++ simulator | 2 | 22 | June 13, 2025
Evaluating gradients of output variables w.r.t parameters for pixelwise models | 2 | 20 | June 12, 2025
Error 'Output 0 is independent of input 0' happens while using jacobian of a function that the output changes in my demo with different input | 2 | 26 | June 11, 2025
How to obtain the variable asociation relationship of FX graph between forward and backward? | 3 | 21 | June 11, 2025
How do pytorch deal with the sparse jacobian matrix in jvp/vjp during autograd? | 1 | 647 | June 9, 2025
Vmap mlp ensemble zero grads after update | 2 | 7 | June 8, 2025
More data than neurons with autograd? | 3 | 53 | June 7, 2025
Second Gradient Computation with autograd yield zeros | 2 | 35 | June 5, 2025
Symmetric parametrization | 2 | 29 | June 1, 2025
Vmap runtime error | 0 | 18 | May 29, 2025
Autograd: Add VJP and JVP rules for aten::aminmax #151186 | 0 | 31 | May 25, 2025
How to test new native function | 1 | 29 | May 23, 2025
Blown up gradients and loss | 3 | 932 | May 22, 2025
"RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [64, 1]], which is output 0 of AsStridedBackward0, is at version 3; expected version 2 instead. Hint: the backtrace further a | 9 | 31574 | May 19, 2025
Removing Return Statement in Module Forward Causes 30+ms Backward Slowdown - Why? | 1 | 20 | May 15, 2025
Linear solver for sparse matrices | 7 | 1392 | May 15, 2025
Why does merging all loss in a batch make sense? | 7 | 2393 | May 14, 2025
Autograd and Temporary Variables | 4 | 120 | May 12, 2025
Batthacaryya loss | 12 | 2935 | May 12, 2025
Autograd FLOP Calculation with Higher Order Derivatives | 3 | 172 | May 9, 2025
Gradient of a mixed network's output with respect to ONE tensor | 2 | 47 | May 7, 2025
Segfault in autograd after using torch lightning | 1 | 61 | May 4, 2025
How to interactively debug pytorch backprop errors? | 2 | 82 | May 2, 2025