| Topic | Replies | Views | Activity |
| --- | ---: | ---: | --- |
| About the autograd category | 0 | 3958 | May 13, 2017 |
| Can't vmap autograd.grad over outputs | 3 | 18 | June 18, 2025 |
| Why does autograd.backward go one edge further than `inputs`? | 1 | 9 | June 17, 2025 |
| How pytorch treat with inplace operation in backward | 1 | 30 | June 17, 2025 |
| [SDPA] RTX5080 is different from CPU calculation result in backward with long seq | 0 | 8 | June 17, 2025 |
| [BUG] RTX5080: Function 'MmBackward0' returned nan values in its 0th output. | 2 | 19 | June 16, 2025 |
| Broken autograd momentum link | 1 | 23 | June 16, 2025 |
| JVP and checkpointing | 1 | 20 | June 16, 2025 |
| Constant Predictions in Non-Linear Model Despite Training Progress | 2 | 24 | June 15, 2025 |
| Loss.backward(): element 0 of tensors does not require grad and does not have a grad_fn | 6 | 1688 | June 15, 2025 |
| Custom autograd.Function for quantized C++ simulator | 2 | 23 | June 13, 2025 |
| Evaluating gradients of output variables w.r.t parameters for pixelwise models | 2 | 23 | June 12, 2025 |
| Error 'Output 0 is independent of input 0' happens while using jacobian of a function that the output changes in my demo with different input | 2 | 27 | June 11, 2025 |
| How to obtain the variable asociation relationship of FX graph between forward and backward? | 3 | 22 | June 11, 2025 |
| How do pytorch deal with the sparse jacobian matrix in jvp/vjp during autograd? | 1 | 651 | June 9, 2025 |
| Vmap mlp ensemble zero grads after update | 2 | 8 | June 8, 2025 |
| More data than neurons with autograd? | 3 | 55 | June 7, 2025 |
| Second Gradient Computation with autograd yield zeros | 2 | 36 | June 5, 2025 |
| Symmetric parametrization | 2 | 30 | June 1, 2025 |
| Vmap runtime error | 0 | 18 | May 29, 2025 |
| Autograd: Add VJP and JVP rules for aten::aminmax #151186 | 0 | 31 | May 25, 2025 |
| How to test new native function | 1 | 29 | May 23, 2025 |
| Blown up gradients and loss | 3 | 937 | May 22, 2025 |
| "RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [64, 1]], which is output 0 of AsStridedBackward0, is at version 3; expected version 2 instead. Hint: the backtrace further a | 9 | 31599 | May 19, 2025 |
| Removing Return Statement in Module Forward Causes 30+ms Backward Slowdown - Why? | 1 | 20 | May 15, 2025 |
| Linear solver for sparse matrices | 7 | 1396 | May 15, 2025 |
| Why does merging all loss in a batch make sense? | 7 | 2399 | May 14, 2025 |
| Autograd and Temporary Variables | 4 | 120 | May 12, 2025 |
| Batthacaryya loss | 12 | 2935 | May 12, 2025 |
| Autograd FLOP Calculation with Higher Order Derivatives | 3 | 173 | May 9, 2025 |