| Topic | Replies | Views | Activity |
|---|---|---|---|
| How to know if gradients backpropagate through a pytorch function? | 1 | 587 | June 20, 2025 |
| How pytorch treat with inplace operation in backward | 1 | 78 | June 17, 2025 |
| [SDPA] RTX5080 is different from CPU calculation result in backward with long seq | 0 | 60 | June 17, 2025 |
| [BUG] RTX5080: Function 'MmBackward0' returned nan values in its 0th output. | 2 | 99 | June 16, 2025 |
| Broken autograd momentum link | 1 | 74 | June 16, 2025 |
| JVP and checkpointing | 1 | 88 | June 16, 2025 |
| Constant Predictions in Non-Linear Model Despite Training Progress | 2 | 164 | June 15, 2025 |
| Loss.backward(): element 0 of tensors does not require grad and does not have a grad_fn | 6 | 3211 | June 15, 2025 |
| Custom autograd.Function for quantized C++ simulator | 2 | 111 | June 13, 2025 |
| Evaluating gradients of output variables w.r.t parameters for pixelwise models | 2 | 90 | June 12, 2025 |
| Error 'Output 0 is independent of input 0' happens while using jacobian of a function that the output changes in my demo with different input | 2 | 89 | June 11, 2025 |
| How to obtain the variable asociation relationship of FX graph between forward and backward? | 3 | 74 | June 11, 2025 |
| How do pytorch deal with the sparse jacobian matrix in jvp/vjp during autograd? | 1 | 744 | June 9, 2025 |
| Vmap mlp ensemble zero grads after update | 2 | 74 | June 8, 2025 |
| More data than neurons with autograd? | 3 | 108 | June 7, 2025 |
| Second Gradient Computation with autograd yield zeros | 2 | 111 | June 5, 2025 |
| Symmetric parametrization | 2 | 101 | June 1, 2025 |
| Vmap runtime error | 0 | 54 | May 29, 2025 |
| Autograd: Add VJP and JVP rules for aten::aminmax #151186 | 0 | 54 | May 25, 2025 |
| How to test new native function | 1 | 72 | May 23, 2025 |
| Blown up gradients and loss | 3 | 983 | May 22, 2025 |
| Removing Return Statement in Module Forward Causes 30+ms Backward Slowdown - Why? | 1 | 63 | May 15, 2025 |
| Linear solver for sparse matrices | 7 | 1686 | May 15, 2025 |
| Why does merging all loss in a batch make sense? | 7 | 2874 | May 14, 2025 |
| Autograd and Temporary Variables | 4 | 219 | May 12, 2025 |
| Batthacaryya loss | 12 | 3030 | May 12, 2025 |
| Autograd FLOP Calculation with Higher Order Derivatives | 3 | 257 | May 9, 2025 |
| Gradient of a mixed network's output with respect to ONE tensor | 2 | 93 | May 7, 2025 |
| Segfault in autograd after using torch lightning | 1 | 117 | May 4, 2025 |
| How to interactively debug pytorch backprop errors? | 2 | 157 | May 2, 2025 |