Topic | Replies | Views | Activity
About the autograd category | 0 | 3926 | May 13, 2017
Segfault in autograd after using PyTorch Lightning | 1 | 16 | May 4, 2025
How to interactively debug PyTorch backprop errors? | 2 | 11 | May 2, 2025
Gradient of a mixed network's output with respect to ONE tensor | 1 | 15 | May 1, 2025
Gradient of a weight matrix returns NaN with respect to learnable parameter | 2 | 28 | May 1, 2025
Get loss, gradient and Hessian in one go | 1 | 15 | April 30, 2025
How to clip the values of an optimizer? | 3 | 21 | April 30, 2025
Monitor optimizer step - Adam | 1 | 20 | April 30, 2025
Gradient of Tensor is Zero | 4 | 39 | April 28, 2025
Multiple forwards and computation graph building | 2 | 12 | April 23, 2025
My Discriminator model collapsed and always returns 1s | 0 | 9 | April 23, 2025
Autograd independently on entries of a single tensor | 2 | 36 | April 21, 2025
Optimize objective involving Jacobian | 1 | 21 | April 21, 2025
Most efficient way to re-use grad computations in a layer which is a linear combination of linear layers | 2 | 27 | April 20, 2025
Simple use case: Compute per-sample gradient with autograd | 17 | 158 | April 16, 2025
Is gradient flow lost when using NumPy? | 3 | 43 | April 15, 2025
Gradient computation performance | 2 | 30 | April 15, 2025
Autograd process isn't available when I profile via torch.profiler | 3 | 20 | April 15, 2025
Making autograd saved tensors hooks specific to certain arguments | 7 | 39 | April 14, 2025
How autograd is implemented in PyTorch | 3 | 35 | April 13, 2025
Custom autograd function breaking computation graph | 2 | 29 | April 10, 2025
Is grad_fn for a non-differentiable function that function's inverse? | 1 | 29 | April 9, 2025
Autograd FLOP Calculation with Higher Order Derivatives | 2 | 87 | April 9, 2025
Does tensor.register_post_accumulate_grad_hook() always fire once, or multiple times? | 1 | 32 | April 9, 2025
Selective gradient clipping | 4 | 27 | April 8, 2025
Is there a faster way to compute the Jacobian than autograd.functional.jacobian? | 4 | 47 | April 7, 2025
Tackling Low GPU Kernel Occupancy During Loss Function Computation | 1 | 27 | April 7, 2025
Memory used by `autograd` when `torch.scatter` is involved | 9 | 68 | April 7, 2025
Is there a way to visualize the gradient path of the backpropagation of the entire network | 7 | 15218 | April 4, 2025
Backward multiple forward passes | 12 | 4490 | April 3, 2025