| Topic | Replies | Views | Activity |
| --- | --- | --- | --- |
| About the autograd category | 0 | 3882 | May 13, 2017 |
| How does PyTorch handle in-place operations without losing information necessary for backpropagation? | 3 | 17 | February 17, 2025 |
| Gradient computation with PyTorch autograd with 1st and 2nd order derivatives does not work | 1 | 39 | February 15, 2025 |
| Free some saved tensors after partial backward | 6 | 86 | February 14, 2025 |
| How does autograd merge 'parallel paths'? | 3 | 32 | February 13, 2025 |
| [Solved][PyTorch 1.12] RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation | 0 | 17 | February 13, 2025 |
| Is grad_fn for a non-differentiable function that function's inverse? | 0 | 14 | February 12, 2025 |
| Differentiating With Respect to Learning Rate | 3 | 85 | February 12, 2025 |
| Calling autograd.Function in autograd.Function | 4 | 44 | February 11, 2025 |
| About torch.autograd.set_detect_anomaly(True) | 5 | 24070 | February 10, 2025 |
| Vmap over autograd.grad of a nn.Module | 6 | 974 | February 10, 2025 |
| Is_grads_batched | 3 | 2520 | February 8, 2025 |
| Error: Implementing Custom Activation Function (TERLU) using this paper https://arxiv.org/pdf/2006.02797 | 1 | 67 | February 7, 2025 |
| Question about the Extending PyTorch tutorial | 1 | 18 | February 7, 2025 |
| How to reduce the for loop with the torch.einsum function? | 1 | 28 | February 7, 2025 |
| Freezing CNN Channels | 2 | 84 | February 7, 2025 |
| Softmax returning only 0 and 1 | 1 | 26 | January 28, 2025 |
| Gradcheck fails for custom activation function | 3 | 49 | January 26, 2025 |
| Missing argument create_graph in the torch.func API | 6 | 122 | January 24, 2025 |
| Backward pass error for loss computation in loop | 1 | 21 | January 23, 2025 |
| .grad should not equal None here | 3 | 100 | January 23, 2025 |
| Unexpected behavior when using torch.autograd.functional.jacobian with a multiple-input/output neural network | 3 | 32 | January 21, 2025 |
| Loss.backward() called after torch.no_grad() | 4 | 37 | January 19, 2025 |
| Where is the actual code for LayerNorm (torch.nn.functional.layer_norm)? | 6 | 3854 | January 17, 2025 |
| Why is requires_grad==False after multiplication? | 9 | 509 | January 16, 2025 |
| Is it possible to have trainable module parameters in between static layer weights? | 1 | 15 | January 14, 2025 |
| Torch.autograd.grad and masking issue | 0 | 27 | January 14, 2025 |
| Unexpected behavior when using batch_jacobian with multiple inputs/outputs in a quantum-classical neural network | 2 | 35 | January 12, 2025 |
| Autograd FLOP Calculation with Higher-Order Derivatives | 0 | 46 | January 11, 2025 |
| Backward multiple forward passes | 11 | 4341 | January 11, 2025 |