How can I compute the gradient of the loss with respect to the parameters of the network?

Hi Periklis!

Autograd to the RESCUE!

Specifically, compute a `loss` that depends on your network, call `loss.backward()`, and autograd will do the rest. You can then examine the `.grad` properties of your network parameters. (This requires that your network parameters have `requires_grad = True` and that you compute your loss using (usefully differentiable) pytorch tensor operations.)

Consider:

```
>>> import torch
>>> torch.__version__
'1.10.2'
>>> _ = torch.manual_seed (2022)
>>> network = torch.nn.Linear (2, 3, bias = False)
>>> network.weight
Parameter containing:
tensor([[-0.1474, 0.5967],
[ 0.3660, -0.1681],
[-0.6700, -0.1989]], requires_grad=True)
>>> input = torch.arange (10.).reshape (5, 2)
>>> input
tensor([[0., 1.],
[2., 3.],
[4., 5.],
[6., 7.],
[8., 9.]])
>>> loss = (network (input)).sum()
>>> loss.backward()
>>> network.weight.grad
tensor([[20., 25.],
[20., 25.],
[20., 25.]])
```
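As a sanity check on the numbers above: because the loss is a plain sum of the linear outputs, the gradient with respect to each weight `W[i, j]` is just the sum of input column `j` over the batch, independent of the weight values. A minimal sketch (recreating the same bias-free `Linear(2, 3)` setup; the fresh `network` here will have different random weights, which doesn't matter for this gradient):

```
import torch

input = torch.arange(10.).reshape(5, 2)

# Expected gradient: column sums of the input, one copy per output row.
# Column sums are [0+2+4+6+8, 1+3+5+7+9] = [20., 25.]
expected = input.sum(dim=0).repeat(3, 1)

network = torch.nn.Linear(2, 3, bias=False)
loss = network(input).sum()
loss.backward()

assert torch.equal(network.weight.grad, expected)
```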

Best.

K. Frank

Thank you so much for your answer; however, my model has no such `.weight` attribute.

On the other hand, I tried to do something like

```
grads = []
for param in model.parameters():
    # .grad is only populated after calling loss.backward()
    grads.append(param.grad.view(-1))
```

My goal is to compute \nabla_{\theta} \, l(f(x; \theta), y), where l is the loss function and f(x; \theta) is the prediction of the model for the input x, parametrized by the parameters \theta.
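Putting the pieces of this thread together, here is a minimal end-to-end sketch of computing \nabla_{\theta} \, l(f(x; \theta), y) as one flat vector. The model, loss function, and data here are toy assumptions for illustration; substitute your own. The key point is that `loss.backward()` fills `param.grad` for every parameter, after which the per-parameter gradients can be flattened and concatenated:

```
import torch

torch.manual_seed(0)

# Toy stand-ins (assumptions): a small model f(x; theta) and an MSE loss l
model = torch.nn.Sequential(
    torch.nn.Linear(2, 4),
    torch.nn.ReLU(),
    torch.nn.Linear(4, 1),
)
loss_fn = torch.nn.MSELoss()

x = torch.randn(5, 2)   # batch of inputs
y = torch.randn(5, 1)   # targets

model.zero_grad()                # clear any stale .grad values
loss = loss_fn(model(x), y)      # l(f(x; theta), y)
loss.backward()                  # autograd fills param.grad for every parameter

# Flatten and concatenate the per-parameter gradients into one vector:
# this is grad_theta l(f(x; theta), y), one entry per scalar parameter
grad_vector = torch.cat([p.grad.view(-1) for p in model.parameters()])
```

Note the `model.zero_grad()` call: without it, `.grad` accumulates across successive `backward()` calls rather than being overwritten.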