The parameters `E` and `K` are not getting updated when I try optimizing with a custom loss function, although it does work with a simple loss like `loss = E*K`.

How do I fix this?

Thanks

```
# pytorch code to optimize two values E and K based on a loss function
```

Re-wrapping a trainable tensor into a new tensor will detach it from the computation graph:

```
v1 = torch.tensor([(2*torch.log(E)+4*torch.log(K)), (2*torch.log(E)+2*torch.log(K)), 0.0],requires_grad = True)
```
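A minimal repro of what goes wrong (scalar `E` and `K` with these example values are assumptions for illustration):

```python
import torch

# Hypothetical scalar parameters, mirroring the question.
E = torch.tensor(2.0, requires_grad=True)
K = torch.tensor(3.0, requires_grad=True)

# torch.tensor(...) copies only the *values* of the expressions inside:
# v1 becomes a brand-new leaf tensor with no connection to E or K.
v1 = torch.tensor([2 * torch.log(E) + 4 * torch.log(K),
                   2 * torch.log(E) + 2 * torch.log(K),
                   0.0], requires_grad=True)

v1.sum().backward()
print(E.grad, K.grad)  # None None -- the graph stops at v1
```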

Use `E` and `K` directly in differentiable operations, without creating new tensors, and it should work.
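A sketch of the fix, under the same assumptions as above: build the intermediate vector with `torch.stack`, which is differentiable, instead of `torch.tensor`, so the graph from `E` and `K` stays connected:

```python
import torch

E = torch.tensor(2.0, requires_grad=True)
K = torch.tensor(3.0, requires_grad=True)
opt = torch.optim.SGD([E, K], lr=0.01)

# torch.stack keeps the autograd graph intact, unlike torch.tensor(...).
v1 = torch.stack([2 * torch.log(E) + 4 * torch.log(K),
                  2 * torch.log(E) + 2 * torch.log(K),
                  torch.zeros(())])
loss = v1.sum()  # = 4*log(E) + 6*log(K)

opt.zero_grad()
loss.backward()
print(E.grad, K.grad)  # gradients now flow: dloss/dE = 4/E, dloss/dK = 6/K
opt.step()             # E and K actually change
```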

I changed it to

```
loss = torch.dot(
    torch.tensor([(2*torch.log(E)+4*torch.log(K)), (2*torch.log(E)+2*torch.log(K)), 0.0], requires_grad=True),
    torch.tensor([(2*torch.log(E)+2*torch.log(K)), (torch.log(E)+3*torch.log(K)), (torch.log(E)+torch.log(K))], requires_grad=True),
)
```

but they're still not getting updated.

You are still re-creating tensors and are thus still detaching the loss from the computation graph.

Remove the `torch.tensor` calls in your code and use the tensors directly instead.
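Applied to the dot-product loss above (still assuming scalar `E` and `K` with placeholder starting values), the corrected version could look like:

```python
import torch

E = torch.tensor(2.0, requires_grad=True)
K = torch.tensor(3.0, requires_grad=True)
opt = torch.optim.SGD([E, K], lr=0.01)

# Both operands are built with torch.stack, so gradients flow to E and K.
a = torch.stack([2*torch.log(E) + 4*torch.log(K),
                 2*torch.log(E) + 2*torch.log(K),
                 torch.zeros(())])
b = torch.stack([2*torch.log(E) + 2*torch.log(K),
                 torch.log(E) + 3*torch.log(K),
                 torch.log(E) + torch.log(K)])
loss = torch.dot(a, b)

opt.zero_grad()
loss.backward()
opt.step()  # E and K are now updated
```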
