I’m trying to use autograd to optimize some parameters stored inside a tensor.
Let’s say I have a tensor of size 10 and I would like to optimize only the first parameter of that tensor.
So I call:

```python
torch.optim.Adam([variables[0]], lr=self.lr)
```
The problem is that selecting the entry is itself an operation tracked by the autograd graph, so `variables[0]` is no longer a leaf tensor, and the optimizer won’t accept it.
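Here is a minimal repro of what I mean (names are illustrative):

```python
import torch

# A leaf tensor holding the 10 values.
variables = torch.zeros(10, requires_grad=True)

# Indexing is an autograd op, so the result is NOT a leaf.
first = variables[0]
print(variables.is_leaf, first.is_leaf)  # True False

# Passing the non-leaf slice to an optimizer raises
# ValueError: can't optimize a non-leaf Tensor
try:
    torch.optim.Adam([first], lr=0.01)
except ValueError as e:
    print(e)
```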
What can be an easy way to solve this?
I need this kind of setup because my tensor contains lots of information that I need for visualization and for computing the loss, but I don’t want to optimize all of its entries.
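Concretely, the situation looks roughly like this (a simplified sketch with a toy loss; the real loss and tensor contents are more involved):

```python
import torch

# `variables` holds 10 values; the loss reads all of them,
# but only variables[0] is supposed to be trainable.
variables = torch.randn(10, requires_grad=True)

# Toy loss that touches every entry of the tensor.
loss = (variables ** 2).sum()
loss.backward()

# The gradient covers the whole tensor, so stepping an optimizer
# over `variables` directly would update all 10 entries.
print(variables.grad.shape)  # torch.Size([10])
```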
Help is appreciated.