Hello! I have a network that takes 3 numbers as input and outputs 2 numbers, and I trained it on a given dataset to predict those 2 numbers. Now I would like to freeze all the layers of the network and make the input trainable, so that for a given target output (consisting of 2 numbers), gradient descent will not touch the NN but will instead adjust the input until I get the output I want. I am not sure how to do this in practice. After training the network I tried this:
for param in model.parameters():
    param.requires_grad = False

input = Variable(torch.randn(bs, 3).cuda(), requires_grad=True)
lrs = 1e-2
optimizer = optim.Adam(input.parameters(), lr=lrs)
But it doesn't allow me to create the optimizer like that for the input: a tensor has no .parameters() method, so optim.Adam(input.parameters(), ...) raises an AttributeError. How can I do what I need? Thank you!