Update input without updating network

Hello. Can I update the input tensor without updating the network weights?
For example,

import torch
import torch.nn as nn
import torch.optim as optim

input = torch.rand(10, 5)  # minibatch size = 10, data dimension = 5
net = nn.Linear(5, 3)
optimizer = optim.Adam(net.parameters(), lr=0.1)
target = torch.rand(10, 3)  # minibatch size = 10, data dimension = 3

output = net(input)

loss = torch.pow(target - output, 2).sum()

loss.backward()
print(net.weight.sum()) # tensor(-1.0748, grad_fn=<SumBackward0>)
optimizer.step()
print(net.weight.sum()) # tensor(0.4252, grad_fn=<SumBackward0>)

This code is a typical training step: it updates the network weights to minimize the loss.
Can I instead update the input, while keeping the network weights fixed, to minimize the loss function?

That should be possible if you create your input with requires_grad=True and pass it to the optimizer.
Have a look at this cosine similarity example I wrote yesterday.
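
For reference, here is a minimal sketch of that idea (not the linked example itself, and the loop length and learning rate are arbitrary): the input is created as a leaf tensor with requires_grad=True and passed to the optimizer in place of net.parameters(), so optimizer.step() only moves the input.

import torch
import torch.nn as nn
import torch.optim as optim

input = torch.rand(10, 5, requires_grad=True)  # input is now a trainable leaf tensor
net = nn.Linear(5, 3)
for p in net.parameters():
    p.requires_grad_(False)  # optional: keep the weights out of the autograd graph

target = torch.rand(10, 3)
optimizer = optim.Adam([input], lr=0.1)  # optimize the input, not the network

for _ in range(100):
    optimizer.zero_grad()
    output = net(input)
    loss = torch.pow(target - output, 2).sum()
    loss.backward()   # gradients flow to `input` only
    optimizer.step()  # updates `input`; net.weight stays unchanged

print(loss.item())  # the loss decreases while net.weight is fixed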
