Hello. Can I update the input tensor without updating the network weights?
For example,
import torch
import torch.nn as nn
import torch.optim as optim

input = torch.rand(10, 5)   # minibatch size = 10, data dimension = 5
net = nn.Linear(5, 3)
optimizer = optim.Adam(net.parameters(), lr=0.1)
target = torch.rand(10, 3)  # minibatch size = 10, data dimension = 3

output = net(input)
loss = torch.pow(target - output, 2).sum()
loss.backward()

print(net.weight.sum())  # e.g. tensor(-1.0748, grad_fn=<SumBackward0>)
optimizer.step()
print(net.weight.sum())  # e.g. tensor(0.4252, grad_fn=<SumBackward0>)
This code, which is the typical case, updates the network weights to minimize the loss function.
Can I instead update the input, keeping the network weights fixed, so that the loss is minimized with respect to the input?
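One way to sketch this, assuming you want plain gradient-based optimization of the input itself: mark the input tensor as requiring gradients, hand the input (not `net.parameters()`) to the optimizer, and freeze the network weights by setting their `requires_grad` to `False`. The loop below is a minimal illustration of that idea, not the only way to do it (the seed, learning rate, and step count are arbitrary choices):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

# Make the input a leaf tensor that accumulates gradients
input = torch.rand(10, 5, requires_grad=True)
net = nn.Linear(5, 3)
target = torch.rand(10, 3)

# Optimize the input itself instead of the network parameters
optimizer = optim.Adam([input], lr=0.1)

# Freeze the network weights so only the input can change
for p in net.parameters():
    p.requires_grad_(False)
w_before = net.weight.detach().clone()

losses = []
for step in range(200):
    optimizer.zero_grad()
    output = net(input)
    loss = torch.pow(target - output, 2).sum()
    loss.backward()   # gradients flow into `input`, not the frozen weights
    optimizer.step()  # updates `input` only
    losses.append(loss.item())
```

After the loop, `net.weight` is unchanged while `input` has been adjusted to reduce the loss. Alternatively, instead of freezing the weights you can simply not pass them to any optimizer; freezing just avoids computing gradients for them.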