I'm optimizing this function using torch.autograd:

def f(z):
    return (z * z).sum()

using the following code, starting at some initial z:
import torch

z = torch.empty(3)
torch.nn.init.uniform_(z, -5, 5)
z.requires_grad = True
optimizer = torch.optim.Adam([z], lr=0.1)

losses = []
zs = []  # to keep track of the variable as it gets optimized

print(z)  # initial value of z
for i in range(50):
    optimizer.zero_grad()
    loss = f(z)
    loss.backward(retain_graph=True)
    optimizer.step()
    losses.append(loss.detach().numpy())
    zs.append(z.detach().data.numpy())
print(z)  # final value of z
which gives the output:
tensor([-1.4410, 4.9476, 3.3615], requires_grad=True)
tensor([-0.0837, 0.8619, -0.0701], requires_grad=True)
but the list zs has the value
[array([-0.08373591, 0.8618752 , -0.07005788], dtype=float32),
array([-0.08373591, 0.8618752 , -0.07005788], dtype=float32),
...
array([-0.08373591, 0.8618752 , -0.07005788], dtype=float32)]
which is just the final value repeated. Why does this happen, and how can I append the value of z at each epoch into the list?
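My guess is that z.detach().data.numpy() returns a numpy array that shares memory with z rather than a copy, so every entry of zs ends up pointing at the same data. To test that idea I put together this small check (w and arr are just throwaway names for the experiment):

import torch

w = torch.zeros(3, requires_grad=True)
arr = w.detach().numpy()  # converted the same way as in my loop (minus .data)
with torch.no_grad():
    w += 1.0              # in-place update, like optimizer.step() does
print(arr)                # prints [1. 1. 1.] -- arr changed along with w

Is that really what's happening in my loop?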
I’m new to PyTorch and learning the basics.
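If memory sharing is indeed the explanation, I'm guessing the fix is to take an explicit copy before appending, something like one of these (untested guesses on my part):

zs.append(z.detach().clone().numpy())  # clone first, so the numpy array has its own storage
# or
zs.append(z.detach().numpy().copy())   # copy the numpy array itself

Is one of these the idiomatic way to snapshot a tensor during optimization?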