Error in saving gradient value to a python list

I'm new to PyTorch, so I'm implementing a simple linear regression. In the main epoch loop I take the gradient of a parameter and append it to a Python list. But the list ends up containing a single value repeated for its whole length, and that value is the gradient from the last epoch. When I print the gradient instead of appending it, it shows the correct value.

grad_weight = []
for epoch in range(250):
    # 1. Clear gradients w.r.t. parameters
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    grad = model.linear.weight.grad.data
    print("epoch :", epoch, " --> gradient :", grad)
    grad_weight.append(grad)
    optimizer.step()

[screenshot: the printed gradient, a different value at each epoch]

But the contents of the list look like this:

grad_weight

[screenshot: every entry of grad_weight holds the same value, the last epoch's gradient]

When you do grad_weight.append(grad), Python simply appends a reference to grad to the list grad_weight. So every item of grad_weight is the same object: each one is a reference to the same grad tensor, which is updated in place on every iteration.
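This is ordinary Python reference semantics, not something PyTorch-specific; a minimal pure-Python sketch of the same effect (the list buf stands in for the tensor that gets overwritten in place):

```python
# Appending a mutable object stores a reference, not a snapshot:
# later in-place updates are visible through every list entry.
buf = [0.0]          # stands in for the gradient tensor updated in place
history = []

for step in range(3):
    buf[0] = float(step)  # in-place update, like .grad being overwritten
    history.append(buf)   # appends a reference to the same object

print(history)  # [[2.0], [2.0], [2.0]] -- all three entries alias buf
```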

You need to make a copy of the data when appending it to the list:

grad_weight.append(grad.clone())
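A self-contained sketch of the difference (assumes PyTorch is installed; fill_ stands in for the backward pass overwriting the gradient buffer in place):

```python
import torch

grad = torch.zeros(2)              # stands in for model.linear.weight.grad.data
with_refs, with_copies = [], []

for epoch in range(3):
    grad.fill_(float(epoch))          # in-place update, like each backward pass
    with_refs.append(grad)            # reference: every entry aliases `grad`
    with_copies.append(grad.clone())  # snapshot of the values at this epoch

print([float(t[0]) for t in with_refs])    # [2.0, 2.0, 2.0]
print([float(t[0]) for t in with_copies])  # [0.0, 1.0, 2.0]
```

clone() allocates new storage each time, so each list entry keeps the values the gradient had at that epoch.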