Why does it think that it's a non-leaf tensor?

Hi everyone,
I’ve been trying to optimize images from a specific class, and I created the tensor training_data_new myself (it serves as a training dataloader). However, I still get the error “can’t optimize a non-leaf Tensor” on the line where I create optimizer2. I have tried casting the image to float and multiplying it by 1.0, but it still won’t budge. Does anyone have any ideas?

Here’s a snippet:

it = 0
for batch, (image, label) in enumerate(zip(training_data_new, training_data_new_label)):
    if label == 0:
        image.requires_grad_(True).float()
        image = image * 1.0
        for k in range(0, 5):
            optimizer2 = torch.optim.SGD([image], lr=0.1)
            loss2 = (((image) - (item_to_transform)) ** 2.0).sum()
            optimizer2.zero_grad()
            loss2.backward(retain_graph=True)
            optimizer2.step()
        training_data_new[it] = image
    it += 1

I’m a beginner but I’m trying to do my best.
Thank you in advance!

Hi Blu!

The seemingly innocuous image = image * 1.0 creates a new
tensor (which is then assigned to the Python reference variable
image) that, being the result of a tensor computation, is no longer
a leaf variable.

Consider:

>>> import torch
>>> torch.__version__
'1.9.0'
>>> image = torch.zeros(3, 5, requires_grad=True)
>>> image.is_leaf
True
>>> image = image * 1.0
>>> image.is_leaf
False
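
If you do need the result of such a computation as a new, independently
optimizable tensor, one common pattern (a sketch, not the only way) is to
detach it from the graph and turn gradients back on, which gives you a
fresh leaf tensor that the optimizer will accept:

```python
import torch

# Reproduce the problem: multiplying produces a non-leaf tensor.
image = torch.zeros(3, 5, requires_grad=True)
image = image * 1.0
assert not image.is_leaf  # this is why the optimizer complains

# Detach from the computation graph and re-enable gradients.
# detach() returns a new tensor with no grad history, and
# requires_grad_(True) makes it a trainable leaf again.
image = image.detach().requires_grad_(True)
assert image.is_leaf

# Now this line no longer raises "can't optimize a non-leaf Tensor".
optimizer = torch.optim.SGD([image], lr=0.1)
```

Of course, in your snippet the simpler fix is just to drop the
image = image * 1.0 line, since image is already a leaf after
requires_grad_(True).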

Best.

K. Frank

Thank you :slight_smile: After reading more about leaf tensors I understood that. I wasn’t sure why my supervisor told me to multiply by 1.0 as a solution (I had the same error even before I added the multiplication line), but I have now resolved it.