Gradient of X is NoneType in second iteration

Hi, I’m trying to craft images that will fool a model, but I’m having a problem with this code. In the second iteration I get `TypeError: unsupported operand type(s) for -: 'Tensor' and 'NoneType'`.
Why is the grad `None` in the second iteration even though it works the first time?

```python
X_fooling = X.clone()
loss_f = torch.nn.MSELoss()

for i in range(1000):
    score = model(X_fooling)
    y = torch.zeros(1000)
    y[target_y] = 1
    loss = loss_f(score, y)
    print(loss)
    loss.backward()
    X_fooling = X_fooling - X_fooling.grad

    if target_y == torch.argmax(score):
        break
```

You are replacing the leaf variable:

```python
X_fooling = X.clone()
```

with a non-leaf:

```python
X_fooling = X_fooling - X_fooling.grad
```
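A minimal standalone demo of that distinction (the shapes here are arbitrary stand-ins, not your model's): `backward()` populates `.grad` only on leaf tensors, so once the reassignment produces a non-leaf, its `.grad` is `None` and the next subtraction raises exactly your `TypeError`.

```python
import torch

x = torch.zeros(3, requires_grad=True)
print(x.is_leaf)   # True: created by the user, tracked by autograd

loss = (x - 1.0).pow(2).sum()
loss.backward()
print(x.grad)      # populated, because x is a leaf

x = x - x.grad     # reassignment: x is now the RESULT of an op, a non-leaf
print(x.is_leaf)   # False
print(x.grad)      # None -> the next "x - x.grad" raises the TypeError
```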

I’m not sure whether you want to recreate the leaf variable, but you could reassign it with:

```python
X_fooling = X_fooling.detach().requires_grad_()
```

which detaches the tensor from the graph and turns it back into a leaf that accumulates gradients.
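Putting that into your loop might look like the sketch below. The model, input shape, `target_y`, and learning rate are placeholders for illustration, not your actual values:

```python
import torch

model = torch.nn.Linear(10, 1000)   # stand-in for the real model
X = torch.randn(1, 10)              # stand-in input
target_y = 123
lr = 0.5

loss_f = torch.nn.MSELoss()
y = torch.zeros(1, 1000)
y[0, target_y] = 1

X_fooling = X.clone().requires_grad_()
for i in range(100):
    score = model(X_fooling)
    loss = loss_f(score, y)
    loss.backward()

    with torch.no_grad():
        X_fooling = X_fooling - lr * X_fooling.grad
    # recreate the leaf so .grad is populated on the next backward()
    X_fooling = X_fooling.detach().requires_grad_()

    if target_y == torch.argmax(score):
        break
```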

Thank you, I solved the problem by using an optimizer. I hadn’t thought about the tensor being replaced.
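For anyone finding this later, the optimizer route presumably looks something like the sketch below: `SGD` updates `X_fooling` in place, so it stays the same leaf tensor across iterations (the model, shapes, and learning rate are again stand-ins).

```python
import torch

model = torch.nn.Linear(10, 1000)   # stand-in for the real model
X = torch.randn(1, 10)
target_y = 123

loss_f = torch.nn.MSELoss()
y = torch.zeros(1, 1000)
y[0, target_y] = 1

X_fooling = X.clone().requires_grad_()
optimizer = torch.optim.SGD([X_fooling], lr=0.5)

for i in range(100):
    optimizer.zero_grad()    # clear the gradient from the previous step
    score = model(X_fooling)
    loss = loss_f(score, y)
    loss.backward()
    optimizer.step()         # in-place update keeps X_fooling a leaf

    if target_y == torch.argmax(score):
        break
```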