'NoneType' object has no attribute 'data' when computing gradient on image pixels

I'm trying to write a simple image generator as follows:

def class_visualization_update_step(img, model, target_y, l2_reg, learning_rate):
    ########################################################################
    # TODO: Use the model to compute the gradient of the score for the     #
    # class target_y with respect to the pixels of the image, and make a   #
    # gradient step on the image using the learning rate. Don't forget the #
    # L2 regularization term!                                              #
    # Be very careful about the signs of elements in your code.            #
    ########################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    X = img.clone()
    X = X.requires_grad_()
    scores = model(X)
    R = l2_reg * X.norm() 
    scores = scores - R
    scores[0, target_y].backward()
    dx = X.grad.data
    X.data = X.data + learning_rate * (dx / dx.norm())

and I'm getting the following error when I run it:

UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
  """Entry point for launching an IPython kernel.
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-153-46dacb4c9ec4> in <module>
----> 1 X.grad.data

AttributeError: 'NoneType' object has no attribute 'data'

P.S.
I know I still need to fix the correctness of the code by backpropagating through the R term as well, but before that I want to understand what I'm technically doing wrong that causes X.grad to be None.

Thanks!

As the warning explains, you are trying to access the .grad attribute of a non-leaf tensor, which won't be populated by default.
You can call .retain_grad() on the non-leaf tensor to access its gradient after the backward call.
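For example, here is a minimal sketch (the tensor names are made up for illustration):

import torch

a = torch.randn(2, 2, requires_grad=True)  # leaf tensor
b = a * 2                                  # non-leaf: result of an operation
b.retain_grad()                            # keep b's gradient after backward
b.sum().backward()

print(a.grad)  # populated, since a is a leaf
print(b.grad)  # also populated, thanks to retain_grad()

In practice, retain_grad() is mostly useful for debugging intermediate gradients.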

Also note that using the .data attribute is not recommended, as Autograd won't be able to track these operations, which might lead to silent errors.
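As a toy illustration of such a silent error (made-up tensors, not your code):

import torch

x = torch.ones(3, requires_grad=True)
y = x * 2
y.data.mul_(10)     # in-place change via .data: invisible to Autograd
y.sum().backward()
print(x.grad)       # tensor([2., 2., 2.])

The in-place scaling changed the values that were actually computed, but the gradient still corresponds to y = 2 * x, with no error raised.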

OK!
I'm not sure why X is not a leaf tensor, given that:

  1. I initiated it myself, so it is not the product of some computation.
  2. I set requires_grad to True.

Furthermore, perhaps you can shed some light on the .data attribute. I can't figure out when to use it when updating a tensor.

I don’t know what might create the non-leaf tensor in your code, but this minimal code snippet seems to work:

import torch
import torch.nn as nn

x = torch.randn(1, 1)
x.requires_grad_()
print(x.is_leaf)  # True: x was created directly, not by an operation

model = nn.Linear(1, 1)
out = model(x)
out.backward()
print(x.grad)  # populated, since x is a leaf
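One guess, in case it applies to your code: if img itself already has requires_grad=True, then X = img.clone() is the result of a tracked operation and therefore not a leaf, and the following requires_grad_() call won't turn it into one:

import torch

img = torch.randn(1, 3, 4, 4, requires_grad=True)  # hypothetical input image
X = img.clone()     # clone is tracked by Autograd -> X has a grad_fn
X.requires_grad_()  # X already requires grad; this does not make it a leaf
print(X.is_leaf)    # False

X.sum().backward()
print(X.grad)       # None, with the same non-leaf warning
print(img.grad)     # the gradient ended up on the leaf tensor img instead

If that's what is happening, X = img.clone().detach().requires_grad_(True) would create a proper leaf copy.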

I wouldn't recommend using the .data attribute at all; instead, wrap the code in a with torch.no_grad() block if you want to apply an operation without tracking it via Autograd.
There were some legitimate use cases, e.g. inside optimizers, but they should have been removed by now.
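E.g. the update step in your function could be written like this (a rough sketch, with a dummy loss standing in for the class score):

import torch

X = torch.randn(1, 3, 8, 8, requires_grad=True)  # hypothetical leaf image
learning_rate = 1e-2

loss = X.pow(2).sum()  # dummy stand-in for scores[0, target_y] - R
loss.backward()

with torch.no_grad():
    # in-place update on the leaf tensor; not recorded by Autograd
    X += learning_rate * (X.grad / X.grad.norm())
    X.grad.zero_()  # reset the gradient before the next iteration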