Is a gradient automatically computed for tensor.shape?

In the course of computing the loss function, a tensor is built with tensor.repeat(tensor.shape[0], 1, 1, 1). Does autograd track tensor.shape[0]?

Hi,

tensor.shape only contains the size of a Tensor. Only the contents of tensors are considered by autograd.
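
A minimal sketch (the tensor name and sizes here are made up for illustration) showing that tensor.shape is a torch.Size of plain Python ints, so there is nothing in it for autograd to track; gradients flow only through the tensor's contents:

```python
import torch

x = torch.randn(4, 3, requires_grad=True)

# tensor.shape is a torch.Size (a tuple of plain Python ints), not a tensor,
# so there is nothing for autograd to track in it.
print(type(x.shape))     # <class 'torch.Size'>
print(type(x.shape[0]))  # <class 'int'>

# Gradients flow only through the tensor's contents.
(x * 2).sum().backward()
print(x.grad)            # every entry is 2.0
```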

Then, is the weighted mask of a fixed 2D tensor tracked by autograd?

Whichever tensor is computationally involved is considered by autograd. For example, the weighted mask may be directly multiplied with the 2D tensor, in which case autograd will compute a gradient for the mask as well (if the mask's requires_grad is True). In the other case (your example), tensor.shape is not used in any computation, only as a constant to expand the dimensions of a particular tensor. Hence there is no gradient calculation related to tensor.shape (shape is not an operation, just an attribute holding the size of the tensor).
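
A small sketch of both cases, with hypothetical tensor names and sizes: a mask that participates in the computation gets a gradient, while tensor.shape only acts as a constant repeat count and has no gradient associated with it:

```python
import torch

# Case 1: the weighted mask is multiplied with the 2D tensor, so autograd
# computes a gradient for it (because requires_grad=True).
data = torch.randn(3, 3)                      # fixed 2D tensor
mask = torch.rand(3, 3, requires_grad=True)   # weighted mask
out = (data * mask).sum()
out.backward()
print(mask.grad)          # same values as `data`

# Case 2: tensor.shape[0] is only a constant repeat count; it is not part of
# the autograd graph, so no gradient is related to it.
x = torch.randn(2, 3, 4, 5, requires_grad=True)
expanded = x.repeat(x.shape[0], 1, 1, 1)      # just expands the dimensions
print(expanded.requires_grad)                 # True, via x's contents only
```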

How can I avoid autograd for the 2D weighted mask?

Generally, you can set requires_grad=False on the mask.
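
A minimal sketch (names are hypothetical): either create the mask without gradient tracking, or detach an already-tracked mask from the graph:

```python
import torch

# A parameter we do want gradients for, and a fixed 2D weighted mask we do not.
weight = torch.randn(3, 3, requires_grad=True)

# Option 1: create the mask without gradient tracking
# (requires_grad defaults to False for plain tensors).
mask = torch.rand(3, 3)

# Option 2: if the mask already tracks gradients, detach it from the graph:
# mask = mask.detach()

out = (weight * mask).sum()
out.backward()

print(weight.grad is None)  # False: a gradient was computed for the parameter
print(mask.grad is None)    # True: no gradient was computed for the mask
```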

Can you give us a code snippet to look into?
It is difficult to hypothesize or explain things without that.