Detach certain columns of a tensor

I’m trying to detach certain columns of a 2D tensor, so that only the remaining columns keep their gradients.

import torch

masked_encoding_prob = torch.randn(6272, 48, requires_grad=True)

num_group = 4
num_classes_per_group = 12

# Keep the last group of columns (36:48); the tensor only has
# num_group * num_classes_per_group = 48 columns in total.
startcolumn = (num_group - 1) * num_classes_per_group
endcolumn = startcolumn + num_classes_per_group

# View of the columns whose gradients we want to keep.
keep_grads = masked_encoding_prob[:, startcolumn:endcolumn]

masked_encoding_prob = masked_encoding_prob.detach()
print(masked_encoding_prob.requires_grad)  # False

# Writing a grad-tracked tensor back into the detached tensor
# makes the whole result require grad again.
masked_encoding_prob[:, startcolumn:endcolumn] = keep_grads
print(masked_encoding_prob.requires_grad)  # True

Is there a way to see, per element, which parts of a tensor require gradients? Or how would I go about verifying that this works?

Hi,

I’m afraid Tensors are seen as “elementary” objects from the point of view of the autograd engine: you can only control whether the whole Tensor requires grad, not individual rows or columns.
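
If the goal is really to block gradients for specific columns, one common pattern (just a sketch of the usual workaround, not something from this reply) is to rebuild the tensor from detached and non-detached slices with torch.cat, so gradients can only flow back through the slice you keep:

import torch

x = torch.randn(6272, 48, requires_grad=True)
startcolumn, endcolumn = 36, 48  # columns whose gradients we keep

# Gradients can only flow back through the middle, non-detached slice;
# the detached slices behave as constants.
out = torch.cat(
    [
        x[:, :startcolumn].detach(),    # no gradients for these columns
        x[:, startcolumn:endcolumn],    # gradients kept here
        x[:, endcolumn:].detach(),      # empty for these sizes, kept for generality
    ],
    dim=1,
)
print(out.requires_grad)  # True, but only the kept slice contributes gradients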

Note, though, that any values that are “constant” inside the Tensor will simply get a gradient of 0, so that should still give you the result you’re looking for, no?
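
As for verifying it: since requires_grad is all-or-nothing per Tensor, the practical check is to run a backward pass and look at which entries of the leaf’s .grad are nonzero. A minimal sketch of that check (the .clone() is my addition, so the in-place write doesn’t touch storage shared with x):

import torch

x = torch.randn(6272, 48, requires_grad=True)
startcolumn, endcolumn = 36, 48

# Out-of-place variant of the snippet above: clone the detached tensor,
# then restore the columns whose gradients we want to keep.
out = x.detach().clone()
out[:, startcolumn:endcolumn] = x[:, startcolumn:endcolumn]

out.sum().backward()

# Only the kept columns receive a nonzero gradient; everything that went
# through detach() behaves as a constant and gets a gradient of 0.
print(x.grad[:, startcolumn:endcolumn].unique())  # tensor([1.])
print(x.grad[:, :startcolumn].unique())           # tensor([0.])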