I’m trying to detach most columns of a 2D tensor while keeping gradients flowing through a few specific columns.
masked_encoding_prob = torch.randn(6272, 48, requires_grad=True)
num_group = 4
num_classes_per_group = 12
startcolumn = (num_group - 1) * num_classes_per_group  # columns of the last group; num_group * num_classes_per_group would be out of range for a 48-column tensor
endcolumn = startcolumn + num_classes_per_group
keep_grads = masked_encoding_prob[:, startcolumn:endcolumn]
masked_encoding_prob = masked_encoding_prob.detach()
print(masked_encoding_prob.requires_grad) #False
masked_encoding_prob[:, startcolumn:endcolumn] = keep_grads
print(masked_encoding_prob.requires_grad) #True
Is there a way to check, per element or per column, which parts of a tensor require gradients? Or how would I go about verifying that this approach actually works?
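As far as I know, `requires_grad` is a per-tensor flag, so there is no per-element view of it. One way to verify the behavior instead is to backprop a simple loss and inspect which columns of the leaf's `.grad` are nonzero. Below is a small sketch with a made-up tensor size and slice (the names `x`, `out`, `start`, `end` are just placeholders for this example):

```python
import torch

torch.manual_seed(0)
x = torch.randn(6, 8, requires_grad=True)  # small stand-in for the real tensor
start, end = 4, 6                          # columns whose gradients we want to keep

out = x.detach().clone()        # detached copy: no grad flows through these values
out[:, start:end] = x[:, start:end]  # re-attach only the chosen columns

print(out.requires_grad)  # True: the in-place copy of a grad-requiring slice is tracked

# Backprop a simple loss and look at which columns of x.grad are nonzero.
out.sum().backward()
print(x.grad[0])  # only columns start..end-1 should be nonzero
```

If the grad is nonzero exactly in the re-attached columns and zero everywhere else, the detach/re-attach trick is doing what you intended.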