Creating a new variable in the middle of the loss function to be added to the graph

Hi all,

I’m calculating the scatter matrix for the output of the network. The output shape looks like this:

batch_size = 100
features = 1000
output: a tensor of shape (100, 1000)

In the middle of the calculation, I create a new tensor (that requires grad) which is later added to another tensor (that also requires grad).

The code works fine when I create the new tensor like this:

S_w = torch.zeros((d, d), requires_grad=True).cuda()

but I want to create it in an elegant and safe way: it should be part of the graph rather than a leaf variable, and ideally it would be created on the CPU with the option of moving it to the GPU or not.
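
Something along these lines is what I have in mind (just a sketch; `feat` is a placeholder name for the network output tensor, which isn't shown in my snippet below). Deriving the device from `feat` avoids hard-coding .cuda(), and as far as I understand, S_w doesn't need requires_grad=True itself, because gradients flow in through the tensors added to it:

S_w = torch.zeros(d, d, device=feat.device, dtype=feat.dtype)  # same device as the network output
S_w = S_w + lamb * torch.eye(d, device=feat.device, dtype=feat.dtype)  # out-of-place add keeps the graph intact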

Here is my code snippet:

S_w = torch.zeros((d, d), requires_grad=True).cuda()   # want to create this in a proper way

S_w += lamb * torch.eye(*S_w.size(), out=torch.empty_like(S_w))  # want to create the other tensor and add it in a proper way

for cl in range(C):
    # per-class scatter matrix: Xc^T Xc over the samples of class cl
    Sc = torch.matmul(normalized_feat_for_each_class_tensor[y == cl].T,
                      normalized_feat_for_each_class_tensor[y == cl])

    # number of samples in class cl
    Nc = torch.tensor(normalized_feat_for_each_class_tensor[y == cl].shape[0], dtype=torch.float32)
    S_w += Sc / (Nc - 1) / C
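
Putting it together, is something like the following the right approach? This is only a sketch of what I mean, with `feat` again standing in as a name for my `normalized_feat_for_each_class_tensor`, so the device is inherited from the output rather than hard-coded:

import torch

def within_class_scatter(feat, y, C, lamb):
    # feat: (batch, d) network output that requires grad
    d = feat.shape[1]
    # regularization term, created on feat's device; no requires_grad needed,
    # since gradients flow in through the per-class scatter matrices below
    S_w = lamb * torch.eye(d, device=feat.device, dtype=feat.dtype)
    for cl in range(C):
        Xc = feat[y == cl]             # samples belonging to class cl
        Sc = Xc.T @ Xc                 # per-class scatter matrix, shape (d, d)
        Nc = Xc.shape[0]
        S_w = S_w + Sc / (Nc - 1) / C  # out-of-place add, so S_w stays in the graph
    return S_w

If I understand correctly, the out-of-place `S_w = S_w + ...` (instead of `+=`) also avoids modifying a tensor that participates in the backward pass in place.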