I wrote a custom loss and trained my model on the GPU, but while the loss is being computed, CPU RAM usage keeps rising. I would like to know what causes this problem. (Is it related to the computational graph?)
The following code reproduces the problem; the loss function I designed performs similar operations.
import torch
from tqdm import tqdm  # import the callable, not just the module

# running total kept on the GPU
b = torch.zeros(5).cuda()

# eight scalar tensors; each one requires grad through the .cuda() and multiply ops
d1 = 3 * torch.zeros(1, requires_grad=True).cuda()
d2 = 3 * torch.zeros(1, requires_grad=True).cuda()
d3 = 3 * torch.zeros(1, requires_grad=True).cuda()
d4 = 3 * torch.zeros(1, requires_grad=True).cuda()
d5 = 3 * torch.zeros(1, requires_grad=True).cuda()
d6 = 3 * torch.zeros(1, requires_grad=True).cuda()
d7 = 3 * torch.zeros(1, requires_grad=True).cuda()
d8 = 3 * torch.zeros(1, requires_grad=True).cuda()

# every iteration multiplies the eight tensors and accumulates the result in-place
for i in tqdm(range(1000000)):
    b += d1 * d2 * d3 * d4 * d5 * d6 * d7 * d8
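My working guess is that autograd records one multiplication chain per iteration and attaches it to b, so the graph (whose bookkeeping lives in host memory) grows every step. As a diagnostic only, I compared it against a variant where the per-iteration product is detached from the graph; the prod.detach() call below is my own sketch of a check, not part of my actual loss:

import torch
from tqdm import tqdm

b = torch.zeros(5).cuda()
# same eight gradient-requiring scalars as above, just kept in a list for brevity
ds = [3 * torch.zeros(1, requires_grad=True).cuda() for _ in range(8)]

for i in tqdm(range(1000000)):
    prod = ds[0] * ds[1] * ds[2] * ds[3] * ds[4] * ds[5] * ds[6] * ds[7]
    # detach() drops this iteration's autograd history,
    # so no graph accumulates on b across iterations
    b += prod.detach()

With this change the CPU RAM stays flat on my machine, which is why I suspect the computational graph, but I would like to understand what is actually happening.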