Hello,
I am trying to train an SSDA (semi-supervised domain adaptation) model. Below is the relevant part of my training loop:
# Step 1: supervised loss on labeled source + labeled target data
data = torch.cat((im_data_s, im_data_t), 0)
target = torch.cat((gt_labels_s, gt_labels_t), 0)
output = G(data)         # G: feature extractor
out1 = F1(output)        # F1: classifier
loss = criterion(out1, target)
loss.backward(retain_graph=True)
optimizer_g.step()
optimizer_f.step()
zero_grad_all()

# Step 2: adversarial entropy loss on unlabeled target data
output = G(im_data_tu)   # <== this line leads to increasing GPU memory
loss_t = adentropy(F1, output, 0.1)
loss_t.backward()
optimizer_f.step()
optimizer_g.step()
G.zero_grad()
F1.zero_grad()
zero_grad_all()
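My current suspicion is the retain_graph=True on the first backward pass: it keeps the step-1 graph alive even though step 2 builds its own graph from a fresh forward pass. Here is a minimal sketch of the variant I am considering, with the same names as above and assuming nothing in step 2 reuses the step-1 graph. Would dropping retain_graph=True be the right fix?

# Sketch: same loop without retain_graph=True, so the step-1
# graph is freed as soon as its backward pass finishes
data = torch.cat((im_data_s, im_data_t), 0)
target = torch.cat((gt_labels_s, gt_labels_t), 0)
loss = criterion(F1(G(data)), target)
loss.backward()          # no retain_graph; graph freed here
optimizer_g.step()
optimizer_f.step()
zero_grad_all()

output = G(im_data_tu)   # fresh forward pass builds a new graph
loss_t = adentropy(F1, output, 0.1)
loss_t.backward()
optimizer_f.step()
optimizer_g.step()
zero_grad_all()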
The log shows GPU memory steadily increasing:
Step:100
[Memory usage:2938.139136 MB]
Step:200
[Memory usage:3676.467712 MB]
Step:300
[Memory usage:4413.829632 MB]
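For reference, the numbers above are produced by a logging helper roughly like the following (a reconstruction; log_memory is a hypothetical name, and I convert torch.cuda.memory_allocated() from bytes to MB by dividing by 1e6):

import torch

def log_memory(step):
    # torch.cuda.memory_allocated() returns the bytes currently
    # held by tensors on the default CUDA device
    mb = torch.cuda.memory_allocated() / 1e6
    print(f"Step:{step}")
    print(f"[Memory usage:{mb} MB]")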
I am not sure what is happening here. Is there a way to fix this memory growth?
Thanks!