I am training a network, and I moved my network parameters and the data from my data loader onto CUDA, but when I run training, nvidia-smi shows 0% GPU utilization even though GPU memory is occupied. All of the computation runs on the CPU, since CPU usage is very high. In my training loop I checked every variable with `variable_name.is_cuda` and the network parameters with `next(net.parameters()).is_cuda`; they all return True. Can anyone help me with this issue?
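The checks in my training loop look roughly like this (simplified):

```python
# Simplified version of the device checks in my training loop
print(source_img.is_cuda)               # prints True
print(next(net.parameters()).is_cuda)   # prints True
```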
You will have to share the code for this one, or an abstract of it.
Here is an abstract of my code. I am moving the data and the network to CUDA, but everything still runs on the CPU. Can you help?
```python
import torch
import torch.optim as optim
from torch.autograd import Variable  # old-style API; recent PyTorch works with plain tensors

def to_var(x):
    # Move the tensor to the GPU if one is available
    if torch.cuda.is_available():
        x = x.cuda()
    return Variable(x)

net = network.generator()  # my generator model
params = net.parameters()
optimizer = optim.Adam(params, lrG, [beta1, beta2], amsgrad=True)
if torch.cuda.is_available():
    net.cuda()

for i, data in enumerate(t_loader):
    source_img, target_img = data[0], data[1]
    source_img = to_var(source_img)
    target_img = to_var(target_img)

    fake_generated_target = net(source_img)
    g_loss = torch.mean((fake_generated_target - 1) ** 2)

    optimizer.zero_grad()
    g_loss.backward()
    optimizer.step()
```
You have to call `net = network.generator().cuda()` first and only then give its parameters to the optimizer.
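That is, move the model to the GPU before constructing the optimizer, so the optimizer receives the GPU-resident parameters. A minimal sketch, reusing the names from your snippet:

```python
# Move the model to the GPU first ...
net = network.generator().cuda()
# ... and then hand its (now GPU-resident) parameters to the optimizer
optimizer = optim.Adam(net.parameters(), lrG, [beta1, beta2], amsgrad=True)
```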