High CPU usage when running my code on GPU

Hello everyone,

Although I have done a lot of research on this question, I have not been able to solve the problem yet. I am new to PyTorch, and I am trying to run basic GAN training code from GitHub. Although every operation in the code runs on the GPU, CPU usage is at 100% (or even more) during training.

Here is the original code:

I updated this code, changing the generator and discriminator. In addition, to use my own data, I added the following data-loading code, which I checked against the official PyTorch documentation.

import torch
from torchvision import datasets, transforms

def Read_LMDB(root, classes):
    # Load LSUN data (stored in LMDB format), resized and normalized to [-1, 1].
    d1 = datasets.LSUN(root=root, classes=classes, transform=transforms.Compose(
        [transforms.Resize((opt.img_size, opt.img_size)),
         transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]))
    # num_workers defaults to 0, so batches are loaded in the main process.
    dataloader = torch.utils.data.DataLoader(d1, batch_size=opt.batch_size, shuffle=True)
    return dataloader

def Read_from_folder(folder_name):
    # Load images from a directory tree with one subfolder per class.
    imagefolder = datasets.ImageFolder(folder_name, transform=transforms.Compose(
        [transforms.Resize(opt.img_size),
         transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]))
    dataloader = torch.utils.data.DataLoader(imagefolder, batch_size=opt.batch_size, shuffle=True)
    return dataloader

However, the problem remains: no matter which of these functions I use, CPU usage stays at 100%. Such high CPU usage does not seem normal to me, since training is performed on the GPU (and there is no problem on the GPU side: memory usage, utilization, and training speed are all fine).
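In case it helps clarify what I mean, here is a minimal self-contained sketch of the CPU-related knobs I came across while researching: `torch.set_num_threads` (which limits PyTorch's intra-op CPU threads) and the `num_workers` argument of `DataLoader`. The random tensors and the specific numbers here are stand-ins for my real LSUN/image-folder data, not my actual settings:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Limit the number of intra-op CPU threads PyTorch may use.
# (By default PyTorch can use every core, which can show up as ~100% CPU
# even when the heavy math runs on the GPU.)
torch.set_num_threads(2)

# Stand-in for my real dataset: 64 random "images" with dummy labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.zeros(64, dtype=torch.long)
dataset = TensorDataset(images, labels)

# num_workers=0 loads batches in the main process; num_workers > 0 spawns
# that many loader subprocesses, each of which also consumes CPU.
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=0)

batch_shapes = [x.shape for x, _ in loader]
print(len(batch_shapes))        # 64 / 16 = 4 batches
print(torch.get_num_threads())  # 2
```

Is limiting the thread count like this the right way to keep CPU usage down, or does it just slow the data pipeline?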

Could anyone suggest how to reduce the CPU usage? Thank you!