How to train torchvision.models.Densenet169 on GPU?

Hello, I'm trying to load the densenet169 model from torchvision.models and run it on the GPU. The program runs fine on the CPU, but I want to train on the GPU.

The changes I made were (rough sketch below):
Added device = torch.device("cuda:0") at the top
After loading the DenseNet model, I called model_ft.to(device)
In the training loop, I did the same for the data: images, labels = images.to(device), labels.to(device)
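In outline, the setup looks roughly like this (a minimal sketch with dummy data and a placeholder loss/optimizer, not my actual hw code):

import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda:0")

# load the pretrained model and move it to the GPU
model_ft = torchvision.models.densenet169(pretrained=True)
model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# dummy data standing in for the real dataset
train_loader = DataLoader(
    TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))),
    batch_size=4)

for images, labels in train_loader:
    # push the batch to the same device as the model
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    outputs = model_ft(images)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()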

But I'm getting the error RuntimeError: Expected object of backend CPU but got backend CUDA for argument #4 'mat1', which led me to this thread: model.cuda doesn't automatically detect layers in list.

I'm a newbie to PyTorch, so how do I solve this error? I can't share a lot of code since it's my hw and Turnitin might say I copied it from here. Please help!

Thanks!

Try to run the code with

CUDA_LAUNCH_BLOCKING=1 python script.py args

so that all CUDA operations are synchronized. The error message should then point you to the exact line of code that raises the error. Once you've found that line, check the device of the data with print(data.device) and make sure to push it to the right device.
Also, if you are using plain Python lists to store submodules, use nn.ModuleList instead, since layers held in a Python list are not registered as submodules and therefore won't be moved when you call model.to(device).
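For illustration (a minimal sketch, not the DenseNet code from the question): the layers stored in a plain Python list stay on the CPU after .to(device), which produces exactly this kind of backend mismatch, while nn.ModuleList moves them along with the rest of the model:

import torch
import torch.nn as nn

class BrokenNet(nn.Module):
    def __init__(self):
        super().__init__()
        # plain Python list: these layers are NOT registered as submodules,
        # so model.to(device) leaves their weights on the CPU
        self.layers = [nn.Linear(10, 10) for _ in range(3)]

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

class FixedNet(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList registers the layers, so .to(device) moves them as well
        self.layers = nn.ModuleList([nn.Linear(10, 10) for _ in range(3)])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

device = torch.device("cuda:0")
x = torch.randn(2, 10, device=device)

model = FixedNet().to(device)
print(next(model.layers[0].parameters()).device)  # cuda:0
out = model(x)  # works

model = BrokenNet().to(device)
print(next(model.layers[0].parameters()).device)  # cpu -> device mismatch in forward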