QuestionAnsweringModel not working with GPU

I am using QuestionAnsweringModel from SimpleTransformers. When I run my code and review my processes in Windows Task Manager, Python is not using my GPU at all. I have included a code snippet to reproduce the problem. Any help is highly appreciated.

import torch
from simpletransformers.question_answering import QuestionAnsweringModel, QuestionAnsweringArgs

model_type = "bert"
model_name = "bert-base-cased"
model_args = QuestionAnsweringArgs()
train_args = {
    "n_best_size": 1,
    "overwrite_output_dir": True,
    "show_running_loss": True,
    "n_gpu": 3,
}
model = QuestionAnsweringModel(model_type, model_name, args=train_args, use_cuda=True)

I have tried
model.to(torch.device("cuda:0" if torch.cuda.is_available() else "cpu"))
to set the device to CUDA, but I am having no luck.
I have also looked at a previous topic on the same issue here. The recommendation was to update PyTorch, but I have already done that as well.

Thanks!

If model.to() was called with cuda:0 as its argument, the GPU will be used, since PyTorch won't move the parameters back to the CPU behind your back. Use nvidia-smi to check the GPU usage, or select the "CUDA" view in the Task Manager, since it doesn't show compute utilization by default.
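For reference, here is a minimal sketch (with a toy nn.Linear standing in for your model) showing how to confirm where the parameters actually live:

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Toy model; any nn.Module reports its parameters' device the same way.
model = nn.Linear(10, 2).to(device)
print(next(model.parameters()).device)  # prints cuda:0 if the move succeeded

# A forward pass on a CUDA tensor should also show up in nvidia-smi.
x = torch.randn(4, 10, device=device)
print(model(x).device)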

Thank you for the response! I have tried model.to('cuda:0') before, and I have checked nvidia-smi. One of my GPUs shows usage, but it has never been running my Python files. What I have not been able to check is the CUDA view in Task Manager. My Task Manager always shows GPU usage under 1%, and whatever is using it is not Python; all of my Python files run on the CPU. My naive assumption is that my notebook recognizes that a GPU is installed, but when it comes to running the model on one of the GPUs, it does not do that. Neither model.to("cuda:0") nor torch.cuda.set_device("cuda") makes that happen. torch.cuda.set_device("cuda") doesn't throw any error, but model.to("cuda:0") throws an error that says "'QuestionAnsweringModel' object has no attribute 'to'". So I need to find a workaround for this.

I'm unsure how, or if, it was working before, but it seems you might be facing a different issue and would have to debug what QuestionAnsweringModel is (it doesn't seem to be an nn.Module) and how to push it to the device.
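As a sketch of that debugging, assuming SimpleTransformers keeps the underlying Hugging Face model in the wrapper's .model attribute (true in recent versions, but worth verifying in yours), something like this should move it:

import torch
from simpletransformers.question_answering import QuestionAnsweringModel

# use_cuda asks the wrapper to handle GPU placement itself;
# cuda_device selects a specific GPU (check your version's signature).
qa = QuestionAnsweringModel("bert", "bert-base-cased",
                            use_cuda=torch.cuda.is_available(), cuda_device=0)

# The wrapper is not an nn.Module, so .to() must go on the inner model.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
qa.model.to(device)
print(next(qa.model.parameters()).device)

Note that with use_cuda=True the wrapper is normally expected to place the model on the GPU on its own once training or prediction starts, so if the parameters still end up on the CPU, the constructor arguments would be the first thing to check.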
In any case, once to('cuda') has been called on the model and the inputs, the GPU will be used and you would see it in nvidia-smi (unless an error is raised, of course).
Low GPU utilization could be caused by different bottlenecks in your code, e.g. slow data loading, a lot of small kernel launches, etc.
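As a rough check for the data-loading case, here is a self-contained timing sketch (a dummy model and dataset standing in for the real ones):

import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = nn.Linear(512, 512).to(device)
loader = DataLoader(TensorDataset(torch.randn(4096, 512)), batch_size=64)

data_time, compute_time = 0.0, 0.0
end = time.perf_counter()
for (batch,) in loader:
    data_time += time.perf_counter() - end  # time spent waiting for the next batch
    start = time.perf_counter()
    out = model(batch.to(device))
    if device.type == "cuda":
        torch.cuda.synchronize()  # CUDA kernels run async; wait before timing
    compute_time += time.perf_counter() - start
    end = time.perf_counter()

print(f"data loading: {data_time:.3f}s, compute: {compute_time:.3f}s")

If the data-loading total dominates, the GPU is simply starved and its utilization will stay low regardless of device placement.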