How to use GPU while training a model?

I am running code to train a ResNet model in a Kaggle notebook. I have set the accelerator to GPU, so I haven't made a mistake there. I am training the model with the following code:

model.cuda()
for epoch in range(10):
  model.train(True)
  trainloss = 0
  for x, y in trainloader:
    x, y = x.cuda(), y.cuda()

    yhat = model(x)
    optimizer.zero_grad()
    loss = criterion(yhat, y)
    loss.backward()
    optimizer.step()
    trainloss += loss.item()

  print('Epoch {}  Loss: {}'.format(epoch, trainloss / len(trainloader.dataset)))
  model.eval()
  testcorrect = 0
  with torch.no_grad():
    for test_x, test_y in testloader:
      test_x, test_y = test_x.cuda(), test_y.cuda()
      yhat = model(test_x)
      _, z = yhat.max(1)
      testcorrect += (test_y == z).sum().item()

print('Model Accuracy: ', testcorrect / len(testloader.dataset))

As you can see, I have called .cuda() on both the model and the tensors (in the training part as well as the validation part). However, the GPU usage shown for the Kaggle notebook is 0%, while my CPU usage goes up to 99%. Am I missing any code that is required to train the model on the GPU?

I use the .to(device) method and it works perfectly fine for me.
Also, to make sure CUDA is actually accessible to your script, you can check it with the following code:

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(device)
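
For reference, here is a minimal sketch of how your training loop would look with the .to(device) pattern. It reuses the model, optimizer, criterion and trainloader names from your snippet (they are assumed to already exist), and adds a quick sanity check that the parameters really ended up on the GPU:

import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(device)  # should print "cuda" in a Kaggle notebook with the GPU accelerator on

model = model.to(device)                 # move the model parameters once
print(next(model.parameters()).device)   # sanity check: prints cuda:0 if the model is on the GPU

for epoch in range(10):
  model.train(True)
  trainloss = 0
  for x, y in trainloader:
    x, y = x.to(device), y.to(device)    # move every batch to the same device as the model
    optimizer.zero_grad()
    yhat = model(x)
    loss = criterion(yhat, y)
    loss.backward()
    optimizer.step()
    trainloss += loss.item()

The advantage of .to(device) over hard-coded .cuda() calls is that the same script still runs (on the CPU) when no GPU is available, instead of raising an error.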