Why is my deep learning model taking even more time in Colab?

I am using Google Colab to run my deep learning model, and I don't know why it is taking more time in Colab.
I am using this snippet for device selection when training the model. Please suggest something; it's taking 30 minutes for 1 epoch.
if args.use_gpu:
    use_cuda = torch.cuda.is_available()
    device = torch.device("cuda" if use_cuda else "cpu")
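A quick way to confirm which device is actually being selected is to print it out. Here is a minimal sketch; `pick_device` is a hypothetical helper (not from the snippet above) that falls back to `"cpu"` when `torch` is unavailable or no GPU is visible:

```python
def pick_device():
    """Return "cuda" when a GPU is visible to PyTorch, else "cpu".

    Hypothetical helper for illustration: it mirrors the
    `torch.cuda.is_available()` check from the snippet above and
    falls back to CPU if torch cannot be imported at all.
    """
    try:
        import torch
    except ImportError:
        return "cpu"
    return "cuda" if torch.cuda.is_available() else "cpu"


print(pick_device())
```

If this prints `cpu` inside Colab, the runtime is not configured with a GPU accelerator, which alone would explain a large slowdown.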



@ptrblck please help, sir.

Is the notebook set up to use a GPU? Because device will still hold "cpu" if the runtime settings aren't changed.


Yes, it is.
But there is no change in the training duration…

The difference between Colab and your local setup would come from all used processing units, i.e.:

  • used CPU
  • data storage and bandwidth to access it
  • used GPU
  • different libraries

You could profile the code to check which part of the training takes most of the time.

Thanks for the reply, sir,
but I don't understand the last line: how do I profile the code?
It's actually taking more time than on my local machine.

You could add timers to the code and check e.g. how long the data loading takes, as done in the ImageNet example. Also, torch.autograd.profiler might be useful.
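The timer approach can be sketched roughly like this (a minimal, framework-free illustration: `timed_epoch`, `train_step`, and `dummy_batches` are hypothetical stand-ins for a real training loop and DataLoader):

```python
import time


def timed_epoch(batches, train_step):
    """Split one epoch's wall time into data-loading time and compute time.

    Mimics the timing pattern used in the ImageNet example: the gap
    between the end of one step and the arrival of the next batch is
    counted as data-loading time.
    """
    data_time = 0.0
    step_time = 0.0
    end = time.perf_counter()
    for batch in batches:
        data_time += time.perf_counter() - end  # waiting for the next batch
        start = time.perf_counter()
        train_step(batch)                       # forward/backward/update
        step_time += time.perf_counter() - start
        end = time.perf_counter()
    return data_time, step_time


# Stand-in for a DataLoader and a training step.
dummy_batches = [list(range(1000)) for _ in range(5)]
data_t, step_t = timed_epoch(dummy_batches, lambda batch: sum(batch))
print(f"data loading: {data_t:.4f}s, compute: {step_t:.4f}s")
```

If `data_time` dominates, the bottleneck is the input pipeline (e.g. reading from Google Drive in Colab) rather than the GPU; in that case torch.autograd.profiler would then give a finer per-operator breakdown of the compute portion.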