As far as I understand, PyTorch is pretty conservative in terms of what gets executed on the GPU: you need to explicitly call cuda(), otherwise operations run on the CPU.
Is this also true for the images loaded and transformed by torchvision.datasets? Say that after converting the PIL image to a tensor, I want to apply several more transformations. Are these executed automatically on CUDA if it is available, or do I have to call cuda() manually somewhere in the dataset? Would that work nicely with the DataLoader?
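For context, here is a minimal sketch of the pattern I am asking about. The dataset and the transform are just stand-ins for a torchvision.datasets class and a torchvision transform, not real torchvision code:

```python
import torch
from torch.utils.data import Dataset, DataLoader

# Stand-in for a torchvision.datasets class: the transform runs on the
# CPU inside __getitem__, once per sample, in the loader's worker process.
class FakeImageDataset(Dataset):
    def __init__(self, n=8, transform=None):
        self.data = torch.rand(n, 3, 32, 32)  # pretend these are decoded images
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        x = self.data[idx]
        if self.transform is not None:
            x = self.transform(x)  # CPU work, as far as I can tell
        return x

# Stand-in for a normalization transform.
normalize = lambda t: (t - 0.5) / 0.5

loader = DataLoader(FakeImageDataset(transform=normalize), batch_size=4)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for batch in loader:
    batch = batch.to(device)  # the explicit transfer I am asking about
    # ... forward pass, loss, etc.
```

Is this per-batch .to(device) (or .cuda()) call in the training loop the intended way to do it, or does something in the pipeline move data to the GPU for me?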
Thanks for the clarification.