Once CUDA is activated, is everything running on the GPU?

For computations like torch.cos and torch.cat.

During the execution of these functions, is it possible that the data being operated on is moved between the CPU and GPU?

You can move tensors between devices with the .to(device) method.

import torch

x = torch.randn(1)          # on CPU
x.to(torch.device('cuda'))  # on GPU
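Operations like torch.cos and torch.cat then run on whichever device their input tensors live on; PyTorch won't silently move the data between CPU and GPU for you. A minimal sketch, assuming a CUDA device is available:

if torch.cuda.is_available():
    x_gpu = torch.randn(3, device='cuda')  # created directly on the GPU
    y = torch.cos(x_gpu)                   # computed on the GPU
    z = torch.cat([x_gpu, y])              # also stays on the GPU
    print(y.device, z.device)              # cuda:0 cuda:0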

Nit: you would have to reassign the tensor, since to() is not an in-place operation on tensors.
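For example, a minimal sketch assuming a CUDA device is available:

import torch

x = torch.randn(1)              # on CPU
x = x.to(torch.device('cuda'))  # reassigned, now on the GPU
print(x.device)                 # cuda:0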


Ah, thanks for the correction! I assume that doesn’t apply when moving a net to the GPU? I.e. net.to('cuda') is OK?

Yes, that’s correct. The to() or cuda()/cpu() calls on nn.Modules will be applied recursively to all registered submodules, buffers, and parameters, so you don’t need to reassign the module.
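For example, a minimal sketch assuming a CUDA device is available (nn.Linear used just as a stand-in model):

import torch
import torch.nn as nn

net = nn.Linear(10, 2)                # parameters start on the CPU
net.to('cuda')                        # moves all parameters and buffers in place
print(next(net.parameters()).device)  # cuda:0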
