For computations like torch.cos and torch.cat: during the execution of these functions, is it possible that the operated data is moved between CPU and GPU?
You can move between devices with the .to(device) method:
x = torch.randn(1)  # on CPU
x.to(torch.device('cuda'))  # on GPU
NIT: you would have to reassign the tensor, since to() is not an in-place operation on tensors.
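A minimal sketch of the fix (assuming a CUDA device is available):
import torch

x = torch.randn(1)  # on CPU
x = x.to(torch.device('cuda'))  # reassign: to() returns a new tensor on the GPU
print(x.device)  # cuda:0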
Ah thanks for the correction! I assume that doesn’t apply when moving a net to GPU? i.e. net.to('cuda') is ok?
Yes, that’s correct. The to() or cuda()/cpu() calls on nn.Modules will be applied recursively to all registered submodules, buffers, and parameters, so you don’t need to reassign the module.
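A minimal sketch illustrating this (the nn.Linear module is just a placeholder, assuming a CUDA device is available):
import torch.nn as nn

net = nn.Linear(4, 2)  # parameters start on CPU
net.to('cuda')  # moves all parameters and buffers in-place; no reassignment needed
print(next(net.parameters()).device)  # cuda:0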