GPU to CPU tensor transfer

I have two GPUs, and I am using cuda:1 for validation. The model and dataset run on cuda:1, but when I call .to('cpu') some memory gets used on cuda:0. Is this normal, or am I making a mistake?

Hi,

It may. You want to make sure that you set the current CUDA device to 1 (for example with torch.cuda.set_device(1)) so that nothing silently initializes a context on device 0.
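For example, a minimal sketch of what that could look like (the model and tensor shapes below are just placeholders for your validation code):

```python
import torch

# Set the current device first, so later CUDA calls (pinned-memory
# transfers, cuDNN init, etc.) don't create a context on cuda:0.
torch.cuda.set_device(1)

model = torch.nn.Linear(10, 10).to('cuda:1')   # placeholder model
x = torch.randn(8, 10, device='cuda:1')        # placeholder batch

with torch.no_grad():
    out = model(x)

# Moving the result back to the CPU should not touch cuda:0
# once the current device is set to 1.
out_cpu = out.to('cpu')

print(torch.cuda.memory_allocated(0), torch.cuda.memory_allocated(1))
```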
Note that if you never want to use the other device, the best way is to set the env variable CUDA_VISIBLE_DEVICES=1, and GPU 0 will not even be visible to your process anymore.
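Something along these lines (keep in mind the variable must be set before CUDA is initialized, ideally before importing torch, and the remaining GPU is then re-indexed as cuda:0 inside the process):

```python
import os

# Hide physical GPU 0 from this process; only GPU 1 remains visible.
# Must happen before any CUDA initialization.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch

print(torch.cuda.device_count())  # 1 -- the physical GPU 1 now appears as cuda:0
x = torch.randn(8, 10, device='cuda:0')  # actually runs on the physical GPU 1
```

You can also export it in the shell instead (CUDA_VISIBLE_DEVICES=1 python validate.py), which avoids touching the code at all.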