to(device('cuda:1')) still leaks memory on cuda:0

I have a model trained on cuda:0 with PyTorch 0.4.
When I try to reload the model on cuda:1 with PyTorch 1.2, memory is still allocated on both GPUs.
I have tried map_location to map from cuda to cpu, and from cuda:0 to cuda:1.
Nothing has worked so far.
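For reference, a minimal sketch of the map_location approaches being described (the tiny `Linear` model here is only an illustration, not the actual model from the question):

```python
import io
import torch

# Hypothetical small model standing in for the real one.
model = torch.nn.Linear(4, 2)

# Save the state_dict (saving state_dicts is more portable across
# PyTorch versions than saving the whole model object).
buffer = io.BytesIO()
torch.save(model.state_dict(), buffer)
buffer.seek(0)

# map_location remaps storages at load time. Mapping everything to CPU
# first avoids touching the device the checkpoint was saved on:
state_dict = torch.load(buffer, map_location="cpu")

# On a multi-GPU machine the remap could instead be device-to-device:
#   state_dict = torch.load(path, map_location={"cuda:0": "cuda:1"})
#   state_dict = torch.load(path, map_location="cuda:1")

new_model = torch.nn.Linear(4, 2)
new_model.load_state_dict(state_dict)
```

Loading to CPU first and then calling `new_model.to("cuda:1")` is the usual way to keep the original device out of the picture entirely.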

Are you sure the model is still on cuda:0, i.e. are you seeing the same amount of used memory on both GPUs?
The CUDA context might have been created on GPU0, which will take some memory.

If you don’t want to use GPU0 at all, you could mask it via the CUDA_VISIBLE_DEVICES environment variable.


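A minimal sketch of masking GPU0, either from the shell (`CUDA_VISIBLE_DEVICES=1 python script.py`) or from inside the script itself:

```python
import os

# Hide GPU0 from this process. This must happen before CUDA is
# initialized, so set it before importing torch (or export it in the
# shell before launching the script).
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch

# With GPU0 masked, the physical GPU1 is the only visible device and is
# addressed as cuda:0 inside this process.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```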
Note that GPU1 will be mapped to cuda:0 in that case.