Consuming another GPU's memory

When I run the following code, I see this behavior:

import torch
x = torch.Tensor([1, 2, 3])
x = x.cuda(1)  # some memory on GPU id 1 is allocated
x = x.tolist()  # some memory on GPU id 0 is allocated

Is this expected behavior?
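
In case it helps, here is a minimal sketch of how I'm checking per-device usage from within the process. Note that this is just an assumption on my part: torch.cuda.memory_allocated only reports tensor memory held by PyTorch's caching allocator, so the extra usage I see on GPU 0 in nvidia-smi (presumably a CUDA context) may not show up here at all.

import torch

x = torch.Tensor([1, 2, 3])
x = x.cuda(1)
x = x.tolist()

# Print PyTorch-tracked allocations per visible GPU; context overhead on
# GPU 0 (if that is what it is) would only be visible in nvidia-smi.
for dev in range(torch.cuda.device_count()):
    print(dev, torch.cuda.memory_allocated(dev))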