CPU and GPU values synchronization


I’ve found out that calling .numpy() on a tensor and then transferring it to the GPU with .to(device) results in data synchronization between CPU and GPU: after augmenting the variable on the GPU, the saved NumPy vector gets updated too. How does PyTorch achieve that? Does it always synchronize variables between CPU and GPU? I doubt it, because I guess that would be slow.


Are you sure that you’re changing the GPU tensor? Remember that the .to operation is not in-place: only the returned tensor will be on the GPU, while the original one will remain on the CPU.
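A minimal sketch illustrating both effects: a NumPy array from .numpy() shares memory with the CPU tensor, while .to() returns a separate tensor and leaves the original untouched (copy=True is passed here only so the demo also forces a copy on a CPU-only machine):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

cpu_t = torch.zeros(3)
np_view = cpu_t.numpy()   # shares memory with cpu_t (works for CPU tensors)

# .to() is not in-place: it returns a new tensor on the target device
# and leaves cpu_t itself on the CPU.
moved = cpu_t.to(device, copy=True)

cpu_t += 1                # in-place op on the original CPU tensor

print(np_view)            # [1. 1. 1.] -- the NumPy view sees the change
print(moved)              # tensor([0., 0., 0.]) -- the copy is unaffected
```

So there is no CPU/GPU synchronization going on; the "update" you saw was just the NumPy view and the CPU tensor pointing at the same storage.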

You are right. That makes sense now!

I thought that my operations were running on the GPU, but in fact the .to operation was just lengthy; in reality I was augmenting the CPU tensor. Thanks a lot!