Hello, this is my first time encountering the UVA (Unified Virtual Addressing) feature in PyTorch. I would like to understand whether simply defining a variable with torch.Tensor
is enough for PyTorch to automatically make it UVA-accessible from the GPU, or whether I need to set other parameters to achieve this.
Hi,
There is no UVA support in PyTorch. You should explicitly handle moving your Tensors to and from the GPU. This more explicit control avoids subtle performance cliffs when using the GPU.
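To illustrate the explicit-move approach, here is a minimal sketch. It assumes only standard PyTorch; the pinned-memory and `non_blocking` options are optional performance knobs, not requirements, and the snippet falls back to CPU when no GPU is present:

```python
import torch

# Fall back to CPU so the snippet also runs on machines without a GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Allocate on the host. pin_memory=True (optional) enables faster,
# asynchronous host-to-device copies; it requires a CUDA runtime.
x = torch.randn(1024, 1024, pin_memory=torch.cuda.is_available())

# Explicit host -> device move. non_blocking=True only has an effect
# when copying pinned host memory to a CUDA device.
x_dev = x.to(device, non_blocking=True)

y = x_dev * 2.0  # the computation runs wherever the tensor lives

# Explicit device -> host move.
y_host = y.cpu()
print(y_host.shape)
```

Nothing is shared between host and device here: each `.to(...)` / `.cpu()` call is a visible copy, which is exactly the explicit control described above.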
@albanD is right, but it might also be worth checking out torch.cuda.CUDAPluggableAllocator, which lets you experiment with a custom CUDA memory allocator.
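As a sketch of how that could connect back to UVA: a pluggable allocator backed by cudaMallocManaged would give CUDA tensors managed (unified) memory. The shared-library name (`managed_alloc.so`) and the function names `managed_malloc` / `managed_free` below are assumptions — you would compile that library yourself; only `torch.cuda.memory.CUDAPluggableAllocator` and `change_current_allocator` are actual PyTorch APIs:

```python
import os
import torch

# Assumption: a shared library you build separately, e.g.
#   nvcc -shared -Xcompiler -fPIC managed_alloc.cu -o managed_alloc.so
# where managed_alloc.cu defines, with C linkage, the two functions
# PyTorch's pluggable-allocator interface expects:
#   void* managed_malloc(ssize_t size, int device, cudaStream_t stream)
#       -> calls cudaMallocManaged (unified/managed memory)
#   void  managed_free(void* ptr, ssize_t size, int device, cudaStream_t stream)
#       -> calls cudaFree
SO_PATH = "managed_alloc.so"  # hypothetical path, built separately

if torch.cuda.is_available() and os.path.exists(SO_PATH):
    allocator = torch.cuda.memory.CUDAPluggableAllocator(
        SO_PATH, "managed_malloc", "managed_free"
    )
    # Must be swapped in before any CUDA memory has been allocated.
    torch.cuda.memory.change_current_allocator(allocator)
    # From here on, CUDA tensors are backed by managed memory.
    t = torch.ones(4, device="cuda")
```

This is experimentation territory, not a supported UVA mode: PyTorch itself still treats the tensors as ordinary CUDA tensors, so you keep the explicit `.to(...)` semantics described above.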
I understand now. Thank you very much for your response.