Not quite. We have talked about it a lot, though, and the main issue is that UVM has different semantics per vendor, so it's not clear what PyTorch should be doing.
There’s a bunch of interesting history: Support for NVIDIA UVM technology · Issue #44380 · pytorch/pytorch · GitHub
The MemPool API should allow you to use managed memory, if that’s the use case. However, since Jetson devices use physically unified memory by design, I’m unsure what the use case is, as no further information was given.
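To make the MemPool suggestion concrete, here is a hedged sketch of how one could route PyTorch allocations through `cudaMallocManaged` by pairing a pluggable allocator with a `MemPool`. The file name `managed_alloc.so` and the function names `uvm_malloc`/`uvm_free` are illustrative, and the `MemPool`/`use_mem_pool` APIs are experimental and may differ across PyTorch versions:

```python
# Step 1 (done once, outside Python): compile a tiny allocator shim, e.g.
#   nvcc -shared -Xcompiler -fPIC managed_alloc.cu -o managed_alloc.so
# where managed_alloc.cu contains roughly the following (signatures follow
# the CUDAPluggableAllocator convention):
managed_alloc_src = r"""
#include <sys/types.h>
#include <cuda_runtime_api.h>
extern "C" {
// Allocate UVM (managed) memory instead of plain device memory.
void* uvm_malloc(ssize_t size, int device, cudaStream_t stream) {
    void* ptr = nullptr;
    cudaMallocManaged(&ptr, size, cudaMemAttachGlobal);
    return ptr;
}
void uvm_free(void* ptr, ssize_t size, int device, cudaStream_t stream) {
    cudaFree(ptr);
}
}
"""

# Step 2: have PyTorch use the shim for a scoped region of allocations.
# Requires a CUDA build of PyTorch and a GPU, so it is wrapped in a function
# here rather than run at import time.
def use_managed_pool():
    import torch
    alloc = torch.cuda.memory.CUDAPluggableAllocator(
        "managed_alloc.so", "uvm_malloc", "uvm_free")
    pool = torch.cuda.MemPool(alloc._allocator)
    with torch.cuda.use_mem_pool(pool):
        # Tensors created inside this block come from cudaMallocManaged,
        # so they can be oversubscribed beyond device memory and migrated
        # on demand (with vendor-specific paging behavior, per the above).
        t = torch.empty(1 << 20, dtype=torch.uint8, device="cuda")
    return t
```

Whether oversubscription actually pages gracefully is exactly the vendor-specific part discussed above, so treat this as a starting point rather than a supported configuration.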
I was playing with Qwen3-Next on a GH200, and it was surprising to me that it only uses the 96GB of the H100 when the GH200 has 480GB available.