Can CUDA unified memory be leveraged in tensors?

When declaring a tensor on the CPU, NumPy conversions by default share memory with the tensor (they are views, not copies).
This can be useful in some cases to leverage code already written in NumPy.
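For example, this is the CPU behaviour I mean (a minimal sketch; `t`, `a`, `b`, and `u` are just illustrative names):

```python
import numpy as np
import torch

t = torch.zeros(3)       # CPU tensor
a = t.numpy()            # NumPy view onto the same memory, no copy
a[0] = 42.0              # write through the NumPy array
print(t)                 # tensor([42., 0., 0.]) -- visible in the tensor

b = np.ones(3, dtype=np.float32)
u = torch.from_numpy(b)  # tensor view onto the NumPy buffer
u[1] = 7.0
print(b)                 # [1. 7. 1.] -- visible in the array
```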

When declaring a tensor on CUDA, that isn't the case. Is there any way of making this happen?
Or
do I have to switch to Caffe2, and if so, are there any examples of this?
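To show what I mean by the CUDA case: `.numpy()` only works on CPU tensors, so the data has to be copied back to host memory first, and the result no longer shares memory with the device tensor (sketch below, guarded by `torch.cuda.is_available()`):

```python
import torch

if torch.cuda.is_available():
    g = torch.zeros(3, device="cuda")
    try:
        g.numpy()        # fails: CUDA tensors can't be converted directly
    except TypeError as e:
        print(e)
    a = g.cpu().numpy()  # explicit device-to-host copy, not a shared view
    a[0] = 42.0
    print(g)             # tensor([0., 0., 0.], device='cuda:0') -- unchanged
```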

Using PyTorch 1.1 on Ubuntu 16.04.