PyTorch C++ - GPU Pinned Memory

In Caffe, we can allocate pinned CPU memory with

caffe::Blob<float> data;
data.Reshape({size});
write_to_pointer(data.mutable_cpu_data()); // This is pinned memory
data.gpu_data(); // This copies cpu_data to gpu_data and is very fast because the host side is pinned

Does PyTorch C++ have the same paradigm? How can I correlate a torch::Tensor's CPU and GPU pointers?
Let's say I already have a tensor created like:

    torch::Tensor c = torch::zeros({62, 9, 62 * 3}, torch::requires_grad(false)).cuda();

I now want to use the internal pinned CPU memory so that copying back to the internal GPU memory will be faster.