C++ ATen command for accessing GPU tensors


I am building a CUDA extension for Python that uses PyTorch tensors. The tensors I want to access are allocated on the device (GPU tensors) from Python. Inside the extension, I want to read the elements of one of these tensors from host code, which normally requires the tensor to be available on the CPU. In the Python interface, tensors have a .cpu() method. The documentation says this method is also available in the C++ library, but when I try to call it, I get the following error:

/home/thompsjj/development/atomnet_v2/cuda/dev_2/voxelize.cu(145): error: class "at::Tensor" has no member "cpu"

So I’m really stuck here. How do I get the data out of my GPU tensor?

Can you post the full code of your CUDA extension? I can take a look at it.

Will, thank you for getting back to me. I just built a workaround by copying the tensor to the CPU and then using an accessor from the host code. That of course raises the bigger question of the cost of moving data back and forth between GPU and CPU, so I have since abandoned this approach. I now have a much bigger problem with the code I’m trying to compile, so I’ll post that in a new topic.

[]-indexing should work for a CUDA tensor. For example:

auto options = torch::TensorOptions().dtype(torch::kFloat32).device(torch::kCUDA, 0);
auto output = torch::rand({1, 3, 4, 5}, options);
assert(output.device().type() == torch::kCUDA);
for (int i = 0; i < output.size(2); i++) {
    for (int j = 0; j < output.size(3); j++) {
        output[0][0][i][j] = 1;
        output[0][1][i][j] = 2;
        output[0][2][i][j] = 3;
    }
}