How to push a torch::kInt tensor to the GPU?

Today I defined a custom C++ op for PyTorch.

It uses a constant tensor with dtype torch::kInt.

When I load my module onto the GPU, I get an error about two devices: cuda:0 and cpu.

So, how do I push the torch::kInt tensor to CUDA?

Thanks.

`tensor = tensor.to(torch::kCUDA);` should work.
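A minimal sketch of the fix, assuming a CUDA-enabled LibTorch build (the tensor name `t` and its shape are illustrative, not from the original op):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // An int32 constant; created on the CPU by default.
  torch::Tensor t = torch::ones({2, 2}, torch::kInt);

  if (torch::cuda::is_available()) {
    // Move it to the default CUDA device. Mixing a cpu tensor
    // with cuda:0 tensors in one op is what raises the device error.
    t = t.to(torch::kCUDA);
  }

  std::cout << t.device() << std::endl;
  return 0;
}
```

Note that `.to()` returns a new tensor, so the result must be reassigned.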

awesome! thx @ptrblck

Dear friend,

Is there a way to specify the GPU id in C++, like Python's `.to('cuda:0')`?

Also, how do I get a tensor's device in C++?

Thanks, @ptrblck

Yes, `.to({torch::kCUDA, 0});` should work, where the latter int is the device id.
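A short sketch covering both questions, assuming a CUDA-enabled LibTorch build: constructing an explicit `torch::Device` with a device id, and reading a tensor's device back with `.device()`:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  torch::Tensor t = torch::zeros({3}, torch::kInt);

  if (torch::cuda::is_available()) {
    // Explicit device id 0; the C++ analogue of Python's .to('cuda:0').
    t = t.to(torch::Device(torch::kCUDA, 0));
  }

  // Query which device the tensor currently lives on.
  std::cout << "device: " << t.device() << std::endl;
  return 0;
}
```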

Awesome, it works!

thx @ptrblck