How to pass Python device and dtype to C++

I’m trying to implement a C++ function that takes a Device and a Dtype as arguments and will be called from Python, but I get an error when trying to do this.
Here’s my C++ code:

using tensor = torch::Tensor;  // type alias instead of a macro

tensor fun1(tensor x, torch::Device device, torch::Dtype dtype) {
	auto opt = torch::TensorOptions(device).dtype(dtype);
	return x.to(opt);
}

Here’s the Python part:

a=torch.tensor([1])
fun1(a,torch.device('cpu'),torch.float)

Here’s the error:

    fun1(a,torch.device('cpu'),torch.float)
TypeError: fun1(): incompatible function arguments. The following argument types are supported:
    1. (arg0: at::Tensor, arg1: c10::Device, arg2: c10::ScalarType) -> at::Tensor

Invoked with: tensor([1]), device(type='cpu'), torch.float32

I don’t think this works directly right now.

If you have compiled torch yourself, you can see in torch/csrc/autograd/generated/python_torch_functions.cpp that the factory functions take the Python objects and unpack them.

The PyBind converters are only present for IntArrayRef and Tensor (torch/csrc/utils/pybind.h). You could try to register your own for torch::Device and torch::Dtype – or submit that as a PR (1.5 years ago, it was only Tensors).

A standard workaround is to just pass a (potentially empty) tensor and take the device and dtype from that.
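The caller side of that workaround can be sketched in Python: instead of passing a device and dtype directly, pass an empty tensor that carries them (on the C++ side, the binding would then read `ref.options()`, or `ref.scalar_type()` and `ref.device()`). Plain PyTorch already supports the same pattern via `Tensor.to(other)`, so this is only a sketch of the idea, not the binding itself:

```python
import torch

# Sketch of the reference-tensor workaround from the Python side:
# an empty tensor is used purely as a carrier for device and dtype.
x = torch.tensor([1])
ref = torch.empty(0, device="cpu", dtype=torch.float32)

# Tensor.to(other) converts x to ref's dtype and device, which is
# what a C++ binding could do internally with x.to(ref.options()).
y = x.to(ref)
print(y.dtype, y.device)
```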

Best regards

Thomas

Ok, thank you very much. I’ll try that.

Actually, I found a way to pass a dtype to C++.
In TorchScript (JIT), torch.dtype values are treated as Python ints, which corresponds to what happens in C++, where the dtypes are all ScalarType values. So I can just create a C++ function that takes an integer and casts it to a ScalarType to pass the dtype. But this method can only be used from TorchScript, because only in TorchScript are dtypes treated as Python ints.

Just a follow-up: I recently encountered a similar situation. The workaround I used is to cast the torch.dtype to a C++ string with str(), and on the Python end write a wrapper around it:

Here is the C++ part:

torch::Tensor func1(const std::vector<int64_t> &shape, const std::string &dtype){
   // convert the string -> torch::ScalarType here
   auto options = torch::TensorOptions();
   if(dtype == "torch.float32"){
       // dtype() returns a new TensorOptions, so reassign
       options = options.dtype(torch::kFloat32);
   } // ... extend this

   return torch::empty(shape, options);
}

Here is the pybind wrapper part:

PYBIND11_MODULE(cytnx,m){
    m.def("c_func1",&func1);
}

Here is the Python module in the directory structure:
cytnx/
|- __init__.py
|- [the dynamic linking file compiled from pybind]

Inside __init__.py:

from .cytnx import *

## this is the API exposed to the user
def func1(shape, dtype):
     # calling the wrapper from pybind
     return c_func1(shape, str(dtype))
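For reference, the strings this wrapper forwards are easy to check; a quick sanity check of what `str()` produces for common dtypes, i.e. the keys the C++ string dispatch has to match against:

```python
import torch

# What str() yields for torch dtypes. Note that torch.float is only an
# alias for torch.float32, so both map to the same string key.
assert str(torch.float32) == "torch.float32"
assert str(torch.float) == "torch.float32"
assert str(torch.float64) == "torch.float64"
assert str(torch.int64) == "torch.int64"
print("all dtype strings match")
```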

I would hope there will be direct support for this in the future.

Another way to do this, which I realized later on, is to pass a py::object and then cast it to torch::Dtype (torch::ScalarType):

PYBIND11_MODULE(cytnx, m){
    m.def("func1", [](py::object dtype){
        torch::ScalarType type = torch::python::detail::py_object_to_dtype(dtype);
    });
}

Thank you for sharing!

If you cast the object to a dtype, you should first check that the conversion is valid.

if (THPDtype_Check(obj.ptr())) {
    torch::ScalarType scalar_type =
        reinterpret_cast<THPDtype*>(obj.ptr())->scalar_type;
}