I’m trying to implement a C++ function that takes a Device and a Dtype argument and can be called from Python, but I get an error when I try this.
Here’s my C++ code:
If you have compiled torch yourself, you can see that in torch/csrc/autograd/generated/python_torch_functions.cpp the factory functions take the Python objects and unpack them.
The PyBind converters are only present for IntArrayRef and Tensor (torch/csrc/utils/pybind.h). You could try to register your own for torch::Device and torch::Dtype, or submit that as a PR (1.5 years ago, it was only Tensors).
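A custom converter for the dtype case might be sketched like this. This mirrors the pattern of the existing casters in torch/csrc/utils/pybind.h, but it relies on internal torch headers (torch/csrc/Dtype.h), so treat it as a starting point rather than a drop-in solution; only the Python-to-C++ direction is shown.

```cpp
#include <pybind11/pybind11.h>
#include <torch/csrc/Dtype.h>  // internal header: THPDtype, THPDtype_Check

namespace pybind11 {
namespace detail {

// Sketch of a custom caster that lets bound functions accept a
// torch.dtype argument directly as an at::ScalarType.
template <>
struct type_caster<at::ScalarType> {
 public:
  PYBIND11_TYPE_CASTER(at::ScalarType, _("torch.dtype"));

  bool load(handle src, bool) {
    PyObject* obj = src.ptr();
    if (THPDtype_Check(obj)) {
      // A torch.dtype object wraps the ScalarType we need.
      value = reinterpret_cast<THPDtype*>(obj)->scalar_type;
      return true;
    }
    return false;
  }
  // A matching static cast() would be needed to return dtypes to Python;
  // it is omitted here.
};

}  // namespace detail
}  // namespace pybind11
```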
A standard workaround is to just pass a (potentially empty) tensor and take the device and dtype from that.
Actually, I found a way to pass a dtype to C++.
In TorchScript (JIT), torch.dtype values are treated as Python ints, which corresponds to what happens in C++, where the dtypes are all ScalarType values. So I can just create a C++ function that takes an integer and casts it to a ScalarType to pass the dtype. But this method can only be called from TorchScript, because only in TorchScript are dtypes treated as Python ints.
Just a follow-up: I recently encountered a similar situation. The workaround I used is to cast the torch.dtype into a C++ string with str(), and on the Python end write a wrapper around it:
Here is the C++ part:
torch::Tensor func1(const std::vector<int64_t> &shape, const std::string &dtype){
    // convert the string -> torch::ScalarType here.
    auto options = torch::TensorOptions();
    if(dtype == "torch.float32"){
        options = options.dtype(torch::kFloat32); // dtype() returns a new TensorOptions
    } // ... extend this
    return torch::zeros(shape, options);
}
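For completeness, the binding that exposes this to Python could look like the sketch below. The module name cytnx matches the directory layout that follows, and c_func1 is the name the Python wrapper calls; the exact binding code is an assumption on my part.

```cpp
#include <torch/extension.h>

torch::Tensor func1(const std::vector<int64_t> &shape, const std::string &dtype);

PYBIND11_MODULE(cytnx, m) {
  // Exposed under a private name; the Python wrapper in __init__.py
  // converts torch.dtype -> str before calling it.
  m.def("c_func1", &func1,
        "Create a tensor from a shape and a dtype string");
}
```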
Here is the python module in the directory structure:
cytnx/
|- __init__.py
|- [the dynamic linking file compiled from pybind]
Inside __init__.py:
from .cytnx import *

## this is the API exposed to the user
def func1(shape, dtype):
    # calling the wrapper from pybind
    return c_func1(shape, str(dtype))
I would hope there will be direct support for this in the future.