C++ extension, cudaStream_t and pybind11


First off, I am running torch 0.4.1.post2 on a Linux box with CUDA 9.
I am currently in the process of writing a C++ extension. Everything, so far, is going great!
However, I now need to specify a CUDA stream on which to run my cudaMemcpy calls and my CUDA kernel. For this, I would like to use the torch.cuda.Stream instance returned by the torch.cuda.current_stream() method.

Here comes the question: is there a way, using pybind11, to cast a PyTorch torch.cuda.Stream instance into a C++ cudaStream_t?

Furthermore, I am aware of the at::CUDAStream class and hoped that it would work, but no luck. That would have been too easy!
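For reference, the workaround I am currently considering is passing the raw stream pointer through as an integer: if I understand the Python API correctly, a torch.cuda.Stream exposes a cuda_stream attribute holding the underlying stream pointer, which could be handed to the extension as an integer and reinterpret_cast on the C++ side. A minimal, self-contained sketch (the function name is hypothetical, and the stand-in declarations mirror CUDA's driver_types.h so the snippet compiles on its own; a real extension would #include <cuda_runtime.h> instead):

```cpp
#include <cstdint>

// Stand-ins for the CUDA runtime types (in a real extension,
// #include <cuda_runtime.h> provides these):
struct CUstream_st;                 // opaque struct declared by CUDA
using cudaStream_t = CUstream_st*;  // cudaStream_t is a pointer to it

// Python would pass torch.cuda.current_stream().cuda_stream (an integer
// holding the raw stream pointer); C++ reinterprets it as a cudaStream_t.
cudaStream_t stream_from_handle(std::uintptr_t handle) {
    return reinterpret_cast<cudaStream_t>(handle);
}
```

On the Python side the call would then look something like my_ext.my_op(x, torch.cuda.current_stream().cuda_stream), with the C++ function binding that argument as an integer via pybind11 rather than as a torch.cuda.Stream object.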

Many thanks to anyone who can help out!