CUDA extensions: mixing float and int with scalar_t

Hello,

So basically I want to port this project https://github.com/ayanc/fdscs to PyTorch.
The project has custom CUDA kernels written for TensorFlow, so I need to get these CUDA kernels to compile with PyTorch instead.
I have been following the tutorial on how to write a CUDA extension for PyTorch, but since it only seems to mention floats, I can't get it to compile properly with integers and floats mixed.

I'm creating the template<typename scalar_t>, but as far as I can tell this basically gets translated to double internally. So what I need is a way to represent an integer, which I guess should also go through some internal template?

And when dispatching, i.e. using AT_DISPATCH_ALL_TYPES to allow all types, which type should I pass as the first argument of the macro?

Hope you can help, and thanks for taking a look at my problem 🙂


I think I found out a bit about scalar_t.

So I think all arrays should be declared with scalar_t, and AT_DISPATCH_ALL_TYPES means scalar_t can be bound to any type, matched against the dtype of the tensor you dispatch on. If you use AT_DISPATCH_FLOATING_TYPES instead of ALL_TYPES, scalar_t is only matched against floating-point types. What is used if you only have integer types, I have no idea.
