CUDA extensions, scalar_t float and int mix

Hello,

So basically I want to port this project https://github.com/ayanc/fdscs to PyTorch.
The project has custom CUDA kernels written for TensorFlow, so I need to get these kernels to compile with PyTorch instead.
I have been following the tutorial on how to write a CUDA extension for PyTorch, but since it only seems to cover floats, I can't get it to compile with integers and floats mixed.

I'm writing my kernel as a template<typename scalar_t>, but as far as I can tell this basically gets resolved to double internally. So what I need is a way to represent an integer, which I guess should also go through some internal template?
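
For reference, this is roughly the pattern from the tutorial that I am following, boiled down to a toy kernel (the kernel and names are mine, not from the actual project):

```cpp
#include <torch/extension.h>

// Toy elementwise kernel: doubles every element.
// scalar_t is whatever type the dispatch macro below picks.
template <typename scalar_t>
__global__ void scale_kernel(const scalar_t* __restrict__ in,
                             scalar_t* __restrict__ out,
                             int64_t n) {
  const int64_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = in[i] * scalar_t(2);
}

torch::Tensor scale(torch::Tensor input) {
  auto out = torch::empty_like(input);
  const int64_t n = input.numel();
  const int threads = 256;
  const int blocks = (n + threads - 1) / threads;

  // First argument is the runtime dtype of the tensor; the lambda
  // is compiled once per floating type, with scalar_t bound to it.
  AT_DISPATCH_FLOATING_TYPES(input.scalar_type(), "scale", [&] {
    scale_kernel<scalar_t><<<blocks, threads>>>(
        input.data_ptr<scalar_t>(), out.data_ptr<scalar_t>(), n);
  });
  return out;
}
```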

And when dispatching, i.e. using AT_DISPATCH_ALL_TYPES to allow all types, which type should I pass as the first argument of the macro when my tensors have different dtypes?

I hope you can help, and thanks for looking at my problem 🙂

I think I found out a bit about scalar_t.

So I think that all arrays should be declared with scalar_t, and AT_DISPATCH_ALL_TYPES means that scalar_t can be any of the standard types, selected at runtime from the tensor's dtype. If you use AT_DISPATCH_FLOATING_TYPES instead, scalar_t is only resolved against floating-point types. What to use if you only have integer types, I have no idea.
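
To make that concrete, here is my mental model of what the macro does, as a hand-rolled sketch (this is not ATen's actual implementation, just an illustration):

```cpp
#include <torch/extension.h>

// Hand-rolled illustration of the dispatch idea: switch on the
// runtime dtype and run the body with scalar_t bound to the
// matching C++ type in each branch.
template <typename F>
void my_dispatch_floating(at::ScalarType t, F&& body) {
  switch (t) {
    case at::ScalarType::Float:
      body(float{});   // body sees scalar_t = float
      break;
    case at::ScalarType::Double:
      body(double{});  // body sees scalar_t = double
      break;
    default:
      TORCH_CHECK(false, "unsupported dtype for this kernel");
  }
}

// Usage: a generic lambda recovers the element type via decltype.
// my_dispatch_floating(x.scalar_type(), [&](auto zero) {
//   using scalar_t = decltype(zero);
//   /* launch kernel<scalar_t> here */
// });
```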

I think you should use AT_DISPATCH_INTEGRAL_TYPES, which is defined in Dispatch.h, if you only have integers.

And in a mixed situation, AT_DISPATCH_ALL_TYPES should do the job.
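
For example, something along these lines (an untested sketch with a made-up in-place kernel):

```cpp
#include <torch/extension.h>

// Made-up in-place kernel: adds 1 to every element.
template <typename scalar_t>
__global__ void incr_kernel(scalar_t* __restrict__ data, int64_t n) {
  const int64_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] += scalar_t(1);
}

void incr_(torch::Tensor t) {
  const int64_t n = t.numel();
  const int threads = 256;
  const int blocks = (n + threads - 1) / threads;

  // scalar_t ranges over the integer dtypes only
  // (uint8_t, int8_t, int16_t, int32_t, int64_t).
  AT_DISPATCH_INTEGRAL_TYPES(t.scalar_type(), "incr_", [&] {
    incr_kernel<scalar_t><<<blocks, threads>>>(t.data_ptr<scalar_t>(), n);
  });
}
```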

@ignaciogavier: That is a good idea, and thank you for providing the file link. Unfortunately, it is still not working with AT_DISPATCH_ALL_TYPES, since you have to give a single type as the first argument, which apparently sets scalar_t to that one type. Did you find a solution for mixed types in the meantime?
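
For reference, the closest workaround I have found is to dispatch on the floating tensor only and hard-code the integer tensor's element type, roughly like this (an untested sketch; the kernel and names are made up):

```cpp
#include <torch/extension.h>
#include <cstdint>

// Made-up kernel mixing dtypes: scalar_t comes from the dispatch,
// while the integer input is hard-coded as int32_t.
template <typename scalar_t>
__global__ void mixed_add_kernel(const scalar_t* __restrict__ floats,
                                 const int32_t* __restrict__ ints,
                                 scalar_t* __restrict__ out,
                                 int64_t n) {
  const int64_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = floats[i] + static_cast<scalar_t>(ints[i]);
}

torch::Tensor mixed_add(torch::Tensor floats, torch::Tensor ints) {
  TORCH_CHECK(ints.scalar_type() == torch::kInt32, "ints must be int32");
  auto out = torch::empty_like(floats);
  const int64_t n = floats.numel();
  const int threads = 256;
  const int blocks = (n + threads - 1) / threads;

  // Dispatch only on the floating tensor; the int side stays fixed.
  AT_DISPATCH_FLOATING_TYPES(floats.scalar_type(), "mixed_add", [&] {
    mixed_add_kernel<scalar_t><<<blocks, threads>>>(
        floats.data_ptr<scalar_t>(),
        ints.data_ptr<int32_t>(),
        out.data_ptr<scalar_t>(),
        n);
  });
  return out;
}
```

The trade-off is that the caller has to make sure the integer input really is int32 (e.g. via ints.to(torch::kInt32)). As far as I can tell, staying generic over both dtypes would need nested dispatch macros, where the inner scalar_t shadows the outer one, so you would have to alias the outer type first.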