Which AT_DISPATCH_ function can we use for mixed long and float type tensors (CUDA extension)?

There is a list of all the dispatch macros in Dispatch.h, but it lacks any explanation, and since I could not find any documentation on when to use which AT_DISPATCH_ function, my question is the following:

How do I select the right function to dispatch both long and float tensors for CUDA?

There are older but ultimately unanswered questions on this: "CUDA extensions, scalar_t float and int mix" and "ATen cuda kernel dispatch for type", both on the PyTorch Forums.

This is actually quite important as soon as you want to write a CUDA kernel whose argument tensors have more than one dtype.


Potential solutions:
If we use AT_DISPATCH_SWITCH as follows:

    AT_DISPATCH_SWITCH(self.scalar_type(), "op_name",
        AT_DISPATCH_CASE_INTEGRAL_TYPES([&] {
          op_integral<scalar_t>(iter);
        })
        AT_DISPATCH_CASE_FLOATING_TYPES([&] {
          op_floating<scalar_t>(iter);
        })
        AT_DISPATCH_CASE(kBool, [&] {
          op_bool(iter);
        })
    );

we appear to end up with three different operations (op_integral, op_floating, op_bool) instead of a single operation dispatched over the different dtypes.
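If, on the other hand, both dtypes genuinely vary independently, the dispatch macros can be nested: an outer dispatch binds the floating `scalar_t`, and an inner one binds the integral type, so a single op templated on both types is instantiated for every combination. A hand-rolled sketch of that nesting, again with toy names rather than the real ATen macros:

```cpp
#include <cstdint>

// Toy stand-in for at::ScalarType; not the real ATen enum.
enum class ScalarType { Float, Double, Int, Long };

// A single operation templated on both dtypes at once.
template <typename float_t, typename int_t>
double op(const void* f, const void* i) {
  return static_cast<double>(*static_cast<const float_t*>(f)) +
         static_cast<double>(*static_cast<const int_t*>(i));
}

// Inner "dispatch": the floating dtype is already bound as float_t,
// this switch binds the integral dtype.
template <typename float_t>
double dispatch_integral(ScalarType it, const void* f, const void* i) {
  switch (it) {
    case ScalarType::Int:  return op<float_t, int32_t>(f, i);
    case ScalarType::Long: return op<float_t, int64_t>(f, i);
    default:               return 0.0;
  }
}

// Outer "dispatch": binds the floating dtype, then recurses into the
// integral switch, mimicking nested AT_DISPATCH_* macros.
double dispatch_both(ScalarType ft, ScalarType it,
                     const void* f, const void* i) {
  switch (ft) {
    case ScalarType::Float:  return dispatch_integral<float>(it, f, i);
    case ScalarType::Double: return dispatch_integral<double>(it, f, i);
    default:                 return 0.0;
  }
}
```

The cost of nesting is combinatorial: every floating/integral pair gets its own template instantiation, which is why kernels that only need indices usually fix `int64_t` instead of dispatching the second dtype at all.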