After taking a look at the C++ Extension guide, I was wondering: is there a way to write C++ extensions for AMD’s MIOpen library?
Normally, I would use the following to add in some custom CUDA bits:
from torch.utils.cpp_extension import BuildExtension, CUDAExtension
How difficult would it be to add something like a MIOpenExtension? If this would be cumbersome, would the developers recommend using the C++ frontend to interact with custom AMD accelerator kernels?
I think the idea is that one would re-use the CUDAExtension.
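If reusing CUDAExtension is the route for your PyTorch/ROCm build, the setup.py would look like an ordinary CUDA one; on ROCm builds, recent PyTorch versions route the sources through the hipify translator automatically. A minimal sketch (the extension name and source file names here are placeholders, not from this thread):

```python
# Hypothetical setup.py sketch for a custom kernel extension.
# Assumes placeholder sources "my_op.cpp" and "my_op_kernel.cu";
# on a ROCm build of PyTorch, CUDAExtension is expected to hipify
# the .cu sources rather than requiring a separate MIOpenExtension.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="my_op",
    ext_modules=[
        CUDAExtension(
            name="my_op",
            sources=["my_op.cpp", "my_op_kernel.cu"],
        )
    ],
    # BuildExtension selects the right compiler toolchain
    # (nvcc for CUDA builds, hipcc for ROCm builds)
    cmdclass={"build_ext": BuildExtension},
)
```

This is only a sketch of the standard CUDAExtension workflow; whether it works out of the box depends on the PyTorch and ROCm versions installed.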
There is a patch regarding CUDAExtensions on ROCm (it seems to be stalled, but you could probably use it locally, and it looks like a relatively future-proof way):