Mixing CUDA and SparseCUDA

I am trying to implement a custom sparse linear layer but am running into the error below:

RuntimeError: Expected object of backend CUDA but got backend SparseCUDA for argument #2 'mat2'
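This is not my exact code, but a minimal sketch that reproduces the same error — a dense CUDA input multiplied against a weight stored as a sparse CUDA tensor (shapes and names are just illustrative):

```python
import torch

x = torch.randn(4, 16, device="cuda")                   # dense CUDA activations
weight = torch.randn(16, 8, device="cuda").to_sparse()  # weight kept as a sparse COO tensor

# Fails here: mat2 (the second operand) is SparseCUDA while a dense CUDA tensor is expected.
out = torch.mm(x, weight)
```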

What is the best way to deal with this mismatch? Do I need to convert everything (inputs and biases) to sparse tensors as well?