Is torch.sparse_coo_tensor conversion to torchscript for running on mobile device supported?

Hi,

I tried to run a model containing a torch.sparse_coo_tensor, compiled with TorchScript, on an Android device, but got the following error message:

CppException: Could not run 'aten::_sparse_coo_tensor_with_dims_and_tensors' with arguments from the `SparseCPU` backend. 
This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 
`aten::_sparse_coo_tensor_with_dims_and_tensors` is only available for these backends: [BackendSelect, Functionalize, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy].

BackendSelect: registered at /home/anonymized/git/pytorch/build_android_arm64-v8a/aten/src/ATen/RegisterBackendSelect.cpp:726 [kernel]

Now I'm wondering: are sparse tensors generally unsupported on mobile devices?
Or is there something else I need to do to get the sparse_coo_tensor running on-device?
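For reference, here is a minimal sketch of the kind of model I'm talking about (the module and shapes are illustrative, not my actual model). It scripts and runs fine on desktop CPU; the failure only shows up on the Android build:

```python
import torch

class SparseModel(torch.nn.Module):
    # Illustrative minimal module: constructing a sparse COO tensor inside
    # forward() is the part that hits the missing-backend error on mobile.
    def forward(self, values: torch.Tensor) -> torch.Tensor:
        indices = torch.tensor([[0, 1, 2], [2, 0, 1]])
        sparse = torch.sparse_coo_tensor(indices, values, [3, 3])
        return sparse.to_dense()

scripted = torch.jit.script(SparseModel())
out = scripted(torch.tensor([1.0, 2.0, 3.0]))  # works on desktop CPU
# scripted._save_for_lite_interpreter("model.ptl")  # export step used for mobile
```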

Thank you!