NotImplementedError: Could not run 'aten::as_strided' with arguments from the 'SparseCUDA' backend.

NotImplementedError: Could not run 'aten::as_strided' with arguments from the 'SparseCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'aten::as_strided' is only available for these backends: [CPU, CUDA, Meta, QuantizedCPU, QuantizedCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

Hi,

This is a limitation of PyTorch's sparse tensor backend. I'd suggest asking this question in the appropriate forum category, as it isn't related to distributed features.

I'd also suggest including test code that reproduces the issue when you post your question: the error message alone doesn't give enough context to tell what you're trying to accomplish or what the best way to address it would be.
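For illustration, a hypothetical minimal repro along these lines would be enough (a sketch only; it assumes a sparse COO tensor, and on CPU the backend named in the error is SparseCPU rather than SparseCUDA):

```python
import torch

# Build a small 2x2 sparse COO tensor (assumed example, not your actual code).
indices = torch.tensor([[0, 1], [1, 0]])
values = torch.tensor([1.0, 2.0])
sparse = torch.sparse_coo_tensor(indices, values, (2, 2))

try:
    # 'aten::as_strided' has no sparse kernel, so dispatch fails here.
    sparse.as_strided((2, 2), (2, 1))
except NotImplementedError as e:
    print(e)
```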

Given my understanding of how sparse tensors work, the concept of strides doesn't apply to them, so it makes sense that you can't use as_strided on them.
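If your code genuinely needs a strided view, one possible workaround (a sketch, assuming the tensor is small enough to materialize in memory) is to convert to a dense tensor first with to_dense():

```python
import torch

indices = torch.tensor([[0, 1], [1, 0]])
values = torch.tensor([1.0, 2.0])
sparse = torch.sparse_coo_tensor(indices, values, (2, 2))

# A sparse COO tensor stores (indices, values) pairs rather than a flat
# strided buffer, so it has no strides to manipulate; even sparse.stride()
# raises an error. Densifying restores a normal strided layout:
dense = sparse.to_dense()               # materializes the full 2x2 tensor
window = dense.as_strided((2, 2), (2, 1))  # now a valid strided view
print(window)
```

Whether that's acceptable depends on your use case; for large tensors, densifying defeats the purpose of the sparse format, which is another reason a repro showing your actual goal would help.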