I have a C++ extension that uses the PrivateUse1 dispatch key. I compiled PyTorch release/1.11 from source after adding a new device, and I used gen_backend_stubs.py to register ops for this backend.
I am trying to run:
x = torch.randn(2, 2)
y = x.to('testdev:0')
But I get
NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'PrivateUse1' backend.
Is there a way I can use this dispatch key with the new device I added?
I get the same error when I register empty.memory_format under AutogradPrivateUse1 instead.
If it helps, the full error I see is:
NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'PrivateUse1' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [Dense, Negative, ZeroTensor, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseXPU, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCPU, NestedTensorCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].
CPU: registered at /localhome/abhullar/pytorch/build/aten/src/ATen/RegisterCPU.cpp:37386 [kernel]
Meta: registered at /localhome/abhullar/pytorch/build/aten/src/ATen/RegisterMeta.cpp:31637 [kernel]
QuantizedCPU: registered at /localhome/abhullar/pytorch/build/aten/src/ATen/RegisterQuantizedCPU.cpp:1294 [kernel]
BackendSelect: registered at /localhome/abhullar/pytorch/build/aten/src/ATen/RegisterBackendSelect.cpp:726 [kernel]
Python: registered at /localhome/abhullar/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:133 [backend fallback]
Named: registered at /localhome/abhullar/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: fallthrough registered at /localhome/abhullar/pytorch/aten/src/ATen/ConjugateFallback.cpp:22 [kernel]
Negative: fallthrough registered at /localhome/abhullar/pytorch/aten/src/ATen/native/NegateFallback.cpp:22 [kernel]
ZeroTensor: fallthrough registered at /localhome/abhullar/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:90 [kernel]
ADInplaceOrView: fallthrough registered at /localhome/abhullar/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradCPU: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradCUDA: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradXLA: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradMPS: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradIPU: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradXPU: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradHPU: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradLazy: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradPrivateUse1: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradPrivateUse2: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
AutogradPrivateUse3: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:14210 [autograd kernel]
Tracer: registered at /localhome/abhullar/pytorch/torch/csrc/autograd/generated/TraceType_2.cpp:14069 [kernel]
AutocastCPU: fallthrough registered at /localhome/abhullar/pytorch/aten/src/ATen/autocast_mode.cpp:481 [backend fallback]
Autocast: fallthrough registered at /localhome/abhullar/pytorch/aten/src/ATen/autocast_mode.cpp:324 [backend fallback]
Batched: registered at /localhome/abhullar/pytorch/aten/src/ATen/BatchingRegistrations.cpp:1064 [backend fallback]
VmapMode: fallthrough registered at /localhome/abhullar/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
Functionalize: registered at /localhome/abhullar/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:89 [backend fallback]
PythonTLSSnapshot: registered at /localhome/abhullar/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:137 [backend fallback]
I have registered the op using:
TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  m.impl("empty.memory_format", &custom_empty_mem_format);
}
Resolved: the issue was that I did not load my C++ extension before using the device, so the kernels it registers were never visible to the dispatcher.
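For anyone hitting the same error: the failure mode can be sketched with a pure-Python analogy (this is an illustration of the registration pattern, not PyTorch's actual dispatcher). TORCH_LIBRARY_IMPL registers kernels into a table as a side effect of the extension library being loaded; if the library is never loaded, lookup fails with NotImplementedError. All names below are made up for the sketch.

```python
# Toy dispatch table keyed by (op name, backend key).
KERNELS = {}

def register(op, backend, fn):
    # Analogue of m.impl(...) inside TORCH_LIBRARY_IMPL: runs when the
    # extension's shared library is loaded.
    KERNELS[(op, backend)] = fn

def dispatch(op, backend, *args):
    try:
        return KERNELS[(op, backend)](*args)
    except KeyError:
        raise NotImplementedError(
            f"Could not run {op!r} with arguments from the {backend!r} backend."
        )

# Extension not loaded yet: dispatch fails, mirroring the error above.
try:
    dispatch("aten::empty.memory_format", "PrivateUse1", (2, 2))
except NotImplementedError as e:
    print("before loading:", e)

# "Loading the extension" runs the registration; dispatch now succeeds.
register("aten::empty.memory_format", "PrivateUse1",
         lambda size: [[0.0] * size[1] for _ in range(size[0])])
print(dispatch("aten::empty.memory_format", "PrivateUse1", (2, 2)))
```

The practical fix is the same shape: make sure the extension's .so is loaded (e.g. by importing the Python module that links it) before calling `.to()` on the new device.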