I am in the process of adding a new device to PyTorch (it will only be used for inference). At this point, I only want to register this device, use it to define a tensor, and pass that tensor across some torch APIs.
The use case would be something like this:
```python
device = torch.device("XYZ")
data = torch.rand(in_shape, device=device)
result = torch.compile(backend="my_backend")(data)
```
As you can see, the tensor defined for this device (`data` in the example) is passed to `torch.compile` to be used by the compiler backend I've registered with `_dynamo`. All I want to do in `my_backend` with the device attribute is to check its value and make some decisions if it is "XYZ" rather than CPU. So I don't need to actually store the tensor's data on the device or implement any tensor operations for it; I just need to pass the tensor through and check that attribute.
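For context, the device check I have in mind would look roughly like this inside the backend. This is only a sketch: `my_backend` and the `"XYZ"` name come from my example above, and here I branch on `device.type != "cpu"` as a stand-in for the real check:

```python
import torch

# Sketch of a custom dynamo backend that only inspects the inputs'
# device attribute and then falls back to eager execution.
# "my_backend" and the XYZ check are illustrative, from the example above.
def my_backend(gm: torch.fx.GraphModule, example_inputs):
    uses_custom_device = any(
        t.device.type != "cpu"
        for t in example_inputs
        if isinstance(t, torch.Tensor)
    )
    if uses_custom_device:
        # Device-specific decisions would go here.
        pass
    # Run the captured graph eagerly; no custom lowering needed.
    return gm.forward
```

Passing this callable directly as `torch.compile(fn, backend=my_backend)` is what I ultimately want to do once tensors can carry the "XYZ" device.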
I've registered the device and its corresponding XYZ dispatch key in `c10/core`, following this PR:
Add support for the ONNX Runtime Eager Mode backend by abock · Pull Request #58248 · pytorch/pytorch (github.com)
`device = torch.device("XYZ")` now returns an XYZ device, but the tensor initialization fails with this error:
```
Could not run 'aten::rand' with arguments from the 'XYZ' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build)
```
Am I taking the correct path here? How can I resolve this error given that I don't need any device-specific tensor implementation to create or store the tensor? I basically want to use whatever storage is already on the CPU and just change the tensor's device attribute.
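In case it clarifies the intent: one workaround I've considered (a sketch, not the C10-level registration path from the PR above) is to skip teaching `aten` ops about the new backend entirely, keep the data on CPU, and carry the intended device name as side-band metadata that the backend checks. `DeviceTagged` and the `"XYZ"` tag are hypothetical names:

```python
import torch
from dataclasses import dataclass

# Hypothetical workaround: real storage stays on CPU; the target device
# is attached as plain metadata instead of a true torch.device.
@dataclass
class DeviceTagged:
    tensor: torch.Tensor   # actual data, lives on CPU
    target_device: str     # e.g. "XYZ"

def wants_xyz(inp: DeviceTagged) -> bool:
    # The decision the compiler backend would make from the tag.
    return inp.target_device == "XYZ"

data = DeviceTagged(torch.rand(4, 4), target_device="XYZ")
```

This avoids the `aten::rand` dispatch error, but it means the backend has to accept the wrapper rather than a bare tensor, which is why I'd prefer a proper device registration if one is feasible without implementing kernels.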
I’d appreciate any help with this.