Double dispatch for tensors

Is double dispatch automatically invoked for tensors, so calls are routed to the appropriate kernels, or does it have to be registered manually?
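
For instance, here's my current mental model (just a sketch, and I may well be wrong): a call like `at::add` seems to be dispatched twice, once by the dispatcher to a per-device kernel, and then inside that kernel by an `AT_DISPATCH_*` macro to a per-dtype specialization:

```cpp
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>

// First level: the dispatcher picks a per-device kernel from the
// tensor's dispatch key (CPU, CUDA, ...). This function stands in
// for such a CPU kernel.
void add_cpu_kernel(const at::Tensor& a, const at::Tensor& b, at::Tensor& out) {
  // Second level: AT_DISPATCH_* selects the dtype specialization,
  // binding scalar_t inside the lambda.
  AT_DISPATCH_FLOATING_TYPES(a.scalar_type(), "add_cpu", [&] {
    const scalar_t* pa = a.data_ptr<scalar_t>();
    const scalar_t* pb = b.data_ptr<scalar_t>();
    scalar_t* po = out.data_ptr<scalar_t>();
    // Assumes contiguous inputs, for brevity.
    for (int64_t i = 0; i < a.numel(); ++i) {
      po[i] = pa[i] + pb[i];
    }
  });
}
```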

Could you give an example of your use case?
Are you writing a custom extension or are you referring to the PyTorch backend in general?

I am writing my own extension, and I was wondering: just for tensor creation, e.g. `torch.randn(2, 2, device=device)`, is a DispatchStub always needed?
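
Concretely, would a plain dispatcher registration like the sketch below be enough on its own, or does `randn` still have to hit a DispatchStub somewhere? (Rough sketch only; I'm assuming the `PrivateUse1` key reserved for out-of-tree backends, and the exact `aten::randn` schema varies across PyTorch versions.)

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

// Hypothetical factory kernel for an out-of-tree device. A real
// extension would allocate storage on its own device here; this
// placeholder just defers to the CPU implementation.
at::Tensor my_randn(at::IntArrayRef size,
                    c10::optional<at::ScalarType> dtype,
                    c10::optional<at::Layout> layout,
                    c10::optional<at::Device> device,
                    c10::optional<bool> pin_memory) {
  return at::randn(
      size, at::TensorOptions().dtype(dtype.value_or(at::kFloat)).device(at::kCPU));
}

// Route aten::randn to the kernel above whenever a tensor with the
// PrivateUse1 dispatch key is requested.
TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  m.impl("randn", my_randn);
}
```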

I noticed that not all of the devices in the `DeviceType` / `Backend` enums are handled in DispatchStub.
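
The pattern I'm looking at is roughly the following (sketched from memory from `aten/src/ATen/native/DispatchStub.h`, so the details may be off):

```cpp
#include <ATen/TensorIterator.h>
#include <ATen/native/DispatchStub.h>

namespace at::native {

// Header side: declare a stub that holds per-device function pointers.
using my_kernel_fn = void (*)(at::TensorIteratorBase&);
DECLARE_DISPATCH(my_kernel_fn, my_stub);

// Exactly one translation unit defines the stub object.
DEFINE_DISPATCH(my_stub);

// Kernel files (compiled once per CPU capability, with CPU_CAPABILITY
// defined, for vectorized variants) register into the stub.
static void my_cpu_kernel(at::TensorIteratorBase& iter) {
  // ... elementwise loop over iter ...
}
REGISTER_DISPATCH(my_stub, &my_cpu_kernel);

} // namespace at::native

// Call site: the stub is indexed by DeviceType, e.g.
//   my_stub(iter.device_type(), iter);
// but it only has slots for a handful of device types (CPU
// per-capability, CUDA, HIP, and in newer versions a few more);
// other enum values hit an internal assert instead of dispatching.
```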

Ah, OK.
Sorry, I'm not familiar enough with the DispatchStub implementation, especially for new device types. :confused: