Automatic fallback to CPU

Feature suggestion: enable automatic fallback to CPU for layers where MPS implementations are not available (yet).

Case in point: I load a random PyTorch model, call `.to("mps")`, and upon running the model I get an error such as:

`'aten::_slow_conv2d_forward' is only available for these backends: SparseCPU … …`


We added such a feature in master already!
It will be in the next nightly build tomorrow (Saturday May 20th) and is already available if you build from source!
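For anyone landing here, the fallback can be enabled with the `PYTORCH_ENABLE_MPS_FALLBACK` environment variable. A minimal sketch of how this would look in practice (the toy `Conv2d` model here is my own illustration, not from the thread):

```python
import os

# Opt in to CPU fallback for ops that have no MPS kernel yet.
# The variable must be set before `import torch` to take effect.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

# Use MPS when available, otherwise stay on CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Conv2d(3, 8, kernel_size=3).to(device)
x = torch.randn(1, 3, 32, 32, device=device)

# With the fallback enabled, any op missing an MPS implementation
# (e.g. the slow conv2d path above) runs on CPU instead of raising.
y = model(x)
print(y.shape)  # torch.Size([1, 8, 30, 30])
```

Note that fallback ops incur a device-to-host copy each way, so this is a convenience for correctness, not a performance path.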

Hey @albanD, is there an ETA on the implementation of this operator? I'm curious how any of the CNN benchmarking was conducted without this operator. Thanks in advance.


The absence of this operator is linked to binary incompatibilities with torchvision that confuse some systems.
This should work fine in the upcoming 1.12.1