PyTorch recently announced PyTorch 2.0, which is awesome:
- It canonicalizes 2000+ operators down to ~250 primitive ops and ~750 canonical ATen ops, which should cut the implementation work for a backend roughly in half
- It uses TorchDynamo, which makes graph acquisition faster and more reliable
- It generates faster code through TorchInductor
However, as far as I understand, MPS support is still at a very early stage, and according to the General MPS op coverage tracking issue there are still many ATen ops left to implement.
So did PyTorch 2.0 change any of the codebase or direction that the MPS backend has been following since PyTorch 1.12? (I mean, is TorchDynamo compatible with the current MPS graph acquisition methods?)
Also, TorchInductor compiles into Triton, which currently seems to support only CUDA (and maybe ROCm in the future). With PyTorch 2.0 coming along, how is this going to affect MPS support?
Sorry if I've misunderstood something; I just want to know the general direction. Thanks!