PyTorch C++ Deployment Story: 2019

Thanks for the clarification. However, I am wondering which category devices like AMD GPU systems and Macs belong to.

I consider TorchScript an extremely flexible and powerful tool, and I am sad to see it go in the long term.

If I am right, ONNX export uses a TorchScript graph internally, so will this be replaced by a new solution too?

The ONNX exporter is moving towards a Dynamo-based workflow; you can follow the plans here: [ONNX] Isolate TorchScript-based code-base from Dynamo-based ONNX exporter for easier deprecation · Issue #103965 · pytorch/pytorch · GitHub
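
For concreteness, here is a minimal sketch contrasting the two export paths. `TinyModel` and the output file names are made up for illustration, and `torch.onnx.dynamo_export` assumes a PyTorch release where the Dynamo-based exporter is available (it shipped as a beta feature alongside 2.x):

```python
import torch
import torch.nn as nn

# Toy model purely for illustration; any nn.Module works the same way.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_input = torch.randn(1, 4)

# Legacy exporter: traces the model into a TorchScript graph internally
# before converting that graph to ONNX.
torch.onnx.export(model, (example_input,), "model_legacy.onnx")

# Dynamo-based exporter: captures the graph with TorchDynamo instead of
# TorchScript and saves the resulting ONNX program.
onnx_program = torch.onnx.dynamo_export(model, example_input)
onnx_program.save("model_dynamo.onnx")
```

Both produce an ONNX file, but only the first one goes through TorchScript under the hood, which is why it is tied to the deprecation plans tracked in the issue above.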

Are there any tutorials on deploying PyTorch models in C++ using torch._dynamo.export?

No, it hasn't been officially released yet.
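
For anyone who wants to experiment anyway, below is a minimal Python-side sketch of graph capture with torch._dynamo.export. The API is private and its calling convention has changed between releases; this sketch assumes the PyTorch 2.0-era form, where the function takes the callable and example inputs directly and returns the captured graph module together with its guards. There is not yet an officially supported way to consume the captured graph from C++.

```python
import torch

# A toy function to capture; any Python-callable PyTorch code works.
def fn(x):
    return torch.sin(x) + torch.cos(x)

example_input = torch.randn(8)

# Capture an FX graph of `fn` for the given example input (assumed
# 2.0-era signature: returns the graph module and the guards under
# which the capture is valid).
graph_module, guards = torch._dynamo.export(fn, example_input)

print(graph_module.graph)           # inspect the captured FX graph
print(graph_module(example_input))  # the captured graph module is callable
```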