Lots of work needs to be done to reach Nvidia's performance. cuDNN is a huge step for performance, and AMD does not have an equivalent yet.
There was OpenCL support for Torch, but pragmatically everyone just turned to CUDA devices. If someone were willing to build a cuDNN equivalent, maybe that would change.
It is 2019 now and not much has changed, but I'm reviving this topic anyway. TensorFlow benchmarks for AMD GPUs are pretty impressive (or are they misleading?). They suggest the comments above are no longer true(?). What do you think is the reason we still can't use AMD GPUs in PyTorch? http://blog.gpueater.com/en/2018/04/23/00011_tech_cifar10_bench_on_tf13/
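For context, my understanding is that the ROCm builds of PyTorch expose AMD GPUs through the same `torch.cuda` API used for NVIDIA devices (an assumption worth verifying against the ROCm docs), so existing code would not need to change. A minimal device-selection sketch:

```python
import torch

# Assumption: on a ROCm build of PyTorch, an AMD GPU shows up through
# the regular torch.cuda interface, so this check works unmodified.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors are placed the same way regardless of vendor; this falls
# back to CPU on machines with no supported GPU.
x = torch.randn(3, 3, device=device)
print(x.device, tuple(x.shape))
```

If that assumption holds, the question is less about the Python API and more about whether the ROCm backend (MIOpen as the cuDNN counterpart) matches cuDNN's kernel performance.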