It's 2017 and we still can't use AMD hardware. Why?

Not all of us own NV hardware.

Some of us want to use these tools at home without having to buy new hardware or pay for AWS time.

Not all of us are research scientists with fat grants allowing us to buy the latest NV hardware.

When will the vendor lock-in stop?

Surely “free” tools should be open to all.


https://www.quora.com/What-is-the-underlying-reason-for-AMD-GPUs-being-so-bad-at-deep-learning

A lot of work needs to be done to reach Nvidia's performance. cuDNN is a huge performance step that AMD does not yet have an equivalent for.

There was OpenCL support for Torch, but pragmatically everyone just turned to CUDA devices. If someone were willing to build a cuDNN equivalent, maybe that would change.

It's not only a software problem, either. But apparently AMD is planning to produce DL-oriented hardware:
http://www.amd.com/en-us/press-releases/Pages/radeon-instinct-2016dec12.aspx

Please see this thread for more details. We are waiting for HIP to be released by AMD.

https://github.com/pytorch/pytorch/issues/488

It's 2019 now, and not much has changed, but I'm reviving this topic anyway. TensorFlow benchmarks for AMD GPUs are pretty impressive (or are they misleading?). That would suggest the comments above are no longer true. What do you think is the reason we still can't use AMD GPUs in PyTorch? http://blog.gpueater.com/en/2018/04/23/00011_tech_cifar10_bench_on_tf13/

https://rocm.github.io/pytorch.html
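For anyone trying the ROCm build linked above: as a rough sketch (assuming a ROCm build of PyTorch is installed), the AMD GPU shows up through the same `torch.cuda` API that Nvidia users call, so existing scripts mostly work unchanged. The check below just probes which backend, if any, is present:

```python
import torch

# On a ROCm build of PyTorch, the familiar CUDA API is reused:
# torch.cuda.is_available() returns True on a supported AMD GPU,
# and torch.version.hip is set instead of torch.version.cuda.
if torch.cuda.is_available():
    device = torch.device("cuda")  # maps to the AMD GPU under ROCm
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    x = torch.randn(3, 3, device=device)
    print(f"Using {backend} backend on {torch.cuda.get_device_name(0)}")
else:
    device = torch.device("cpu")  # no GPU backend available
    print("Falling back to CPU")
```

Tensors are then moved with the usual `.to(device)` calls, so the same training code should run on either vendor's hardware.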

Did you have a good experience using ROCm?