Our RTX 2080 Ti GPUs do not support GPUDirect RDMA anyway, so all we need is something that works out of the box with reasonable bandwidth that won't become the bottleneck; we are doing P2P communication.
edit: well, I see here https://github.com/facebookincubator/gloo that it's supported, but I wonder if you still have anything further to say on integration with PyTorch.
We are about to test the Gloo ibverbs transport over RDMA and integrate it with PyTorch for HPC scenarios. For now, the Gloo ibverbs transport has not been integrated into PyTorch yet.
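Since the ibverbs transport isn't exposed through PyTorch yet, initializing the Gloo backend from PyTorch today goes over its TCP transport. A minimal single-process sketch of that setup (the address, port, and world size here are placeholder values, not anything prescribed by Gloo or PyTorch):

```python
import torch
import torch.distributed as dist

# Initialize the Gloo backend; with the stock PyTorch build this uses
# Gloo's TCP transport, since ibverbs is not wired into PyTorch yet.
# Address/port and world_size=1 are illustrative placeholders.
dist.init_process_group(
    backend="gloo",
    init_method="tcp://127.0.0.1:29500",
    rank=0,
    world_size=1,
)

t = torch.ones(4)
dist.all_reduce(t)  # trivial with a single rank: tensor is unchanged
print(t.sum().item())

dist.destroy_process_group()
```

With more ranks the same call performs the actual collective over whichever transport the backend was built with, which is why the choice of TCP vs. ibverbs matters for bandwidth.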