Hi guys,
I wonder whether there is a way to disable multi-GPU peer-to-peer (P2P) access in PyTorch.
Thanks!
In case you are using NCCL, you could use NCCL_P2P_DISABLE=1.
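One way to apply this is to set the variable from Python before the NCCL communicator is created. A minimal sketch (it only sets and reads the environment variable; NCCL_P2P_DISABLE is the switch mentioned above):

```python
import os

# NCCL reads this variable when the communicator is initialized, so it must
# be set before torch.distributed.init_process_group(backend="nccl") runs.
os.environ["NCCL_P2P_DISABLE"] = "1"

print(os.environ["NCCL_P2P_DISABLE"])  # → 1
```

Alternatively, set it on the command line for the launch, e.g. `NCCL_P2P_DISABLE=1 python your_script.py`.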
Thanks!
Are there any available solutions for cudaMemcpy (device to device)?
What is your use case? Are you using UVA with cudaMemcpyDeviceToDevice?
If you don’t want to use P2P access, you could copy the data to the host and then from the host to the other device.
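That staging approach can be sketched in PyTorch as follows (the helper name `staged_copy` is my own; the example runs on CPU tensors, but the same call chain applies between two GPU devices such as `cuda:0` and `cuda:1`):

```python
import torch

def staged_copy(src: torch.Tensor, dst_device) -> torch.Tensor:
    # Avoid a direct device-to-device (P2P) transfer by staging the copy
    # through host memory: device -> CPU -> destination device.
    return src.to("cpu").to(dst_device)

t = torch.arange(4)
print(staged_copy(t, "cpu"))  # same values; no P2P path is used
```

This trades the direct P2P transfer for two copies over PCIe/host RAM, which is slower but works when peer access is disabled or unavailable.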
Thanks! It’s just a benchmark test for research purposes.