Which backend is used by default on GPU/CPU?


The CPU is used by default; you can verify this by creating a tensor without specifying a device and printing its device attribute: `print(torch.randn(1).device)`.
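A minimal sketch of that check, plus explicitly placing a tensor on the GPU (the `"cuda"` device string assumes a CUDA-capable build and GPU are present):

```python
import torch

# Tensors are allocated on the CPU unless a device is specified
x = torch.randn(1)
print(x.device)  # cpu

# Explicitly place a tensor on the GPU, if one is available
if torch.cuda.is_available():
    y = torch.randn(1, device="cuda")
    print(y.device)  # cuda:0
```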

Thanks for your reply. What I want to know is which backend is used by default on the GPU (an NVIDIA 2080 Ti): is it cuDNN or CUDA?

cuDNN is a library that accelerates specific workloads, such as convolutions, and will be used automatically if it's available, while CUDA is the platform used to execute code on the GPU.
The prebuilt binaries ship with both; you would only need to install cuDNN separately into your local CUDA toolkit if you build PyTorch from source and want to use it.
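One way to see what your installation can use is to query the `torch.backends.cudnn` flags; a short sketch (the printed values depend on your build, so none are guaranteed here):

```python
import torch

# The prebuilt binaries bundle CUDA and cuDNN; these flags report
# what the current installation can actually use.
print(torch.version.cuda)                   # CUDA version the binary was built with (None on CPU-only builds)
print(torch.backends.cudnn.is_available())  # True if cuDNN can be used
print(torch.backends.cudnn.version())       # cuDNN version number, or None if unavailable
print(torch.backends.cudnn.enabled)         # cuDNN is enabled by default when available

# cuDNN can also be disabled globally, forcing the native CUDA kernels instead
torch.backends.cudnn.enabled = False
```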