Yes, this is correct. Your local CUDA toolkit will only be used if you build PyTorch from source or compile a custom CUDA extension. You won't need it to execute PyTorch workloads, as the binaries (pip wheels and conda binaries) ship with all needed CUDA dependencies. You would, however, need to install an NVIDIA driver to enable communication with your GPU.
In this case you could skip installing these libs and stick to the PyTorch binaries alone.
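As a quick sanity check, you could query which CUDA runtime the installed binaries ship with and whether the driver can see your GPU. A minimal sketch, assuming a standard pip/conda install of PyTorch (the `cuda_info` helper name is just for illustration):

```python
import importlib.util


def cuda_info():
    """Return (bundled CUDA version, GPU availability), or None if PyTorch is missing."""
    # Guard the import so the check also works in environments without PyTorch.
    if importlib.util.find_spec("torch") is None:
        return None
    import torch

    # torch.version.cuda reports the CUDA version the binaries were built with,
    # independent of any locally installed CUDA toolkit.
    # torch.cuda.is_available() only needs a working NVIDIA driver, not the toolkit.
    return torch.version.cuda, torch.cuda.is_available()


if __name__ == "__main__":
    print(cuda_info())
```

If this prints a CUDA version but `False` for availability, the usual culprit is a missing or mismatched NVIDIA driver rather than the toolkit.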