Is PyTorch (and related CV libraries) optimized differently for Intel vs AMD CPUs?

I am currently configuring a PC for deep learning and computer vision; it has a single NVIDIA RTX 3090 on which I will be doing all my training. I mostly use PyTorch, so I wanted to know whether PyTorch (or any of the libraries commonly used with it for computer vision) runs better on, or is specifically optimized for, either Intel or AMD CPUs.

Right now I am considering the AMD Ryzen 9 5900X.

AFAIK PyTorch itself tries to strike a balance for performance rather than optimizing for any CPU in particular. I think it depends on what computation you expect to do on the CPU and on the relative performance differences between Intel and AMD. For example, MKLDNN has AVX-512 support, and AVX-512 support recently landed upstream: Add AVX512 support in ATen & remove AVX support by imaginary-person · Pull Request #56992 · pytorch/pytorch (github.com). So the question becomes: is maximum SIMD width or a higher core count more important for your workload?
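If you want to see what your particular PyTorch build actually uses on a given CPU, a quick sketch (assuming a standard pip/conda PyTorch install) is to print the build configuration and check the MKLDNN backend flag:

```python
import torch

# Show the build configuration: which BLAS backend was compiled in
# (MKL, OpenBLAS, ...), whether MKL-DNN/oneDNN is enabled, and the
# CPU instruction-set capability the build targets.
print(torch.__config__.show())

# Whether the MKL-DNN (oneDNN) backend is usable on this build/CPU;
# it runs on both Intel and AMD x86 chips, dispatching by detected
# CPU features at runtime rather than by vendor.
print("mkldnn available:", torch.backends.mkldnn.is_available())
```

On both vendors' CPUs the dispatch is by detected instruction-set features (AVX2, AVX-512, ...), so the output above is a more reliable guide than the vendor name on the box.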