Why are gradients of `eigvalsh` always numerically stable?

Hi,

The docs (v1.10.0) say that the gradient of `eigvalsh` is always numerically stable, and I'm wondering why that is the case.

I saw that `eigvalsh` internally invokes `eigh`, whose backward pass is problematic when there are repeated eigenvalues. So why does the problem just go away when we only want the derivatives of the eigenvalues?

In other words, why is differentiating only the eigenvalues OK, both in theory and in implementation?
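
For context, here is a small toy repro I tried (the matrix and the exact output are just my own example, not from the docs): with a repeated eigenvalue, backpropagating through the eigenvalues from `eigvalsh` gives finite gradients, while backpropagating through the eigenvectors from `eigh` does not.

```python
import torch

# Symmetric matrix with a repeated eigenvalue (eigenvalue 1, multiplicity 3).
A = torch.eye(3, dtype=torch.float64, requires_grad=True)

# Backward through the eigenvalues only: finite gradient.
vals = torch.linalg.eigvalsh(A)
(grad_vals,) = torch.autograd.grad(vals.sum(), A)
print(grad_vals)  # finite values

# Backward through the eigenvectors: non-finite values (Inf/NaN) in my runs
# when eigenvalues repeat, presumably from the 1/(lambda_i - lambda_j) factors.
B = torch.eye(3, dtype=torch.float64, requires_grad=True)
vals_b, vecs_b = torch.linalg.eigh(B)
(grad_vecs,) = torch.autograd.grad(vecs_b.sum(), B)
print(grad_vecs)  # not finite in my runs
```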

Can someone enlighten me?