Difference between CompositeExplicitAutogradNonFunctional and CompositeExplicitAutograd?

What is the difference between the CompositeExplicitAutogradNonFunctional and CompositeExplicitAutograd dispatch keys?

I have read the documentation at https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/native, and it looks like the main difference is whether the operator's implementation internally calls into an aliasing operator.
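To make sure I understand what "calls internally into an aliasing operator" means, here is a minimal sketch with made-up helper names (these are not real ATen kernels):

```python
import torch

# Hypothetical composite implementations, just to illustrate the
# functional vs. aliasing distinction.

def functional_composite(x: torch.Tensor) -> torch.Tensor:
    # Calls only functional (non-aliasing) ops: every intermediate is a
    # fresh tensor. A decomposition like this can be reused by backends
    # that have no notion of views, which is what CompositeExplicitAutograd
    # promises.
    return torch.mul(x, 2.0) + 1.0

def aliasing_composite(x: torch.Tensor) -> torch.Tensor:
    # Calls an aliasing op internally: view() returns a tensor that
    # shares storage with x rather than a fresh tensor. A backend without
    # aliasing semantics cannot reuse this decomposition directly, which
    # is the situation CompositeExplicitAutogradNonFunctional flags.
    return x.view(-1).mul(2.0).view(x.shape)
```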

How does PyTorch know that an operator calls an aliasing operator internally, so that it registers the op (like mul.Tensor) under the CompositeExplicitAutogradNonFunctional dispatch key? And at the same time, why is the index_put operator registered to CompositeExplicitAutograd?
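For context, here is my understanding of how a kernel ends up under one of these keys, sketched with a toy custom op through the torch.library API (the op name and implementation are made up for illustration):

```python
import torch
from torch.library import Library

# Toy custom op in its own namespace, so we don't touch real aten entries.
lib = Library("mylib", "DEF")
lib.define("double_plus_one(Tensor self) -> Tensor")

def double_plus_one_impl(self: torch.Tensor) -> torch.Tensor:
    # Purely functional body: no views, no in-place mutation.
    return torch.mul(self, 2.0) + 1.0

# One kernel serving every backend below autograd. The "ExplicitAutograd"
# part means autograd does NOT trace through this body; the op would need
# its own derivative formula to be differentiable (not shown here).
lib.impl("double_plus_one", double_plus_one_impl, "CompositeExplicitAutograd")

x = torch.arange(3.0)
print(torch.ops.mylib.double_plus_one(x))  # tensor([1., 3., 5.])
```

So my question is essentially: is the choice between the two keys made automatically by some analysis of the kernel body, or is it something the kernel author declares by hand in native_functions.yaml?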