Why would we need to differentiate through data augmentations?

The Kornia library (https://github.com/kornia/kornia), an open-source differentiable computer vision library, allows us (among other features) to differentiate/backprop through data augmentations.

I’ve been trying to figure out why someone would find this useful.
Normally backprop stops at the first layer of the network, since there’s no reason to update the input data itself.
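
Just to make concrete what I mean by “differentiating through” an augmentation, here’s a minimal sketch (assuming `kornia` is installed; the specific augmentation and shapes are just illustrative). Kornia augmentations are `nn.Module`s that operate on tensors, so autograd can trace through them, and if you set `requires_grad=True` on the input, gradients do reach the image itself:

```python
import torch
import kornia.augmentation as K

# A differentiable augmentation: brightness/contrast jitter
aug = K.ColorJitter(brightness=0.2, contrast=0.2, p=1.0)

# Explicitly ask for gradients w.r.t. the input image
img = torch.rand(1, 3, 32, 32, requires_grad=True)

# A dummy scalar loss on the augmented output
loss = aug(img).mean()
loss.backward()

print(img.grad is not None)  # True: gradients flowed through the augmentation
```

But in a standard training loop the input has `requires_grad=False`, so this extra differentiability seems unused.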

So why would someone want to use Kornia augmentations (besides speed)?