I have just discovered the Kornia library, which is apparently a differentiable computer vision library for PyTorch. Since the computer vision operations in Kornia are differentiable, can we use Kornia to learn data augmentation during the training of a neural network, or is it just meant to let these operations run on a GPU?
Example: if I use Kornia's augmentation module with randomly initialized parameters, can I train the network in such a way that these parameters are updated during backprop?
This would be a differentiable module that learns a geometric transformation, making the network more robust, if needed, to the transformations it encounters (rotation, affine, …).
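To illustrate what I mean, here is a minimal sketch of the idea in plain PyTorch (using `affine_grid`/`grid_sample`, which differentiable warps like Kornia's are built on). The `LearnableRotation` module and its `angle` parameter are my own invention, just to show that the gradient flows back to the transformation parameter:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableRotation(nn.Module):
    """Hypothetical augmentation module whose rotation angle is trainable."""
    def __init__(self):
        super().__init__()
        # rotation angle in radians, updated by the optimizer during training
        self.angle = nn.Parameter(torch.tensor(0.3))

    def forward(self, x):
        cos, sin = torch.cos(self.angle), torch.sin(self.angle)
        zero = torch.zeros_like(cos)
        # 2x3 affine matrix, expanded over the batch dimension
        theta = torch.stack([
            torch.stack([cos, -sin, zero]),
            torch.stack([sin,  cos, zero]),
        ]).unsqueeze(0).expand(x.size(0), -1, -1)
        grid = F.affine_grid(theta, list(x.size()), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

aug = LearnableRotation()
imgs = torch.rand(4, 3, 8, 8)
out = aug(imgs)
out.mean().backward()
print(aug.angle.grad)  # non-None: the angle itself receives a gradient
```

Since `self.angle` is an `nn.Parameter`, it would be picked up by any optimizer alongside the network's weights, which is exactly the "learned augmentation" behaviour I am asking about.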