Can we use the library kornia to learn data augmentations?

Hi,

I have just discovered the Kornia library, which is a differentiable computer vision library for PyTorch. Since the computer vision operations in Kornia are differentiable, can we use Kornia to learn data augmentations during the training of a neural network, or are these operations only meant to run on a GPU?

Example: if I use Kornia's augmentation module with randomly initialized parameters, can I train the network in such a way that these parameters are updated during backprop?
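To illustrate the idea in principle (this is a hedged sketch in plain PyTorch, not Kornia's actual API): because the warping primitives are differentiable, an augmentation parameter such as a rotation angle can be made an `nn.Parameter` and receive gradients during backprop. The image sizes, the `rotate` helper, and the loss below are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Learnable augmentation parameter: a rotation angle (illustrative).
angle = torch.nn.Parameter(torch.tensor(0.3))

def rotate(img, angle):
    # Hypothetical helper: build a 2x3 affine matrix from the angle and warp
    # the image with affine_grid / grid_sample. Every op here is
    # differentiable, so gradients flow back into `angle`.
    cos, sin = torch.cos(angle), torch.sin(angle)
    zero = torch.zeros_like(angle)
    theta = torch.stack([
        torch.stack([cos, -sin, zero]),
        torch.stack([sin,  cos, zero]),
    ]).unsqueeze(0)  # shape (1, 2, 3)
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

img = torch.rand(1, 1, 8, 8)     # dummy image batch
target = torch.rand(1, 1, 8, 8)  # dummy target
loss = F.mse_loss(rotate(img, angle), target)
loss.backward()
print(angle.grad is not None)  # True: the augmentation parameter got a gradient
```

In a real training loop you would simply pass `angle` (alongside the network weights) to the optimizer so it gets updated each step.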


Hi, did you come across any resource for this? I am also looking for something similar.

You probably can.

An alternative would be to use a Spatial Transformer Network (STN), which is already implemented in PyTorch: Spatial Transformer Networks Tutorial — PyTorch Tutorials 2.0.0+cu117 documentation

This is a differentiable module that learns a geometric transformation, making the network more robust, if needed, to the transformations it encounters (rotation, affine, etc.).
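The STN idea can be sketched minimally as follows (names and layer sizes are illustrative assumptions, not taken from the tutorial): a small localization network predicts the six entries of a 2x3 affine matrix, which is then used to warp the input before the main network sees it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    # Minimal illustrative STN: localization net -> affine theta -> warp.
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(1 * 8 * 8, 32),  # assumes 1x8x8 inputs (illustrative)
            nn.ReLU(),
            nn.Linear(32, 6),          # the 6 entries of a 2x3 affine matrix
        )
        # Initialize to the identity transform (a common choice), so the
        # module starts out as a no-op and learns deviations from identity.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

x = torch.rand(4, 1, 8, 8)
out = STN()(x)
print(out.shape)  # torch.Size([4, 1, 8, 8])
```

Because the whole pipeline is differentiable, the localization network's parameters are trained end to end with the rest of the model, with no extra supervision on the transformation itself.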
