Using skimage transforms inside a PyTorch model

Hello,
I was wondering whether there is a way to use skimage transforms as part of a CNN in PyTorch.
The model should have two branches: the first is a plain convolutional network, while in the second the input should go through the transform, then through some convolutional layers, and then through the inverse transform. The outputs of the two branches should be concatenated, and the concatenated tensor should pass through a couple more convolutional layers.
Since the transform operates only on NumPy arrays, I should move the tensor to the CPU and convert it to a NumPy array before the call, right? But I'm not sure how to deal with a batch of images: the network processes a whole batch at a time, whereas the transform works on one sample at a time. The first thing that comes to mind is a for loop over all the samples in a batch; could it work this way? Also, should I use no_grad() when computing the transform? Will this cut the graph or otherwise affect training?
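To make it concrete, here is roughly the structure I have in mind (channel sizes are arbitrary, and the two transform helpers are just placeholders for the part I'm unsure about):

```python
import torch
import torch.nn as nn

def apply_transform(x):
    # placeholder: this is where the skimage transform should happen
    return x

def apply_inverse_transform(x):
    # placeholder: inverse of the transform above
    return x

class TwoBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.branch2 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())

    def forward(self, x):
        out1 = self.branch1(x)
        out2 = apply_inverse_transform(self.branch2(apply_transform(x)))
        return self.head(torch.cat([out1, out2], dim=1))
```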
Thanks in advance!

Yes, but this would also detach the array from the computation graph, and Autograd won't be able to track these operations (in case you want to use them as differentiable methods). You could either implement the backward pass manually via a custom autograd.Function or check out kornia, which provides differentiable transformations (CC @edgarriba as the core dev).
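As a rough sketch of the custom-Function route (using a trivial NumPy horizontal flip as a stand-in for your skimage transform, since the flip is its own inverse and its backward is easy to write):

```python
import numpy as np
import torch

class NumpyFlip(torch.autograd.Function):
    # stand-in for a numpy-based op: horizontal flip computed outside of PyTorch
    @staticmethod
    def forward(ctx, x):
        # leave the graph, run the op in numpy on the CPU, then wrap the result again
        out = np.ascontiguousarray(np.flip(x.detach().cpu().numpy(), axis=-1))
        return torch.from_numpy(out).to(x.device)

    @staticmethod
    def backward(ctx, grad_output):
        # you define the gradient yourself; for a flip it's just the flipped grad
        return torch.flip(grad_output, dims=[-1])

x = torch.randn(2, 3, 4, 4, requires_grad=True)
y = NumpyFlip.apply(x)
y.sum().backward()  # gradients flow through the manually defined backward
```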

Yes, a loop would work, but could be slow. Since you are already moving the data to the CPU (and thus synchronizing the code), the performance penalty from the loop might not be the bottleneck.
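A minimal (non-differentiable) sketch of such a loop, assuming skimage.transform.resize as the example transform; the result will be detached from the graph, as mentioned above:

```python
import torch
from skimage import transform as sktf

def skimage_per_sample(x, out_size=(64, 64)):
    # x: [N, C, H, W] tensor; skimage only sees one HWC numpy image at a time
    outputs = []
    for sample in x:
        img = sample.detach().cpu().numpy().transpose(1, 2, 0)    # CHW -> HWC
        out = sktf.resize(img, out_size)                          # any skimage transform
        outputs.append(torch.from_numpy(out.transpose(2, 0, 1)))  # HWC -> CHW
    return torch.stack(outputs).to(x.device, x.dtype)
```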

Yes, it will detach these operations from the computation graph as explained before.


KK, thank you very much! 🙂