I have a function that takes a numpy image array, performs some operations, and returns a new image. During my forward pass, I want to apply that function to a batch of tensors. What is the best way to achieve this?
The moment you convert a torch tensor to a numpy array, PyTorch's autograd loses track of the contents of the tensor. Unless you can write an analytical or numerical gradient for the computation happening in numpy, this kind of interference will break the computation graph, and it won't be possible to optimize the network.
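To see why, here is a minimal sketch: a tensor that requires grad has to be detached before the numpy conversion, and that detach is exactly where the graph gets cut.

```python
import torch

x = torch.randn(3, 3, requires_grad=True)

# A tensor that requires grad must be detached before .numpy();
# the detach is the point where the graph is severed.
arr = x.detach().numpy()
arr = arr * 2.0                 # numpy work: invisible to autograd

y = torch.from_numpy(arr)
print(y.requires_grad)          # False -- no path back to x
```

Calling `backward()` on anything derived from `y` can never produce a gradient for `x`.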
Does that mean I will have to perform the image processing functions on the tensors themselves?
If it is possible to express the image processing operations as tensor operations, that would be ideal. If not, you have to write your own backward function that computes gradients for the operations you perform.
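For the first route, a minimal sketch of a tensor-only image op on a batch (the contrast adjustment here is just a hypothetical stand-in for your function; any composition of tensor ops works the same way):

```python
import torch

def adjust_contrast(batch, factor=1.5):
    # Hypothetical tensor-only image op on an (N, C, H, W) batch.
    # Built entirely from tensor operations, so autograd tracks it.
    mean = batch.mean(dim=(2, 3), keepdim=True)
    return (batch - mean) * factor + mean

batch = torch.rand(4, 3, 8, 8, requires_grad=True)
out = adjust_contrast(batch)
out.sum().backward()
print(batch.grad.shape)   # gradients flow back through the op
```

No custom backward is needed here; autograd differentiates the mean, subtraction, and scaling automatically.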
See the notes on extending PyTorch here: https://pytorch.org/docs/stable/notes/extending.html
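For the second route, a minimal sketch of a custom `torch.autograd.Function` whose forward pass runs in numpy while the backward supplies the gradient analytically (the gamma correction is a hypothetical placeholder for your image function):

```python
import numpy as np
import torch

class NumpyGamma(torch.autograd.Function):
    # Hypothetical example: the forward pass runs in numpy, and we
    # provide the gradient by hand so autograd still works end to end.

    @staticmethod
    def forward(ctx, img, gamma):
        ctx.gamma = gamma
        ctx.save_for_backward(img)
        out = np.power(img.detach().numpy(), gamma)  # numpy computation
        return torch.from_numpy(out)

    @staticmethod
    def backward(ctx, grad_output):
        (img,) = ctx.saved_tensors
        # d/dx x^g = g * x^(g-1)
        grad_input = grad_output * ctx.gamma * img.pow(ctx.gamma - 1)
        return grad_input, None  # no gradient for gamma

x = torch.rand(2, 3, requires_grad=True)
y = NumpyGamma.apply(x, 2.0)
y.sum().backward()
print(x.grad)   # matches the analytical gradient 2 * x
```

It's worth checking a hand-written backward with `torch.autograd.gradcheck` (on double-precision inputs) before relying on it for training.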