PyTorch - network with a non-differentiable module

Hi, I would like to know how to write and train, in PyTorch, a neural network that contains an explicitly non-differentiable module, for example a module/code written in another language that I cannot rewrite in Python (a black-box function). In that case, I imagine the derivatives have to be estimated with finite differences. Could I define this function in a way PyTorch can use? Thanks very much!

Finite differences do not give you gradients cheaply: a forward-difference estimate needs one base evaluation of your black-box function plus one more evaluation per input element, so the cost grows linearly with the input size. If you can afford that many evaluations, then, of course, it can work.
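To make the cost concrete, here is a minimal NumPy sketch of a forward-difference gradient estimate; black_box is a toy stand-in for the external code, and eps is a step size you would have to tune for your function:

```python
import numpy as np

def black_box(x):
    # Stand-in for the external, non-differentiable code:
    # maps a 1-D array to a scalar.
    return float(np.sum(np.sin(x)))

def fd_gradient(f, x, eps=1e-6):
    # Forward differences: 1 base evaluation plus 1 per input element.
    base = f(x)
    grad = np.zeros_like(x)
    for i in range(x.size):
        x_step = x.copy()
        x_step[i] += eps
        grad[i] = (f(x_step) - base) / eps
    return grad

x = np.array([0.1, 0.2, 0.3])
print(fd_gradient(black_box, x))  # ≈ cos(x) for this toy black box
```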


Hi Tom, thanks for your comment. Could you expand your answer with some code? Could you show me how to define a function with those properties? Thanks a lot!

Hi Tom, thanks again for your comment. Maybe I could redefine the autograd function of the black box as described in the tutorial "PyTorch: Defining New autograd Functions" (PyTorch Tutorials 1.7.1 documentation)?

Thanks!
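Yes, that is the right mechanism: wrap the black box in a custom torch.autograd.Function, call the external code in forward(), and return a finite-difference estimate from backward(). A minimal sketch, assuming the black box maps a 1-D array to a scalar (black_box and fd_gradient are the toy helpers from the sketch above, repeated here so the block runs on its own):

```python
import numpy as np
import torch

def black_box(x):
    # Stand-in for the external, non-differentiable code.
    return float(np.sum(np.sin(x)))

def fd_gradient(f, x, eps=1e-6):
    # Forward differences: 1 base evaluation plus 1 per input element.
    base = f(x)
    grad = np.zeros_like(x)
    for i in range(x.size):
        x_step = x.copy()
        x_step[i] += eps
        grad[i] = (f(x_step) - base) / eps
    return grad

class BlackBoxFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Hand plain NumPy data to the external code.
        y = black_box(x.detach().cpu().numpy())
        return x.new_tensor(y)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        grad = fd_gradient(black_box, x.detach().cpu().numpy())
        # Chain rule: scale the estimated gradient by the incoming gradient.
        return grad_output * x.new_tensor(grad)

x = torch.tensor([0.1, 0.2, 0.3], requires_grad=True)
BlackBoxFn.apply(x).backward()
print(x.grad)  # ≈ cos(x), the analytic gradient of the toy black box
```

Keep in mind that every backward pass costs one black-box evaluation per input element plus one, so this gets expensive quickly for large inputs.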