Auto Grad for External Tensor?

Hi there PyTorch beginner here.

So I’m trying to make an autoencoder in PyTorch in combination with an external custom graphics library. I know that generally, the loss function of the autoencoder takes two tensors as inputs: an input image tensor A, and its generated counterpart image tensor B.

But what I’m trying to do is slightly different. Instead of using the generated image tensor B directly in the loss function, I need to do some external graphic processing on it, turn the resulting array of numbers into another tensor C (which has the same shape as B), and then calculate the loss between A and C. But since C is processed externally, its connection to the recorded graph is broken. I have very limited knowledge about autograd (and PyTorch in general), so I’m a bit lost about how to achieve this properly.

I’ve made a few attempts:

  1. Plug C directly into the loss function. It didn’t work (which is expected). I got the following error:
    “RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn”. My understanding is that since C is a new tensor without any prior operation history, autograd doesn’t know how to compute gradients for it.

  2. I also tried to “fake” the history for C by doing something like C = B.mul(0.001).add(C), in an attempt to “borrow” the autograd history from B while keeping the influence of B as small as I could. This time the loss function didn’t complain, but the loss seemed to plateau after a while, and the model didn’t converge. I tried other remixing formulas, like C = B.mul(0.003).add(C.mul(3)), etc., but none of them worked.

  3. If I plug B into the loss function directly, the model converges with no issue; this is just to double-check that the model & hyperparameters are correct. Also, I can’t re-implement the process that turns B into C using PyTorch, because B has to go through some simulation in the external graphics library to become C.
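For reference, here is a minimal sketch of what autograd actually sees in attempt #2 (shapes and the external result are made up for illustration):

```python
import torch

# B: model output with a recorded graph (here just a leaf for simplicity)
b = torch.randn(4, requires_grad=True)

# C: externally processed result, arriving with no autograd history
c_ext = torch.randn(4)

# Attempt #2's remix: borrow B's history with a tiny coefficient
c = b.mul(0.001).add(c_ext)

loss = c.sum()
loss.backward()

# Autograd only sees the b.mul(0.001) term; the external processing
# contributes nothing to the gradient, so b.grad is just 0.001 everywhere.
print(b.grad)
```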

My questions are:

  1. Is attempt #2 the right direction, and just the numbers in the formula are incorrect? Or is a more complicated / different formula needed?
  2. If not, what method should be used for this kind of situation?
  3. Or is using an external tensor in autograd not possible at all?

Thanks for your time reading this. Any help is greatly appreciated!

To use operations from other libraries, you could write a custom autograd.Function and implement the forward as well as the backward method manually, as described here.
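A minimal sketch of that idea, assuming the external step can be treated as roughly identity-shaped for gradient purposes (a straight-through style estimate; `external_render` is a hypothetical stand-in for the graphics library call):

```python
import torch

def external_render(array):
    # Hypothetical placeholder for the external graphics simulation.
    # It operates on plain NumPy arrays and is invisible to autograd.
    return array * 0.5 + 1.0

class ExternalProcess(torch.autograd.Function):
    @staticmethod
    def forward(ctx, b):
        # Leave the graph deliberately: run the external processing
        # on a detached copy of B, then re-wrap the result as a tensor C.
        c = external_render(b.detach().cpu().numpy())
        return torch.as_tensor(c, dtype=b.dtype, device=b.device)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through assumption: treat the external step as identity,
        # so the incoming gradient flows back to B unchanged. A better
        # estimate would go here if the external op's Jacobian is known.
        return grad_output

b = torch.randn(2, 3, requires_grad=True)
c = ExternalProcess.apply(b)   # C now has a grad_fn, unlike a raw tensor
loss = c.sum()
loss.backward()
print(b.grad.shape)
```

Whether the straight-through gradient is good enough depends on how far the external processing is from identity; if you can compute or approximate the true Jacobian of the simulation, put that in `backward` instead.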