Performing custom transformations on model output before computing loss

I am working with an autoencoder model that takes an input image and reconstructs it at the output. I am using MSELoss as the criterion. I want to perform certain transformations on the model output (a matrix multiplication with some other matrix, an addition, then another matrix multiplication) before computing the loss on it. I tried implementing this but got the error:

    one of the variables needed for gradient computation has been modified by an inplace operation.

Is there a workaround to perform these operations on the output?
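
For reference, the transformation I have in mind is roughly of this form (A, c, B, and y below are placeholders standing in for my actual matrices and the model output):

    import torch
    import torch.nn as nn

    criterion = nn.MSELoss()
    n = 8                                        # placeholder size

    y = torch.randn(4, n, requires_grad=True)    # stands in for the model output
    target = torch.randn(4, n)

    A = torch.randn(n, n)                        # first fixed matrix
    c = torch.randn(n)                           # additive term
    B = torch.randn(n, n)                        # second fixed matrix

    transformed = (y @ A + c) @ B                # matmul, addition, matmul
    loss = criterion(transformed, target)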

Based on the error message, it seems you are manipulating a tensor in-place.
Could you check whether you are using in-place operations (e.g. a += b) and remove them?
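
For example, something like the following reproduces the error (a minimal sketch, not your actual code): sigmoid saves its output for the backward pass, and the += overwrites that saved tensor. Replacing the in-place op with an out-of-place one fixes it:

    import torch

    a = torch.randn(3, requires_grad=True)

    # In-place version: sigmoid saves its output for its backward pass,
    # and `+=` then overwrites that saved tensor.
    b = torch.sigmoid(a)
    b += 1
    try:
        b.sum().backward()
    except RuntimeError as err:
        print(err)  # one of the variables needed for gradient computation ...

    # Out-of-place version: `b + 1` creates a new tensor, so the saved
    # sigmoid output stays intact and backward succeeds.
    b = torch.sigmoid(a)
    b = b + 1
    b.sum().backward()
    print(a.grad)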

Thank you! I removed them and stored the result of the operations in a different variable instead. Now, when I pass the new variable to the loss and backpropagate, no gradients flow through the network. The code looks something like this:

    output = model(input)
    results = torch.zeros((batch_size, 16384))

    # Applying the custom transformation (each sample in the batch
    # undergoes the same operation but with different parameters)
    for i in range(batch_size):
        results[i] = measurement(output[i], i)

    results = Variable(results, requires_grad=True)

    loss = criterion(results, target)

    # ===================backward====================
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Is there a reason why the flow of gradients might be getting interrupted?

Rewrapping a tensor as seen here:

    results = Variable(results, requires_grad=True)

will detach it from the computation graph, so remove this line of code.
Also, Variables have been deprecated since PyTorch 0.4.0, so you shouldn't use them anymore in any case.
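
With that line removed, a sketch of the fixed section could look like this (assuming your measurement function uses only differentiable torch operations). torch.stack also lets you drop the preallocated results buffer, although writing into results[i] is itself differentiable, so your original loop would work as well once the rewrapping is gone:

    output = model(input)

    # Collect the per-sample results with torch.stack so that each
    # measurement() call stays connected to the computation graph.
    results = torch.stack([measurement(output[i], i) for i in range(batch_size)])

    loss = criterion(results, target)

    # ===================backward====================
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()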