How do I run nn.MSELoss() (or something similar) backwards?

In my forward function I run nn.MSELoss() forward, but I also need to run it (or something similar) backward, inside my backward function.

For example:

class InputLoss(nn.Module):

    def __init__(self, strength, normalize):
        super(InputLoss, self).__init__()
        self.strength = strength
        self.normalize = normalize
        self.loss = 0
        self.target = torch.Tensor()
        self.crit = nn.MSELoss()

    def forward(self, input):
        self.loss = self.crit(input, self.target) * self.strength  # Forward Crit
        return input

    def backward(self, input, gradOutput):
        # Lua-style crit:backward call -- PyTorch's nn.MSELoss has no
        # public backward method like this, which is the problem.
        self.gradInput = self.crit.backward(input, self.target) * self.strength  # Backward Crit
        return self.gradInput

How would I go about doing this?
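For reference, with the default mean reduction, nn.MSELoss computes loss = (1/n) * sum((input - target)**2), so the gradient the Lua-style backward call would produce is 2 * (input - target) / n. A minimal sketch of computing that by hand (the variable names are my own):

    import torch

    input = torch.randn(4)
    target = torch.randn(4)
    strength = 2.0

    # Gradient of mean-squared error w.r.t. the input: dL/dx = 2 * (x - t) / n
    grad_input = 2.0 * (input - target) / input.numel() * strength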

You don’t need to define backward for a module; autograd derives it from forward automatically.
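For instance, a minimal sketch where the module only defines forward and autograd produces the gradient (passing target as a forward argument is my own choice):

    import torch
    import torch.nn as nn

    class InputLoss(nn.Module):
        def __init__(self, strength):
            super(InputLoss, self).__init__()
            self.strength = strength
            self.crit = nn.MSELoss()

        def forward(self, input, target):
            # Autograd records this computation, so no backward method is needed.
            return self.crit(input, target) * self.strength

    x = torch.randn(4, requires_grad=True)
    t = torch.randn(4)
    loss = InputLoss(strength=2.0)(x, t)
    loss.backward()
    print(x.grad)  # equals 2 * strength * (x - t) / x.numel()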

I think he just wants to build his own backward module to study.

@SimonW @dohwan.lee I’m trying to recreate a Lua project in PyTorch, where things like gradient normalization and dividing by n twice are done only in the backward pass.
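If the backward pass really must differ from what autograd would derive, the usual tool is a custom torch.autograd.Function. A hedged sketch, where the extra divide-by-n and the L1 gradient normalization in backward are assumptions about what the Lua code does, not taken from this thread:

    import torch

    class CustomMSE(torch.autograd.Function):
        @staticmethod
        def forward(ctx, input, target, strength):
            ctx.save_for_backward(input, target)
            ctx.strength = strength
            return (input - target).pow(2).mean() * strength

        @staticmethod
        def backward(ctx, grad_output):
            input, target = ctx.saved_tensors
            n = input.numel()
            # Standard MSE gradient, with an extra divide-by-n applied only
            # here in backward (mirroring the Lua project; an assumption).
            grad_input = 2.0 * (input - target) / n / n
            # Backward-only gradient normalization (L1 norm; also an assumption).
            grad_input = grad_input / (grad_input.abs().sum() + 1e-8)
            # One gradient per forward argument; target and strength get None.
            return grad_input * ctx.strength * grad_output, None, None

    x = torch.randn(4, requires_grad=True)
    t = torch.randn(4)
    CustomMSE.apply(x, t, 2.0).backward()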

Backward hooks are likely more suited for your case.

@SimonW How would I replicate the backward pass of nn.MSELoss() with a backward hook? Could you provide an example?
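For reference, one way such a hook might look: a hook registered on a tensor with register_hook can return a modified gradient during the backward pass (the L1 normalization below is just an illustration, not SimonW’s suggestion):

    import torch
    import torch.nn as nn

    crit = nn.MSELoss()
    x = torch.randn(4, requires_grad=True)
    t = torch.randn(4)

    # The hook receives the gradient flowing into x and may return a
    # replacement; here it normalizes the MSE gradient by its L1 norm.
    x.register_hook(lambda grad: grad / (grad.abs().sum() + 1e-8))

    crit(x, t).backward()
    print(x.grad)  # the normalized gradient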