Implementing Your Own Loss Function: Is Backpropagation Needed When nn.Module Is Used?

I implemented my own loss function using nn.Module. Here’s a simplified example:

import torch
import torch.nn as nn
from torch.autograd import Variable

dtype = torch.FloatTensor

class MyLoss(nn.Module):
    def __init__(self, dim, noise):
        super(MyLoss, self).__init__()
        self.W = Variable(torch.randn(dim, 1).type(dtype), requires_grad=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, y_hat, y):
        h = y_hat-y
        g = self.sigmoid(self.W)
        m = torch.mul(h,g)
        return torch.norm(m) # this should go to 0

For each iteration I use:

        optimizer.zero_grad()
        x = Variable(torch.FloatTensor(x), requires_grad=False)
        y = Variable(torch.FloatTensor(y), requires_grad=False)
        # Forward + Backward + Optimize
        y_hat = model(x)
        loss = criterion(y_hat, y)
        loss.backward() # params of MyLoss are not updated
        optimizer.step()

Currently W, the variable that requires a gradient in MyLoss, is not updating. I guess I need to implement the backward function in MyLoss as well. Is that correct? If so, how can I use PyTorch's backward functions instead of calculating the gradients myself?


To add learnable parameters to an nn.Module, you should use the nn.Parameter type:
self.W = nn.Parameter(torch.randn(dim, 1).type(dtype))
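For example, the loss module from the question would then look roughly like this (a sketch; the forward computation is unchanged, only W becomes a registered parameter):

import torch
import torch.nn as nn

class MyLoss(nn.Module):
    def __init__(self, dim, noise):
        super(MyLoss, self).__init__()
        # nn.Parameter is registered automatically, so it shows up in criterion.parameters()
        self.W = nn.Parameter(torch.randn(dim, 1))
        self.sigmoid = nn.Sigmoid()

    def forward(self, y_hat, y):
        h = y_hat - y
        g = self.sigmoid(self.W)
        m = torch.mul(h, g)
        return torch.norm(m)  # this should go to 0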

Then you need to make sure that this is passed to your optimizer with the other parameters of your network:
optimizer = optim.SGD(list(model.parameters()) + list(criterion.parameters()), optim_args)
(parameters() returns a generator, so convert both to lists before concatenating.)

And no, you don't need to implement the backward pass if you do that :slight_smile:
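Putting it together, a minimal sketch of the training setup. The nn.Linear model, lr, and the random data here are purely hypothetical placeholders, chosen so that y_hat and y are column vectors whose length matches dim (and hence the shape of W), and MyLoss is the parameter-based version above:

import torch
import torch.nn as nn
import torch.optim as optim

dim = 4
model = nn.Linear(3, 1)              # hypothetical model, just for illustration
criterion = MyLoss(dim, noise=None)

# combine both parameter lists so the optimizer sees criterion.W as well
optimizer = optim.SGD(list(model.parameters()) + list(criterion.parameters()), lr=0.01)

x = torch.randn(dim, 3)              # chosen so y_hat has shape (dim, 1), matching W
y = torch.randn(dim, 1)

optimizer.zero_grad()
y_hat = model(x)
loss = criterion(y_hat, y)
loss.backward()                      # autograd computes the gradient for criterion.W too
optimizer.step()                     # updates both the model weights and criterion.W

After optimizer.step(), criterion.W changes along with the model weights, which is exactly the behaviour the question was after.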
