How to add additional information for backward?


#1

Hi, I’m a new PyTorch learner. I tried to pass a very simple extra piece of information from layer to layer through backward(), besides the output gradient. However, it doesn’t work even for the following extremely simple code. Can someone help me figure out how to fix this? Thanks a lot!

import torch
import torch.autograd as autograd
import torch.nn as nn

class MyFun(torch.autograd.Function):
    def forward(self, inp):
        return inp

    def backward(self, grad_out, P):
        grad_input = grad_out.clone()
        print('Custom backward called!')
        return grad_input, P-1

class MyMod(nn.Module):
    def forward(self, x):
         return MyFun()(x)

mod1 = MyMod()

y = autograd.Variable(torch.randn(1), requires_grad=True)
z = mod1(y)
print(z.type)
P = autograd.Variable(torch.ones((1,1)))
#z.backward(P.data)
#z.backward(P)
z.backward()

TypeError: backward() missing 1 required positional argument: 'P'


(colesbury) #2
  1. Your backward won’t be called because you return the original input inp unchanged. (This changed in 0.3; until then, return inp.clone().)
  2. Your backward takes too many arguments. It should take only grad_out, because your forward returns a single output.
  3. Your backward returns too many values. It should return a single value, because your forward takes a single input.
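
Putting those three fixes together, something like this minimal version should work (same pre-0.3 legacy Function style as your code):

import torch
import torch.autograd as autograd
import torch.nn as nn

class MyFun(torch.autograd.Function):
    def forward(self, inp):
        # clone so the output is not the original input, otherwise
        # pre-0.3 autograd skips this Function's backward entirely
        return inp.clone()

    def backward(self, grad_out):
        # forward has one output, so backward takes exactly one gradient
        grad_input = grad_out.clone()
        print('Custom backward called!')
        # forward has one input, so backward returns exactly one gradient
        return grad_input

class MyMod(nn.Module):
    def forward(self, x):
        return MyFun()(x)

mod1 = MyMod()
y = autograd.Variable(torch.randn(1), requires_grad=True)
z = mod1(y)
z.backward()  # prints 'Custom backward called!'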

#3

Thanks a lot! If I just want to backprop some P that doesn’t need to do anything in forward(), do you have any suggestions on where I should add it? Is it better to just define another function in the nn.Module? Thanks again for your help.
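
For example, is something along these lines the right idea? Here P is passed through forward only so that backward can return a gradient for it (just a sketch of what I mean; the name MyFunWithP and the P - 1 gradient are arbitrary):

class MyFunWithP(torch.autograd.Function):
    def forward(self, inp, P):
        # stash P for backward; forward itself does nothing with it
        self.save_for_backward(P)
        return inp.clone()

    def backward(self, grad_out):
        P, = self.saved_tensors
        # one gradient per forward input: one for inp, one for P
        return grad_out.clone(), P - 1

y = autograd.Variable(torch.randn(1), requires_grad=True)
P = autograd.Variable(torch.ones(1), requires_grad=True)
z = MyFunWithP()(y, P)
z.backward()
print(y.grad, P.grad)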