Hello everyone, I hope you are having a great time.
I recently wanted to create a simple autoencoder, and for that I used this thread, where @smth provided an example of how to create an autograd Function for the aforementioned autoencoder.
The code he wrote is this:
```python
import torch
from torch.autograd import Function

class L1Penalty(Function):
    @staticmethod
    def forward(ctx, input, l1weight):
        ctx.save_for_backward(input)
        ctx.l1weight = l1weight
        return input

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_variables
        grad_input = input.clone().sign().mul(self.l1weight)
        grad_input += grad_output
        return grad_input
```
However, this code fails completely on newer versions of PyTorch (e.g. 1.1.0), with an error saying that the backward method needs to return as many values as the forward method received arguments.
I asked in that thread but got no answer, and I couldn't figure out how to specify that I don't need a gradient for the second argument. I tried to set
`ctx.needs_input_grad`, but that's read-only.
What should I do here? Should I simply return None, or 0, for the arguments that I'm not interested in?
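For reference, this is the version I'm currently considering, returning None for the `l1weight` argument (I'm not sure this is the right approach, and I also had to change `self.l1weight` to `ctx.l1weight`, since `self` doesn't exist in a staticmethod):

```python
import torch
from torch.autograd import Function

class L1Penalty(Function):
    @staticmethod
    def forward(ctx, input, l1weight):
        ctx.save_for_backward(input)
        ctx.l1weight = l1weight
        return input

    @staticmethod
    def backward(ctx, grad_output):
        # saved_variables is deprecated in newer PyTorch; saved_tensors replaces it
        input, = ctx.saved_tensors
        # self.l1weight would raise a NameError here, so use ctx.l1weight
        grad_input = input.clone().sign().mul(ctx.l1weight)
        grad_input += grad_output
        # one return value per forward argument: a gradient for input,
        # None for l1weight, which is a plain float and needs no gradient
        return grad_input, None
```

With this change, `L1Penalty.apply(x, 0.1)` runs without the "returned an incorrect number of gradients" error for me, but I'd like confirmation that None is the intended way to do this.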
By the way, what exactly is this part doing?

```python
grad_input = input.clone().sign().mul(self.l1weight)
```

Can anyone please also clarify this? Thanks a lot!
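My guess is that it computes the (sub)gradient of the L1 term `l1weight * |x|`, whose derivative is `l1weight * sign(x)`, but I'd appreciate confirmation. A quick check of what the expression produces (values here are just an example I made up):

```python
import torch

x = torch.tensor([-2.0, 0.0, 3.0])
l1weight = 0.5

# sign() maps each element to -1, 0, or 1; mul() scales by l1weight,
# which matches d/dx (l1weight * |x|) = l1weight * sign(x)
print(x.clone().sign().mul(l1weight))  # tensor([-0.5000,  0.0000,  0.5000])
```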