This code doesn't run in PyTorch 1.1.0! I keep getting: "backward() needs to return two values, not 1!"
Edit:
You need to return None for any argument that does not need a gradient — backward() must return one value per argument of forward(). So L1Penalty would be:
import torch
from torch.autograd import Function

class L1Penalty(Function):
    @staticmethod
    def forward(ctx, input, l1weight):
        ctx.save_for_backward(input)
        ctx.l1weight = l1weight
        return input

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors  # saved_variables is deprecated
        # backward is a staticmethod, so use ctx (not self) to read l1weight
        grad_input = input.clone().sign().mul(ctx.l1weight)
        grad_input += grad_output
        # one return value per forward() argument; l1weight gets no gradient
        return grad_input, None
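A quick sanity check, repeating the class so the snippet runs on its own (the tensor values are just hypothetical inputs to exercise the Function):

```python
import torch
from torch.autograd import Function

class L1Penalty(Function):
    @staticmethod
    def forward(ctx, input, l1weight):
        ctx.save_for_backward(input)
        ctx.l1weight = l1weight
        return input

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = input.clone().sign().mul(ctx.l1weight)
        grad_input += grad_output
        return grad_input, None  # None: no gradient for l1weight

# hypothetical values just to exercise the Function
x = torch.tensor([1.0, -2.0, 3.0], requires_grad=True)
out = L1Penalty.apply(x, 0.1)  # custom Functions are invoked via .apply
out.sum().backward()

# expected gradient: upstream ones + 0.1 * sign(x)
print(x.grad)  # tensor([1.1000, 0.9000, 1.1000])
```

Note that the penalty only affects the backward pass: forward() returns the input unchanged, while backward() adds `l1weight * sign(input)` to the incoming gradient.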