How to backward correctly in a custom layer

I wrote the following code to use a custom layer.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Function, Variable

class binary_activate(Function):
    def forward(self, x):
        # save the input so that backward can read it via self.saved_tensors
        self.save_for_backward(x)
        positives = torch.gt(x, 0)
        negatives = torch.le(x, 0)
        le0xmin1 = torch.mul(negatives.float(), -1)
        binary_output = positives.float() + le0xmin1.float()
        return binary_output

    def backward(self, grad_output):
        result = self.saved_tensors[0]
        grad_input = F.hardtanh(Variable(result)).data * grad_output
        return grad_input

class Model1(nn.Module):
    def __init__(self):
        super(Model1, self).__init__()
        self.fc1 = nn.Linear(INPUT_SIZE,HIDDEN1)
        self.fc2 = nn.Linear(HIDDEN1, HIDDEN2)
        self.fc3 = nn.Linear(HIDDEN2, HIDDEN3)
        self.fc4 = nn.Linear(HIDDEN3, OUTPUT_SIZE)
        self.binary = binary_activate()

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        x = self.binary(x)
        return x

model = Model1().cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0002)
criterion = nn.HingeEmbeddingLoss().cuda()
for i in range(EPOCH):
    x = Variable(torch.randn(batch_size, INPUT_SIZE).cuda())
    logits = model(x)
    loss = criterion(logits, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Then, I got an error which said

result = self.saved_tensors[0]
RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.

Though the error message points at the problem to some extent, I couldn't figure out how to solve it.
How can I backward correctly?


By the way, what I want to do is:
・Forward ・・・ output a binary value in {-1, 1}; when the input is x, the output should be sign(x).
・Backward ・・・ return the incoming gradient of the cost multiplied by hardtanh(x), where x has the same values as the input to Forward.
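
In other words, something like this (just a small illustration of the behaviour I want, with a made-up tensor x, not the actual layer):

import torch
import torch.nn.functional as F
from torch.autograd import Variable

x = torch.FloatTensor([-2.0, -0.3, 0.0, 0.7, 3.0])

# forward: sign-like output in {-1, 1} (0 is mapped to -1, as in the code above)
forward_out = torch.gt(x, 0).float() + torch.le(x, 0).float() * -1
# -> [-1, -1, -1, 1, 1]

# backward: the incoming gradient scaled by hardtanh(x),
# where x is the same tensor that was fed to forward
grad_output = torch.ones(x.size())
grad_input = F.hardtanh(Variable(x)).data * grad_output
# -> [-1.0, -0.3, 0.0, 0.7, 1.0]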

I would really appreciate your help.

It seems to me that you use self.binary twice in the forward pass (although I don't see that in your code).
To avoid all possible conflicts, I would suggest not saving the Function instance as a class attribute (especially if the Function has buffers).

Another suggestion is to replace your binary_activate with another simple Function (but one that also calls save_for_backward), and see whether the problem is in your implementation of binary_activate or in the way you use the Function.

Thank you for your comment.
I tried replacing binary_activate as follows, but I still get the same error.
So it seems to be the way I'm using the Function.

class binary_activate(Function):
    def forward(self, x):
        self.save_for_backward(x)
        return x

    def backward(self, grad_output):
        result = self.saved_tensors[0]
        grad_input = result
        return grad_input

"I would suggest not to save the Function instance as a class attribute(especially if the Function has buffer)."
Excuse me, but I don’t get this meaning. I also tried to declare and call the binary_activate class out of the Model class, though, still have the same error.
Is this the way you mentioned? If not, could you tell me more detail?


The line self.binary = binary_activate() in __init__, and its later usage x = self.binary(x), is what is causing the problem. You are saving one instance of your binary_activate class and reusing that single instance every time you forward through Model1. Instead, create a new instance of binary_activate on each forward: get rid of self.binary and replace the line that uses it in forward with x = binary_activate()(x).
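
For example, roughly like this (only forward changes; the rest of your Model1 stays as it is):

class Model1(nn.Module):
    def __init__(self):
        super(Model1, self).__init__()
        self.fc1 = nn.Linear(INPUT_SIZE, HIDDEN1)
        self.fc2 = nn.Linear(HIDDEN1, HIDDEN2)
        self.fc3 = nn.Linear(HIDDEN2, HIDDEN3)
        self.fc4 = nn.Linear(HIDDEN3, OUTPUT_SIZE)
        # no self.binary attribute any more

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        # a fresh Function instance on every forward pass
        x = binary_activate()(x)
        return x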

It worked! Thanks a lot!!