I have stumbled upon a runtime error during the backward pass. PyTorch has no problem doing double backward for Softmax, but it raises an error for Softplus. Would you please also add double backward support for Softplus?
Thanks a lot!
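For reference (my own derivation, not taken from the PyTorch source), Softplus has a closed-form second derivative, so double backward should be implementable directly:

\mathrm{softplus}(x) = \log\left(1 + e^{x}\right), \qquad
\mathrm{softplus}'(x) = \sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
\mathrm{softplus}''(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr)

Here is a minimal repro of the error: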
import torch
import torch.nn as nn
from torch.autograd import Variable

x = Variable(torch.rand(1, 10), requires_grad=True)
w = Variable(torch.rand(10, 10))
y = torch.mm(x, w)
g = nn.Softplus()
l = torch.sum(g(y))
# first-order gradient, keeping the graph so it can be differentiated again
h = torch.autograd.grad(l, x, create_graph=True)[0]  # grad returns a tuple
L = torch.sum(h)
L.backward()  # differentiating the gradient a second time triggers the error
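As a possible interim workaround (my own sketch, not an official API; softplus_composed is a hypothetical helper), the same function can be built from elementwise primitives whose backward passes are already differentiable, which should let create_graph=True and a second backward go through:

import torch
from torch.autograd import Variable

def softplus_composed(t):
    # log(1 + exp(t)); note exp(t) can overflow for large t, so this is only a sketch
    return torch.log1p(torch.exp(t))

x = Variable(torch.rand(1, 10), requires_grad=True)
w = Variable(torch.rand(10, 10))
y = torch.mm(x, w)
l = torch.sum(softplus_composed(y))
h = torch.autograd.grad(l, x, create_graph=True)[0]
L = torch.sum(h)
L.backward()  # second backward pass, using only primitives with differentiable backwards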