Help with implementing my own autograd.Function

I have an issue with implementing my own autograd.Function.

Consider the following example:

import torch

class Exp(torch.autograd.Function):

    @staticmethod
    def forward(ctx, i):
        result = i.exp()
        ctx.save_for_backward(result)  # save exp(i) for use in backward()
        print(result)                  # print1, here for debugging
        return result

    @staticmethod
    def backward(ctx, grad_output):
        result, = ctx.saved_tensors
        return grad_output * result    # d/di exp(i) = exp(i)

Now, I have observed the following behaviour:

from torch.autograd import Variable
x = Variable(torch.Tensor([3.]), requires_grad=True)
Exp.apply(x)

tensor([20.0855])                         # output of print1 inside forward()
Out[18]:
tensor([20.0855], grad_fn=<ExpBackward>)  # the value returned by Exp.apply(x)

What I have found is that inside forward() we seem to lose track of the gradient information: the print shows that result has no grad_fn. However, the value actually returned by Exp.apply() does have one.
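
For reference, this is the minimal check I run outside forward(), using the Exp class defined above (with torch.tensor instead of the Variable wrapper):

import torch

x = torch.tensor([3.], requires_grad=True)
y = Exp.apply(x)   # prints tensor([20.0855]) from print1, without grad_fn
print(y.grad_fn)   # ExpBackward, so the returned value is tracked

y.backward()
print(x.grad)      # tensor([20.0855]), i.e. exp(3), so the gradient does reach x here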

Can anyone help me reason about this?

In my actual implementation, due to the above issue, I seem to lose some variables (that is, during backward(), some variables end up with zero gradient and hence get no update).
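
Roughly, this is the kind of check I run after backward() to spot the parameters that get no gradient; the Linear model and loss below are just placeholders, not my actual implementation:

import torch

# placeholder model and loss, standing in for my real code
model = torch.nn.Linear(4, 1)
loss = model(torch.randn(2, 4)).sum()
loss.backward()

for name, p in model.named_parameters():
    # a parameter whose grad is None or all zeros gets no update from the optimizer
    grad_sum = None if p.grad is None else p.grad.abs().sum().item()
    print(name, grad_sum)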