Torch.autograd Functions automatically convert Variable to Tensor during forward pass

I am implementing my own loss function following http://pytorch.org/docs/notes/extending.html#extending-torch-autograd.

However, during the forward pass, the input to my implementation of forward(self, input), which is a Variable, is converted into a FloatTensor. Since my function is a loss function, the output becomes a plain Python float, and PyTorch throws an error:
RuntimeError: data must be a Tensor

How can I fix this problem?
Thanks

The Variables are unwrapped for you while implementing an autograd.Function's forward/backward.
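To illustrate, here is a minimal sketch of a custom autograd.Function (a hypothetical `MyExp`, not from the thread): inside `forward`/`backward` you work with plain tensors, and autograd handles the wrapping for you. Note this uses the modern static-method API with `ctx`; on recent PyTorch versions Variable and Tensor have been merged, but the unwrapping behavior described above is the same idea.

```python
import torch

class MyExp(torch.autograd.Function):
    """Minimal custom Function: inside forward/backward we receive
    plain tensors -- autograd unwraps/re-wraps them for us."""

    @staticmethod
    def forward(ctx, x):
        result = x.exp()
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        # d/dx exp(x) = exp(x), which we saved in forward
        result, = ctx.saved_tensors
        return grad_output * result

x = torch.randn(3, requires_grad=True)
y = MyExp.apply(x)
y.sum().backward()
print(torch.allclose(x.grad, x.exp()))
```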


Hi @Paralysis, I also encountered a similar problem where the output of my custom Function is a float number. How did you solve your problem?

@smth @Paralysis I have code that looks like:

    @staticmethod
    def forward(ctx, x):
        output = torch.potrf(x).diag().prod()**2
        ctx.save_for_backward(x, output)
        return output 

with which I want to compute the determinant of an input matrix. I got the same failure message RuntimeError: data must be a Tensor, which I think comes from the output being a plain Python float. Any suggestions?

-------------------------- update ------------------------------------
My problem was solved by changing the type of output to a torch Tensor:

    @staticmethod
    def forward(ctx, x):
        output = torch.potrf(x).diag().prod()**2
        output = torch.Tensor([output]).cuda() # NEW line
        ctx.save_for_backward(x, output)
        return output

In the end I just stuck to built-in torch functions; I haven't tried writing custom Functions since…
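For completeness, here is a hedged sketch of what the determinant Function above could look like with the backward pass filled in, assuming a recent PyTorch where `forward` may return a 0-dim tensor directly and where `torch.linalg.cholesky` replaces the old `torch.potrf`. The class name `Det` and the symmetric-positive-definite restriction are assumptions for illustration; the gradient identity used is d det(X)/dX = det(X)·X⁻ᵀ, which simplifies to det(X)·X⁻¹ for symmetric X.

```python
import torch

class Det(torch.autograd.Function):
    """Sketch: determinant of a symmetric positive-definite matrix,
    via the Cholesky factor, with an explicit backward pass."""

    @staticmethod
    def forward(ctx, x):
        # det(x) = prod(diag(L))**2, where L is the Cholesky factor of x
        output = torch.linalg.cholesky(x).diagonal().prod() ** 2
        ctx.save_for_backward(x, output)
        return output  # 0-dim tensor, so no manual wrapping needed

    @staticmethod
    def backward(ctx, grad_output):
        x, output = ctx.saved_tensors
        # d det(x)/dx = det(x) * inverse(x).T; x is symmetric here,
        # so the transpose can be dropped
        return grad_output * output * torch.inverse(x)

# Usage: build a symmetric positive-definite matrix and compare
# against torch.det
a = torch.randn(3, 3)
spd = (a @ a.t() + 3 * torch.eye(3)).requires_grad_(True)
d = Det.apply(spd)
print(torch.allclose(d, torch.det(spd)))
```

On very old PyTorch versions (as in this thread), the `torch.Tensor([output])` wrapping shown above remains the workaround, since `prod()` there returned a Python number rather than a 0-dim tensor.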