Error with CUDAFloat and CPUFloat

I'm running into an error like this:
Expected object of type Variable[CPUFloatType] but found type Variable[CUDAFloatType]
when I use this code:

I don’t think we can see your code :slight_smile:

Thanks for the reply!
I solved the bug myself while editing the question. Thanks a lot, again!

I will close this question.

Actually, I think the error message has some problems.

My error message was "Require CPUfloat tensor but meet CudaFloat tensor", but the error actually appears because I forgot to call .cuda() on one input to the net. The message misled me for a while.
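For reference, a minimal sketch of how this mismatch arises and how to fix it. The toy `nn.Linear` net and the `.to(device)` pattern are my own illustration, not the original code:

```python
import torch
import torch.nn as nn

# Pick a device; on a CUDA machine this reproduces the setup where the
# net lives on the GPU but one input was never moved there.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = nn.Linear(3, 1).to(device)      # net weights live on `device`
x_ok = torch.randn(2, 3).to(device)   # input moved to the same device
x_bad = torch.randn(2, 3)             # forgotten input: stays on the CPU

out = net(x_ok)                       # works: weights and input agree
# net(x_bad) on a GPU machine raises the CPUFloatType/CUDAFloatType error
print(out.shape)                      # torch.Size([2, 1])
```

Calling `.cuda()` on the forgotten tensor (the older spelling of the same move) fixes it.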

Well, I also find that the torch.sum() function's prompt message has a small problem: the argument list doesn't include a dim hint, which contradicts the documentation.

The message is weird, but it kind of makes sense. It is saying: to do the operation with the input tensor (on CPU), it needs to see a net weight tensor on CPU, but it sees one on GPU. That said, I agree that it is confusing.

I don’t quite understand your question about sum. Could you elaborate?

Well, about the sum function. The documentation says that torch.sum can sum along a given axis, such as


then we get a tensor b with dimension 4. But in the stub file for torch.sum, the definition is def sum(input):, which is missing the extra arguments, so they don't appear in the prompt message.
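The example above didn't survive the post; a plausible sketch of what it showed (the 4×5 shape is my assumption, chosen so the result b has 4 elements):

```python
import torch

a = torch.randn(4, 5)     # hypothetical input; the original shape is unknown
b = torch.sum(a, dim=1)   # sum along axis 1: one value per row
print(b.shape)            # torch.Size([4])
```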

It doesn't affect usage at all, so it may be a very small problem.

What prompt message are you talking about? I'm still very confused. Also, I don't think sum is defined in a Python file.

Well, maybe it is a problem with my IDE (PyCharm)?
I know that torch.sum is written in C++, and the Python side only records a stub of the function.

I used the Go to Declaration tool in the IDE; it jumps to a file where the stub for torch.sum looks like this:

def sum(input): # real signature unknown; restored from __doc__
    """
    .. function:: sum(input) -> float

    Returns the sum of all elements in the :attr:`input` Tensor.

    Args:
        input (Tensor): the input `Tensor`

    Example::

        >>> a = torch.randn(1, 3)
        >>> a
         0.6170  0.3546  0.0253
        [torch.FloatTensor of size 1x3]
        >>> torch.sum(a)

    .. function:: sum(input, dim, keepdim=False, out=None) -> Tensor

    Returns the sum of each row of the :attr:`input` Tensor in the given
    dimension :attr:`dim`.

    If :attr:`keepdim` is ``True``, the output Tensor is of the same size
    as :attr:`input` except in the dimension :attr:`dim` where it is of size 1.
    Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting in
    the output Tensor having 1 fewer dimension than :attr:`input`.

    Args:
        input (Tensor): the input `Tensor`
        dim (int): the dimension to reduce
        keepdim (bool): whether the output Tensor has :attr:`dim` retained or not
        out (Tensor, optional): the result Tensor

    Example::

        >>> a = torch.randn(4, 4)
        >>> a
        -0.4640  0.0609  0.1122  0.4784
        -1.3063  1.6443  0.4714 -0.7396
        -1.3561 -0.1959  1.0609 -1.9855
         2.6833  0.5746 -0.5709 -0.4430
        [torch.FloatTensor of size 4x4]
        >>> torch.sum(a, 1)
        [torch.FloatTensor of size 4]
    """
    return 0.0

What I want to say is about the input part: I only see one argument, input, so my IDE's argument hint only shows input, and it raises a warning if I add the dim argument.

Have I explained the little problem clearly this time? Thanks a lot!

I see. It's the docstring. Actually, if you scroll down, the second definition in the docstring shows .. function:: sum(input, dim, keepdim=False, out=None) -> Tensor
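A quick sketch of that second signature in action, showing the dim, keepdim, and out arguments the stub drops:

```python
import torch

a = torch.randn(4, 4)
s = torch.sum(a, dim=1)                  # dim squeezed: shape (4,)
k = torch.sum(a, dim=1, keepdim=True)    # dim kept with size 1: shape (4, 1)

out = torch.empty(4)
torch.sum(a, dim=1, out=out)             # write the result into `out`
print(s.shape, k.shape)                  # torch.Size([4]) torch.Size([4, 1])
```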