Expected object of type Variable[torch.LongTensor] but found type Variable[torch.FloatTensor]

When I train my RNN model, I use the NLL loss function (actually, I have tried other loss functions and got the same error). But I get a RuntimeError like the following:

Traceback (most recent call last):
  File "/home/leon/Downloads/pycharm/helpers/pydev/pydev_run_in_console.py", line 52, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/leon/Downloads/pycharm/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/leon/pyworkspace/untitled/rnnlm2.py", line 93, in <module>
    loss = criterion(out,y)
  File "/home/leon/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/leon/anaconda3/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 601, in forward
    self.ignore_index, self.reduce)
  File "/home/leon/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1140, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, size_average, ignore_index, reduce)
  File "/home/leon/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1049, in nll_loss
    return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
RuntimeError: Expected object of type Variable[torch.LongTensor] but found type Variable[torch.FloatTensor] for argument #1 'target'

And my code is very simple:

import torch
import torch.nn.functional as F
from torch import nn, optim
from torch.autograd import Variable
from numpy import *
from torch.utils.data import DataLoader
from mydataset import MyDataset

BATCH_SIZE = 5
sentence_set = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
EMBDDING_DIM = len(sentence_set)+1
HIDDEN_UNITS = 200
word_to_ix = {}
for word in sentence_set:
    if word not in word_to_ix:
        word_to_ix[word] = len(word_to_ix)
print(word_to_ix)


def make_word_to_ix(word,word_to_ix):
    # build a one-hot FloatTensor for the word
    vec = torch.zeros(EMBDDING_DIM)
    if word in word_to_ix:
        vec[word_to_ix[word]] = 1
    else:
        vec[len(word_to_ix)] = 1
    return vec


data_words = []
data_labels = []
for i in range(len(sentence_set) -2):
    word = sentence_set[i]
    label = sentence_set[i+1]
    data_words.append(make_word_to_ix(word,word_to_ix))
    data_labels.append(make_word_to_ix(label,word_to_ix))

dataset = MyDataset(data_words,data_labels)
train_loader = DataLoader(dataset,batch_size=BATCH_SIZE)

'''
for _,batch in enumerate(train_loader):
    print("word_batch------------>\n")
    print(batch[0])
    print("label batch----------->\n")
    print(batch[1])
'''

class RNNModel(nn.Module):
    def __init__(self, embdding_size, hidden_size):
        super(RNNModel, self).__init__()
        self.rnn = nn.RNN(embdding_size, hidden_size,num_layers=2,nonlinearity='relu')
        self.linear = nn.Linear(hidden_size, embdding_size)

    def forward(self, x, hidden):
        input = x.view(BATCH_SIZE, -1)
        output1, h_n = self.rnn(input, hidden)
        output2 = self.linear(output1)
        log_prob = F.log_softmax(output2)
        return log_prob, h_n


rnnmodel = RNNModel(EMBDDING_DIM, HIDDEN_UNITS)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(rnnmodel.parameters(), lr=1e-3)

#testing
#input_hidden = torch.autograd.Variable(torch.randn(BATCH_SIZE, HIDDEN_UNITS))
#x = torch.autograd.Variable(torch.rand(BATCH_SIZE,EMBDDING_DIM))
#y,_ = rnnmodel(x,input_hidden)
#print(y)

#'''
for epoch in range(10):
    print('epoch: {}'.format(epoch + 1))
    print('*' * 10)
    running_loss = 0
    input_hidden = torch.autograd.Variable(torch.randn(BATCH_SIZE, HIDDEN_UNITS))
    for _,batch in enumerate(train_loader):
        x = torch.autograd.Variable(batch[0])
        y = torch.autograd.Variable(batch[1])
        # forward
        out, input_hidden = rnnmodel(x, input_hidden)
        loss = criterion(out,y)
        running_loss += loss.data[0]
        # backward
        optimizer.zero_grad()
        loss.backward(retain_graph=True)
        optimizer.step()
    print('Loss: {:.6f}'.format(running_loss / len(word_to_ix)))
#'''

I think the type that I use in NLLLoss is wrong, or the way I call the loss function is wrong. But the variable y really is of type torch.FloatTensor. Why should I use LongTensor?
The testing code (commented out) is OK, so I think the model goes wrong when I use the loss function or backward().
I have searched the Internet and the forums with no answer.
Thanks for your answer, sincerely.

But when I change the loss function to L1Loss, it works! Others like NLLLoss and CrossEntropyLoss get a RuntimeError. But I still don't know why. I hope to receive some explanation.

NLLLoss’s target should be a torch.LongTensor. See here for more details: http://pytorch.org/docs/master/nn.html?highlight=nllloss#torch.nn.NLLLoss

This means you should change your code to:

y = torch.autograd.Variable(batch[1]).long()

However, I'm not 100% sure you have the right target shape (see the docs).
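
Also, since the targets in the posted code are one-hot FloatTensors built by make_word_to_ix, calling .long() alone still leaves them one-hot. A minimal sketch of building the index targets NLLLoss/CrossEntropyLoss expect instead (using the names from the question; a sketch, untested against that exact code):

# batch[1] is a one-hot FloatTensor of shape (BATCH_SIZE, EMBDDING_DIM);
# NLLLoss/CrossEntropyLoss expect a LongTensor of shape (BATCH_SIZE,)
# holding the index of the correct class.
_, idx = torch.max(batch[1], 1)                   # column of the 1 in each row
y = torch.autograd.Variable(idx.view(-1).long())  # shape (BATCH_SIZE,)
loss = criterion(out, y)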

Thank you!! I solved the problem.

How did you solve this problem? I have the same problem, but can't find the answer.

I don't know what your problem is. But in my case, I found that I was using the loss function incorrectly.
Check the parameters that your loss function requires.

Thanks for your reply! Yes, there was a mistake I had made with LongTensor and FloatTensor.

I had a K-class classification problem.

I was getting this error when I was using CrossEntropyLoss:

RuntimeError: Expected object of type Variable[torch.LongTensor] but found type Variable[torch.DoubleTensor] for argument #1 'target'
Here is how I fixed it:

I first defined:

target = Variable(torch.from_numpy(t_train[:batch_size]).long(), requires_grad=False)

loss = torch.nn.CrossEntropyLoss()
Then, in the training step, I defined:

y_pred = mlp_instance(x_data)
l = loss(y_pred, target)

where y_pred is a batch_size by K tensor, but target is a tensor (simply a vector) of size batch_size; note that target isn't a one-hot tensor of shape batch_size by K.
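
A concrete sketch of those shapes (the batch_size and K values here are made up for illustration):

import torch
from torch.autograd import Variable

batch_size, K = 32, 10
y_pred = Variable(torch.randn(batch_size, K))                  # float scores, one row per sample
target = Variable(torch.LongTensor(batch_size).random_(0, K))  # class indices in [0, K), not one-hot

loss = torch.nn.CrossEntropyLoss()
l = loss(y_pred, target)  # float scores vs. long indices: no type error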

target = target.float()

This solved my problem.
Inspired by:
https://discuss.pytorch.org/t/runtimeerror-expected-object-of-type-torch-longtensor-but-found-type-torch-floattensor/23289/5

I used it in the training loop like this:
inputs, labels = data
inputs = inputs.float()
labels = labels.float()

(You should try to fix the origin of the problem! I just do this because I first want it running before optimizing; I will go back and fix this later to improve speed.)
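
For completeness, a small sketch of both directions; which cast you need depends on the loss function (MSELoss here is just an illustrative regression loss, not necessarily the one used above):

import torch

output = torch.randn(4, 3)  # model output: FloatTensor

# Classification losses (NLLLoss, CrossEntropyLoss) want long class indices:
cls_target = torch.LongTensor([0, 2, 1, 2])           # shape (4,)
ce = torch.nn.CrossEntropyLoss()(output, cls_target)

# Regression losses (L1Loss, MSELoss) want float targets with the output's shape:
reg_target = torch.randn(4, 3).float()                # shape (4, 3)
mse = torch.nn.MSELoss()(output, reg_target)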

Actually, I have the same problem as yours and I still cannot find the right way to solve it from the answers. You said you have solved the problem; could you tell me the details of your solution?