RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'mat2'

OS: macOS 10.13.6 (High Sierra)
PyTorch version: 0.4.1

I am trying to feed a LongTensor with shape (64, 32, 24) into an nn.Linear layer that was initialized to take a size-24 input and produce a size-18 output.

Using pdb, I have confirmed (via tensor.type()) that the tensor I give to this linear layer is indeed a LongTensor.

Yet when I feed this tensor into the layer, it errors out with the message in the title:

RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'mat2'

Why would that be happening? Thanks in advance!

My model class is posted below.

In the forward function I pass in a tuple of data, which includes a batch of either note pitch information or note duration information, and a batch of chords.

I unpack that tuple into individual variables and feed them into separate sub-networks. The chord batch goes through a linear layer first, and that is the layer throwing the error.

The linear layer's arguments are const.CHORD_DIM, which is 24, and const.CHORD_EMBED_DIM, which is 18.

I have verified using pdb that the chord tensor I am passing to it is indeed a LongTensor, and its size is (64, 32, 24). That is, it is a batch of 64 sequences of length 32, where each chord representation is a length-24 vector.
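For reference, here is a minimal standalone snippet (shapes hard-coded to match my batch) that I believe reproduces the same error:

import torch
import torch.nn as nn

layer = nn.Linear(24, 18)                           # same dims as my chord_fc1
chords = torch.zeros(64, 32, 24, dtype=torch.long)  # a LongTensor, like my chord batch
out = layer(chords)                                 # RuntimeError: ... for argument #2 'mat2'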

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

import const  # my project's constants module (CHORD_DIM, CHORD_EMBED_DIM, NUM_RNN_LAYERS, ...)

class ChordCondLSTM(nn.Module):
    def __init__(self, vocab_size=None, embed_dim=None, hidden_dim=None, output_dim=None, seq_len=None,
            batch_size=None, dropout=0.5, batch_norm=True, no_cuda=False, **kwargs):
        super().__init__(**kwargs)
        self.hidden_dim = hidden_dim
        self.num_layers = const.NUM_RNN_LAYERS
        self.batch_norm = batch_norm
        self.no_cuda = no_cuda

        self.chord_fc1 = nn.Linear(const.CHORD_DIM, const.CHORD_EMBED_DIM)
        self.chord_bn = nn.BatchNorm1d(seq_len)
        self.chord_fc2 = nn.Linear(const.CHORD_EMBED_DIM, const.CHORD_EMBED_DIM)

        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.Linear(embed_dim + const.CHORD_EMBED_DIM, hidden_dim)
        self.encode_bn = nn.BatchNorm1d(seq_len)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, num_layers=self.num_layers, 
            batch_first=True, dropout=dropout)
        mid_dim = (hidden_dim + output_dim) // 2
        self.decode1 = nn.Linear(hidden_dim, mid_dim)
        self.decode_bn = nn.BatchNorm1d(seq_len)
        self.decode2 = nn.Linear(mid_dim, output_dim)
        self.softmax = nn.LogSoftmax(dim=2)

        self.hidden_and_cell = None
        if batch_size is not None:
            self.init_hidden_and_cell(batch_size)

        if torch.cuda.is_available() and (not self.no_cuda):
            self.cuda()

    def init_hidden_and_cell(self, batch_size):
        hidden = Variable(torch.zeros(self.num_layers, batch_size, self.hidden_dim))
        cell = Variable(torch.zeros(self.num_layers, batch_size, self.hidden_dim))
        if torch.cuda.is_available() and (not self.no_cuda):
            hidden = hidden.cuda()
            cell = cell.cuda()
        self.hidden_and_cell = (hidden, cell)

    def repackage_hidden_and_cell(self):
        new_hidden = Variable(self.hidden_and_cell[0].data)
        new_cell = Variable(self.hidden_and_cell[1].data)
        if torch.cuda.is_available() and (not self.no_cuda):
            new_hidden = new_hidden.cuda()
            new_cell = new_cell.cuda()
        self.hidden_and_cell = (new_hidden, new_cell)

    def forward(self, data):
        x, chords = data
        import pdb
        pdb.set_trace()  # chords.type() reports torch.LongTensor at this point
        chord_embeds = self.chord_fc1(chords)  # <-- this is the line that raises the RuntimeError
        if self.batch_norm:
            chord_embeds = self.chord_bn(chord_embeds)
        chord_embeds = self.chord_fc2(F.relu(chord_embeds))

        x_embeds = self.embedding(x)
        encoding = self.encoder(torch.cat([chord_embeds, x_embeds], 2)) # Concatenate along 3rd dimension
        if self.batch_norm:
            encoding = self.encode_bn(encoding)

        lstm_out, self.hidden_and_cell = self.lstm(encoding, self.hidden_and_cell)
        decoding = self.decode1(lstm_out)
        if self.batch_norm:
            decoding = self.decode_bn(decoding)
        decoding = self.decode2(decoding)

        output = self.softmax(decoding)
        return output

Can you paste your exact code here?

Pasted! The full model class is in the post above.

The problem is that nn.Linear does not accept a LongTensor input; it only takes a FloatTensor, because its internal weight matrix ('mat2' in the error message) is a FloatTensor.
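The straightforward fix is to cast the chord batch to float before the first linear layer, e.g. something like:

chord_embeds = self.chord_fc1(chords.float())

or convert once right after unpacking in forward:

x, chords = data
chords = chords.float()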

It would be helpful if this kind of mismatch were caught at a higher level, so that the error message was clear and helpful, instead of letting an error from a lower-level function propagate up.
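In the meantime, you could guard against it yourself with an explicit check at the top of forward, e.g.:

# a simple guard (CPU case; a CUDA tensor would report torch.cuda.FloatTensor)
assert chords.type() == 'torch.FloatTensor', chords.type()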
