Error: Expected object of scalar type Long but got scalar type Float for argument #2 'mat2'

class FC(nn.Module):
    def __init__(self, opt):
        super(FC, self).__init__()
        self.encoder = nn.Embedding(opt.VOCAB_SIZE, opt.EMBEDDING_DIM)
        self.gru = nn.Sequential(
            nn.GRU(input_size=100, hidden_size=opt.LINER_HID_SIZE),
            nn.ReLU(False),
        )
        self.fc_1 = nn.Sequential(
            nn.Linear(10000, opt.LINER_HID_SIZE),
            #nn.BatchNorm1d(opt.LINER_HID_SIZE),
            nn.ReLU(False),
            nn.Linear(opt.LINER_HID_SIZE, opt.NUM_CLASS_1*10),
            nn.ReLU(False),
            nn.Linear(opt.NUM_CLASS_1*10, opt.NUM_CLASS_1),
            nn.Dropout(0.5),
        )
        self.fc_2 = nn.Sequential(
            nn.Linear(10000, opt.LINER_HID_SIZE),
            #nn.BatchNorm1d(opt.LINER_HID_SIZE),
            nn.ReLU(False),
            nn.Linear(opt.LINER_HID_SIZE, opt.NUM_CLASS_2*10),
            nn.ReLU(False),
            nn.Linear(opt.NUM_CLASS_2*10, opt.NUM_CLASS_2),
            nn.Dropout(0.5),
        )
        self.fc_3 = nn.Sequential(
            nn.Linear(10000, opt.LINER_HID_SIZE),
            #nn.BatchNorm1d(opt.LINER_HID_SIZE),
            nn.ReLU(False),
            nn.Linear(opt.LINER_HID_SIZE, opt.NUM_CLASS_3*10),
            nn.ReLU(False),
            nn.Linear(opt.NUM_CLASS_3*10, opt.NUM_CLASS_3),
            nn.Dropout(0.5),
        )

    def forward(self, x):
        x = x.long()                # embedding indices must be Long
        outputs = self.encoder(x)
        outputs = outputs.long()    # <-- this cast to Long is what triggers the 'mat2' error
        outputs = self.gru(outputs)
        outputs = outputs.view(outputs.size()[0], -1)
        output_1 = self.fc_1(outputs)
        output_2 = self.fc_2(outputs)
        output_3 = self.fc_3(outputs)
        return (output_1, output_2, output_3)

I thought I had already converted 'outputs' to a LongTensor, but I still get the error "Expected object of scalar type Long but got scalar type Float for argument #2 'mat2'". I am confused and cannot solve the problem. Please help me!

Hi, can you please add the stack trace here? Which line is giving you this error?


Here is the traceback. Thank you very much!

If I understand properly, the error occurs when you call self.gru.
Based on the documentation (https://pytorch.org/docs/stable/nn.html#torch.nn.GRU), a GRU expects floating-point inputs, so you should remove the .long() call before passing outputs to self.gru.
By the way, can you test your model by passing an initial hidden state, as in this example from the documentation:

rnn = nn.GRU(10, 20, 2)        # input_size=10, hidden_size=20, num_layers=2
input = torch.randn(5, 3, 10)  # (seq_len, batch, input_size), float
h0 = torch.randn(2, 3, 20)     # (num_layers, batch, hidden_size)
output, hn = rnn(input, h0)

In the end, I tested a simple model myself, and you definitely have to pass floats to the GRU.
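A minimal check along those lines, as a sketch with made-up sizes (emb, gru, and tokens are illustrative names, not from your code):

import torch
import torch.nn as nn

emb = nn.Embedding(1000, 100)             # vocab size 1000, embedding dim 100
gru = nn.GRU(input_size=100, hidden_size=128)

tokens = torch.randint(0, 1000, (5, 3))   # (seq_len, batch) of Long token indices
feats = emb(tokens)                       # embedding output is already float32
out, hn = gru(feats)                      # works: the GRU expects float input
# gru(feats.long())                       # raises the "... scalar type ... 'mat2'" error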
Please let me know the result.

Your advice helped a lot. I tested the example in my program and it ran normally. Then I changed the input to the 'outputs' above, and the error "expected float but long given" appeared. After I removed 'outputs = outputs.long()', the program ran normally.
This is the code after the modification.

import sys
import os
sys.path.append(os.getcwd())
import torch
import torch.nn as nn
from config import Config


class FC(nn.Module):
    def __init__(self, opt):
        super(FC, self).__init__()
        self.encoder = nn.Embedding(opt.VOCAB_SIZE, opt.EMBEDDING_DIM)
        self.rnn = nn.GRU(100, 128, 2)  # input_size=100, hidden_size=128, num_layers=2
        self.relu = nn.ReLU(False)
        self.fc_1 = nn.Sequential(
            nn.Linear(12800, opt.LINER_HID_SIZE),
            #nn.BatchNorm1d(opt.LINER_HID_SIZE),
            nn.ReLU(False),
            nn.Linear(opt.LINER_HID_SIZE, opt.NUM_CLASS_1*10),
            nn.ReLU(False),
            nn.Linear(opt.NUM_CLASS_1*10, opt.NUM_CLASS_1),
            nn.Dropout(0.5),
        )
        self.fc_2 = nn.Sequential(
            nn.Linear(12800, opt.LINER_HID_SIZE),
            #nn.BatchNorm1d(opt.LINER_HID_SIZE),
            nn.ReLU(False),
            nn.Linear(opt.LINER_HID_SIZE, opt.NUM_CLASS_2*10),
            nn.ReLU(False),
            nn.Linear(opt.NUM_CLASS_2*10, opt.NUM_CLASS_2),
            nn.Dropout(0.5),
        )
        self.fc_3 = nn.Sequential(
            nn.Linear(12800, opt.LINER_HID_SIZE),
            #nn.BatchNorm1d(opt.LINER_HID_SIZE),
            nn.ReLU(False),
            nn.Linear(opt.LINER_HID_SIZE, opt.NUM_CLASS_3*10),
            nn.ReLU(False),
            nn.Linear(opt.NUM_CLASS_3*10, opt.NUM_CLASS_3),
            nn.Dropout(0.5),
        )

    def forward(self, x):
        x = x.long()
        outputs = self.encoder(x)
        h0 = torch.randn(2, 100, 128).cuda()
        outputs, ht = self.rnn(outputs, h0)
        outputs = self.relu(outputs)
        outputs = outputs.view(outputs.size()[0], -1)
        output_1 = self.fc_1(outputs)
        output_2 = self.fc_2(outputs)
        output_3 = self.fc_3(outputs)
        return (output_1, output_2, output_3)


Thanks very much again!

You’re welcome mate.
But you should be careful about two things:

  1. If you want to use your code on the GPU, you just need to create an instance of your model and move it to the GPU, like this:
model = FC(opt)
model.cuda()

I mean, model.cuda() moves the model's parameters to the GPU; for h0, rather than hard-coding .cuda() in your forward method, create it on the same device as the input (see the sketch after this list).

  2. You should not pass random values as h0 to your GRU. If you do not want a learned initial state, pass zeros (which is also what nn.GRU uses by default when h0 is omitted), like this:
h0 = torch.zeros(2, 100, 128)
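Putting both points together, here is a sketch of the forward method (assuming the same 2-layer GRU with hidden size 128, and sequence-first input, which is what nn.GRU expects by default):

def forward(self, x):
    x = x.long()
    outputs = self.encoder(x)   # (seq_len, batch, EMBEDDING_DIM)
    # Zero initial hidden state, created on the same device as the data,
    # so no hard-coded .cuda() call is needed here.
    h0 = torch.zeros(2, outputs.size(1), 128, device=outputs.device)
    outputs, ht = self.rnn(outputs, h0)
    outputs = self.relu(outputs)
    outputs = outputs.view(outputs.size()[0], -1)
    output_1 = self.fc_1(outputs)
    output_2 = self.fc_2(outputs)
    output_3 = self.fc_3(outputs)
    return (output_1, output_2, output_3)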

Good luck

Oh! These are undoubtedly useful tips. I had overlooked those settings and I will correct my program.