Scalar type Long; but got CUDAType instead

Hi, I am working through some PyTorch exercises and I'm getting this error:

Expected tensor for argument #1 'indices' to have scalar type Long; but got CUDAType instead (while checking arguments for embedding)

import torch.nn as nn

class RNN(nn.Module):
    
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
        """
        Initialize the PyTorch RNN Module
        :param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
        :param output_size: The number of output dimensions of the neural network
        :param embedding_dim: The size of embeddings, should you choose to use them        
        :param hidden_dim: The size of the hidden layer outputs
        :param n_layers: The number of recurrent (LSTM) layers to stack
        :param dropout: The dropout to add in between LSTM/GRU layers
        """
        super(RNN, self).__init__()
        # TODO: Implement function
        
        # set class variables
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim
        self.output_size = output_size
        
        # define model layers
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        print(self.embedding)

        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
        self.dropout = nn.Dropout(dropout)

        self.fc = nn.Linear(hidden_dim, output_size)
    
    
    def forward(self, nn_input, hidden):
        """
        Forward propagation of the neural network
        :param nn_input: The input to the neural network
        :param hidden: The hidden state        
        :return: Two Tensors, the output of the neural network and the latest hidden state
        """

Are your model and data both on the GPU?
Based on the error message it looks like one of them is still on the CPU while the other is on the GPU.
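
You can verify where each of them lives, e.g. (assuming model is your RNN instance and data is one input batch; adjust the names to your code):

print(next(model.parameters()).device)  # device of the model weights
print(data.device, data.dtype)          # device and dtype of the input batch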

The GPU should be enabled:

import torch

print("torch.cuda.is_available() =", torch.cuda.is_available())
print("torch.cuda.device_count() =", torch.cuda.device_count())
print("torch.cuda.device('cuda') =", torch.cuda.device('cuda'))
print("torch.cuda.current_device() =", torch.cuda.current_device())
print("torch.cuda.get_device_name(0) =", torch.cuda.get_device_name(0))
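
A common way to turn that check into the device you then use everywhere (a minimal sketch):

import torch

# fall back to the CPU if no GPU is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)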

You would still need to manually push the model and data to the GPU using:

model = MyModel()
device = 'cuda'
model = model.to(device)
data = data.to(device)
...
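
Also note that the error complains about scalar type Long: nn.Embedding expects its indices to be a LongTensor. If your input batch has another dtype, cast it before the forward pass (a minimal sketch, assuming nn_input is your input batch):

nn_input = nn_input.to(device).long()  # embedding indices must have dtype torch.long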