RuntimeError: expected scalar type Long but found Float for modulo function

I’m trying to create a model that will learn the function x % 500 from the numbers 0 to 2000.
Whatever approach I try (and I’m still planning to experiment), I keep running into a type error that I’ve been stuck on for a while. No combination of .to(torch.long) or .type(torch.LongTensor) seems to fix the issue. Any insight is appreciated.

import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split

X = np.arange(0, 2000)
y = X % 500

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)

X_train = np.array([x // 10 ** np.arange(4) % 10 for x in X_train])
y_train = np.array([y // 10 ** np.arange(4) % 10 for y in y_train])
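# To make the encoding explicit: each number is split into its four decimal digits,
# least significant first, e.g. 1234 // 10 ** np.arange(4) % 10 -> array([4, 3, 2, 1]),
# so every sample becomes a length-4 feature vector.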

class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.fc1 = nn.Linear(4, 40)
        self.fc2 = nn.Linear(40, 40)
        self.fc3 = nn.Linear(40, 40)
        self.fc4 = nn.Linear(40, 40)
        self.fc5 = nn.Linear(40, 40)
        self.relu = nn.ReLU()    
        
    def forward(self, x):
        out = self.relu(self.fc1(x))
        out = self.relu(self.fc2(out))
        out = self.relu(self.fc3(out))
        out = self.relu(self.fc4(out))
        out = self.relu(self.fc5(out))
        out = torch.softmax(out, dim=-1)
        return out

#device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device = torch.device("cpu")

lr = 0.001  # learning rate
epochs = 10  # number of passes over the training data

model = Net().to(device)
model.train()

X_train_tensor = torch.from_numpy(X_train).type(torch.LongTensor)
sample_out = model(X_train_tensor[0])

It seems that the last line is causing the following error:
RuntimeError: expected scalar type Long but found Float

The error message points to the model parameters, which are expected to be LongTensors because your input is already a LongTensor. Transform the inputs to FloatTensors instead and it should work:

X_train_tensor = torch.from_numpy(X_train).float()
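
For completeness, a minimal sketch of that conversion in context (the target line is just an assumption about later feeding Long class indices to something like nn.CrossEntropyLoss, which your snippet doesn't show):

X_train_tensor = torch.from_numpy(X_train).float()  # inputs must match the float32 weights of nn.Linear
y_train_tensor = torch.from_numpy(y_train).long()   # assumed: Long class-index targets, e.g. for nn.CrossEntropyLoss

sample_out = model(X_train_tensor[0])  # runs without the dtype mismatch

Only the inputs need the cast; the LongTensor conversions you were trying just moved the mismatch onto the model's float parameters.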