TypeError: relu(): argument 'input' (position 1) must be Tensor, not tuple

I am training a stacked GRU with a linear output layer. I have verified that the input is 3-dimensional. The code gives me the following error: TypeError: relu(): argument 'input' (position 1) must be Tensor, not tuple

I have written code to convert my input numpy arrays to torch tensors, but I am still getting that error.

import numpy as np
import torch
from torch.utils.data import Dataset

class CustomDataset(Dataset):
    def __init__(self, X_data, Y_data):
        super(CustomDataset, self).__init__()
        self.X_data = X_data.float()
        self.Y_data = Y_data.float()
    # Initializes the data and preprocessing.

    def __getitem__(self, index):
        return self.X_data[:, :, :], self.Y_data[:, :, :]
    # Returns data (input and output) in batches.
    # There is an error in this part of the code.

    def __len__(self):
        return len(self.X_data[:, 1, 1])
    # Returns the size of the input data.

inputdata = np.load('4thin.npy')
outputdata = np.load('4thout.npy')

trainX = inputdata[:208,:,:10]
trainX = torch.from_numpy(trainX)
print(trainX.shape)

trainY = outputdata[:208,:,:8]
trainY = torch.from_numpy(trainY)
print(trainY.shape)

testX = inputdata[208:,:,:10]
testX = torch.from_numpy(testX)
print(testX.shape)

testY = outputdata[208:,:,:8]
testY = torch.from_numpy(testY)
print(testY.shape)
# Select a subset of the features from the data for testing of the script.
# Loads in the input and output numpy arrays and splits them into training and validation datasets.

train_dataloader = CustomDataset(trainX, trainY)
test_dataloader = CustomDataset(testX, testY)
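
For reference, the plan is to eventually wrap these datasets in a standard torch.utils.data.DataLoader for batching. A rough sketch of what that would look like (train_loader, test_loader, and the batch_size value here are just placeholders, not code I am running yet):

from torch.utils.data import DataLoader

# Serve the samples in shuffled mini-batches during training,
# and in order (unshuffled) during evaluation.
train_loader = DataLoader(train_dataloader, batch_size=16, shuffle=True)
test_loader = DataLoader(test_dataloader, batch_size=16, shuffle=False)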

The size of each of the tensors is the following:

torch.Size([208, 3, 10])
torch.Size([208, 1, 8])
torch.Size([208, 3, 10])
torch.Size([208, 1, 8])

I set up the program to print the X input tensor and got the following output:

tensor([[[  0.,   0.,   0.,  ...,   0.,   0.,   0.],
         [  0.,   0.,   0.,  ...,   0.,   0.,   0.],
         [ 15.,   5.,  34.,  ...,   4.,  11.,   1.]],

        [[  0.,   0.,   0.,  ...,   7.,   6.,   7.],
         [  6.,  15.,   0.,  ...,   0.,   7.,   0.],
         [  1.,   5.,  17.,  ...,  61.,   9.,   0.]],

        [[ 58.,  58.,  35.,  ...,  87.,  54.,  38.],
         [  0.,   0.,   0.,  ...,   0.,   0.,   0.],
         [  8.,   0.,   0.,  ...,   0.,   0.,   0.]],

        ...,

        [[  2.,   0.,   8.,  ...,   3.,   2.,   2.],
         [  4.,   4.,  30.,  ...,   5.,   0.,  23.],
         [125., 145.,  97.,  ..., 179., 704., 140.]],

        [[  0.,   0.,   0.,  ...,   1.,   0.,   0.],
         [ 78.,   0.,  43.,  ...,  62.,   0.,  64.],
         [ 45.,  88.,   0.,  ...,   0.,  15.,   0.]],

        [[  0.,   0.,   0.,  ...,   0.,   0.,   0.],
         [  0.,   0.,   0.,  ...,   0.,   0.,   0.],
         [112.,  84., 104.,  ..., 133., 125.,  75.]]])

The script seems to think this is a tuple and not a tensor. Help would be much appreciated!
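
For reference, I can reproduce the exact same error message by deliberately passing a tuple of tensors (instead of a single tensor) to relu. This is only a minimal sanity check, not my actual model code:

import torch
import torch.nn.functional as F

pair = (torch.zeros(3), torch.zeros(3))  # a tuple of tensors, not a tensor
out = F.relu(pair)  # TypeError: relu(): argument 'input' (position 1) must be Tensor, not tuple

So somewhere in my code a tuple must be reaching relu, but I can't see where.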

Could you post the model definition, please?
Sometimes these errors are caused by an accidental trailing comma at the end of a line of code, e.g.

x = self.layer(x), 
x = F.relu(x)

(note the comma in the first line of code).
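
If it isn't a stray comma, another very common cause with recurrent models is passing the output of nn.GRU straight into relu: nn.GRU returns a tuple (output, h_n), so it has to be unpacked first. Here is a minimal sketch of a forward pass, assuming batch_first inputs of shape (batch, seq, features) like yours; the hidden size, number of layers, and layer names are just placeholders:

import torch
import torch.nn as nn
import torch.nn.functional as F

class StackedGRU(nn.Module):
    def __init__(self):
        super().__init__()
        # 10 input features and 8 output features, matching the shapes printed above.
        self.gru = nn.GRU(input_size=10, hidden_size=32, num_layers=2, batch_first=True)
        self.fc = nn.Linear(32, 8)

    def forward(self, x):
        out, h_n = self.gru(x)   # nn.GRU returns a tuple: (output, final hidden state)
        out = F.relu(out)        # relu now receives a Tensor, not the tuple
        return self.fc(out)

Note there is also no trailing comma after self.gru(x); with one, out would become a tuple again and relu would fail with exactly the error you are seeing.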