IndexError: dimension specified as 0 but tensor has no dimensions

Hello, I am making a convnet in PyTorch. I am using the Microsoft Cats vs. Dogs dataset from Kaggle and I am following the sentdex tutorial:
https://pythonprogramming.net/convnet-model-deep-learning-neural-network-pytorch/
Here is my code for accuracy:


    def accuracy(self, X, y):
        correct = 0
        total = 0
        with torch.no_grad():
            for i in range(len(X)):
                x = X[i]
                y = y[i]
                output = self.forward(x.view(-1, 1, 50, 50))
                if torch.argmax(output) == torch.argmax(y):
                    correct += 1
                total += 1
            accuracy = correct / total
            return accuracy

And here is my code to train my model:

    
    def train(self):
        optimizer = optim.Adam(net.parameters(), lr=0.001)
        loss_function = nn.MSELoss()
        for epoch in range(self.epochs):
            print('Epoch: ', epoch + 1)
            for i in tqdm(range(0, len(train_X), self.batch_size)):
                # print(i, i + self.batch_size)
                batch_x = train_X[i: i + self.batch_size].view(-1, 1, 50, 50)
                batch_y = train_y[i: i + self.batch_size]
                net.zero_grad()
                outputs = self.forward(batch_x)
                loss = loss_function(outputs, batch_y)
                loss.backward()
                self.opt.step()
            print('Training Data Accuracy: ', self.accuracy(train_X, train_y))

If you look at the last line of the train function, you can see that it prints the training data accuracy. When I run this code, it gives me the following error:

    Traceback (most recent call last):
      File "3.Convnets_in_Pytorch.py", line 119, in <module>
        net.train()
      File "3.Convnets_in_Pytorch.py", line 109, in train
        print('Testing Data Accuracy: ', self.accuracy(test_X, test_y))
      File "3.Convnets_in_Pytorch.py", line 86, in accuracy
        y = y[i]
    IndexError: dimension specified as 0 but tensor has no dimensions

It is occurring with train_y and not with train_X. These are the two variables that hold my training data. Here is the code I used to load the data, so anyone looking at this post has a better idea of what they are:

    # Loading in our data
    training_data = np.load('training_data.npy', allow_pickle=True)
    X = torch.Tensor([i[0] for i in training_data]).view(-1, 50, 50)

    # Normalizing the data
    X = X / 255.0
    y = torch.Tensor([i[1] for i in training_data])
    print(y.shape)

    # Just to find how many elements are in the 10 percent test split of X
    val_pct = 0.1
    val_size = int(len(X) * val_pct)

    # Train/test split
    train_X = X[:-val_size]
    train_y = y[:-val_size]
    print(train_y.shape)
    test_X = X[-val_size:]
    test_y = y[-val_size:]

If I print the shape of y:

    torch.Size([25004, 2])

This is how my train_y looks:

    tensor([[0., 1.],
            [0., 1.],
            [1., 0.],
            ...,
            [1., 0.],
            [0., 1.],
            [0., 1.]])

And when I print the shape of train_y:

    torch.Size([22504, 2])
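
These numbers are consistent with the 10 percent split; a quick check of the arithmetic (using the sizes printed above):

    total = 25004                 # len(X), from the shape of y above
    val_size = int(total * 0.1)   # 2500
    print(total - val_size)       # 22504, which matches train_y.shape[0]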

So when the shapes are like this, indexing should be perfectly valid, so why is PyTorch giving me the error? Please tell me what I should do in this scenario.
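
For reference, row indexing on a small, freshly created tensor of the same form behaves the way I expect; here is a minimal sketch (made-up values, not my actual data):

    import torch

    # Minimal sketch (made-up values, not my actual data): a one-hot label
    # tensor shaped like a few rows of train_y
    labels = torch.Tensor([[0., 1.], [1., 0.], [0., 1.]])
    print(labels.shape)              # torch.Size([3, 2])
    print(labels[0])                 # tensor([0., 1.]) -- indexing a row works
    print(torch.argmax(labels[0]))   # tensor(1)

So indexing by row works in isolation, yet it still fails inside my accuracy function.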
