MLP-Mixer Low Accuracy Problem

Hi, I have trained an MLP-Mixer. The end of the model architecture is a linear layer whose output size is the number of classes. I am calculating the accuracy with the code below:

import torch
from torch import nn
from tqdm import tqdm

def test(args, testloader, model, criterion):
    model.eval()
    testloss = 0
    correct = 0
    i = -1
    with torch.no_grad():
        for data, target in tqdm(testloader):
            data, target = data.to(args.device), target.to(args.device)
            output = model(data)
            loss = criterion(output, target)
            testloss += loss.item()
            #_, predicted = torch.max(output, 1)
            _, predicted = torch.max(nn.Softmax(dim=1)(output), dim=1)
            correct += (predicted == target).sum().item()
            i += 1
        print("80.78554")
    return correct/(i+1)*args.batch_size
    

What is the problem with this code? I got 1.38% accuracy. The dataset has 10 classes, so even picking a class at random would give about 10%, not 1.38%. The dataset and dataloaders are PyTorch's built-in classes. The optimizer is Adam and the criterion is CrossEntropyLoss.

This line of code is missing parentheses: since / and * have equal precedence and evaluate left to right, args.batch_size ends up multiplying the result instead of being part of the divisor:

correct/(i+1)*args.batch_size
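
Adding the parentheses fixes it (a minimal sketch; this assumes every batch, including the last one, contains exactly args.batch_size samples):

    # accuracy = correct predictions / total number of samples seen
    return correct / ((i + 1) * args.batch_size)

Note that (i + 1) * args.batch_size overcounts whenever the last batch is smaller than args.batch_size. Dividing by the dataset length avoids both the manual counter and that edge case:

    # robust to a short final batch, and no need to track i at all
    return correct / len(testloader.dataset)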