Unusual accuracy result

import torch
import torch.nn as nn
from torch.autograd import Variable


def training(train_model, number_of_epochs=3, batch_size=10, learning_rate=1e-3):

    criterion = nn.MSELoss()  # mean squared error loss
    optimizer = torch.optim.Adam(train_model.parameters(),
                                 lr=learning_rate,
                                 weight_decay=1e-5)  # <--

    for epoch in range(number_of_epochs):  # loop over the dataset multiple times

        running_loss = 0
        total_train = 0
        correct_train = 0
        for i, data in enumerate(traindataloader):  # traindataloader is defined elsewhere
            # get the inputs
            t_image, mask = data
            t_image, mask = Variable(t_image), Variable(mask)
            print(type(mask))

            # zero the gradient buffers of all parameters
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = train_model(t_image)      # forward pass
            loss = criterion(outputs, t_image)  # loss against the input image
            loss.backward()                     # backpropagation
            optimizer.step()                    # update the parameters
            running_loss += loss.item()

            # accuracy
            _, predicted = torch.max(outputs.data, 1)
            total_train += t_image.nelement()
            correct_train += (predicted == t_image).sum().item()
            print(correct_train)
            print(total_train)
            train_accuracy = 100 * (correct_train / total_train)
            # avg_accuracy = train_accuracy / len(train_loader)

            print("Epoch {}, train Loss: {:.3f},Training Accuracy = {}".format(epoch, loss.item(), train_accuracy))

Result: The accuracy is 966.73. I understand this comes from dividing correct_train by total_train, but the accuracy should never exceed 100%, so why is it so high?
<class 'torch.Tensor'>
7424540
768000
Epoch 0, train Loss: 0.044,Training Accuracy = 966.7369791666667
<class 'torch.Tensor'>
14163910
1536000
Epoch 0, train Loss: 0.041,Training Accuracy = 922.1295572916666
<class 'torch.Tensor'>

Are you working on a segmentation use case?
If so, then t_image.nelement() would return the number of pixels in the input batch including the color channels, which might be wrong.
Also, are you using an older PyTorch version, as you are still using Variables?
If that's the case, check whether (predicted == t_image).sum() might be overflowing, as this was an issue in old PyTorch versions (pre-1.0, if I'm not mistaken).
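For illustration, here is a minimal sketch of what nelement() counts and how per-pixel accuracy is usually computed against the target mask in a segmentation setup. The shapes and names below are made up for the example, not taken from your code:

    import torch

    # Hypothetical shapes, just to illustrate the point above
    t_image = torch.zeros(10, 3, 240, 320)   # batch of 10 RGB images
    mask = torch.zeros(10, 240, 320).long()  # per-pixel class indices

    print(t_image.nelement())  # 2304000 -> counts the color channels as well
    print(mask.nelement())     # 768000  -> one entry per pixel

    # For segmentation, per-pixel accuracy would normally be computed
    # against the target mask, not the input image:
    # predicted = torch.argmax(outputs, dim=1)        # [N, H, W]
    # correct = (predicted == mask).sum().item()
    # total = mask.nelement()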

I am using PyTorch 1.4.0 and I am not sure about the segmentation use case; however, predicted is a tensor like this:
([[[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]],
correct_train = 7424540
total_train = 768000

Could you explain your use case, please?
I.e., what does the output of your model represent, and which shapes do your output and target have?

The dataset consists of images, and the final output I am expecting is the extracted features of those images. I have not stored anything in the target, because that gives me a singleton-dimension error.

What are you comparing the extracted features to, if you don’t store anything in the target?

I am comparing them with the original images from the train dataloader.

Could you post the shapes of all tensors involved in these operations, please?

loss = criterion(outputs, t_image)
(predicted == t_image)
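
E.g. something along these lines, using the variable names from your code snippet above, should print all the relevant shapes:

    print(t_image.shape)    # input batch
    print(mask.shape)       # target from the dataloader
    print(outputs.shape)    # model output
    print(predicted.shape)  # result of torch.max(outputs.data, 1)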

Below are the outputs. Thank you so much!
result of t_image = tensor([[[[0., 0., 0., …, 0., 0., 0.],
[0., 0., 0., …, 0., 0., 0.],
[0., 0., 0., …, 0., 0., 0.],
…,
[0., 0., 0., …, 0., 0., 0.],
[0., 0., 0., …, 0., 0., 0.],
[0., 0., 0., …, 0., 0., 0.]]],

    [[[0., 0., 0.,  ..., 0., 0., 0.],
      [0., 0., 0.,  ..., 0., 0., 0.],
      [0., 0., 0.,  ..., 0., 0., 0.],
      ...,
      [0., 0., 0.,  ..., 0., 0., 0.],
      [0., 0., 0.,  ..., 0., 0., 0.],
      [0., 0., 0.,  ..., 0., 0., 0.]]],
 -----

result of outputs = tensor([[[[0.2135, 0.2111, 0.1918, …, 0.1948, 0.2247, 0.2309],
[0.1966, 0.2167, 0.2018, …, 0.2265, 0.2696, 0.2443],
[0.1625, 0.2062, 0.1870, …, 0.2024, 0.2853, 0.2419],
…,
[0.1735, 0.2144, 0.2097, …, 0.2037, 0.2798, 0.2286],
[0.2016, 0.2447, 0.2487, …, 0.2459, 0.2944, 0.2489],
[0.2133, 0.2369, 0.2289, …, 0.2182, 0.2556, 0.2369]]],

    [[[0.2135, 0.2111, 0.1918,  ..., 0.1949, 0.2247, 0.2309],
      [0.1966, 0.2167, 0.2018,  ..., 0.2265, 0.2696, 0.2443],
      [0.1625, 0.2062, 0.1870,  ..., 0.2024, 0.2853, 0.2419],
      ...,
      [0.1735, 0.2144, 0.2097,  ..., 0.2038, 0.2799, 0.2287],
      [0.2016, 0.2447, 0.2487,  ..., 0.2459, 0.2944, 0.2489],
      [0.2133, 0.2369, 0.2290,  ..., 0.2182, 0.2556, 0.2369]]],


    [[[0.2135, 0.2109, 0.1915,  ..., 0.1949, 0.2246, 0.2309],
      [0.1965, 0.2172, 0.2023,  ..., 0.2268, 0.2692, 0.2439],
      [0.1625, 0.2061, 0.1867,  ..., 0.2028, 0.2851, 0.2419],
      ...,
      [0.1733, 0.2132, 0.2092,  ..., 0.2038, 0.2800, 0.2292],
      [0.2020, 0.2442, 0.2484,  ..., 0.2467, 0.2951, 0.2496],
      [0.2134, 0.2364, 0.2287,  ..., 0.2185, 0.2558, 0.2368]]],


    ...,


    [[[0.2135, 0.2111, 0.1920,  ..., 0.1949, 0.2247, 0.2309],
      [0.1966, 0.2167, 0.2021,  ..., 0.2264, 0.2696, 0.2443],
      [0.1625, 0.2062, 0.1868,  ..., 0.2024, 0.2852, 0.2418],
      ...,

loss = tensor(0.0259, grad_fn=)
tensor(0.0315, grad_fn=)

correct_train= 7479690
total_train = 768000
train_accuracy = 973.91796875
predicted = tensor([[[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]],

    [[0, 0, 0,  ..., 0, 0, 0],
     [0, 0, 0,  ..., 0, 0, 0],
     [0, 0, 0,  ..., 0, 0, 0],

Could you post the shapes only, not the values, via print(tensor.shape)?

print(loss.shape)

torch.Size([])
tensor([[[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
…,
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0],
[0, 0, 0, …, 0, 0, 0]],

    [[0, 0, 0,  ..., 0, 0, 0],
     [0, 0, 0,  ..., 0, 0, 0],
     [0, 0, 0,  ..., 0, 0, 0],
     ...,
     [0, 0, 0,  ..., 0, 0, 0],
     [0, 0, 0,  ..., 0, 0, 0],
     [0, 0, 0,  ..., 0, 0, 0]],

##############################
print(predicted.shape)
torch.Size([10, 240, 320])

###########################
print(t_image.shape)
torch.Size([10, 1, 240, 320])
############################

print(outputs.shape)
torch.Size([10, 1, 240, 320])
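
Based on these shapes, here is a minimal sketch (dummy values, only the shapes printed above are assumed) of what the comparison inside the accuracy computation produces under PyTorch broadcasting rules, which would explain how correct_train can exceed total_train:

    import torch

    # Shapes taken from the printouts above
    predicted = torch.zeros(10, 240, 320)    # [10, 240, 320]
    t_image = torch.zeros(10, 1, 240, 320)   # [10, 1, 240, 320]

    # The comparison broadcasts both tensors to [10, 10, 240, 320],
    # so its sum can be up to 10x the number of pixels in the batch.
    comparison = (predicted == t_image)
    print(comparison.shape)    # torch.Size([10, 10, 240, 320])
    print(comparison.numel())  # 7680000, while t_image.nelement() is 768000

Since the broadcast result has 10x as many elements as the input batch has pixels, the sum of matches can be up to 10x total_train, which is consistent with the correct_train and total_train values printed earlier and with an "accuracy" above 100%.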