My CNN model shows weird accuracy values, and sometimes it doesn't show the accuracy at all. I have been looking for a solution for a long time. Please help.

a = torch.tensor([])
total_step = len(xy)
loss_list = []
acc_list = []

for epoch in range(num_epochs):
    # for data in x_loader:
    total_trained = 0
    correctly_trained = 0

    for i in x_train, y_train:
        # criterion(outputs, y)
        outputs = model(x_train)
        loss = criterion(outputs, torch.max(y_train, 1)[1])
        loss_list.append(loss.item())

        # Backprop and perform Adam optimisation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # the optimizer updates the weight parameters to minimize the loss function

        total_trained += y_train.size(0)
        _, predicted_labels = torch.max(outputs.data, 1)
        correctly_trained += (y_train.data == predicted_labels).sum()

    # Printing the accuracy and loss values for the training data
    print("Epoch:%d" % (epoch))
    print("Training...")
    print(" Accuracy:%f loss:%f" % ((correctly_trained / total_trained) * 100, loss))

This time the output looks like this:

Epoch:0
Training...
Accuracy:116800.000000 loss:0.000000
Epoch:1
Training...
Accuracy:116800.000000 loss:0.000000

Could you check the shapes of y_train and predicted_labels and make sure that no broadcasting takes place?
Also, check the type() of the comparison result and cast it to long() or float() if necessary to avoid an overflow. (If I recall correctly, the result might be a uint8 in older PyTorch versions.)
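
For example, a quick check along these lines (reusing the variable names from your snippet) would reveal a mismatch:

print(y_train.shape, predicted_labels.shape)          # must match exactly, e.g. both torch.Size([N])
correct = (y_train == predicted_labels).long().sum()  # cast before summing to avoid a uint8 overflow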

PS: Don’t use the .data attribute, as it might have unwanted side effects.

Inferring from the fact that you wrote torch.max(y_train, 1)[1],
y_train seems to be a label tensor of shape (N, Classes).

correctly_trained += (y_train.data == predicted_labels).sum()
is accumulating up to N * Classes instances, because the comparison broadcasts, while total_trained += y_train.size(0) only accumulates N instances.

I think that's why you get an accuracy bigger than 1.
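
A toy reproduction of the suspected broadcasting (with made-up shapes, not your real data):

import torch

y = torch.tensor([[0], [1], [1], [0]])  # 2-D target, shape (4, 1)
pred = torch.tensor([0, 1, 0, 0])       # 1-D predictions, shape (4,)
eq = y == pred                          # broadcasts to shape (4, 4) instead of (4,)
print(eq.shape, eq.sum())               # torch.Size([4, 4]) tensor(8): 8 "matches" from only 4 samples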

y_train shape is
torch.Size([2374, 1])

For this shape, torch.max(y_train, 1) will yield a tensor full of zeros.
Which criterion are you using and what values does y_train contain?
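
For instance, with a small stand-in for y_train:

import torch

y = torch.tensor([[1], [0], [1]])  # shape (3, 1)
print(torch.max(y, 1)[1])          # tensor([0, 0, 0]): the argmax over a size-1 dim is always 0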

criterion = nn.CrossEntropyLoss()
y_train contains 1s and 0s

In that case you should remove dim1 from the target via y_train = y_train.squeeze(1), and not call torch.max(y_train, 1); pass the target directly to the criterion instead.
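
A minimal sketch of that change, reusing outputs, y_train, and criterion from your snippet:

y_target = y_train.squeeze(1).long()  # (N, 1) -> (N,); CrossEntropyLoss expects long class indices
loss = criterion(outputs, y_target)   # pass the target directly, no torch.max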

Now I get the error "Target 1 is out of bounds."

For a target containing values in [0, 1] and nn.CrossEntropyLoss, the output of your model should have the shape [batch_size, 2]. Could you check that and rerun the code, please?
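
Something like this would confirm it (hidden_dim here is just a placeholder for whatever your last layer actually uses):

print(outputs.shape)  # should be torch.Size([N, 2]) for targets in {0, 1} with nn.CrossEntropyLoss
# if the classifier head is e.g. nn.Linear(hidden_dim, 1), change it to nn.Linear(hidden_dim, 2)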

I am still stuck here. The total training set is 2374, but my model reports 2841678 correct outputs. What's the problem? Please help.

Now it's showing a "too many values to unpack (expected 2)" error.

Here is the working solution:

train_iterator = torch.utils.data.TensorDataset(x_train, y_train.reshape(-1).long())
train_data = torch.utils.data.DataLoader(train_iterator, batch_size=64, shuffle=True)

# peek at one batch to confirm the shapes
for x, y in train_data:
    break
print(x.shape, y.shape)  # prints torch.Size([64, 1, 100]) torch.Size([64])

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

epochs = 20
batches = len(train_data)
# number of batches in the dataset (each batch contains 64 examples from the training data)

for epoch in range(epochs):
    epoch_loss = 0.0
    epoch_accuracy = 0.0
    for features, labels in train_data:
        outputs = model(features)
        loss = criterion(outputs, labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        epoch_loss += loss.item()  # accumulate the loss of each training batch
        epoch_accuracy += (outputs.argmax(1) == labels).float().mean().item()

    print(f'Epoch: {epoch} -> Loss: {(epoch_loss/batches):.8f}, Accuracy: {(epoch_accuracy/batches):.8f}')
    # dividing epoch_loss by the number of batches gives the mean epoch loss,
    # printed rounded to 8 decimals; same for epoch_accuracy

Ok brother, I have done it and it's working perfectly.