I used an Inception v3 pretrained model on my dataset. In training I used:
outputs, secondOut = model(images)
loss1 = criterion(outputs, labels)
loss2 = criterion(secondOut,labels)
loss = loss1 + loss2 * 0.4
Training accuracy reached up to 98% and validation accuracy reached up to 90%,
but when I test the saved model that I got after training, the testing accuracy is just 2%. Can anyone tell me the reason?
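For context, this is roughly how the training step looked. This is only a minimal sketch: the weights/optimizer setup and the num_classes and train_loader names here are placeholders, not my exact script.

import torch
import torch.nn as nn
from torchvision import models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Inception v3 with the auxiliary classifier enabled (placeholder setup)
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT, aux_logits=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)                        # main head
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)    # auxiliary head
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()  # aux logits are only returned in train mode
for images, labels in train_loader:  # images assumed to be 3x299x299, normalized
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    outputs, secondOut = model(images)   # main logits and auxiliary logits
    loss1 = criterion(outputs, labels)
    loss2 = criterion(secondOut, labels)
    loss = loss1 + loss2 * 0.4           # weight the auxiliary loss by 0.4
    loss.backward()
    optimizer.step()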
This is the testing code:
import numpy as np
import torch
from sklearn.metrics import confusion_matrix

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

# Set the model to evaluation mode
model.eval()

# Initialize variables for TP, TN, FP, FN
TP = TN = FP = FN = 0
FP_sum = FN_sum = 0

# Initialize the prediction and target lists
all_predictions = []
all_targets = []

# Iterate over the test dataset and calculate TP, TN, FP, FN
with torch.no_grad():
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)

        outputs = model(images)
        _, predicted = torch.max(outputs, 1)

        # Convert labels and predictions to CPU
        labels_cpu = labels.cpu().numpy()
        predicted_cpu = predicted.cpu().numpy()

        cm = confusion_matrix(labels_cpu, predicted_cpu, labels=range(num_classes_test))
        TP += np.sum(np.diag(cm))

        # Append predictions and targets to the lists
        all_predictions.extend(predicted_cpu)
        all_targets.extend(labels_cpu)

print(f'True Positives: {TP}')
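To get the overall test accuracy from all_predictions and all_targets, something like the following works. This is just a sketch using sklearn.metrics, not necessarily the exact code I ran to get the 2% figure.

from sklearn.metrics import accuracy_score, confusion_matrix

# Overall accuracy over the whole test set
test_acc = accuracy_score(all_targets, all_predictions)
print(f'Test accuracy: {test_acc * 100:.2f}%')

# Full confusion matrix over the whole test set (rows = true class, columns = predicted class)
cm_total = confusion_matrix(all_targets, all_predictions, labels=range(num_classes_test))
print(cm_total)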