Accuracy does not change across training

Hi, I’m trying to classify web users with a simple deep learning model. Everything seems to work except the validation step during training… Can anyone help?

# define network
import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.loss_fn = nn.CrossEntropyLoss()  # LogSoftmax + NLLLoss combined, so the net should output raw logits
        
        self.layers = nn.Sequential(
            nn.Linear(4, 30),   # 4 inputs: "mac source", "destination", "source port", "dest port"
            nn.ReLU(),
            nn.Linear(30, 60),
            nn.ReLU(),
            nn.Linear(60, 90),  # 90 class logits
            # no Softmax here: CrossEntropyLoss applies LogSoftmax internally
        )
        
    def forward(self, x):
        return self.layers(x)

    def loss_function(self, net_out, target):
        return self.loss_fn(net_out, target)

print('done.')
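(Note: net and optimizer are not defined in the snippet above; a minimal sketch of the missing setup, assuming Adam with its default learning rate, would be something like:)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = Net().to(device)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)  # assumed optimizer and learning rate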


# train
from datetime import datetime

dateTimeObj_start = datetime.now()
print(dateTimeObj_start)

n = 30
epochs = range(n)

for epoch in epochs:

    net.train()
    mean_loss = 0.
    correct = 0  # train-accuracy counter (accumulation omitted here)
    for it, batch in enumerate(train.split(64)):
        batch = batch.to(device)
        targets = batch[:, 1].long()        # column 1 holds the class label
        net_input = batch[:, (0, 3, 4, 5)]  # the 4 feature columns
        optimizer.zero_grad()

        output = net(net_input)
        loss = net.loss_function(output, targets)
        loss.backward()
        optimizer.step()
        mean_loss += loss.item()
        
    net.eval()
    with torch.no_grad():
        correct_eval = 0
        tot = 0
        for it_eval, batch_eval in enumerate(validation.split(64)):
            batch_eval = batch_eval.to(device)  # keep eval data on the same device as the model
            targets_eval = batch_eval[:, 1].long()
            net_input_eval = batch_eval[:, (0, 3, 4, 5)]
            output_eval = net(net_input_eval)
            _, predicted_eval = torch.max(output_eval, 1)  # argmax over the 90 class logits
            correct_eval += (predicted_eval == targets_eval).sum().item()
            tot += targets_eval.size(0)  # count actual samples (the last batch may be smaller than 64)

    accuracy_eval = correct_eval / tot
    print(f"Epoch {epoch}, MeanLoss: {mean_loss/(it+1)} and Accuracy: {100.*accuracy_eval} %")
                 
#torch.save(net.state_dict(), 'net.pt') 
dateTimeObj_end = datetime.now()
print("Trained in: ", dateTimeObj_end-dateTimeObj_start)
This is the output:
2020-07-27 10:41:17.974960
Epoch 0, MeanLoss: 5.888262160740737 and Accuracy: 0.18356575071811676 %
Epoch 3, MeanLoss: 3.2500329643793187 and Accuracy: 0.18356575071811676 %
Epoch 6, MeanLoss: 3.250030369414957 and Accuracy: 0.18356575071811676 %
Epoch 9, MeanLoss: 3.2500303690934538 and Accuracy: 0.18356575071811676 %
Epoch 12, MeanLoss: 3.250030281859246 and Accuracy: 0.18356575071811676 %
Epoch 15, MeanLoss: 3.2500302255868623 and Accuracy: 0.18356575071811676 %
Epoch 18, MeanLoss: 3.250030313893522 and Accuracy: 0.18356575071811676 %
Epoch 21, MeanLoss: 3.250030384129832 and Accuracy: 0.18356575071811676 %
Epoch 24, MeanLoss: 3.2500307015790586 and Accuracy: 0.18356575071811676 %
Epoch 27, MeanLoss: 3.250030863248852 and Accuracy: 0.18356575071811676 %
Trained in:  0:10:56.363752

Thanks a lot!
Mattia

Hi Mattia,
Since you haven’t shown correct_train and accuracy_train, it’s hard to say for sure!
The first thing you should do is print targets and the prediction output. I suspect your targets are integers while your predictions are probabilities.
Please make sure “(predicted_eval == targets_eval)” compares values on the same scale/type/probability threshold!
Make sure they are of the same type (e.g. float32); if the predictions are probabilities, threshold them into the same discrete labels as the targets.
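For instance, a quick sanity check inside your eval loop (a sketch reusing your variable names) would be:

print(targets_eval[:10], targets_eval.dtype)      # e.g. tensor([...]) torch.int64
print(predicted_eval[:10], predicted_eval.dtype)  # should match the targets' dtype and value range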

E.g. if your target classes are 0.0 and 1.0:

import numpy as np
predicted = torch.from_numpy(np.where(output_eval.cpu().numpy() >= 0.0, 1.0, 0.0)).to(device)

(Ignore .to(device) and .cpu() if you are not using a GPU.)

correct_eval += predicted.eq(targets_eval).sum().item()

Hope this helps!