Need suggestions on getting accuracy

So my labels look like this:

_,labels = next(iter(dataloaders['val']))
labels

How should I change the training loop to compute the accuracy?

for epoch in range(EPOCHS):
        print('Epoch ', epoch,'/',EPOCHS-1)
        print('-'*15)

        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0.0

            # Iterate over data.
            for inputs,labels in dataloaders[phase]:
                inputs = inputs.to(device, dtype=torch.float)
                labels = labels.to(device, dtype=torch.float)

                # zero the parameter gradients
                optimizer.zero_grad()

                with torch.set_grad_enabled(phase == 'train'):

                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)

                    loss = loss_fn(outputs,labels)

                    # we backpropagate to set our learning parameters only in training mode
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                running_loss += loss.item() * inputs.size(0)
                
                if phase == 'val':
                    running_corrects += torch.sum(preds == torch.argmax(labels.data, dim=1))

            # step the learning rate scheduler
            if phase == 'train':
                scheduler.step()
                epoch_loss = running_loss / float(dataset_sizes[phase])
                xm.master_print('{} Loss: {:.4f}'.format(phase, epoch_loss))
            else:
                epoch_loss = running_loss / float(dataset_sizes[phase])
                epoch_acc = running_corrects / float(dataset_sizes[phase])
                xm.master_print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

link to my notebook

epoch_acc seems to already calculate the accuracy for the validation loop.
Are you seeing unexpected values or any other issues?
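
For reference, here is that accuracy check in isolation, as a minimal sketch assuming one-hot encoded labels like in your loop (the batch below is made up for illustration):

import torch

# hypothetical batch: 4 samples, 3 classes
outputs = torch.randn(4, 3)                     # raw model logits
labels = torch.eye(3)[[0, 2, 1, 2]]             # one-hot targets

preds = torch.argmax(outputs, dim=1)            # predicted class indices
targets = torch.argmax(labels, dim=1)           # decode one-hot back to class indices
batch_acc = (preds == targets).float().mean()   # fraction of correct predictions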

It is breaking after the end of the 1st training loop. Have a look at my notebook.

Could you check if your code is running fine without using multiprocessing via:

xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=1, start_method='fork')

I am running this notebook with torch_xla on a TPU… if I remove that line, it gives me an error.
Can you try to copy and edit this notebook?

I changed the code a little bit and it started working, but the loss is still too high. What changes should I make in the model class to improve the accuracy? I am training 51,000 images sized at 224 x 224, with a batch size of 256, for 25 epochs.
Notebook Link

import torch.nn as nn
import torch.nn.functional as F
import efficientnet_pytorch

class EfficientNet_b0(nn.Module):
    def __init__(self):
        super(EfficientNet_b0, self).__init__()
        self.model = efficientnet_pytorch.EfficientNet.from_pretrained('efficientnet-b0')
        
        self.classifier_layer = nn.Sequential(
            nn.Linear(1280, 512),
            nn.BatchNorm1d(512),
            nn.Dropout(0.2),
            nn.Linear(512, 256),
            nn.Linear(256, 104)
        )
        
    def forward(self, inputs):
        x = self.model.extract_features(inputs)

        # Pooling and final linear layer
        x = self.model._avg_pooling(x)
        x = x.flatten(start_dim=1)
        x = self.model._dropout(x)
        x = self.classifier_layer(x)
        return x
    
model = EfficientNet_b0()
model = model.to(device)

You might want to add activation functions into classifier_layer. If that doesn’t help, try to overfit a small data sample (e.g. just 10 samples) by playing around with some hyperparameters. Once this is done you could try to scale up the use case again.
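
For example, a sketch with ReLU between the linear layers (the sizes are copied from your classifier_layer; ReLU is one common choice, not the only one):

import torch.nn as nn

# same head as before, with non-linearities between the linear layers
classifier_layer = nn.Sequential(
    nn.Linear(1280, 512),
    nn.BatchNorm1d(512),
    nn.ReLU(inplace=True),
    nn.Dropout(0.2),
    nn.Linear(512, 256),
    nn.ReLU(inplace=True),
    nn.Linear(256, 104)
)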

I changed the model after your comments, but it's not helping me out; rather, the accuracy is decreasing.
Need help on how to increase the accuracy of pretrained models using PyTorch. Any good references will also help!
Notebook Link

import torch
import torch.nn as nn
import efficientnet_pytorch

class EfficientNet_b0(nn.Module):
    def __init__(self):
        super(EfficientNet_b0, self).__init__()
        self.model = efficientnet_pytorch.EfficientNet.from_pretrained('efficientnet-b0')
        
        self.dense_layer_1 = nn.Linear(1280, 512)
        self.batchNorm_layer = nn.BatchNorm1d(512)
        self.dropout_layer = nn.Dropout(0.2)
        
        self.dense_layer_2 = nn.Linear(512, 216)
        
        self.dense_layer_3 = nn.Linear(216, 104)
        
        
    def forward(self, inputs):
        x = self.model.extract_features(inputs)

        # Pooling and final linear layer
        x = self.model._avg_pooling(x)
        x = x.flatten(start_dim=1)
        x = self.model._dropout(x)
        
        x = self.dense_layer_1(x)
        x = self.batchNorm_layer(x)
        x = self.dropout_layer(x)
        
        x = self.dense_layer_2(x)
        x = torch.relu(x)
        
        x = self.dense_layer_3(x)
        x = torch.log_softmax(x, dim=1)
        return x
    
model = EfficientNet_b0()

Did you try to overfit a small data sample, and is this output the result of that test?
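
If it helps, here is a minimal sketch of that overfitting test, reusing the names from your training loop (model, loss_fn, optimizer, dataloaders, device); the 10 samples and 200 steps are arbitrary:

# take one tiny, fixed batch and try to drive its loss towards zero;
# if the model cannot memorize 10 samples, something in the setup is broken
small_inputs, small_labels = next(iter(dataloaders['train']))
small_inputs = small_inputs[:10].to(device, dtype=torch.float)
small_labels = small_labels[:10].to(device, dtype=torch.float)

model.train()
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(small_inputs), small_labels)
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print('step {}: loss {:.4f}'.format(step, loss.item()))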