Accuracy calculation done right?

Hello, I have been training my RNN model on 1897 * 0.75 examples and testing on the remaining 1897 * 0.25. I have two questions:
1- In the code I provided below, is this the right way to calculate the accuracy of the model?
2- I'm getting a classification accuracy of 90-97.5%. Is this normal, or have I overfitted my model?
P.S.: features = 20 -> classes = 10
Thank you!
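
For context, the 75/25 split and data loaders are set up roughly like this (a minimal sketch with placeholder tensors and batch size, not my exact preprocessing code):

import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

# Placeholder data with the same shapes as my real set: 1897 examples, 20 features, 10 classes
X = torch.randn(1897, 20)
y = torch.randint(0, 10, (1897,))
dataset = TensorDataset(X, y)

# 75/25 train/test split
n_train = int(0.75 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=64, shuffle=False)

And here is the training/evaluation loop: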

iter = 0  
for epoch in range(num_epochs):
    for i, (features, labels) in enumerate(train_loader):
        features = features.float()
        #labels = labels.view(-1)
        labels = labels.long()
        # Clear gradients w.r.t. parameters
        optimizer.zero_grad()
    
        # Forward pass to get output
        # outputs.size() --> [batch_size, output_dim] = [69, 10]
        outputs = model(features)
        
        # Calculate Loss: softmax --> cross entropy loss
        loss = criterion(outputs, labels)
        
        # Getting gradients w.r.t. parameters
        loss.backward()
        
        # Updating parameters
        optimizer.step()
        
        iter += 1
        
        if iter % 500 == 0:
            # Calculate Accuracy         
            correct = 0
            total = 0
            # Iterate through test dataset
            for features, labels in test_loader:
                features = features.float()
                
                # Forward pass only to get logits/output
                outputs = model(features)
                
                # Get predictions from the maximum value
                _, predicted = torch.max(outputs, 1)
                
                # Total number of labels
                total += labels.size(0)
                
                # Total correct predictions
                correct += (predicted == labels.long()).sum().item()
            
            accuracy = 100 * correct / total
            
            # Print loss and test accuracy
            print('Iteration: {}. Loss: {:.4f}. Accuracy: {:.2f}'.format(iter, loss.item(), accuracy))
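
For reference, here is what I think the accuracy computation boils down to when factored into a standalone helper (just a sketch, assuming the same model and test_loader as above, with evaluation mode and no_grad added):

import torch

def evaluate(model, loader):
    # Percentage of correctly classified examples in the loader
    model.eval()                # disable dropout / freeze batch-norm statistics
    correct = 0
    total = 0
    with torch.no_grad():       # no gradients needed during evaluation
        for features, labels in loader:
            outputs = model(features.float())
            predicted = outputs.argmax(dim=1)
            correct += (predicted == labels.long()).sum().item()
            total += labels.size(0)
    model.train()               # restore training mode
    return 100.0 * correct / total

accuracy = evaluate(model, test_loader)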