Calculating the accuracy metric

Hi guys,
I am trying to calculate the accuracy of my trained model (for multi-class image classification).
My labels are 1, 2, 3, 4.
The __getitem__ in my dataset class (the one fed to the DataLoader) looks like this:

def __getitem__(self, index):
    """Generate one sample of data."""
    ID = self.list_IDs[index]
    image = Image.open(os.path.join(self.dir, ID))
    y = self.labels[ID]
    # self.train_transform is a flag that picks the train-time or
    # validation-time transform pipeline
    if self.train_transform:
        image = self.train_transforms(image)
    else:
        image = self.valid_transforms(image)
    # convert the transformed image back to a NumPy array before returning
    img = np.array(image)
    return img, y
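
For context, this is roughly how I build the test loader; the ImageDataset name, the variable names and the transform definitions below are simplified placeholders for my actual code:

from torch.utils.data import DataLoader
from torchvision import transforms

# placeholder transforms; my real pipelines have more steps
train_tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
valid_tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# ImageDataset stands in for my dataset class with the __getitem__ above
test_dataset = ImageDataset(list_IDs=test_ids, labels=labels_dict, dir=test_dir,
                            train_transform=False,
                            train_transforms=train_tf, valid_transforms=valid_tf)
test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False)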

The loop where I compute the test loss and accuracy looks like this:

# running totals, reset before the loop
avg_valid_loss = 0.0
correct = 0
total = 0

for batch_idx, (data, target) in enumerate(test_loader):
    output = model_transfer(data).squeeze()
    # put the batch dimension back after the squeeze
    output = torch.unsqueeze(output, 0)
    loss = criterion(output, target)
    # running average of the loss over the batches seen so far
    avg_valid_loss += (1 / (batch_idx + 1)) * (loss.item() - avg_valid_loss)
    # predicted class = index of the largest output value
    pred = output.data.max(1, keepdim=True)[1]
    correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
    total += data.size(0)

accuracy = 100. * (correct / total)
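
If it helps, I think the loop above is trying to do the same thing as this simpler sketch (assuming the model outputs one score per class and the targets are class indices that line up with those outputs):

correct = 0
total = 0
model_transfer.eval()
with torch.no_grad():
    for data, target in test_loader:
        output = model_transfer(data)            # shape [batch_size, num_classes]
        pred = output.argmax(dim=1)              # predicted class index per sample
        correct += (pred == target).sum().item()
        total += data.size(0)
accuracy = 100. * correct / total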

Am I calculating the accuracy correctly?
Also, since my labels are 1, 2, 3, 4, should I convert them into some form of one-hot encoding, or can the network predict them as they are?
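
To make that question concrete, this is the kind of conversion I have in mind (just a sketch; the shift assumes the criterion is something like nn.CrossEntropyLoss, which expects 0-based class indices):

import torch
import torch.nn.functional as F

labels = torch.tensor([1, 2, 3, 4, 2])        # raw labels as they are now

# option 1: shift them down to 0-based class indices
targets = labels - 1                          # tensor([0, 1, 2, 3, 1])

# option 2: turn them into one-hot vectors
one_hot = F.one_hot(targets, num_classes=4)   # shape [5, 4]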
Any other suggestions are also welcome.