Interpretation of loss and accuracy

Hello,
I’m training a CNN network and these are my results for the first 14 epochs:
[screenshot of the per-epoch train loss and train accuracy table omitted]

As you can see, even though my train loss rose to 1.3 from epoch 2 to epoch 3, my train accuracy increased, while I would expect it to decrease! (The same thing happens between other epochs, e.g. between epochs 8 and 9, and between epochs 10 and 11.) Is this result normal?
Note: I used nn.CrossEntropyLoss as my loss function.

This can be expected, since the loss and the accuracy do not strictly depend on each other.
The magnitudes of all logits contribute to the loss, while only the argmax logit determines the prediction (and thus the accuracy), as shown in this small example:

# setup
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# lower loss, accuracy = 0: both samples are predicted wrong, but not confidently
output = torch.tensor([[-1., 1.],
                       [-1., 1.]])
target = torch.tensor([0, 0])

loss = criterion(output, target)
print(loss)
> tensor(2.1269)

acc = (torch.argmax(output, dim=1) == target).float().mean()
print(acc)
> tensor(0.)

# higher loss, accuracy = 0.5: one sample is correct, the other is very confidently wrong
output = torch.tensor([[1., -1.],
                       [-5., 1.]])
target = torch.tensor([0, 0])
loss = criterion(output, target)
print(loss)
> tensor(3.0647)

acc = (torch.argmax(output, dim=1) == target).float().mean()
print(acc)
> tensor(0.5000)
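
In other words, the higher mean loss in the second example comes almost entirely from the single very confidently wrong sample (logits [-5., 1.] with target 0), even though the other sample is now predicted correctly. Here is a minimal sketch of how to see this, using reduction='none' to get the per-sample losses (printed values are approximate):

# per-sample losses for the second example
criterion_none = nn.CrossEntropyLoss(reduction='none')
output = torch.tensor([[1., -1.],
                       [-5., 1.]])
target = torch.tensor([0, 0])

per_sample_loss = criterion_none(output, target)
print(per_sample_loss)
> tensor([0.1269, 6.0025])  # their mean is the 3.0647 printed above

So if, between two epochs, a few samples become very confidently wrong while others flip to the correct class, the mean loss can rise while the accuracy rises as well, which matches what you are seeing.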