Why is my loss oscillating?

I have a small training dataset (about 30 .npz files containing human poses) and I built a small 1D CNN:

import torch.nn as nn

class ConvNet1D(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=3),  # Conv1d's first argument is the channel count
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.MaxPool1d(10))
        self.layer2 = nn.Flatten()
        self.layer3 = nn.Sequential(
            nn.Linear(384, 100),  # 384 = 64 channels * pooled sequence length of 6
            nn.ReLU())
        self.layer4 = nn.Linear(100, 8)
        # No Softmax here: nn.CrossEntropyLoss applies log-softmax internally

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        return out

I am using CrossEntropyLoss and the Adam optimizer with lr = 0.01, but my model shows an oscillating loss.
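My training step looks roughly like this (a simplified sketch; the channel count and train_loader are placeholders for my actual data pipeline):

import torch
import torch.nn as nn

model = ConvNet1D(in_channels=34)  # placeholder channel count
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(100):
    for inputs, targets in train_loader:    # train_loader: my DataLoader (placeholder)
        optimizer.zero_grad()
        outputs = model(inputs)             # raw logits, shape (N, 8)
        loss = criterion(outputs, targets)  # targets: class indices, shape (N,)
        loss.backward()
        optimizer.step()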

I am not sure if my outputs and targets are in the correct format for calculating the loss, but here are examples of them:

targets = tensor([5, 5, 0, 6, 5, 5, 5, 0, 2])
outputs =
tensor([[-0.2179,  0.0641, -0.0463,  0.1494, -0.1692, -0.0304, -0.0265,  0.0780],
        [-0.2389,  0.0690, -0.0377,  0.1289, -0.0495, -0.0376, -0.0252,  0.0188],
        [-0.1777,  0.1391, -0.0742,  0.1591, -0.1218, -0.0282, -0.0075,  0.0602],
        [-0.0500,  0.0697,  0.0477,  0.0617, -0.0887, -0.0537,  0.0594,  0.0357],
        [-0.2273,  0.0133, -0.0033,  0.1654, -0.1153, -0.0388, -0.0638,  0.0832],
        [-0.2053,  0.0420, -0.0301,  0.1187, -0.1064, -0.0643,  0.0074,  0.0473],
        [-0.1886,  0.1372, -0.0724,  0.1396, -0.1131, -0.0296, -0.0240,  0.0576],
        [-0.0500,  0.0697,  0.0477,  0.0617, -0.0887, -0.0537,  0.0594,  0.0357],
        [-0.1798,  0.1470, -0.0890,  0.1562, -0.1424, -0.0186, -0.0449,  0.0609]])

Could anyone give me a tip on why my model shows this oscillating loss? Could the small dataset be the reason for that?

Thanks!

Small datasets can indeed cause an oscillating loss: with only about 30 files, each batch gives a noisy gradient estimate, and the model struggles to generalize. Another possibility is that your model is too big for the data, so it is usually helpful to start with something small and check that you can overfit a tiny subset first; if you cannot drive the training loss near zero on a handful of samples, something else is wrong. Finally, try a smaller learning rate: 0.01 is quite high for Adam, whose PyTorch default is 0.001. (Your outputs and targets are in the correct format, by the way: nn.CrossEntropyLoss expects raw logits of shape (N, C) and class-index targets of shape (N,), which is exactly what you show.)
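As a concrete version of that sanity check (a minimal sketch; train_loader and the channel count are placeholders taken from the question), lower the learning rate and try to overfit a single fixed batch first:

import torch
import torch.nn as nn

model = ConvNet1D(in_channels=34)  # placeholder channel count
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # PyTorch's default for Adam, lower than 0.01

# Overfit one fixed batch: the loss should drop close to zero.
# If it does not, suspect a bug rather than the dataset size.
inputs, targets = next(iter(train_loader))
model.train()
for step in range(200):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(step, loss.item())

One caveat: the Dropout(0.5) layer in your model will resist fitting a single batch perfectly, so it can help to remove it temporarily for this test.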