Train loss decreases but validation loss doesn't change

I use a pre-trained ResNet to extract a 1000-dimensional feature vector for each image, then feed these features into my own network for a classification task, training it with the triplet loss.
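
The extraction step isn't shown below; a minimal sketch of what I mean, assuming torchvision's pretrained ResNet-50 (its final fc layer outputs 1000 values per image):

import torch
import torchvision

# load a pretrained ResNet and use its 1000-d output as the feature vector
resnet = torchvision.models.resnet50(pretrained=True)
resnet.eval()

images = torch.randn(4, 3, 224, 224)  # dummy batch standing in for my preprocessed images
with torch.no_grad():
    features = resnet(images)  # shape (4, 1000)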

Here is part of my code:

import torch

class Network(torch.nn.Module):
    def __init__(self, n_feature=1000, n_hidden_1=200, n_hidden_2=100,
                 n_hidden_3=50, n_hidden_4=20, n_output=3):
        super(Network, self).__init__()
        # MLP that maps 1000-d ResNet features to a 3-d embedding
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_feature, n_hidden_1),
            torch.nn.BatchNorm1d(n_hidden_1),
            torch.nn.ReLU(),
            torch.nn.Linear(n_hidden_1, n_hidden_2),
            torch.nn.BatchNorm1d(n_hidden_2),
            torch.nn.ReLU(),
            torch.nn.Linear(n_hidden_2, n_hidden_3),
            torch.nn.BatchNorm1d(n_hidden_3),
            torch.nn.ReLU(),
            torch.nn.Linear(n_hidden_3, n_hidden_4),
            torch.nn.BatchNorm1d(n_hidden_4),
            torch.nn.ReLU(),
            torch.nn.Linear(n_hidden_4, n_output),
            torch.nn.Sigmoid(),  # squashes each embedding dimension into (0, 1)
        )

    def forward(self, x):
        return self.net(x)
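
For reference, the network maps a batch of 1000-d feature vectors to 3-d embeddings; a quick shape check with random input:

model = Network()
model.eval()  # eval mode so BatchNorm1d uses running stats (train mode needs batch size > 1)
x = torch.randn(8, 1000)
print(model(x).shape)  # torch.Size([8, 3]), each value in (0, 1) because of the Sigmoid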

and here are the training and validation loops:

from tqdm import tqdm

for epoch in tqdm(range(n_epoch)):
    model.train()
    for step, (batch_anchor, batch_positive, batch_negative) in enumerate(train_loader):

        anchor_out = model(batch_anchor)
        positive_out = model(batch_positive)
        negative_out = model(batch_negative)

        loss = loss_func(anchor_out, positive_out, negative_out)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():  # validation needs no gradients
        for step, (batch_anchor_val, batch_positive_val, batch_negative_val) in enumerate(val_loader):

            anchor_out_val = model(batch_anchor_val)
            positive_out_val = model(batch_positive_val)
            negative_out_val = model(batch_negative_val)

            loss_val = loss_func(anchor_out_val, positive_out_val, negative_out_val)

where I define the loss function and optimizer as follows:

from torch import optim

optimizer = optim.Adam(model.parameters(), lr=0.002)
loss_func = torch.nn.TripletMarginLoss(p=2, margin=1)
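
As I understand it, this loss computes max(d(anchor, positive) - d(anchor, negative) + margin, 0) averaged over the batch, where d is the Euclidean distance for p=2; a quick check against a manual computation (random tensors, just for illustration):

import torch

anchor = torch.randn(4, 3)
positive = torch.randn(4, 3)
negative = torch.randn(4, 3)

loss_func = torch.nn.TripletMarginLoss(p=2, margin=1)

# manual version of the same formula
d_ap = torch.norm(anchor - positive, p=2, dim=1)
d_an = torch.norm(anchor - negative, p=2, dim=1)
manual = torch.clamp(d_ap - d_an + 1.0, min=0).mean()

print(loss_func(anchor, positive, negative).item(), manual.item())  # agree up to a tiny internal eps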

and some results:

Epoch: 1/50 - Loss: 0.8764 - Val_loss: 0.9920
Epoch: 2/50 - Loss: 0.7035 - Val_loss: 0.9897
Epoch: 3/50 - Loss: 0.6313 - Val_loss: 0.9972
Epoch: 4/50 - Loss: 0.5958 - Val_loss: 0.9980
Epoch: 5/50 - Loss: 0.5724 - Val_loss: 0.9930
Epoch: 6/50 - Loss: 0.5541 - Val_loss: 1.0123
...

During training, the train loss steadily decreases, but the validation loss barely changes, and I don't know why. Does anyone know what the potential reasons might be? I have read some blog posts and checked my data set, and I am sure it is split correctly.