Unexpected output from loss function

Hi all,
I am training a machine learning model, and from the very first epoch I am getting a loss of 0 with 100% accuracy. I don't expect that output, and I don't understand why the loss would be 0 with 100% accuracy.

Any ideas about this?

My model:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, input_dim=7, hidden_dim=20, output_dim=1):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, hidden_dim)
        self.fc4 = nn.Linear(hidden_dim, hidden_dim)
        self.fc5 = nn.Linear(hidden_dim, hidden_dim)
        self.fc6 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        x = self.fc1(x).clamp(min=0)  # clamp(min=0) is equivalent to ReLU
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        x = F.relu(self.fc5(x))
        x = self.fc6(x)  # raw logits; no sigmoid here
        return x
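
For reference, a minimal forward-pass sanity check (a sketch only; the batch of 4 random samples is hypothetical, and the 7 features match the default input_dim):

net = Net(input_dim=7)
dummy = torch.randn(4, 7)   # hypothetical batch: 4 samples, 7 features
out = net(dummy)
print(out.shape)            # expected: torch.Size([4, 1]) -- raw logits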

The loss function and training loop I am using:

model = Net(x.shape[1])
criterion = torch.nn.BCEWithLogitsLoss()
#criterion = torch.nn.CrossEntropyLoss()
#optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
num_epochs = 200
train_loss = []

# Train the network
for epoch in range(num_epochs):
    for inputs, labels in train_loader:

        inputs = inputs.float()
        labels = labels.float()

        # Feed forward
        output = model(inputs)

        # Loss calculation
        loss_train = criterion(output, labels)
        train_loss.append(loss_train.item())  # .item() detaches the scalar from the graph

        # Clear the gradient buffer (we don't want to accumulate gradients)
        optimizer.zero_grad()

        # Backpropagation
        loss_train.backward()

        # Weight update: w <-- w - lr * gradient
        optimizer.step()

    # Accuracy
    # Since we are using a sigmoid, we will need to perform some thresholding
    output = (output > 0.5).float()
    # Accuracy: (output == labels).float().sum() / output.shape[0]
    accuracy = (output == labels).float().mean()
    # Print statistics
    print("Epoch {}/{}, Loss: {:.3f}, Accuracy: {:.3f}".format(epoch+1, num_epochs, loss_train, accuracy))

The output:

Epoch 1/200, Loss: 0.000, Accuracy: 1.000
Epoch 2/200, Loss: 0.000, Accuracy: 1.000
Epoch 3/200, Loss: 0.000, Accuracy: 1.000
Epoch 4/200, Loss: 0.000, Accuracy: 1.000
Epoch 5/200, Loss: 0.000, Accuracy: 1.000
Epoch 6/200, Loss: 0.000, Accuracy: 1.000
Epoch 7/200, Loss: 0.000, Accuracy: 1.000
Epoch 8/200, Loss: 0.000, Accuracy: 1.000
Epoch 9/200, Loss: 0.000, Accuracy: 1.000
Epoch 10/200, Loss: 0.000, Accuracy: 1.000
...

Can anybody explain this to me?

Thanks in advance.

Can you post the model?
Have you checked whether your model backprops properly?
You may be breaking backprop and for some reason always getting the same output.
Is your output varying? A quick way to check both is sketched below.
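
Something like this would show both (a minimal sketch; it assumes your `model`, `criterion`, and `train_loader` from the training loop above):

# Grab one batch and check that gradients actually flow
inputs, labels = next(iter(train_loader))
inputs, labels = inputs.float(), labels.float()

output = model(inputs)
print("output range:", output.min().item(), output.max().item())  # is the output varying?
print("shapes:", output.shape, labels.shape)                      # mismatched shapes can silently broadcast

loss = criterion(output, labels)
print("loss:", loss.item())

model.zero_grad()
loss.backward()
for name, p in model.named_parameters():
    print(name, "grad norm:", p.grad.norm().item())  # all-zero norms would mean broken backprop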

I edited the post and added the model.