Network does not learn

I’m new to PyTorch.
I’ve created a simple MLP as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.L1 = nn.Linear(6, 4)
        self.l1_drop = nn.Dropout(p=0.2)  # note: never applied in forward
        self.L2 = nn.Linear(4, 2)
        self.l3_drop = nn.Dropout(p=0.1)  # note: never applied in forward
        self.out = nn.Linear(2, 1)

    def forward(self, x):
        x = F.relu(self.L1(x))
        x = F.relu(self.L2(x))
        x = torch.sigmoid(self.out(x))  # F.sigmoid is deprecated
        return x


model = MLP().to("cuda:0")
crit = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr = 0.1)

import pandas as pd
from torch.utils.data import TensorDataset, DataLoader
from torch.autograd import Variable

train = pd.read_csv('train.txt', low_memory=False, dtype=float, header=None)
tensorTrain = torch.tensor(train.values, dtype=torch.float32)
inputN = tensorTrain[:, 2:-1]           # six feature columns
inp = inputN.view(len(inputN), -1, 6)
label = tensorTrain[:, -1]
label = label.view([len(label), 1])

train_tensor = TensorDataset(inp, label)
train_loader = DataLoader(dataset=train_tensor, batch_size=1, shuffle=True)
for epoch in range(10):
    for i, (images, labels) in enumerate(train_loader):
        images1 = Variable(images).to("cuda:0")
        print(images1)
        labels = Variable(labels).to("cuda:0")
        labels = labels.float()  # reassign; a bare labels.float() is a no-op
        optimizer.zero_grad()
        outputs = model(images1)
        loss = crit(outputs, labels)
        loss.backward()
        optimizer.step()
    print('Epoch [%d/%d], Loss: %.4f' % (epoch + 1, 10, loss.item()))

The input file is:

1,2,1,1,1,1,1,1,1
1,3,1,1,1,1,1,0,0
2,2,1,1,1,1,0,1,0
2,3,1.3,3.2,1,1,1,1,1

But the network won’t train.
Is there any problem?

Your code basically looks alright.
You could try the following to make the model learn:

  • Lower your learning rate and see if the loss decreases (a learning rate of 1e-3 might be a good starting point for Adam); see the sketch after this list.
  • Try to normalize your input. Currently it looks like you are dealing with some kind of categorical data; in that case an nn.Embedding layer might be helpful.
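
A minimal sketch of the first point plus a simple standardization, assuming the inputN tensor and optimizer from your snippet (an nn.Embedding setup would depend on how many categories each column has):

# Standardize the continuous features to zero mean / unit variance.
mean = inputN.mean(dim=0, keepdim=True)
std = inputN.std(dim=0, keepdim=True)
inputN = (inputN - mean) / (std + 1e-8)  # small eps avoids division by zero

# A smaller learning rate is usually a safer default for Adam.
optimizer = optim.Adam(model.parameters(), lr=1e-3)

Note that the normalization would have to happen before you build the TensorDataset.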

If that doesn’t help, you could try to overfit your model to a single sample. If that’s not working either, there might be a code bug I’m missing.
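
For example, something along these lines, reusing your loader, model, and criterion (I flatten the extra singleton dimension so the output shape matches the (batch, 1) labels):

# Try to drive the loss towards zero on a single fixed batch.
images, labels = next(iter(train_loader))
images = images.view(-1, 6).to("cuda:0")  # drop the extra singleton dim
labels = labels.float().to("cuda:0")

for step in range(1000):
    optimizer.zero_grad()
    outputs = model(images)
    loss = crit(outputs, labels)
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print('step %d, loss %.4f' % (step, loss.item()))

If the loss doesn’t approach zero here, the problem is most likely in the model or the data pipeline rather than in the hyperparameters.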

As a small side note: Variables are deprecated since PyTorch 0.4.0. You can just use tensors in newer versions. :wink:
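
I.e. the wrapper in your loop can simply be dropped:

# In PyTorch >= 0.4, tensors track gradients directly; no Variable needed.
images1 = images.to("cuda:0")
labels = labels.float().to("cuda:0")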