Basic Keras MNIST to PyTorch

I have the following Keras code:

# Build Network
network = models.Sequential()
network.add(layers.Dense(392, activation='relu', input_shape=(784,)))
network.add(layers.Dense(10, activation='softmax'))
network.compile(optimizer=optimizers.RMSprop(lr=0.001),
                loss=losses.MSE,
                metrics=['acc'])

# Train
history = network.fit(train_img[:-10000], train_lbl[:-10000], epochs=10, batch_size=196,
                      validation_data=(train_img[-10000:], train_lbl[-10000:]))

Train on 50000 samples, validate on 10000 samples
Epoch 1/10
50000/50000 [==============================] - 1s 22us/step - loss: 0.0349 - acc: 0.7554 - val_loss: 0.0487 - val_acc: 0.6752
Epoch 2/10
50000/50000 [==============================] - 0s 8us/step - loss: 0.0228 - acc: 0.8425 - val_loss: 0.0317 - val_acc: 0.7911
Epoch 3/10
50000/50000 [==============================] - 0s 8us/step - loss: 0.0200 - acc: 0.8636 - val_loss: 0.0327 - val_acc: 0.7746
Epoch 4/10
50000/50000 [==============================] - 0s 8us/step - loss: 0.0182 - acc: 0.8753 - val_loss: 0.0263 - val_acc: 0.8243
Epoch 5/10
50000/50000 [==============================] - 0s 8us/step - loss: 0.0171 - acc: 0.8827 - val_loss: 0.0219 - val_acc: 0.8502
Epoch 6/10
50000/50000 [==============================] - 0s 8us/step - loss: 0.0163 - acc: 0.8881 - val_loss: 0.0175 - val_acc: 0.8805
Epoch 7/10
50000/50000 [==============================] - 0s 8us/step - loss: 0.0154 - acc: 0.8947 - val_loss: 0.0214 - val_acc: 0.8548
Epoch 8/10
50000/50000 [==============================] - 0s 8us/step - loss: 0.0149 - acc: 0.8999 - val_loss: 0.0256 - val_acc: 0.8334
Epoch 9/10
50000/50000 [==============================] - 0s 8us/step - loss: 0.0144 - acc: 0.9033 - val_loss: 0.0230 - val_acc: 0.8444
Epoch 10/10
50000/50000 [==============================] - 0s 8us/step - loss: 0.0139 - acc: 0.9070 - val_loss: 0.0182 - val_acc: 0.
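Note that MSE loss with a softmax output only makes sense if train_lbl is already one-hot encoded (in Keras typically done via keras.utils.to_categorical). For reference, a minimal numpy equivalent of that encoding:

```python
import numpy as np

def one_hot(labels, num_classes=10):
    """One-hot encode integer class labels, like keras.utils.to_categorical."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

encoded = one_hot([2, 0, 9])
```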

I would like as close an equivalent as possible in PyTorch; below is my attempt:

class Sequential(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Linear(784, 392)
        self.hidden2 = nn.Linear(392, 10)

    def forward(self, x):
        x = F.relu(self.hidden1(x))
        x = F.relu(self.hidden2(x))
        x = F.log_softmax(x, dim=1)
        return x

# Build Network
network = Sequential().cuda()
network

Sequential(
  (hidden1): Linear(in_features=784, out_features=392, bias=True)
  (hidden2): Linear(in_features=392, out_features=10, bias=True)
)

optimizer_fn = optim.RMSprop(network.parameters(), lr=0.001)
loss_fn = nn.MSELoss()

# Train
epochs = 10
for epoch in range(1, epochs+1):
    # forward pass
    trained = network(train_img[:-10000])
    loss = loss_fn(trained, train_lbl[:-10000])
    # validation
    val_trained = network(train_img[-10000:])
    val_loss = loss_fn(val_trained, train_lbl[-10000:])
    
    print('Epoch {epoch}/{epochs}. loss: {loss}, val_loss: {val_loss}'.format(**vars()))
    
    # backward pass
    optimizer_fn.zero_grad()
    loss.backward()
    optimizer_fn.step()

Epoch 1/10. loss: 5.864108562469482, val_loss: 5.864132404327393
Epoch 2/10. loss: 6.595648765563965, val_loss: 6.6156511306762695
Epoch 3/10. loss: 5.867150783538818, val_loss: 5.86713981628418
Epoch 4/10. loss: 5.862150192260742, val_loss: 5.862164497375488
Epoch 5/10. loss: 12.059815406799316, val_loss: 12.162103652954102
Epoch 6/10. loss: 5.862414360046387, val_loss: 5.862414836883545
Epoch 7/10. loss: 5.862414360046387, val_loss: 5.862414836883545
Epoch 8/10. loss: 5.862414360046387, val_loss: 5.862414836883545
Epoch 9/10. loss: 5.862414360046387, val_loss: 5.862414836883545
Epoch 10/10. loss: 5.862414360046387, val_loss: 5.862414836883545

I’m obviously doing something wrong. Could someone put me back on track?

Sorry if this question is too basic for these forums.

It looks like the second F.relu on self.hidden2 is not needed.
Also, in the Keras code you are using softmax as the last activation, while in PyTorch you are using log_softmax. Could you change both and check whether the training is then approximately equal?
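To see why this matters for MSELoss: softmax returns probabilities in [0, 1] that sum to 1, while log_softmax returns their logs (all non-positive), so the two outputs live on very different scales. A quick check:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 10)
probs = F.softmax(x, dim=1)          # rows are probability distributions
log_probs = F.log_softmax(x, dim=1)  # elementwise log of the same values

rows_sum_to_one = torch.allclose(probs.sum(dim=1), torch.ones(4))
log_matches = torch.allclose(log_probs, probs.log(), atol=1e-5)
```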

Thanks for the reply.

In the Keras model I had 2 Dense layers, hence 2 hidden layers in the PyTorch example. If I remove F.relu(self.hidden2(x)), doesn't that leave self.hidden2() unused? If so, I will only have one hidden layer in PyTorch and the two examples will not be equivalent?

As a reminder, in Keras I had:

network.summary()

Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 392)               307720    
_________________________________________________________________
dense_2 (Dense)              (None, 10)                3930      
=================================================================
Total params: 311,650
Trainable params: 311,650
Non-trainable params: 0

Following your suggestions I refactored my pytorch code to the following:

class Sequential(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Linear(784, 10)

    def forward(self, x):
        x = F.relu(self.hidden1(x))
        x = F.softmax(x, dim=1)
        return x

# Build Network
network = Sequential().cuda()
network

Sequential(
  (hidden1): Linear(in_features=784, out_features=10, bias=True)
)

optimizer_fn = optim.RMSprop(network.parameters(), lr=0.001)
loss_fn = nn.MSELoss()

# Train
epochs = 10
for epoch in range(1, epochs+1):
    # forward pass
    trained = network(train_img[:-10000])
    loss = loss_fn(trained, train_lbl[:-10000])
    # validation
    val_trained = network(train_img[-10000:])
    val_loss = loss_fn(val_trained, train_lbl[-10000:])
    
    print('Epoch {epoch}/{epochs}. loss: {loss}, val_loss: {val_loss}'.format(**vars()))
    
    # backward pass
    optimizer_fn.zero_grad()
    loss.backward()
    optimizer_fn.step()

Epoch 1/10. loss: 0.08996006101369858, val_loss: 0.08998527377843857
Epoch 2/10. loss: 0.0835660770535469, val_loss: 0.08396155387163162
Epoch 3/10. loss: 0.07465378940105438, val_loss: 0.07502195239067078
Epoch 4/10. loss: 0.07493914663791656, val_loss: 0.07497056573629379
Epoch 5/10. loss: 0.09106428176164627, val_loss: 0.0923982784152031
Epoch 6/10. loss: 0.07770228385925293, val_loss: 0.07810225337743759
Epoch 7/10. loss: 0.07583754509687424, val_loss: 0.07622469961643219
Epoch 8/10. loss: 0.06659538298845291, val_loss: 0.0668170228600502
Epoch 9/10. loss: 0.06550104916095734, val_loss: 0.06539501249790192
Epoch 10/10. loss: 0.0665767565369606, val_loss: 0.06721500307321548

While the loss in Keras shows a clear downward trend, in my PyTorch example it rather fluctuates back and forth.

No, I meant you should remove the relu and keep the second hidden layer.
Try this code:

x = F.relu(self.hidden1(x))
x = self.hidden2(x)
x = F.softmax(x, dim=1)
return x

Refactored again, but it still doesn’t seem right.

class Sequential(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Linear(784, 392)
        self.hidden2 = nn.Linear(392, 10)

    def forward(self, x):
        x = F.relu(self.hidden1(x))
        x = self.hidden2(x)
        x = F.softmax(x, dim=1)
        return x

# Build Network
network = Sequential().cuda()
network

Sequential(
  (hidden1): Linear(in_features=784, out_features=392, bias=True)
  (hidden2): Linear(in_features=392, out_features=10, bias=True)
)

optimizer_fn = optim.RMSprop(network.parameters(), lr=0.001)
loss_fn = nn.MSELoss()

# Train
epochs = 20
for epoch in range(1, epochs+1):
    # forward pass
    trained = network(train_img[:-10000])
    loss = loss_fn(trained, train_lbl[:-10000])
    # validation
    val_trained = network(train_img[-10000:])
    val_loss = loss_fn(val_trained, train_lbl[-10000:])
    
    print('Epoch {epoch}/{epochs}. loss: {loss}, val_loss: {val_loss}'.format(**vars()))
    
    # backward pass
    optimizer_fn.zero_grad()
    loss.backward()
    optimizer_fn.step()

Epoch 1/20. loss: 0.09023910015821457, val_loss: 0.09023372828960419
Epoch 2/20. loss: 0.08889125287532806, val_loss: 0.08902137726545334
Epoch 3/20. loss: 0.09533119946718216, val_loss: 0.0944860428571701
Epoch 4/20. loss: 0.11828577518463135, val_loss: 0.11910293251276016
Epoch 5/20. loss: 0.11870171129703522, val_loss: 0.11937540769577026
Epoch 6/20. loss: 0.14534510672092438, val_loss: 0.14685864746570587
Epoch 7/20. loss: 0.10830487310886383, val_loss: 0.11026407033205032
Epoch 8/20. loss: 0.12849687039852142, val_loss: 0.12968237698078156
Epoch 9/20. loss: 0.1687696874141693, val_loss: 0.17019768059253693
Epoch 10/20. loss: 0.1339481770992279, val_loss: 0.13542285561561584
Epoch 11/20. loss: 0.11552810668945312, val_loss: 0.11694050580263138
Epoch 12/20. loss: 0.13507775962352753, val_loss: 0.1372336596250534
Epoch 13/20. loss: 0.11315662413835526, val_loss: 0.11472103744745255
Epoch 14/20. loss: 0.12789055705070496, val_loss: 0.12905439734458923
Epoch 15/20. loss: 0.131318137049675, val_loss: 0.13293345272541046
Epoch 16/20. loss: 0.11097519099712372, val_loss: 0.11244189739227295
Epoch 17/20. loss: 0.10206971317529678, val_loss: 0.1033577173948288
Epoch 18/20. loss: 0.11592703312635422, val_loss: 0.11761631071567535
Epoch 19/20. loss: 0.09888488054275513, val_loss: 0.10024408251047134
Epoch 20/20. loss: 0.10537845641374588, val_loss: 0.1073707640171051

FYI, if it matters, I’m using: https://www.kaggle.com/zalando-research/fashionmnist

The models look alright now.
While you use a batch size of 196 in Keras, you are using the full training set of all 50000 images for a single update in PyTorch.
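To illustrate the difference, here is a minimal mini-batch loop (with hypothetical random tensors standing in for train_img and train_lbl): one optimizer step per 196-sample batch instead of one step per epoch.

```python
import torch

# Hypothetical stand-ins for the tensors used in the thread.
train_img = torch.rand(1000, 784)
train_lbl = torch.rand(1000, 10)

batch_size = 196
perm = torch.randperm(train_img.size(0))  # shuffle once per epoch

batch_sizes = []
for start in range(0, train_img.size(0), batch_size):
    idx = perm[start:start + batch_size]
    batch_img, batch_lbl = train_img[idx], train_lbl[idx]
    # forward pass, loss, backward pass and optimizer.step() go here,
    # once per batch rather than once per epoch
    batch_sizes.append(batch_img.size(0))
```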

Hmmm, I tried batch_size=50000 in Keras; I assume the results are then equivalent to what I see with PyTorch:

history = network.fit(train_img[:-10000], train_lbl[:-10000], epochs=20, batch_size=len(train_img[:-10000]),
                      validation_data=(train_img[-10000:], train_lbl[-10000:]))

Train on 50000 samples, validate on 10000 samples
Epoch 1/20
50000/50000 [==============================] - 1s 18us/step - loss: 0.0942 - acc: 0.0904 - val_loss: 0.0871 - val_acc: 0.2476
Epoch 2/20
50000/50000 [==============================] - 0s 5us/step - loss: 0.0872 - acc: 0.2480 - val_loss: 0.0788 - val_acc: 0.4158
Epoch 3/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0785 - acc: 0.4221 - val_loss: 0.0766 - val_acc: 0.4163
Epoch 4/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0763 - acc: 0.4157 - val_loss: 0.0861 - val_acc: 0.3423
Epoch 5/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0857 - acc: 0.3420 - val_loss: 0.1197 - val_acc: 0.2763
Epoch 6/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.1179 - acc: 0.2858 - val_loss: 0.0955 - val_acc: 0.3331
Epoch 7/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0944 - acc: 0.3383 - val_loss: 0.1169 - val_acc: 0.2952
Epoch 8/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.1149 - acc: 0.3047 - val_loss: 0.0852 - val_acc: 0.3785
Epoch 9/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0836 - acc: 0.3928 - val_loss: 0.0884 - val_acc: 0.4178
Epoch 10/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0866 - acc: 0.4252 - val_loss: 0.0941 - val_acc: 0.3858
Epoch 11/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0926 - acc: 0.3962 - val_loss: 0.0699 - val_acc: 0.4926
Epoch 12/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0693 - acc: 0.4972 - val_loss: 0.0704 - val_acc: 0.4493
Epoch 13/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0696 - acc: 0.4587 - val_loss: 0.0641 - val_acc: 0.5260
Epoch 14/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0636 - acc: 0.5310 - val_loss: 0.0627 - val_acc: 0.5468
Epoch 15/20
50000/50000 [==============================] - 0s 5us/step - loss: 0.0624 - acc: 0.5495 - val_loss: 0.0596 - val_acc: 0.5798
Epoch 16/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0589 - acc: 0.5861 - val_loss: 0.0572 - val_acc: 0.5897
Epoch 17/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0569 - acc: 0.5889 - val_loss: 0.0564 - val_acc: 0.6069
Epoch 18/20
50000/50000 [==============================] - 0s 5us/step - loss: 0.0559 - acc: 0.6139 - val_loss: 0.0604 - val_acc: 0.5876
Epoch 19/20
50000/50000 [==============================] - 0s 4us/step - loss: 0.0602 - acc: 0.5884 - val_loss: 0.0587 - val_acc: 0.5717
Epoch 20/20
50000/50000 [==============================] - 0s 5us/step - loss: 0.0581 - acc: 0.5802 - val_loss: 0.0588 - val_acc: 0.5824

That’s good so far. However, it would be better to adapt the PyTorch code to match the Keras one.
I tried to use the same training procedure as you’ve used in Keras.
Could you have a look at this code:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.hidden1 = nn.Linear(784, 392)
        self.hidden2 = nn.Linear(392, 10)

    def forward(self, x):
        x = F.relu(self.hidden1(x))
        x = self.hidden2(x)
        x = F.softmax(x, dim=1)
        return x

model = MyModel()
criterion = nn.MSELoss()
optimizer = optim.RMSprop(model.parameters(), lr=0.001)


train_dataset = datasets.MNIST(
    root='PATH',
    train=True,
    transform=transforms.ToTensor()
)

val_dataset = datasets.MNIST(
    root='PATH',
    train=False,
    transform=transforms.ToTensor()
)

train_loader = DataLoader(
    train_dataset,
    batch_size=196,
    shuffle=True,
    num_workers=2
)

val_loader = DataLoader(
    val_dataset,
    batch_size=196,
    shuffle=False,
    num_workers=2
)


for epoch in range(10):
    train_loss = 0.
    val_loss = 0.
    train_acc = 0.
    val_acc = 0.
    
    for data, target in train_loader:
        # Transform target to one-hot encoding, since Keras uses MSELoss
        target = torch.zeros(data.size(0), 10).scatter_(1, target[:, None], 1.)
        
        optimizer.zero_grad()
        output = model(data.view(data.size(0), -1))
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        
        train_loss += loss.item()
        train_acc += (torch.argmax(output, 1) == torch.argmax(target, 1)).float().sum().item()
    
    with torch.no_grad():
        for data, target in val_loader:
            target = torch.zeros(data.size(0), 10).scatter_(1, target[:, None], 1.)
        
            output = model(data.view(data.size(0), -1))
            loss = criterion(output, target)
            
            val_loss += loss.item()
            val_acc += (torch.argmax(output, 1) == torch.argmax(target, 1)).float().sum().item()
    
    train_loss /= len(train_loader)
    train_acc /= len(train_dataset)
    val_loss /= len(val_loader)
    val_acc /= len(val_dataset)
   
    print('Epoch {}, train_loss {}, val_loss {}, train_acc {}, val_acc {}'.format(
        epoch, train_loss, val_loss, train_acc, val_acc))

Using this code, I get the following final stats:

Epoch 9, train_loss 0.0011337, val_loss 0.0028658, train_acc 0.9936, val_acc 0.9808

EDIT: I realized one difference: while you are using a custom 50000:10000 split of the training data for validation, I've used the test data. Let me know if you need help implementing this.
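By the way, the scatter_ one-hot trick used in the training loop can be checked in isolation; torch.nn.functional.one_hot produces the same tensor:

```python
import torch
import torch.nn.functional as F

target = torch.tensor([2, 0, 9])
# scatter_ version, as used in the training loop
via_scatter = torch.zeros(target.size(0), 10).scatter_(1, target[:, None], 1.)
# built-in equivalent
via_builtin = F.one_hot(target, num_classes=10).float()
```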

Thank you for the example. One other difference is that you used MNIST, whilst my code was training on the FashionMNIST dataset; presumably that's why your validation accuracy reaches ~98% in the end, while my Keras network only got ~88% (a +10% boost in your case, w00t!).

I have to spend a bit more time playing with your code, as not all of the PyTorch ecosystem is known to me; for instance, I've only just discovered torchvision.datasets.FashionMNIST and torch.utils.data.DataLoader :slight_smile:

Thanks

Haha, I was also wondering why the Keras results were that much worse.
I just assumed you were using MNIST based on the topic. :wink:

Yes, the FashionMNIST dataset is also available via torchvision.datasets. The code should work for FashionMNIST just by changing the name of the dataset class.
The tutorials are also very helpful for getting started.