A question about evaluation mode (the torch.nn.Module.eval() method)

The code is below:

import torch
import matplotlib.pyplot as plt
from torch.autograd import Variable

# torch.manual_seed(1)    # reproducible

N_SAMPLES = 20  # number of training/test points
N_HIDDEN = 300

# training data
x = torch.unsqueeze(torch.linspace(-1, 1, N_SAMPLES), 1)
y = x + 0.3*torch.normal(torch.zeros(N_SAMPLES, 1), torch.ones(N_SAMPLES, 1))

x = Variable(x)
y = Variable(y)

# test data
test_x = torch.unsqueeze(torch.linspace(-1, 1, N_SAMPLES), 1)
test_y = test_x + 0.3*torch.normal(torch.zeros(N_SAMPLES, 1), torch.ones(N_SAMPLES, 1))

test_x = Variable(test_x)
test_y = Variable(test_y)

# show data
plt.scatter(x.data.numpy(), y.data.numpy(), c='magenta', s=50, alpha=0.5, label='train')
plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='cyan', s=50, alpha=0.5, label='test')
plt.legend(loc='upper left')
plt.ylim((-2.5, 2.5))

net_overfitting = torch.nn.Sequential(
    torch.nn.Linear(1, N_HIDDEN),
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, N_HIDDEN),
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, 1),
)

net_dropped = torch.nn.Sequential(
    torch.nn.Linear(1, N_HIDDEN),
    torch.nn.Dropout(0.5),  # drop 50% of the neurons
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, N_HIDDEN),
    torch.nn.Dropout(0.5),  # drop 50% of the neurons
    torch.nn.ReLU(),
    torch.nn.Linear(N_HIDDEN, 1),
)

print(net_overfitting)  # net architecture

optimizer_ofit = torch.optim.Adam(net_overfitting.parameters(), lr=0.01)
optimizer_drop = torch.optim.Adam(net_dropped.parameters(), lr=0.01)
loss_func = torch.nn.MSELoss()

plt.ion()   # something about plotting

for t in range(500):
    pred_ofit = net_overfitting(x)
    pred_drop = net_dropped(x)
    loss_ofit = loss_func(pred_ofit, y)
    loss_drop = loss_func(pred_drop, y)

    optimizer_ofit.zero_grad()
    loss_ofit.backward()
    optimizer_ofit.step()

    optimizer_drop.zero_grad()
    loss_drop.backward()
    optimizer_drop.step()
    if t % 10 == 0:
        # switch to eval mode to fix the dropout effect
        net_overfitting.eval()
        net_dropped.eval()  # dropout behaves differently in eval mode than in train mode

        # plotting
        plt.cla()
        test_pred_ofit = net_overfitting(test_x)
        test_pred_drop = net_dropped(test_x)
        plt.scatter(x.data.numpy(), y.data.numpy(), c='magenta', s=50, alpha=0.3, label='train')
        plt.scatter(test_x.data.numpy(), test_y.data.numpy(), c='cyan', s=50, alpha=0.3, label='test')
        plt.plot(test_x.data.numpy(), test_pred_ofit.data.numpy(), 'r-', lw=3, label='overfitting')
        plt.plot(test_x.data.numpy(), test_pred_drop.data.numpy(), 'b--', lw=3, label='dropout(50%)')
        plt.text(0, -1.2, 'overfitting loss=%.4f' % loss_func(test_pred_ofit, test_y).data.numpy(), fontdict={'size': 20, 'color': 'red'})
        plt.text(0, -1.5, 'dropout loss=%.4f' % loss_func(test_pred_drop, test_y).data.numpy(), fontdict={'size': 20, 'color': 'blue'})
        plt.legend(loc='upper left')
        plt.ylim((-2.5, 2.5))
        plt.pause(0.1)

        # change back to train mode
        net_overfitting.train()
        net_dropped.train()

I run this program and get the following result (plot image omitted):

Here is my question:

1. What is evaluation mode? Why do I need these two lines of code to switch into evaluation mode?

  2. Is the curve above fitted to the training set or the test set?
     From the image, it looks as if the curve was fitted to the training set. But the code is written like this:

plt.plot(test_x.data.numpy(), test_pred_ofit.data.numpy(), 'r-', lw=3, label='overfitting')
plt.plot(test_x.data.numpy(), test_pred_drop.data.numpy(), 'b--', lw=3, label='dropout(50%)')

So I am very confused. Can someone tell me what is going on here? Thank you very much!

I hope to get a detailed answer, thank you very much!

  1. eval() switches the Module to evaluation mode, i.e. some layers such as Dropout and BatchNorm change their behavior. In the case of Dropout, connections are no longer dropped in eval mode; and since PyTorch uses inverted dropout (activations are scaled by 1/(1-p) during training), Dropout becomes a no-op at evaluation time.
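This is not from the original post, but a minimal sketch makes the train/eval difference of Dropout concrete — in train mode roughly half the entries are zeroed and the survivors are scaled by 1/(1-p) = 2, while in eval mode the input passes through unchanged:

```python
import torch

torch.manual_seed(0)
drop = torch.nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()           # training mode: entries are randomly zeroed,
out_train = drop(x)    # survivors are scaled by 1/(1-p) = 2 (inverted dropout)

drop.eval()            # evaluation mode: Dropout is the identity
out_eval = drop(x)

print(out_train)       # mixture of 0.0 and 2.0
print(out_eval)        # all ones, identical to x
```

Because the scaling already happened during training, no rescaling of the weights is needed at test time.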

  2. The curves show the predictions on the test data. As you can see, the "overfitted" model's predictions hug the training samples more closely, because it is overfitting.
    Generally both of your data sets, train and test, are sampled from the same underlying function, so you can tell your model is overfitting when its predictions get very close to the actual training points but fail to generalize the underlying function.
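To see this numerically rather than from the plot, you can compare the train and test loss directly. The sketch below is not the code from the question — it is a toy setup (sizes and seed chosen arbitrarily) that trains a small net long enough to overfit and then measures both losses in eval mode:

```python
import torch

torch.manual_seed(0)
# train and test data sampled from the same underlying function y = x + noise
x_train = torch.linspace(-1, 1, 20).unsqueeze(1)
y_train = x_train + 0.3 * torch.randn(20, 1)
x_test = torch.linspace(-1, 1, 20).unsqueeze(1)
y_test = x_test + 0.3 * torch.randn(20, 1)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 100), torch.nn.ReLU(), torch.nn.Linear(100, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for _ in range(500):
    opt.zero_grad()
    loss_fn(net(x_train), y_train).backward()
    opt.step()

net.eval()
with torch.no_grad():
    train_loss = loss_fn(net(x_train), y_train).item()
    test_loss = loss_fn(net(x_test), y_test).item()

# an overfitted net memorizes the noise in the training points,
# so train_loss ends up well below test_loss
print(train_loss, test_loss)
```

The widening gap between the two losses is exactly what the text annotations in your plot ("overfitting loss" vs. "dropout loss") are showing on the test set.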


Thank you very much! :grin: