[Newbie] Is what I did correct? (Train/Test Loss)

I'm relatively new to PyTorch and deep learning. I followed the Udacity introduction to PyTorch course and decided to apply what I've learned to the famous Iris dataset.
I tried to do it the way they trained on the Fashion-MNIST dataset in the course (a feed-forward neural network).
Here's what I did (it may not be that beautiful, I know xD):

import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from loader import CustomLoader2 as cl
from torch.utils.data import Dataset, DataLoader
torch.manual_seed(1234)

BATCH = 6


trainset = cl('./iris-train.csv')
trainloader = DataLoader(dataset=trainset, batch_size=BATCH, shuffle=True)  # pin_memory=True if using a GPU
testset = cl('./iris-test.csv')
testloader = DataLoader(dataset=testset, batch_size=BATCH, shuffle=True)  # pin_memory=True if using a GPU

# hyperparameters
h1 = 5
h2 = 5
lr = 0.01
num_epoch = 65
every = 1

class Net(nn.Module):

    def __init__(self, input_size, output_size, hidden_layers):
        super(Net, self).__init__()
        self.hidden_layers = nn.ModuleList([nn.Linear(input_size, hidden_layers[0])])
        
        # Add a variable number of more hidden layers
        layer_sizes = zip(hidden_layers[:-1], hidden_layers[1:])
        self.hidden_layers.extend([nn.Linear(h1, h2) for h1, h2 in layer_sizes])
        
        self.output = nn.Linear(hidden_layers[-1], output_size)

        self.dropout = nn.Dropout(p=0.0)

    def forward(self, x):
        for layer in self.hidden_layers:
            x = self.dropout(F.relu(layer(x)))
        x = self.output(x)
        return x

net = Net(4, 3, [h1, h2])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=lr)

#Train
train_losses, test_losses = [], []
for epoch in range(num_epoch):
    net.train
    show = True
    running_loss = 0
    for X, Y in trainloader:
        X = X.float()
        Y = Y.long()  

        optimizer.zero_grad()
        out = net.forward(X)
        loss = criterion(out, Y)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    
        if (epoch+1) % every == 0:
            test_loss = 0
            accuracy = 0
            with torch.no_grad():
                net.eval()
                for X, Y in testloader:
                    X = X.float()
                    Y = Y.long()
                    log_ps = net.forward(X)
                    test_loss += criterion(log_ps, Y).item()
                    

                    ps = torch.exp(log_ps)
                    equality = (Y.data == ps.max(1)[1])
                    accuracy += equality.type_as(torch.FloatTensor()).mean()

            if show:  # to show logs only once per epoch
                test_losses.append(test_loss/len(testloader))
                train_losses.append(running_loss/every)
                print("Epoch: {}/{}.. ".format(epoch+1, num_epoch),
                    "Training Loss: {:.3f}.. ".format(running_loss/every),
                    "Test Loss: {:.3f}.. ".format(test_losses[-1]),
                    "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
                show = False
                running_loss = 0
            net.train()

import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
plt.show()

So my main question is: am I correctly calculating the train and test loss?
And if you have any suggestions, or if I did something wrong, it would be nice if you told me.
Thanks for your time!

I'm not sure how your CustomLoader2 works, but if it's some kind of Dataset, it should be alright.

Some small notes:

  • usually you don't need to set shuffle=True for your test DataLoader, since the accuracy and loss won't depend on the order of your samples. It won't break anything, but you could just remove it.
  • you forgot the parentheses in model.train(), so currently your model might get stuck in model.eval(), which will disable the dropout layers
  • it's recommended to call the model directly instead of the forward method, so that all hooks will be properly registered. It's probably not important in your current script, but I would stick to it anyway.
  • if I'm not mistaken, you don't need to call torch.exp(log_ps) to get the predictions, since your model outputs raw logits; you can take the argmax of the logits directly (see the sketch below)
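
Putting those points together, a minimal sketch of one epoch of training plus evaluation could look like this (just an illustration reusing the names from your script, not a drop-in replacement):

net.train()  # note the parentheses: this actually switches the model to training mode
running_loss = 0
for X, Y in trainloader:
    optimizer.zero_grad()
    out = net(X.float())              # call the module directly instead of net.forward(...)
    loss = criterion(out, Y.long())
    loss.backward()
    optimizer.step()
    running_loss += loss.item()

net.eval()  # disables dropout for the evaluation
test_loss, correct, total = 0, 0, 0
with torch.no_grad():
    for X, Y in testloader:           # shuffle=True is unnecessary here
        logits = net(X.float())       # raw logits; no torch.exp needed
        test_loss += criterion(logits, Y.long()).item()
        preds = logits.argmax(dim=1)  # predicted class indices
        correct += (preds == Y.long()).sum().item()
        total += Y.size(0)

print("Training Loss: {:.3f}, Test Loss: {:.3f}, Test Accuracy: {:.3f}".format(
    running_loss / len(trainloader), test_loss / len(testloader), correct / total))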

Thanks a lot for your answer, I will definitely take what you said into consideration.
Here is my CustomLoader2 class; I wrote it to manipulate my dataset more easily:

class CustomLoader2(Dataset):
    def __init__(self, chemin):  # chemin is the path to the CSV file
        xy = pd.read_csv(chemin)
        # map the species names to integer class labels
        xy.loc[xy['species']=='Iris-setosa', 'species'] = 0
        xy.loc[xy['species']=='Iris-versicolor', 'species'] = 1
        xy.loc[xy['species']=='Iris-virginica', 'species'] = 2
        x = xy.iloc[:, 0:-1]  # features: all columns except the last
        y = xy.iloc[:, -1]    # target: the species column
        self.len = x.shape[0]
        self.x_data = torch.tensor(x.values)
        self.y_data = torch.tensor(y.values)

    def __getitem__(self, index):
        return self.x_data[index], self.y_data[index]
    
    def __len__(self):
        return self.len

It looks alright.
Maybe one point: I assume x.values returns a numpy array. If that's the case, I would rather use torch.from_numpy(x.values), since it shares the underlying memory instead of making a copy. :wink:
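
For example (a small standalone sketch; the array values are just made up to show the difference):

import numpy as np
import torch

arr = np.array([1.0, 2.0, 3.0])
t_copy = torch.tensor(arr)      # torch.tensor always copies the data
t_view = torch.from_numpy(arr)  # torch.from_numpy shares memory with the array

arr[0] = 42.0
print(t_copy[0].item())  # still 1.0, the copy is unaffected
print(t_view[0].item())  # 42.0, since the memory is shared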

Alright!
Thanks for your help :blush: