Linear Regression Model: Problems with Loss

I am working on a linear model to make predictions. The problem I am having is that my loss is larger than the actual target values. To check whether it is a data problem, I have printed the data at several points in the pipeline but found no disparities. I have even tried leaking the expected values by making the inputs and labels identical, and the loss is still extremely high. Hoping for some guidance. Below is the code for my model.

import numpy as np
import torch as th
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

# df, batch_size, device, and epochs are defined earlier (not shown)
# "Label" is the target column; all remaining columns are features
train_target = th.tensor(df["Label"].values.astype(np.float32))
train = th.tensor(df.drop("Label", axis=1).values.astype(np.float32))
train_data = TensorDataset(train, train_target)
train_dl = DataLoader(train_data, batch_size=batch_size, shuffle=True)

print(train_target.shape)
class LinReg(nn.Module):
    # Init layers
    def __init__(
        self,
        in_dim: int = 13,
        out_dim: int = 1,
        latent_base: int = 128,
        dropout: float = 0.5):

        super(LinReg, self).__init__()
        self.net = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(in_dim, latent_base),
            nn.PReLU(),
            nn.Dropout(dropout),
            nn.Linear(latent_base, out_dim)  # use out_dim rather than a hardcoded 1
        )
    
    # Forward
    def forward(self, x: th.Tensor) -> th.Tensor:
        return self.net(x)


model = LinReg()
model.cuda()

optimizer = th.optim.RMSprop(model.parameters(), lr = 1e-3)
criterion = nn.L1Loss()

for i in range(epochs):
    for inputs, labels in train_dl:
        y_pred = model(inputs.to(device))
        loss = criterion(y_pred, labels.to(device))
        loss.backward()
        optimizer.zero_grad()
        optimizer.step()
        # Keep the final epoch's last-batch predictions and labels
        if i == (epochs - 1):
            pred = y_pred
            y = labels
    if i % 1 == 0:  # print every epoch
        print('Epoch {}, Loss: {}'.format(i, loss.item()))

I’m not sure that such use of dropout is correct: the first nn.Dropout(dropout) is applied directly to the raw input features, so with dropout=0.5 half of your 13 inputs are zeroed out on every training batch. And the use of L1Loss instead of MSELoss may aggravate the issue.
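For illustration, a sketch of the same model with the dropout layers removed and MSELoss swapped in (LinRegNoDropout is just a hypothetical name for the variant):

import torch as th
import torch.nn as nn

class LinRegNoDropout(nn.Module):
    def __init__(self, in_dim: int = 13, latent_base: int = 128, out_dim: int = 1):
        super().__init__()
        # Same layers as the original, but without Dropout on the raw
        # input features (Dropout(0.5) there zeroes half the inputs)
        self.net = nn.Sequential(
            nn.Linear(in_dim, latent_base),
            nn.PReLU(),
            nn.Linear(latent_base, out_dim)
        )

    def forward(self, x: th.Tensor) -> th.Tensor:
        return self.net(x)

criterion = nn.MSELoss()  # instead of nn.L1Loss()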

optimizer.zero_grad()
loss.backward()
optimizer.step()

I believe this may fix your issue. You need to clear the gradients before the backward pass: in your loop, optimizer.zero_grad() is called after loss.backward(), so the freshly computed gradients are wiped out before optimizer.step() can use them, and the model never actually learns. Also, as Alex mentioned, using L1Loss() may exacerbate the issue in some cases.
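In the context of your loop, the corrected order would look roughly like this (reusing the names from your post):

for i in range(epochs):
    for inputs, labels in train_dl:
        optimizer.zero_grad()                        # clear old gradients first
        y_pred = model(inputs.to(device))
        loss = criterion(y_pred, labels.to(device))
        loss.backward()                              # compute fresh gradients
        optimizer.step()                             # update using those gradients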


Try what @pchandrasekaran suggested above.

Also, how does your model behave when you run it on the CPU? In my personal experience, running some models on the GPU has occasionally resulted in exploding loss (the reason for which is still unclear to me). Try running it on the CPU once.
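For example, something like this instead of model.cuda():

device = th.device("cpu")     # point device at the CPU
model = LinReg().to(device)   # the .to(device) calls in the loop then stay as-is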