Linear Regression with polynomials - transformation problem

Hi, I’m new to Python, PyTorch, and deep learning, so please bear with me.
I’m following Nando de Freitas’ Oxford YouTube lectures, and in one of the exercises we need to build a polynomial regression. I found an example here: Polynomial Regression
Now I’m trying to modify it to my needs, but I’m running into issues. I think the problem is that make_features(x) produces a tensor x of size (10, 2, 4) while y_train has size (10); I need to align them by flattening x so that each sample becomes a single row of features, but I don’t know how to do it. I tried transforming it with numpy.reshape() but couldn’t get it to work. It’d be great if someone could help me out. Thanks! Code below:

from __future__ import print_function
from itertools import count

import torch
import torch.autograd
import torch.nn.functional as F
from torch.autograd import Variable

train_data = torch.Tensor([
   [40,  6,  4],
   [44, 10,  4],
   [46, 12,  5],
   [48, 14,  7],
   [52, 16,  9],
   [58, 18, 12],
   [60, 22, 14],
   [68, 24, 20],
   [74, 26, 21],
   [80, 32, 24]])
test_data = torch.Tensor([
    [6, 4],
    [10, 5],
    [4, 8]])

x_train = train_data[:,1:3]
y_train = train_data[:,0]

POLY_DEGREE = 4
input_size = 2
output_size = 1


def make_features(x):
    """Builds features i.e. a matrix with columns [x, x^2, x^3, x^4]."""
    x = x.unsqueeze(1)
    return torch.cat([x ** i for i in range(1, POLY_DEGREE+1)], 1)



def poly_desc(W, b):
    """Creates a string description of a polynomial."""
    result = 'y = '
    for i, w in enumerate(W):
        result += '{:+.2f} x^{} '.format(w, len(W) - i)
    result += '{:+.2f}'.format(b[0])
    return result


def get_batch():
    """Builds a batch i.e. (x, f(x)) pair."""
    x = make_features(x_train)

    return Variable(x), Variable(y_train)


# Define model
fc = torch.nn.Linear(input_size, output_size)

for batch_idx in range(1000):
    # Get data
    batch_x, batch_y = get_batch()

    # Reset gradients
    fc.zero_grad()

    # Forward pass
    output = F.smooth_l1_loss(fc(batch_x), batch_y)
    loss = output.data[0]

    # Backward pass
    output.backward()

    # Apply gradients
    for param in fc.parameters():
        param.data.add_(-0.1 * param.grad.data)

    # Stop criterion
    if loss < 1e-3:
        break

print('Loss: {:.6f} after {} batches'.format(loss, batch_idx))
print('==> Learned function:\t' + poly_desc(fc.weight.data.view(-1), fc.bias.data))
# print('==> Actual function:\t' + poly_desc(W_target.view(-1), b_target))
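
To make it concrete, what I think I’m after is a flat feature matrix with one row per sample, so that it lines up with y_train. Something roughly like this (just a sketch of the idea, I haven’t got it working):

def make_features_flat(x):
    """Builds [x1, x2, x1^2, x2^2, x1^3, x2^3, x1^4, x2^4] for each sample."""
    # x has shape (n_samples, input_size); each power keeps that shape,
    # so concatenating along dim 1 gives (n_samples, input_size * POLY_DEGREE)
    return torch.cat([x ** i for i in range(1, POLY_DEGREE + 1)], 1)

# With x_train of shape (10, 2) this gives a (10, 8) matrix, which would need
# fc = torch.nn.Linear(input_size * POLY_DEGREE, output_size)
# instead of Linear(input_size, output_size).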

I changed it a bit and now I don’t get any errors, but I don’t get any sensible values for the loss or the predictions either. I tried to replicate the case with sklearn and everything works there. I think I’m missing something simple, but I just can’t see it. Any help would be appreciated.

Code:

import sklearn.linear_model as lm
from sklearn.preprocessing import PolynomialFeatures
import matplotlib.pyplot as plt
import torch
import torch.autograd
import torch.nn.functional as F
from torch.autograd import Variable


train_data = torch.Tensor([
   [40,  6,  4],
   [44, 10,  4],
   [46, 12,  5],
   [48, 14,  7],
   [52, 16,  9],
   [58, 18, 12],
   [60, 22, 14],
   [68, 24, 20],
   [74, 26, 21],
   [80, 32, 24]])
test_data = torch.Tensor([
    [6, 4],
    [10, 5],
    [4, 8]])

x_train = train_data[:,1:3]
y_train = train_data[:,0]

POLY_DEGREE = 3
input_size = 2
output_size = 1

poly = PolynomialFeatures(input_size * POLY_DEGREE, include_bias=False)
x_train_poly = poly.fit_transform(x_train.numpy())


class Model(torch.nn.Module):

    def __init__(self):
        super(Model, self).__init__()
        self.fc = torch.nn.Linear(poly.n_output_features_, output_size)
                
    def forward(self, x):
        return self.fc(x)
            
model = Model()    
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

losses = [] # Added

for i in range(1000):
    optimizer.zero_grad()
    outputs = model(Variable(torch.Tensor(x_train_poly)))
    loss = criterion(outputs, Variable(y_train))
    losses.append(loss.data[0])
    loss.backward()    
    optimizer.step()
    if loss.data[0] < 1e-4:
        break    

print('n_iter', i)
print(loss.data[0])
plt.plot(losses)
plt.show()
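
One thing I’m not sure about is whether the target needs the same shape as the model output: outputs has shape (10, 1) while y_train has shape (10), so maybe the target should be reshaped before computing the loss. Something like this (just a guess on my part):

# Hypothetical tweak: give the target an explicit (n_samples, 1) shape so it
# matches the Linear layer's output before computing the MSE loss.
y_train_col = y_train.unsqueeze(1)          # shape (10, 1)
loss = criterion(outputs, Variable(y_train_col))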

And below is the sklearn code that works:

regr = lm.LinearRegression()
poly = PolynomialFeatures(4, include_bias=False)
X_poly = poly.fit_transform(x_train.numpy())

regr.fit(x_train.numpy(), y_train.numpy())
pred = regr.predict(test_data.numpy())
pred

Output

array([ 40.32044911, 44.03051972, 43.45981896])
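
(For completeness: I realize this fit uses x_train directly rather than X_poly; if I wanted the polynomial features in sklearn too, I assume it would look something like the following, transforming the test data with the same poly object:)

# Assumed sklearn variant that actually uses the polynomial features
regr_poly = lm.LinearRegression()
regr_poly.fit(X_poly, y_train.numpy())
pred_poly = regr_poly.predict(poly.transform(test_data.numpy()))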


I can’t see what’s broken, but I like this nice pedagogical example!

Hope you get it working!

I keep getting loss: nan for every loss.data[0].
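
In case it helps, with the degree-6 expansion used above (PolynomialFeatures(input_size * POLY_DEGREE)) the feature values get very large, which can easily make plain SGD diverge to nan. One common workaround is to standardize the features before training; a rough sketch (the learning rate may still need tuning):

from sklearn.preprocessing import StandardScaler

# Rough sketch: scale the polynomial features to zero mean / unit variance
# so the loss does not blow up to nan with plain SGD.
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train_poly)
inputs = Variable(torch.Tensor(x_train_scaled))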