I am new to PyTorch and trying out its capabilities, so I am attempting to train a simple linear regression on the popular Boston housing dataset.
This is my code:
from sklearn.datasets import load_boston
import torch
import pandas as pd
import matplotlib.pyplot as plt
import torch.nn.functional as F
import torch.nn as nn
from torch.autograd import Variable
import numpy as np
boston = load_boston()
data = pd.DataFrame(boston['data'], columns=boston['feature_names'])
target = pd.Series(boston['target'])
data.shape, target.shape
dataA = Variable(torch.from_numpy(data.values).float())
y = Variable(torch.from_numpy(target.values).float())
linear = nn.Linear(data.shape[1], 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(linear.parameters(), lr=0.01)
loss2 = []
for i in range(5):
    optimizer.zero_grad()
    outputs = linear(dataA)
    # match shapes: outputs is (N, 1) but y is (N,)
    loss = criterion(outputs, y.unsqueeze(1))
    loss2.append(loss.item())
    loss.backward()
    optimizer.step()
plt.plot(range(5), loss2)
plt.show()
print(loss2)
The issue is that the loss keeps going up. Can anyone please look through this and help me figure out what I should do differently?
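One guess I have is that the raw features span very different scales, which could make SGD diverge at lr=0.01. Below is a minimal, self-contained sketch of standardizing the inputs before training; it uses synthetic random data with mixed feature scales as a stand-in for the Boston columns (an assumption for illustration, since `load_boston` is deprecated in recent scikit-learn). Is this the right kind of fix?

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in: 506 samples, 13 features with very different scales,
# loosely mimicking raw Boston-style columns (assumption for illustration).
scales = torch.tensor([1., 10., 100., 1., 1., 5., 50., 3., 10., 400., 20., 350., 10.])
X = torch.randn(506, 13) * scales
true_w = torch.randn(13, 1)
y = X @ true_w + torch.randn(506, 1)

# Standardize each feature to zero mean and unit variance.
X_std = (X - X.mean(dim=0)) / X.std(dim=0)

model = nn.Linear(13, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

losses = []
for _ in range(50):
    optimizer.zero_grad()
    loss = criterion(model(X_std), y)
    losses.append(loss.item())
    loss.backward()
    optimizer.step()

# With standardized inputs the loss should shrink instead of blowing up.
print(losses[0], losses[-1])
```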