When I train my model and run it on the test set, it returns the same values for every input. The network is meant to classify objects with 2 float features into one of 9 classes. The data is imbalanced, so I used a WeightedRandomSampler.
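For reference, this is roughly how I build the sampler and loader (the data here is a random stand-in, and the batch size and class-weighting scheme are placeholders; my real dataset differs):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Random stand-in data: 2 float features, 9 classes (placeholder for my real data).
x = torch.randn(100, 2)
y = torch.randint(0, 9, (100,))

# Weight each sample by the inverse frequency of its class,
# so minority classes are drawn more often.
class_counts = torch.bincount(y, minlength=9).float()
sample_weights = 1.0 / class_counts[y]

sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(sample_weights),
                                replacement=True)
trainloader = DataLoader(TensorDataset(x, y), batch_size=16, sampler=sampler)
```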
My neural network:
import torch
import torch.nn as nn
import torch.nn.functional as f

class Net(nn.Module):
    def __init__(self, D_in, H1, H2, D_out):
        super(Net, self).__init__()
        self.linear1 = nn.Linear(D_in, H1)
        self.linear2 = nn.Linear(H1, H2)
        self.linear4 = nn.Linear(H2, D_out)

    def forward(self, x):
        x = f.relu(self.linear1(x))
        x = f.relu(self.linear2(x))
        x = self.linear4(x)  # raw logits, no softmax
        return x
What it returns on the test data before training:
tensor([[-300.7366, 142.0265, 1203.9167, ..., -366.3187, 930.1630,
-460.1914],
[-118.0432, 55.4708, 467.9875, ..., -142.0467, 360.8930,
-178.8175],
[-165.0909, 77.8327, 658.5565, ..., -200.1997, 508.4938,
-251.7095],
...,
[-166.6815, 78.6092, 665.3017, ..., -202.2803, 513.7719,
-254.2981],
[-197.1317, 93.1154, 789.1254, ..., -240.1015, 609.7628,
-301.6740],
[-130.0121, 61.4217, 520.3043, ..., -158.2938, 402.0972,
-198.9381]], grad_fn=<AddmmBackward0>)
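(To get class predictions from these raw logits I take the argmax over the class dimension; a quick sketch with dummy logits standing in for the network output:)

```python
import torch

# Dummy logits with the same shape as the network output: (batch, 9 classes).
z = torch.randn(6, 9)
preds = z.argmax(dim=1)  # predicted class index for each sample
```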
Training:
for epoch in range(n_epochs):
    model.train(True)
    for i, (x, y) in enumerate(trainloader):
        optimizer.zero_grad()
        z = model(x)
        loss = Loss(z, y)
        loss.backward()
        optimizer.step()
        loss_list.append(loss.item())  # .item() rather than the deprecated .data
    print("Epoch is", epoch)
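I didn't show them above, but Loss and optimizer are set up along these lines (a sketch only: it assumes CrossEntropyLoss and SGD with a placeholder learning rate, and uses a stand-in model with the same 2-in / 9-out shape as Net):

```python
import torch
import torch.nn as nn

# Stand-in model with the same 2-in / 9-out shape as Net (placeholder widths).
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 9))

Loss = nn.CrossEntropyLoss()  # expects logits of shape (N, 9) and integer labels of shape (N,)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # placeholder learning rate

# One illustrative training step on dummy data.
x = torch.randn(8, 2)
y = torch.randint(0, 9, (8,))
optimizer.zero_grad()
loss = Loss(model(x), y)
loss.backward()
optimizer.step()
```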
What it returns on the same test data after training:
tensor([[-0.0234, 0.0055, 0.0225, ..., -0.0304, -0.0017, 0.0139],
[-0.0234, 0.0055, 0.0225, ..., -0.0304, -0.0017, 0.0139],
[-0.0234, 0.0055, 0.0225, ..., -0.0304, -0.0017, 0.0139],
...,
[-0.0234, 0.0055, 0.0225, ..., -0.0304, -0.0017, 0.0139],
[-0.0234, 0.0055, 0.0225, ..., -0.0304, -0.0017, 0.0139],
[-0.0234, 0.0055, 0.0225, ..., -0.0304, -0.0017, 0.0139]],
grad_fn=<AddmmBackward0>)
Any help or advice would be appreciated.
Thanks,
Rishav