I’m trying to train a basic conv net using PyTorch. This is what my training loop looks like:
for epoch in range(20):  # loop over the dataset multiple times
    model.train()
    print("================ ", epoch)
    running_loss = 0.0
    for i, data in enumerate(trainloader):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 20 == 0:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
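For context, the surrounding setup looks roughly like this. This is a simplified sketch rather than my exact code: the dataset, model architecture, and hyperparameters shown here (CIFAR-10, a two-layer CNN, SGD with lr=0.001) are just representative stand-ins.

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Representative data pipeline (assumed, not my exact dataset/transforms)
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64,
                                          shuffle=True, num_workers=2)

# Representative small CNN (assumed architecture)
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(32 * 8 * 8, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))  # 3x32x32 -> 16x16x16
        x = self.pool(torch.relu(self.conv2(x)))  # 16x16x16 -> 32x8x8
        x = torch.flatten(x, 1)
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)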
Now, for some reason, I get exactly the same sequence of loss values in every epoch:
[1, 1] loss: 0.000
[1, 21] loss: 0.028
[1, 41] loss: 0.036
[1, 61] loss: 0.050
[1, 81] loss: 0.028
[1, 101] loss: 0.042
[1, 121] loss: 0.034
[1, 141] loss: 0.030
[1, 161] loss: 0.030
[1, 181] loss: 0.022
[1, 201] loss: 0.036
================ 1
[2, 1] loss: 0.002
[2, 21] loss: 0.028
[2, 41] loss: 0.036
[2, 61] loss: 0.050
[2, 81] loss: 0.028
[2, 101] loss: 0.042
[2, 121] loss: 0.034
[2, 141] loss: 0.030
[2, 161] loss: 0.030
[2, 181] loss: 0.022
[2, 201] loss: 0.036
Why is my model not training? Am I missing something?