I started working with PyTorch a few weeks ago (no prior knowledge of ML). I want to build an image classifier that detects whether an image is a cat or a dog.
I have labels in the form of:
tensor([[0., 1.],
        [0., 1.],
        [1., 0.],
        ...,
        [0., 1.],
        [0., 1.],
        [0., 1.]])
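Each row is a one-hot vector, one per image. As a quick sanity check (a sketch; out is the label tensor I index in the training loop below, and the 24946 comes from the size of my dataset):

print(out.shape)  # expected: torch.Size([24946, 2])
print(out[0])     # e.g. tensor([0., 1.])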
This is my network class:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # the images are 50x50, flattened to a 2500-dim vector
        self.fc1 = nn.Linear(50*50, 64)
        self.fc2 = nn.Linear(64, 64)
        self.fc3 = nn.Linear(64, 64)
        self.fc4 = nn.Linear(64, 2)  # two classes: cat, dog

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        return F.log_softmax(x, dim=1)

net = Net()
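A forward pass on a dummy input confirms the output shape (a sketch, assuming a single flattened 50x50 input):

dummy = torch.randn(1, 50*50)
print(net(dummy).shape)  # torch.Size([1, 2])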
This is my training loop, using optim.Adam and criterion = nn.CrossEntropyLoss():
optimizer = optim.Adam(net.parameters())  # learning rate left at the default
criterion = nn.CrossEntropyLoss()

EPOCHS = 6
for epoch in range(EPOCHS):
    for key in range(24946):              # one image at a time, no batching
        X = data[key]                     # a single 50x50 image
        X = torch.unsqueeze(torch.flatten(X), dim=0)  # shape: [1, 2500]
        y = out[key]                      # a single one-hot label, shape: [2]
        net.zero_grad()
        output = net(X)
        loss = criterion(output, y)       # this is the line that fails
        loss.backward()
        optimizer.step()
    print(loss)
The shape of the labels is torch.Size([2]) and the shape of the inputs to the network is torch.Size([1, 2500]). However, when I try to train, it gives me the following error:

ValueError: Expected input batch_size (1) to match target batch_size (2).

The desired labels are tensors of size 2 (e.g. tensor([0., 1.])) and the output of my network is also of size 2. What is the reason for this? Any help is appreciated.
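For what it's worth, I can reproduce the error with just the shapes involved, outside the training loop (a minimal sketch; this is the exact message on my PyTorch version, newer versions may phrase it differently):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
output = torch.randn(1, 2)        # stand-in for the network output, shape [1, 2]
target = torch.tensor([0., 1.])   # a single one-hot label, shape [2]
loss = criterion(output, target)  # ValueError: Expected input batch_size (1) to match target batch_size (2).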