I am trying to implement image classification on photos of buildings in my neighborhood. There are two classes of images: panel and modern. The images tensor has shape:
torch.Size([36, 3, 150, 200])
The labels tensor has shape:
torch.Size([36, 2])
If needed, I can provide a link to download the images.
My code is:
import torch
import matplotlib.pyplot as plt
import numpy as np
from os import listdir
from sklearn.model_selection import train_test_split
def loadImages(path):
    imagesList = listdir(path)
    loadedImages = []
    for image in imagesList:
        loadedImages.append(plt.imread(path + image))
    return np.array(loadedImages)
panel = loadImages('./photo_small/panel/') / 255
modern = loadImages('./photo_small/modern/') / 255
photo_up = np.concatenate((panel, modern), axis=0)
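# reorder from NHWC (40, 150, 200, 3) to NCHW (40, 3, 150, 200) for PyTorch conv layers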
photo = photo_up.swapaxes(3, 1).swapaxes(3,2)
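# one-hot style labels of shape (40, 2): panel rows are [0, 1], modern rows are [1, 0]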
label_first = np.concatenate((np.zeros(20), np.ones(20)), axis=0)
label_second = np.concatenate((np.ones(20), np.zeros(20)), axis=0)
label_almost = np.vstack((label_first, label_second))
label = label_almost.swapaxes(1,0)
X_train, X_test, y_train, y_test = train_test_split(photo, label, test_size=0.1, random_state=42)
X_train_torch = torch.from_numpy(X_train).float()
X_test_torch = torch.from_numpy(X_test).float()
y_train_torch = torch.from_numpy(y_train).float()
y_test_torch = torch.from_numpy(y_test).float()
class Flatten(torch.nn.Module):
    def forward(self, x):
        return x.view(x.size()[0], -1)
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, kernel_size=(3, 3)),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 64, kernel_size=(3, 3)),
    torch.nn.ReLU(),
    torch.nn.MaxPool2d(kernel_size=(2, 2)),
    torch.nn.Dropout(0.25),
    Flatten(),
    torch.nn.Linear(457856, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 2),
    torch.nn.Softmax(dim=0)
)
loss_fn = torch.nn.CrossEntropyLoss()
learning_rate = 0.01
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(5000):
    y_pred = model(X_train_torch)
    loss = loss_fn(y_pred, y_train_torch)
    print(t, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
When I execute the code, I get: "RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'target'".
Traceback:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-38-c8423f8403d7> in <module>
3 for t in range(5000):
4 y_pred = model(X_train_torch)
----> 5 loss = loss_fn(y_pred, y_train_torch)
6 print(t, loss.item())
7 optimizer.zero_grad()
c:\program files\python36\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)
c:\program files\python36\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
860 def forward(self, input, target):
861 return F.cross_entropy(input, target, weight=self.weight,
--> 862 ignore_index=self.ignore_index, reduction=self.reduction)
863
864
c:\program files\python36\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
1548 if size_average is not None or reduce is not None:
1549 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 1550 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
1551
1552
c:\program files\python36\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
1405 .format(input.size(0), target.size(0)))
1406 if dim == 2:
-> 1407 return torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
1408 elif dim == 4:
1409 return torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'target'
When I change the tensors to Long, the following error appears: "RuntimeError: thnn_conv2d_forward is not implemented for type torch.LongTensor".
However, if I change the loss function to L1Loss, the model starts to work, but the loss does not decrease, which is not my main problem right now.
Maybe I messed up the labels (y_train_torch).
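If I read the CrossEntropyLoss docs correctly, the target should be a 1-D LongTensor of class indices with shape [N], not a one-hot float matrix of shape [N, 2], and only the target should be Long; the images have to stay FloatTensor, which is why casting everything to Long breaks the conv layers. A minimal sketch of the conversion I have in mind, reusing the arrays from the code above and collapsing the one-hot rows with argmax (just one possible class-to-index mapping):
# CrossEntropyLoss expects class indices of shape [N], not one-hot rows of shape [N, 2]
y_train_torch = torch.from_numpy(np.argmax(y_train, axis=1)).long()   # shape [36]
y_test_torch = torch.from_numpy(np.argmax(y_test, axis=1)).long()     # shape [4]
# the images remain FloatTensor; only the targets are LongTensor
loss = loss_fn(model(X_train_torch), y_train_torch)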
Any help would be greatly appreciated.
Long live PyTorch!
Recent update: I tried the CIFAR architecture; the problem with the LongTensor is the same.
There are two things I suspect:
- The Flatten code is incorrect (a quick check is sketched after this list).
- The labels are incorrect, because in the CIFAR example the labels are 1-D values in 0-9. I tried to do the same with my network, but PyTorch expected the 36 by 2 shape that I already had.
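For the first doubt, here is a small standalone check that should show whether Flatten behaves as intended; the dummy shape 64 x 73 x 98 is what I expect the conv/pool stack to produce for a 150 x 200 input, which also matches the 457856 in the Linear layer:
x = torch.randn(4, 64, 73, 98)   # dummy batch: N x C x H x W after the conv/pool layers
flat = Flatten()(x)
print(flat.shape)                # expected: torch.Size([4, 457856]), i.e. 64 * 73 * 98 per sample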
You can laugh, but I continue to struggle:
y_pred (the output, in other words) doesn't sum to 1:
tensor([[0.0290, 0.0283],
[0.0276, 0.0276],
[0.0281, 0.0279],
[0.0285, 0.0275],
[0.0276, 0.0276],
[0.0277, 0.0282],
[0.0274, 0.0274],
[0.0277, 0.0281],
[0.0276, 0.0280],
[0.0278, 0.0276],
[0.0278, 0.0277],
[0.0276, 0.0277],
[0.0275, 0.0279],
[0.0272, 0.0271],
[0.0273, 0.0281],
[0.0280, 0.0271],
[0.0277, 0.0276],
[0.0277, 0.0280],
[0.0289, 0.0281],
[0.0283, 0.0281],
[0.0282, 0.0269],
[0.0277, 0.0274],
[0.0274, 0.0277],
[0.0280, 0.0276],
[0.0286, 0.0277],
[0.0268, 0.0279],
[0.0275, 0.0280],
[0.0277, 0.0285],
[0.0276, 0.0284],
[0.0273, 0.0279],
[0.0273, 0.0284],
[0.0281, 0.0274],
[0.0279, 0.0279],
[0.0276, 0.0276],
[0.0274, 0.0276],
[0.0278, 0.0276]], grad_fn=<SoftmaxBackward>)
What is the problem? torch.nn.Softmax(dim=0) seems right to me, since I want the sum over each row. Why is it not giving me one?
Recent update: the correct setting is dim=1 in Softmax.
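For reference, a tiny sketch of the difference: dim=1 normalizes each row (one sample's class scores), while dim=0 normalizes each column across the whole batch, which is why the rows above do not sum to 1. Also, judging from the traceback (cross_entropy calls log_softmax internally), the final Softmax layer could probably be dropped and the raw scores fed straight to CrossEntropyLoss:
scores = torch.tensor([[2.0, 1.0], [0.5, 3.0]])
print(torch.nn.Softmax(dim=1)(scores))   # rows sum to 1: [[0.73, 0.27], [0.08, 0.92]]
print(torch.nn.Softmax(dim=0)(scores))   # columns sum to 1 instead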