Hi
I’m currently working on a multi-label classification problem.
As far as I know, BCEWithLogitsLoss() is the loss function used for this type of problem.
My inputs are images, multi-hot label vectors, and the image IDs.
Example:
imagetensor => tensor([[[-1.2406, -1.6744, -1.8826, …, -1.9694, -1.9347, -1.8826],
[-1.8306, -1.9694, -1.9867, …, -1.9867, -1.9867, -1.9867],
[-1.9867, -1.9867, -1.9867, …, -1.9867, -1.9867, -1.9867],
…,
[-1.9867, -1.9867, -1.9867, …, -1.9867, -1.9867, -1.9867],
[-1.9867, -1.9867, -1.9867, …, -1.9867, -1.9867, -1.9867],
[-1.7264, -1.8653, -1.8826, …, -1.9867, -1.9867, -1.9867]]])
one hot vector => [1. 0. 1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
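For context, here is a minimal sketch of how BCEWithLogitsLoss consumes raw logits and multi-hot float targets (the batch size and values below are made up for illustration):

```python
import torch
import torch.nn as nn

loss_fn = nn.BCEWithLogitsLoss()

# Hypothetical batch: 4 samples, 15 classes, raw logits (no sigmoid applied)
logits = torch.randn(4, 15)

# Targets must be floats with the same shape as the logits
targets = torch.zeros(4, 15)
targets[0, [0, 2, 6]] = 1.0  # e.g. the multi-hot vector above

loss = loss_fn(logits, targets)
print(loss.item())  # a single non-negative scalar
```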
This is the evaluation function:

loss_fn = nn.BCEWithLogitsLoss()
opt = optim.SGD(resnet.parameters(), lr=0.01)

def evaluation(dataloader, model):
    # model added as argument - change 1 from LeNet
    total, correct = 0, 0
    for data in dataloader:
        inputs, labels, ids = data
        # move the batch onto the GPU
        inputs, labels, ids = inputs.to(device), labels.to(device), ids.to(device)
        outputs = model(inputs)
        print(outputs)
        # from net to model - change 2 from LeNet
        _, pred = torch.max(outputs.data, 1)
        print(pred)
        total += labels.size(0)
        correct += (pred == labels).sum().item()
    return 100 * correct / total
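As an aside, my understanding is that for multi-label outputs one usually thresholds the sigmoid of the logits instead of taking an argmax; a minimal sketch of what I mean (the function name and the 0.5 threshold are my own assumptions, not from any library):

```python
import torch

def multilabel_accuracy(outputs, labels, threshold=0.5):
    # outputs: raw logits of shape (batch, n_classes)
    # labels:  multi-hot floats of the same shape
    preds = (torch.sigmoid(outputs) > threshold).float()
    # exact-match ratio: a sample counts only if every class is predicted correctly
    return (preds == labels).all(dim=1).float().mean().item()

logits = torch.tensor([[3.0, -2.0, 4.0],
                       [-1.0, 2.0, -3.0]])
labels = torch.tensor([[1.0, 0.0, 1.0],
                       [0.0, 1.0, 0.0]])
print(multilabel_accuracy(logits, labels))  # 1.0 for this toy batch
```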
Training loop:

loss_epoch_arr = []
max_epochs = 10
min_loss = 1000
n_iters = np.ceil(len(trainset) / batch_size)

for epoch in range(max_epochs):
    for i, data in enumerate(trainloader, 0):
        inputs, labels, imgindex = data
        inputs, labels, imgindex = inputs.to(device), labels.to(device), imgindex.to(device)
        # print(inputs.shape)

        opt.zero_grad()
        outputs = resnet(inputs)
        loss = loss_fn(outputs, labels)
        loss.backward()
        opt.step()

        if min_loss > loss.item():
            min_loss = loss.item()
            best_model = copy.deepcopy(resnet.state_dict())
            print('Min loss %0.2f' % min_loss)

        if i % 100 == 0:
            print('Iteration: %d/%d, Loss: %0.2f' % (i, n_iters, loss.item()))

        del inputs, labels, imgindex, outputs
        torch.cuda.empty_cache()

    loss_epoch_arr.append(loss.item())
    print('Epoch: %d/%d, Train acc: %0.2f, Test acc: %0.2f' %
          (epoch, max_epochs, evaluation(trainloader, resnet), evaluation(testloader, resnet)))

plt.plot(loss_epoch_arr)
plt.show()
I’m using a ResNet pretrained with ImageNet weights; I’ve modified the first layer to accept grayscale images and set the number of output classes in the last layer to 15.
When I run the training loop, the call to evaluation(trainloader, resnet) inside the print statement raises an error, which differs by runtime:
CPU -> RuntimeError: The size of tensor a (250) must match the size of tensor b (15) at non-singleton dimension 1 (250 is the batch size I’m using; 15 is the number of classes).
GPU ->
    inputs, labels, imgindex = data
--> inputs, labels, imgindex = inputs.to(device), labels.to(device), imgindex.to(device)
AttributeError: 'tuple' object has no attribute 'to'
I’m new to multi-label classification, so any help would be appreciated.