Hi Gian,
I am trying to implement your idea and I am stuck :(
for X, y in loader:
    X, y = X.to(device), y.to(device)
    item_count = X.shape[0]
    outputs = model(X)
    print("outputs shape ", outputs.shape)
    outputs_0 = my_act_func0(outputs)
    outputs_1 = my_act_func1(outputs)
    outputs_2 = my_act_func2(outputs)
    outputs_3 = my_act_func3(outputs)
    outputs_4 = my_act_func4(outputs)
    outputs_5 = my_act_func5(outputs)
    outputs_6 = my_act_func6(outputs)
    outputs_7 = my_act_func7(outputs)
    outputs_8 = my_act_func8(outputs)
    outputs_9 = my_act_func9(outputs)
    print("outputs_0 shape ", outputs_0.shape)
    # allocate on the same device as outputs, otherwise the assignments
    # below fail once the model runs on the GPU
    all_outputs = torch.zeros([10, item_count, 10], device=device)
    all_outputs[0] = outputs_0
    all_outputs[1] = outputs_1
    all_outputs[2] = outputs_2
    all_outputs[3] = outputs_3
    all_outputs[4] = outputs_4
    all_outputs[5] = outputs_5
    all_outputs[6] = outputs_6
    all_outputs[7] = outputs_7
    all_outputs[8] = outputs_8
    all_outputs[9] = outputs_9
    print("all_outputs shape ", all_outputs.shape)
    one_hot = torch.nn.functional.one_hot(y, num_classes=10)
    one_hot = one_hot.to(torch.float32)
    print("one hot", one_hot)
    print("one hot shape", one_hot.shape)
The output of the code above looks like this:
outputs shape torch.Size([128, 10])
outputs_0 shape torch.Size([128, 10])
all_outputs shape torch.Size([10, 128, 10])
one hot tensor([[1., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 1.],
[0., 1., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], device='cuda:0')
one hot shape torch.Size([128, 10])
But what I want is a final output of shape 128x10, where 128 is my batch size.
The selection should be driven by the labels: if the first data point x_0 in my batch has label 0, then the first row of the final output should be the output of my_act_func0 for x_0; if the second data point x_1 has label 7, then the second row should be the output of my_act_func7 for x_1; and so on.
I couldn’t manage to implement this based on your suggestion. Would you please help on this?
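To make the desired selection concrete, here is a tiny self-contained toy version of what I think I need (dummy tensors instead of my real model outputs; I am not sure this fancy-indexing approach is what you had in mind):

```python
import torch

batch, n_classes = 4, 10
# stand-in for the stacked activations: all_outputs[k, i] plays the role
# of my_act_funck(outputs)[i], shape [10, batch, n_classes]
all_outputs = torch.arange(10 * batch * n_classes, dtype=torch.float32)
all_outputs = all_outputs.reshape(10, batch, n_classes)
y = torch.tensor([0, 7, 3, 9])  # dummy labels for the batch

# for each sample i, pick the slice all_outputs[y[i], i, :]
# advanced indexing with two index tensors gives shape [batch, n_classes]
final = all_outputs[y, torch.arange(batch)]
print(final.shape)  # torch.Size([4, 10])

# sanity check against an explicit per-sample loop
expected = torch.stack([all_outputs[y[i], i] for i in range(batch)])
assert torch.equal(final, expected)
```

If this is roughly right, I suppose the real code would use my all_outputs of shape [10, 128, 10] and the batch labels y in place of the dummies.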