Correct way to find corresponding predictions

Hi,

I want to save the model predictions for two inputs: the test dataset and a shuffled version of it. Later I use those predictions to calculate accuracy. Because of the two inputs, there are two predictions for each sample. To calculate the accuracy, I want to find the two predictions belonging to each sample and average them. How can I do this? I tried matching equal labels and taking their corresponding outputs, but the resulting accuracy does not make sense.

Without shuffling I can match them, but the shuffling is important and cannot be removed.

You could write a custom Dataset that returns the index together with the data and target, and store the indices e.g. in a list.
Afterwards you could use these indices to map your samples using your condition.
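A minimal sketch of that idea (the dataset contents and tensor names here are made up for illustration):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class IndexedDataset(Dataset):
    """Wraps data/target tensors and additionally returns each sample's index."""
    def __init__(self, data, targets):
        self.data = data
        self.targets = targets

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # return the index alongside the sample so it can be stored later
        return self.data[idx], self.targets[idx], idx

# dummy data: 100 samples, 10 features, 10 classes
data = torch.randn(100, 10)
targets = torch.randint(0, 10, (100,))
loader = DataLoader(IndexedDataset(data, targets), batch_size=64, shuffle=True)

all_indices = []
for x, y, idx in loader:
    all_indices.append(idx)  # store the batch indices for the later mapping
```

Storing `idx` per batch lets you recover which original sample each (shuffled) prediction belongs to.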


Thank you very much for your help, @ptrblck.

I saved the predictions, targets, and indices in lists. I used for loops to find equal indices and their corresponding predictions, but it is very slow. Is there any way to speed up finding the equal indices?

for i in range(len(prediction_test)):
    idx = idx_test[i]                   # idx: torch.Size([64])
    target1 = target_test[i]            # target1: torch.Size([64])
    prediction1 = prediction_test[i]    # prediction1: torch.Size([64, 10])

    for j in range(len(idx)):
        idx1 = idx[j]

        for m in range(len(prediction_test)):
            idx2 = idx_testsh[m]
            prediction2 = prediction_testsh[m]
            for n in range(len(idx2)):
                if idx2[n] == idx1:
                    prediction22 = prediction2[n, :]
                    break

        target = target1[j]
        prediction11 = prediction1[j, :]

I’m not completely sure how all tensors are defined in your code, but I think you could use torch.unique to create a list with all masks and then loop once over these masks:

masks = [u==idx for u in idx.unique()]

Would that work for you?
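To make the idea concrete, here is a self-contained illustration with made-up index and prediction tensors (each index appears twice, mimicking the plain and shuffled passes):

```python
import torch

idx = torch.tensor([2, 0, 1, 2, 0, 1])   # sample indices, each appearing twice
preds = torch.randn(6, 10)               # one prediction row per entry in idx

# one boolean mask per unique sample index
masks = [u == idx for u in idx.unique()]

for u, mask in zip(idx.unique(), masks):
    # select and average the predictions that share sample index u
    avg = preds[mask].mean(dim=0)
```

Each mask selects all rows belonging to one sample, so the inner bookkeeping loops are no longer needed.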

I followed https://discuss.pytorch.org/t/typeerror-dataset-object-does-not-support-indexing/72799/19 to create the custom datasets, targets, and indices. I created the indices for the test dataset and its shuffled version separately, called idx_test and idx_testsh. To use torch.unique I put all these indices into one list, but that raises this error:

TypeError: _unique(): argument 'input' (position 1) must be Tensor, not list

How can I apply the output of torch.unique to the predictions?

You would have to pass a tensor, not a list, to torch.unique, e.g. via torch.tensor(), torch.stack, or torch.cat.

I got that, but the problem is: how can I convert a list of tensors to a tensor? I tried

idxt = [*idx_test, *idx_testsh]
index = torch.stack(idxt)

which raised this error (the last batch has size 16 while the rest have size 64):

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 64 and 16 in dimension 1 ...

I also tried

idxt = [*idx_test, *idx_testsh]
i = torch.tensor(idxt)

which raised this error:

ValueError: only one element tensors can be converted to Python scalars

torch.cat should work if you have different batch sizes, as long as the tensors have equal sizes in the remaining dimensions:

l = [torch.randn(16, 2), torch.randn(64, 2)]
out = torch.cat(l)

torch.cat worked, thank you @ptrblck.

Now I have a list of tensors with length 314; the tensors have shape 64x10, except the last one (16x10).

prediction = [*prediction_test, *prediction_testsh]

idxt = [*idx_test, *idx_testsh]
i = torch.cat(idxt)
mask = [u==i for u in torch.unique(i)]

Should multiplying the mask with the predictions then give the predictions for equal indices? How can this multiplication between these lists of tensors be done? I tried converting the list of tensors to a tensor and then torch.mul(a, b), but that gave an error about the sizes of tensor a and tensor b not matching.

Could you try to loop over these masks and use them to index your predictions?
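Something along these lines might work; the tensor names follow the earlier posts, but the data here is synthetic (5 samples seen twice each, 10 classes), and multiplication is replaced by boolean indexing:

```python
import torch

# synthetic stand-ins for the concatenated plain + shuffled passes
i = torch.tensor([0, 1, 2, 3, 4, 3, 0, 4, 1, 2])        # concatenated sample indices
prediction = torch.softmax(torch.randn(10, 10), dim=1)  # concatenated predictions
target = torch.arange(5)                                # one label per unique sample

correct = 0
for u in torch.unique(i):
    mask = (i == u)
    # average the two predictions belonging to this sample
    avg_pred = prediction[mask].mean(dim=0)
    if avg_pred.argmax() == target[u]:
        correct += 1

accuracy = correct / len(torch.unique(i))
```

Indexing with the boolean mask avoids the size-mismatch problem entirely, since `prediction[mask]` only keeps the matching rows.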

I tried the loop

prediction_total = mask[i].float().view(-1, 1) * prediction

Because of a dimension mismatch I used .view, but the answer is not correct:

>>>print(prediction)
tensor([[3.0603e-02, 5.6551e-02, 9.2520e-02,  ..., 4.0616e-02, 1.6195e-01,
4.0573e-02],
[2.9348e-02, 1.0917e-02, 1.8414e-04,  ..., 3.0553e-06, 9.5730e-01,
2.0963e-03],
[5.0212e-02, 9.1508e-03, 1.2127e-03,  ..., 5.4165e-05, 9.3203e-01,
5.9912e-03],
...,
[1.9600e-02, 1.7821e-01, 2.8131e-03,  ..., 7.4272e-03, 2.5914e-02,
7.5438e-01],
[1.4370e-01, 4.3823e-02, 1.7975e-03,  ..., 2.7101e-03, 1.4415e-01,
6.6198e-01],
[1.7615e-02, 2.3825e-02, 6.6172e-04,  ..., 1.4958e-02, 2.7619e-02,
9.1287e-01]], device='cuda:0')