Hi all,

My question is about an image segmentation task.

I have a tensor of size `(batch_size, 150, height, width)`, where the second dimension (`150`) corresponds to the number of classes.

Now I want to merge those classes into `25` classes by summing their probabilities.

First of all, is this procedure correct? It seems it should be; for example, I want to merge `river` and `lake` into one class.

Note: I have the source and target indices to be merged. For example:

```
target = [1, 2]
source = [[3, 4],
          [5, 6]]
# here, to make things clear, I use a 1-D example
input = [1, 3, 4, 2, 5, 7, 6]
expected_output = [1, 1, 1, 2, 2, 7, 2]
```
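Concretely, the remapping in this toy example could be done with a lookup table that sends every source index to its target index (a sketch of mine; `lut` and the `num_classes` bound are names/assumptions I made up):

```python
import torch

target = [1, 2]
source = [[3, 4], [5, 6]]

num_classes = 8                  # assumed upper bound on the label values
lut = torch.arange(num_classes)  # identity mapping by default
for t, srcs in zip(target, source):
    for s in srcs:
        lut[s] = t               # redirect each source index to its target

inp = torch.tensor([1, 3, 4, 2, 5, 7, 6])
out = lut[inp]
print(out)  # tensor([1, 1, 1, 2, 2, 7, 2])
```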

My question is, what is the optimal way to do this process?

My attempt: first, I build a one-hot vector for each source index that should be converted to a target index. Then I use these vectors as masks and apply a linear multiplication (a matrix product over the class dimension) to get the aggregated values, for all pixels at the same time.
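In code, my attempt looks roughly like this sketch (the names and the block-of-6 `class_map` mapping are placeholders I made up for illustration):

```python
import torch

batch_size, height, width = 2, 4, 4
num_src, num_dst = 150, 25

# Assumed mapping for illustration: every block of 6 consecutive
# classes merges into one merged class.
class_map = torch.arange(num_src) // 6             # (150,), values in [0, 25)

# M[t, s] = 1 if original class s belongs to merged class t
# (the one-hot rows act as the masks described above).
M = torch.zeros(num_dst, num_src)
M[class_map, torch.arange(num_src)] = 1.0

probs = torch.softmax(torch.randn(batch_size, num_src, height, width), dim=1)

# Contract over the class dimension for every pixel at once.
merged = torch.einsum('ts,bshw->bthw', M, probs)   # (batch, 25, height, width)

# Sanity check: probabilities still sum to 1 per pixel, since each
# original class maps to exactly one merged class.
print(merged.sum(dim=1))
```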

By the way, I am still unsure what the most efficient way is to achieve this using PyTorch's built-in functions.
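One built-in candidate I am considering is `Tensor.index_add_`, which sums slices of a source tensor into given positions along a dimension, so the mask matrix never needs to be materialized (again, `class_map` is a made-up name for the precomputed 150 → 25 index mapping):

```python
import torch

batch_size, height, width = 2, 4, 4
num_src, num_dst = 150, 25

# Assumed precomputed mapping: original class i -> merged class class_map[i].
class_map = torch.arange(num_src) // 6

probs = torch.softmax(torch.randn(batch_size, num_src, height, width), dim=1)

# For each of the 150 channels, add it into the merged channel
# given by class_map, along dim=1 (the class dimension).
merged = torch.zeros(batch_size, num_dst, height, width)
merged.index_add_(1, class_map, probs)

print(merged.shape)  # torch.Size([2, 25, 4, 4])
```

Is this (or `scatter_add_`) the idiomatic way, or is there something better?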