Combine 2 channels of an image

Hello! I have a 2-channel image, but the two channels come in separate files, so I have two tensors of size 64 x 64 each. How can I combine them into a single tensor of size 2 x 64 x 64? I found some approaches using view, but I am not totally sure the reshaping is done the way I want (it goes from 128 x 64 to 2 x 64 x 64).

You could call torch.stack on these tensors:

x1 = torch.randn(64, 64)
x2 = torch.randn(64, 64)
x = torch.stack((x1, x2))
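
For reference, a minimal sketch to double-check the resulting shape (assuming the default dim=0); torch.cat with unsqueeze would be an equivalent alternative:

import torch

x1 = torch.randn(64, 64)
x2 = torch.randn(64, 64)

# stack creates a new leading dimension, so the result is [2, 64, 64]
x = torch.stack((x1, x2))
print(x.shape)
> torch.Size([2, 64, 64])

# equivalent: add the channel dimension explicitly, then concatenate along it
x_alt = torch.cat((x1.unsqueeze(0), x2.unsqueeze(0)), dim=0)
print(torch.equal(x, x_alt))
> True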

Hi @ptrblck,

I would be glad to hear your approach for combining one-hot tensors of the same shape. For example, I have x1 = torch.randint(0, 2, size=(1, 3, 3)):

tensor([[[0, 1, 1],
         [0, 1, 0],
         [0, 1, 1]]])

and x2:

tensor([[[1, 0, 0],
         [1, 0, 1],
         [1, 0, 0]]])

I want to get x3:

tensor([[[1, 1, 1],
         [1, 1, 1],
         [1, 1, 1]]])

Also, how can I obtain x2 (the complement of x1) using a PyTorch function?

I’m not sure if I understand the use case correctly, but you could use a bitwise OR instead of any stacking operation:

x1 = torch.tensor([[[0, 1, 1],
                    [0, 1, 0],
                    [0, 1, 1]]])
x2 = torch.tensor([[[1, 0, 0],
                    [1, 0, 1],
                    [1, 0, 0]]])
x3 = x1 | x2
print(x3)
> tensor([[[1, 1, 1],
           [1, 1, 1],
           [1, 1, 1]]])

Thanks for the suggestion, that's exactly what I need for x3. For x2, torch.where(x1 == 0, 1, 0) worked, since x2 is just the complement of x1.
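
For reference, a minimal sketch of building the complement (assuming x1 only contains 0s and 1s; 1 - x1 is an equivalent shortcut):

import torch

x1 = torch.tensor([[[0, 1, 1],
                    [0, 1, 0],
                    [0, 1, 1]]])

# complement of the 0/1 mask: 0 -> 1 and 1 -> 0
x2 = torch.where(x1 == 0, torch.ones_like(x1), torch.zeros_like(x1))
x2_alt = 1 - x1  # equivalent shortcut for a 0/1 integer tensor
print(torch.equal(x2, x2_alt))
> True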

I got this error: RuntimeError: "bitwise_or_cpu" not implemented for 'Float'. How can I fix this?

Which PyTorch version are you using? You might need to update it if you are using an older version.

The installed version is torch 1.7.0+cpu

I cannot reproduce this issue on 1.7.0:

>>> import torch
>>> x1 = torch.tensor([[[0, 1, 1],
...                     [0, 1, 0],
...                     [0, 1, 1]]])
>>> x2 = torch.tensor([[[1, 0, 0],
...                     [1, 0, 1],
...                     [1, 0, 0]]])
>>> x3 = x1 | x2
>>> print(x3)
tensor([[[1, 1, 1],
         [1, 1, 1],
         [1, 1, 1]]])
>>> print(torch.__version__)
1.7.0

It is because the dtype is torch.int64. If you set the dtype to torch.float, the error is reproduced.
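
For reference, a minimal reproduction (bitwise operators are only implemented for integer and bool dtypes):

import torch

x1 = torch.tensor([[0., 1.], [1., 0.]])
x2 = torch.tensor([[1., 0.], [0., 1.]])

# raises RuntimeError: "bitwise_or_cpu" not implemented for 'Float'
x3 = x1 | x2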

You are right. I didn’t run into this error, as I was using your originally posted tensors.
For these boolean operations, I would transform the tensors via .long() or .byte() to get the right result, or alternatively use torch.logical_or(x1, x2).
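
A minimal sketch of both workarounds (assuming the tensors only contain 0s and 1s):

import torch

x1 = torch.tensor([[0., 1.], [1., 0.]])
x2 = torch.tensor([[1., 0.], [0., 1.]])

# option 1: cast to an integer dtype before the bitwise OR
x3 = x1.long() | x2.long()
print(x3)
> tensor([[1, 1],
          [1, 1]])

# option 2: logical_or works on float tensors directly, but returns bool
x3_bool = torch.logical_or(x1, x2)
print(x3_bool.float())  # cast back if a float result is needed
> tensor([[1., 1.],
          [1., 1.]])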


Thanks so much. Both approaches worked, although torch.logical_or returns a boolean tensor.