Batch indexing RGB images with batched pixel locations

Hi,

I have a batch of RGB images as a tensor of shape B x H x W x C and a batch of pixel locations of shape B x index x 2 (a batch of us and vs).

I can’t batch index the RGB tensor directly; I often run into a CUDA error mentioning PYTORCH_CUDA_ALLOC_CONF.

Is there a way this can be done?
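
To make the shapes concrete, here is a minimal sketch of the setup with placeholder sizes (assuming the uv locations are already integer indices inside the image bounds; all names and numbers below are just examples):

```python
import torch

B, H, W, C, N = 4, 256, 256, 3, 16

images = torch.rand(B, H, W, C)          # batch of RGB images, channels last
uvs = torch.randint(0, H, (B, N, 2))     # N integer (u, v) locations per image

# for a single image and a single location, the lookup is simply:
u, v = uvs[0, 0]
pixel = images[0, u, v]                  # shape (C,), i.e. one RGB vector

# goal: do this for every image and every location in one call,
# producing a tensor of shape (B, N, C) without a Python loop
```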

Could you post a minimal, executable code snippet reproducing the issue, please?

Hi,

Assume a batch of RGBD images,
batch_image = torch.rand((5, 4, 200, 200)) with format (B, 4, H, W),
and its pixel locations uvs = torch.rand(5, 8, 2) with format (B, us, vs).

Here, I have to extract the pixel vectors from the RGBD images using the uv indices. Each pixel location yields a vector → [R, G, B, D].

The output should then have shape (5, 8, 4), since each batch element has 8 pixel locations to be queried.
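
To illustrate what a single lookup looks like in this channels-first layout, here is a small sketch (integer uv indices assumed; the values below are placeholders):

```python
import torch

batch_image = torch.rand(5, 4, 200, 200)   # (B, 4, H, W) RGBD batch
uvs = torch.randint(0, 200, (5, 8, 2))     # (B, N, 2) integer pixel locations

# one batch element, one location -> one RGBD vector
u, v = uvs[0, 0]
vec = batch_image[0, :, u, v]              # shape (4,) -> [R, G, B, D]

# collecting all 8 vectors for all 5 images should give shape (5, 8, 4)
```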

I have written a loop (posted as a screenshot), which is computationally slow: I iterate over the batch of images and extract the descriptors, which are just these per-pixel RGBD vectors, yielding a tensor of shape (5, 8, 4).

I tried to index the descriptors as follows,
```python
batch_descriptors_a = batch_image_a[batch_matches_a[:, 0].long(), :, batch_matches_a[:, 1].long(), batch_matches_a[:, 2].long()]
```

which raises an error mentioning PYTORCH_CUDA_ALLOC_CONF.
I am using a Linux machine with a 24 GB GPU and still have an ample amount of VRAM available.

On Windows 11 under WSL2, I get a BSOD.

@ptrblck

I have shared the code.

Unfortunately, you didn’t share the code but posted an image instead, which I would have to transcribe to rerun the code and which also doesn’t seem to be executable.
You can post code snippets by wrapping them into three backticks ```.

Hi,

Please excuse me; here is the code:

```python
from typing import List, Tuple

import torch


@torch.jit.script
def _pick_descriptors(batch_image_a: torch.Tensor,
                      batch_image_b: torch.Tensor,
                      batch_matches_a: torch.Tensor,
                      batch_matches_b: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
    descriptors_a: List[torch.Tensor] = []
    descriptors_b: List[torch.Tensor] = []
    # per-image gather: pick the (u, v) pixel vectors of each image, giving a (C, N) tensor per element
    for image_a, image_b, matches_a, matches_b in zip(batch_image_a, batch_image_b, batch_matches_a, batch_matches_b):
        descriptors_a.append(image_a[:, matches_a[:, 0].long(), matches_a[:, 1].long()])
        descriptors_b.append(image_b[:, matches_b[:, 0].long(), matches_b[:, 1].long()])

    # stack to (B, C, N), then permute to (B, N, C)
    batch_descriptors_a = torch.stack(descriptors_a).permute(0, 2, 1)
    batch_descriptors_b = torch.stack(descriptors_b).permute(0, 2, 1)

    return batch_descriptors_a, batch_descriptors_b


if __name__ == "__main__":
    image_a = torch.rand((5, 4, 255, 255), device=torch.device("cuda"))
    image_b = torch.rand((5, 4, 255, 255), device=torch.device("cuda"))

    matches_a = torch.rand((5, 8, 2), device=torch.device("cuda")).clamp(0, 254)  # clipped for valid pixel locations
    matches_b = torch.rand((5, 8, 2), device=torch.device("cuda")).clamp(0, 254)

    # descriptors_a = _pick_descriptors(image_a, image_b, matches_a, matches_b)
    descriptors = image_a[matches_a[:, 0].long(), :, matches_a[:, 1].long(), matches_a[:, 2].long()]  # Error is generated here and does not generate desired shape
```
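
For completeness, the loop-free version I am trying to arrive at would presumably look something like the sketch below, indexing the batch dimension with an arange instead of with the uv values themselves. I have not verified it against the TorchScript loop, so please correct me if the indexing is off:

```python
import torch


def _pick_descriptors_vectorized(batch_image: torch.Tensor,
                                 batch_matches: torch.Tensor) -> torch.Tensor:
    # batch_image:   (B, C, H, W)
    # batch_matches: (B, N, 2) integer (u, v) pixel locations
    batch_idx = torch.arange(batch_image.size(0), device=batch_image.device).unsqueeze(1)  # (B, 1)
    us = batch_matches[..., 0].long()  # (B, N)
    vs = batch_matches[..., 1].long()  # (B, N)
    # the advanced indices sit at dims 0, 2 and 3 and broadcast to (B, N);
    # the channel dim is kept, so the result should have shape (B, N, C)
    return batch_image[batch_idx, :, us, vs]
```

If I am reading the advanced-indexing rules correctly, this should give the (5, 8, 4) shape directly, without the permute.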

Hello @ptrblck,

I am waiting for your assistance.

Thanks for sharing the code. Unfortunately, I cannot reproduce any error with it yet. Could you post more information about your setup (e.g. which PyTorch version you are using) and update to the latest PyTorch release to check whether you still see the same issue?