I have n points as a tensor of shape (n, 2), where the second dimension holds the x and y coordinates on an image. I want to convert them to a tensor of shape (n, W, H), where W and H are the x and y dimensions of the image.

How can I do it **with no for loop?**

thanks


If you want to set a specific value for these coordinates into the image, this code should work:

```
# 4 images of size 5x5; set one pixel per image to 1
img = torch.zeros(4, 5, 5)
# one random (row, col) coordinate pair per image
x = torch.randint(0, 5, (4, 2))
img[torch.arange(img.size(0)), x[:, 0], x[:, 1]] = 1.
print(img)
```
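A quick way to convince yourself that the advanced indexing above hits exactly one pixel per image is to re-run the snippet with a couple of assertions added (same shapes as above):

```
import torch

img = torch.zeros(4, 5, 5)
x = torch.randint(0, 5, (4, 2))
img[torch.arange(img.size(0)), x[:, 0], x[:, 1]] = 1.

# Exactly one pixel is set per image, at (x[i, 0], x[i, 1]).
assert img.sum().item() == 4.0
for i in range(4):
    assert img[i, x[i, 0], x[i, 1]].item() == 1.0
```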

If the points tensor is [B, N, 2], how can I convert it to [B, H, W] without a loop?

I’m unsure what your exact use case is, so could you describe it in more detail, please?

I have the projected points, [B, N, 2] (batch size, number of points, x/y), and I want to convert them to images [B, H, W]. First, I set images = torch.zeros([B, H, W]). Then, images[points] = 1. The points act as indices: the indexed pixels get value 1, the others stay 0.

Indexing should work:

```
B, N = 3, 4
H, W = 5, 5
x = torch.zeros(B, H, W)
# N random (row, col) coordinate pairs per batch element
idx = torch.randint(0, H, (B, N, 2))
# broadcast the batch index (B, 1) against the N point indices (B, N)
x[torch.arange(B).unsqueeze(1), idx[:, :, 0], idx[:, :, 1]] = 1.
```


Thank you very much!

I tested it. After several iterations, `x[torch.arange(B).unsqueeze(1), idx[:, :, 0], idx[:, :, 1]] = 1.` causes an error: “CUDA error: device-side assert triggered”. My environment is CUDA 11.4, torch 1.12.1, and setting os.environ["CUDA_LAUNCH_BLOCKING"] = "1" produces the same error log.

I guess you might be running into an indexing error, and the stacktrace with blocking launches would also show it. Check the shape of `x` as well as the values in `idx`, and make sure they are still valid after a few iterations.
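In code, such a sanity check could look like this (shapes taken from the earlier snippet; `check_idx` is a hypothetical helper, not part of PyTorch). Running it on the CPU fails with a readable message instead of a device-side assert on the GPU:

```
import torch

B, N = 3, 4
H, W = 5, 5
idx = torch.randint(0, H, (B, N, 2))

# Hypothetical helper: verify all point indices are inside the image
# before using them for advanced indexing.
def check_idx(idx, H, W):
    assert idx[..., 0].min() >= 0 and idx[..., 0].max() < H, "row index out of bounds"
    assert idx[..., 1].min() >= 0 and idx[..., 1].max() < W, "column index out of bounds"

check_idx(idx, H, W)
```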

Yes, it’s because the idx values are out of bounds. Thank you!
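For reference, here is one way to guard against out-of-bounds projected points before indexing. This is a sketch with the same shapes as above; whether you want to clamp points to the border or drop them entirely is an assumption about the desired behavior, not something stated in the thread:

```
import torch

B, N = 3, 4
H, W = 5, 5
x = torch.zeros(B, H, W)
# Simulate projected points that may fall outside the image.
idx = torch.randint(-2, H + 2, (B, N, 2))

# Option 1: clamp points into the valid range (moves them to the border).
clamped = torch.stack([idx[..., 0].clamp(0, H - 1),
                       idx[..., 1].clamp(0, W - 1)], dim=-1)

# Option 2: mask out invalid points entirely.
valid = (idx[..., 0] >= 0) & (idx[..., 0] < H) \
      & (idx[..., 1] >= 0) & (idx[..., 1] < W)
b = torch.arange(B).unsqueeze(1).expand(B, N)
x[b[valid], idx[..., 0][valid], idx[..., 1][valid]] = 1.
```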