I inherited some code that normalizes bounding box coordinates (BATCH_SIZE x NUM_BOUNDING_BOXES x FOUR_COORDINATES) based on batched image sizes (BATCH_SIZE x X_Y_DIMS). The problem is that when the batch size (B in the code below) is greater than 1 the code no longer works: for B = 2 it silently performs the wrong operation, and for B > 2 it fails with a dimension mismatch.

In the snippet below, tensor_a represents the batched bounding boxes and tensor_b the image dimensions. Note that the two image dimensions are deliberately unequal (4 and 1) so it's easy to check which coordinates each division actually hits. How could I do this operation for any batch size?

```
import torch

B = 1
tensor_a = torch.ones([B, 36, 4])  # batched boxes: (B, num_boxes, 4)
tensor_b = torch.ones([B, 2])      # batched image sizes: (B, 2)
tensor_b[:, 0] = 4

# Normalize x-coords by one image dim and y-coords by the other.
# Works for B = 1, is silently wrong for B = 2, and raises for B > 2,
# because tensor_b[:, 1] has shape (B,) and broadcasts against the
# trailing (coordinate) dimension instead of the batch dimension.
test_res1 = tensor_a[:, :, (0, 2)] / tensor_b[:, 1]
test_res2 = tensor_a[:, :, (1, 3)] / tensor_b[:, 0]
test_res1, test_res2
```
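In case it helps frame the question: one way I would expect this to work (sketch, not verified against the original pipeline) is to reshape the per-batch divisors to `(B, 1, 1)` so that broadcasting aligns with the batch dimension rather than the coordinate dimension:

```python
import torch

B = 3
tensor_a = torch.ones([B, 36, 4])  # batched boxes: (B, num_boxes, 4)
tensor_b = torch.ones([B, 2])      # batched image sizes: (B, 2)
tensor_b[:, 0] = 4

# Reshape each divisor to (B, 1, 1): it now broadcasts over the
# box and coordinate dimensions, matching per-batch image sizes.
res1 = tensor_a[:, :, (0, 2)] / tensor_b[:, 1].view(B, 1, 1)
res2 = tensor_a[:, :, (1, 3)] / tensor_b[:, 0].view(B, 1, 1)
```

This runs for any B, but I'm not sure it's the idiomatic way, and I'd also like to keep the original (B, 36, 4) layout rather than two separate (B, 36, 2) results.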