Populating a 4D tensor from 3D tensors using stack

I am mostly certain this is something trivial, yet I can't find much on Google about it.
I'm populating 3D tensors from BGR data, and I need to place them in a 4D tensor to form a batch for evaluation/testing purposes.
I know how to get my 3D tensor:

img   = Image.open(file)                                   # load the image with PIL
in_t  = self.img_tf(img).cuda(non_blocking=True).float()   # transform to a 3D (C, H, W) tensor on the GPU

And I know the size of the batch:

def make_batch(self, faces):
    data = []
    for (uuid, hwid, file) in faces:
        img  = Image.open(file)
        in_t = self.img_tf(img).cuda(non_blocking=True).float()
        print(in_t.shape)
        data.append(in_t)
    self.input = torch.tensor(data)  # this is the line that fails
    return self.input
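
The torch.tensor(data) call on the last line is what blows up: as far as I can tell, torch.tensor expects numbers or nested lists of numbers, not a list of existing tensors. A minimal repro, with random data standing in for my images:

import torch

ts = [torch.randn(3, 224, 224) for _ in range(2)]
batch = torch.tensor(ts)   # ValueError: only one element tensors can be converted to Python scalars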

My problem, of course, is how to stack the in_t tensors into self.input so that the result can be fed straight through the network afterwards:

def run_batch(self):
    with torch.no_grad():
        output = self.net(self.input)                  # forward pass over the whole batch
        pred   = output.argmax(dim=1, keepdim=True)    # predicted class index per sample

Placing them in a plain list doesn't work, and passing them through one at a time after unsqueezing makes no sense since I already have a batch for evaluation. So, AFAIK, I need to stack the 3-channel BGR tensors into a 4D tensor, with the first dimension being the batch?

Have you looked into the function torch.stack()?
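
It stacks a sequence of tensors along a new dimension, which is exactly the "unsqueeze each tensor and concatenate" operation you describe, done in one call. A quick check with random data:

import torch

imgs = [torch.randn(3, 224, 224) for _ in range(4)]   # dummy stand-ins for the transformed images

a = torch.stack(imgs)                                  # new leading batch dimension: (4, 3, 224, 224)
b = torch.cat([t.unsqueeze(0) for t in imgs], dim=0)   # same result, built by hand

print(torch.equal(a, b))
> True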

Yes, that's what I've been trying to use, but I keep getting the dimensions mixed up.
I've found another post on here asking the same question, and the solution there was to just set up a 4D tensor and populate it directly:

data = torch.zeros([len(faces), 3, 224, 224])   # preallocate the 4D batch tensor
self.face_map = []
for i, (uuid, hwid, file) in enumerate(faces):
    img  = Image.open(file)
    in_t = self.img_tf(img)                     # 3D tensor of shape (3, 224, 224)
    data[i] = in_t                              # copy it into slot i of the batch

Like so. I imagine the “proper” approach would be to use stack instead?
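
One difference from my original make_batch, for anyone reading along: here the batch is never moved to the GPU. Assuming the network lives on the GPU as in the earlier snippets, the assembled tensor presumably still needs something like:

self.input = data.cuda(non_blocking=True).float()   # move the whole batch to the GPU, as in the original loop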

This approach is fine. Alternatively, you could append the tensors to a list and use torch.stack on the list to get the same result.

Thanks @ptrblck, I'll stick with that approach then. I've tried to use torch.stack, but I seem to be getting the dimensions wrong, because every time I get an error/exception.
What would be a minimal example of stacking 3D BGR tensors of shape (3, 224, 224) into a 4D batch tensor?

This would be a simple example:

x = []
for _ in range(10):
    x.append(torch.randn(3, 224, 224))   # dummy 3D image tensors
x = torch.stack(x)                       # stacks along a new dim 0 by default
print(x.shape)
> torch.Size([10, 3, 224, 224])

Aha! Now I see what I did wrong. Thanks a lot, this is actually cleaner code.
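
For completeness, here is roughly what my make_batch looks like now, sketched under the same assumptions as the snippets above (self.img_tf is the image transform, faces is the list of (uuid, hwid, file) tuples):

def make_batch(self, faces):
    data = []
    for (uuid, hwid, file) in faces:
        img  = Image.open(file)
        in_t = self.img_tf(img)   # 3D tensor of shape (3, 224, 224)
        data.append(in_t)
    # stack the list of 3D tensors into one 4D (N, 3, 224, 224) batch, then move it to the GPU
    self.input = torch.stack(data).cuda(non_blocking=True).float()
    return self.input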