I am mostly certain this is something trivial, yet I can't find much on Google about it.
I'm populating 3D tensors from BGR image data, and I need to place them in a 4D tensor to form a batch for evaluation/testing.
I know how to get my 3D tensor:
img = Image.open(file)
in_t = self.img_tf(img).cuda(non_blocking=True).float()
And I know the size of the batch:
def make_batch(self, faces):
    data = []
    for (uuid, hwid, file) in faces:
        img = Image.open(file)
        in_t = self.img_tf(img).cuda(non_blocking=True).float()
        print(in_t.shape)
        data.append(in_t)
    self.input = torch.tensor(data)
    return self.input
My problem, of course, is how to stack the `in_t` tensors into `self.input` so that it makes sense to use for forward propagation right afterwards:
def run_batch(self):
    with torch.no_grad():
        output = self.net(self.input)
        pred = output.argmax(dim=1, keepdim=True)
Placing them in a plain Python list doesn't work, and passing them through one at a time with `unsqueeze` makes no sense since I already have a full batch for evaluation. So AFAIK I need to stack the 3-channel BGR tensors into a 4D tensor, with the first dimension being the batch, right?
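For reference, here is a minimal sketch of the stacking step in isolation (using random tensors as stand-ins for the transformed images, with an assumed `3 x 64 x 64` shape): `torch.stack` joins a list of same-shaped tensors along a new leading dimension, which is exactly the 4D batch layout described above.

```python
import torch

# Stand-ins for the C x H x W tensors produced by the image transform
# (shape 3 x 64 x 64 is an assumption for illustration).
data = [torch.randn(3, 64, 64) for _ in range(4)]

# torch.stack creates a NEW dimension at dim=0 (the default),
# turning N tensors of shape (3, H, W) into one (N, 3, H, W) tensor.
batch = torch.stack(data)

print(batch.shape)  # torch.Size([4, 3, 64, 64])
```

Note that `torch.tensor(data)` expects nested Python numbers, not a list of tensors, which is why the list approach in `make_batch` fails; `torch.stack` (or `torch.cat` on already-unsqueezed tensors) is the operation that builds the batch dimension.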