Yes, that’s what I’ve been trying to use, but I keep getting the dimensions mixed up.
I found another post on here asking the same question, and his solution was to just set up a 4D tensor and populate it directly:
import torch
from PIL import Image

data = torch.zeros([len(faces), 3, 224, 224])
self.face_map = []
i = 0
for (uuid, hwid, file) in faces:
    img = Image.open(file)
    in_t = self.img_tf(img)   # transform yields a 3x224x224 tensor
    data[i] = in_t            # copy into the preallocated 4D tensor
    i += 1
Like so. I imagine the “proper” approach would be to use torch.stack instead?
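Roughly something like this is what I have in mind (untested sketch; it assumes self.img_tf always returns a 3x224x224 tensor so the shapes line up):

tensors = []
for (uuid, hwid, file) in faces:
    img = Image.open(file)
    tensors.append(self.img_tf(img))   # collect each 3x224x224 tensor
data = torch.stack(tensors)            # stacks along a new dim 0 -> [len(faces), 3, 224, 224]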