Can I create a vector or array of Torch tensors with differing dimensions as long as they have the same data type? I want to use pointer arithmetic to access the tensors.
Yes. std::vector&lt;Tensor&gt; is standard and is used within PyTorch itself to represent lists of tensors. When passing tensors around, TensorList (an ArrayRef&lt;Tensor&gt;) often comes in handy.
So if I push a tensor of dim 2x20x200 into the std::vector and later push a tensor of dim 999x999x999 into it, there will be no issues?
After a few 4 GB tensors (a 999x999x999 tensor of 32-bit values is roughly 4 GB), you might run out of memory, but other than that, yes, it works.