Tensors with dimensions known at compile time

The Eigen library allows you to declare Matrix types whose dimensions are fixed, thus avoiding dynamic memory allocation when constructing such objects.

Does PyTorch C++ have something similar? Or must all Tensors incur the cost of dynamic memory allocation?

PyTorch doesn’t allow you to directly allocate memory via the Python frontend. However, if you are concerned about the cost of allocating GPU memory, you could initially allocate a large tensor, delete it, and let PyTorch reuse this memory through its internal caching allocator. In C++ you can allocate your own buffers and pass them to PyTorch via torch::from_blob. Temporary tensors created by operations would still be managed by PyTorch.
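A minimal sketch of the torch::from_blob approach, assuming libtorch is available. The buffer here is a statically sized stack array, so no dynamic allocation happens for the tensor's data; note that from_blob does not take ownership, so the buffer must outlive the tensor, and results of operations on it are still heap-allocated tensors managed by PyTorch:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // Statically sized buffer: no dynamic allocation for the data itself.
  float data[2][3] = {{1.f, 2.f, 3.f}, {4.f, 5.f, 6.f}};

  // Wrap the existing buffer in a tensor without copying.
  // PyTorch does not take ownership; `data` must outlive `t`.
  torch::Tensor t = torch::from_blob(data, {2, 3}, torch::kFloat32);

  std::cout << t << "\n";

  // Caveat: operations on `t` (e.g. t.sum()) still allocate new,
  // dynamically managed tensors for their results.
  std::cout << t.sum().item<float>() << "\n";
  return 0;
}
```

Writes through `t` also mutate `data`, since both share the same storage.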