TensorAccessor operations


If one uses TensorAccessor<T, N>, is there a way to use high-level operations while keeping the performance benefit of bypassing per-element dynamic dispatch and type checks?

In other words, can we do things like

auto X_ = X.accessor<float, 1>();
auto Y_ = Y.accessor<float, 1>();
at::Tensor Z = torch::dot(X_, Y_);

Similar to how we work with Tensor types?

No. Accessors are more or less “array-like” interfaces to the memory of strided tensors.
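For illustration, here is a minimal sketch of what working with accessors typically looks like: you index elements directly and write the loop yourself, while high-level ops such as torch::dot are called on the at::Tensor objects. The helper name dot_with_accessors is just made up for this example.

#include <torch/torch.h>
#include <iostream>

// Illustrative helper, not part of any API: a manual dot product using
// accessors. Accessors only give element access; they are not Tensors.
float dot_with_accessors(const at::Tensor& X, const at::Tensor& Y) {
  auto X_ = X.accessor<float, 1>();  // CPU accessor: float data, 1 dimension
  auto Y_ = Y.accessor<float, 1>();
  float result = 0.0f;
  for (int64_t i = 0; i < X_.size(0); ++i) {
    result += X_[i] * Y_[i];         // direct element reads, no per-access dispatch
  }
  return result;
}

int main() {
  auto X = torch::rand({8});
  auto Y = torch::rand({8});
  std::cout << dot_with_accessors(X, Y) << "\n";
  std::cout << torch::dot(X, Y).item<float>() << "\n";  // high-level op stays on the Tensor
}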

Best regards

Thomas