Reading the documentation https://pytorch.org/cppdocs/notes/tensor_basics.html, it looks like `accessor()` takes a template parameter that specifies the dimensionality. Is this value allowed to differ from the source tensor's dimensionality? For example, if I want to access a tensor as a logical 1-d array, but the source tensor is a possibly noncontiguous n-d tensor (coming from a slice operation), am I allowed to do `my_tensor.accessor<float, 1>()`?

# How to use the Tensor::accessor

I think I found the answer in the generated source code `Tensor.h`:

```cpp
// Return a `TensorAccessor` for CPU `Tensor`s. You have to specify scalar type and
// dimension.
template<typename T, size_t N>
TensorAccessor<T,N> accessor() const& {
  static_assert(N > 0, "accessor is used for indexing tensor, for scalars use *data<T>()");
  TORCH_CHECK(dim() == N, "expected ", N, " dims but tensor has ", dim());
  return TensorAccessor<T,N>(data<T>(),sizes().data(),strides().data());
}
```
```