Why does .cpu() yield a TensorBase?

For certain I/O functionality, I need to transfer tensors that may reside in GPU memory to host memory. The following code works:

torch::Tensor a;
auto a_cpu = a.cpu();
auto a_accessor = a_cpu.accessor<float, 1>();

However,

auto a_accessor = a.cpu().accessor<float, 1>();

does not compile; the compiler reports:

libtorch/1.13.1/include/ATen/core/TensorBase.h:562:23: note: candidate function [with T = double, N = 1] has been explicitly deleted
  TensorAccessor<T,N> accessor() && = delete;

Is it possible to do the transfer to host memory inline, i.e. in a single expression?
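
For context, here is a minimal self-contained version of what I am trying; the random initialization is just a stand-in, in my real code the tensor may live on a CUDA device:

#include <torch/torch.h>

int main() {
  // Stand-in tensor; in the real code this may be allocated on the GPU.
  torch::Tensor a = torch::rand({4});

  // Works: the host copy is bound to a named variable first.
  auto a_cpu = a.cpu();
  auto ok_accessor = a_cpu.accessor<float, 1>();

  // Fails to compile: accessor() is deleted for rvalues, and a.cpu()
  // returns a temporary tensor.
  // auto bad_accessor = a.cpu().accessor<float, 1>();
}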