How to convert 3D images into a libtorch tensor

If these were ordinary 2D images, we could load them with OpenCV and convert them to a tensor like this:

auto tensor_img = torch::from_blob(buffer_img_float_.data,
                                   {1, buffer_img_float_.rows, buffer_img_float_.cols, 3})
                      .to(get_device());
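
For context, a fuller, self-contained version of that 2D path might look like the sketch below. This is only an illustration: load_2d_image is a hypothetical helper name, and the device parameter stands in for whatever get_device() returns.

#include <opencv2/opencv.hpp>
#include <torch/torch.h>

torch::Tensor load_2d_image(const std::string& path, const torch::Device& device) {
    // Load as 8-bit BGR, then convert to float in [0, 1].
    cv::Mat img = cv::imread(path, cv::IMREAD_COLOR);
    cv::Mat img_float;
    img.convertTo(img_float, CV_32FC3, 1.0 / 255.0);

    // OpenCV stores pixels row-major as H x W x C, so the blob is NHWC.
    auto tensor = torch::from_blob(img_float.data,
                                   {1, img_float.rows, img_float.cols, 3},
                                   torch::kFloat);

    // Most models expect NCHW; clone via contiguous() so the tensor owns
    // its memory (from_blob only borrows the cv::Mat buffer).
    return tensor.permute({0, 3, 1, 2}).contiguous().to(device);
}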

But what if the input tensor needs to have 5 dimensions?

Like

torch.zeros((1,64,3,224,224)) #batch, slices, channels, height, width

I can load the NIfTI with SimpleITK and extract the pixels, but how can I convert them to a libtorch tensor? And in what order should I feed the data into the vector<float>?
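
My understanding is that torch::from_blob interprets a flat buffer in row-major (C) order, so presumably the last dimension (col) has to vary fastest. A sketch of what I have in mind, where get_pixel is a placeholder for however I'd read voxels out of SimpleITK:

#include <torch/torch.h>

#include <functional>
#include <vector>

torch::Tensor volume_to_tensor(
        const std::function<float(int64_t, int64_t, int64_t, int64_t)>& get_pixel) {
    const int64_t slices = 64, channels = 3, height = 224, width = 224;
    std::vector<float> buffer;
    buffer.reserve(slices * channels * height * width);

    // Fill in the same order as the target shape {1, slices, channels, H, W}:
    // outermost index slowest, innermost index (col) fastest.
    for (int64_t s = 0; s < slices; ++s)
        for (int64_t c = 0; c < channels; ++c)
            for (int64_t row = 0; row < height; ++row)
                for (int64_t col = 0; col < width; ++col)
                    buffer.push_back(get_pixel(s, c, row, col));

    // from_blob borrows the vector's memory; clone() so the tensor
    // survives after buffer goes out of scope.
    return torch::from_blob(buffer.data(),
                            {1, slices, channels, height, width},
                            torch::kFloat)
        .clone();
}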

Or just

at::Tensor foo = at::ones({1, 64, 3, 224, 224}, at::kFloat);

Then copy the pixels to this tensor one by one?

auto foo_a = foo.accessor<float, 5>();
foo_a[0][slice][channel][row][col] = pixel_value[0][slice][channel][row][col];
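
For completeness, the full element-by-element version I'm imagining would be something like the following, where voxels is a placeholder for a flat buffer extracted from SimpleITK, assumed to already be in slice/channel/row/col order:

#include <torch/torch.h>

#include <vector>

torch::Tensor copy_via_accessor(const std::vector<float>& voxels) {
    const int64_t S = 64, C = 3, H = 224, W = 224;
    at::Tensor foo = at::zeros({1, S, C, H, W}, at::kFloat);
    auto foo_a = foo.accessor<float, 5>();  // fast element access, CPU tensors only

    for (int64_t s = 0; s < S; ++s)
        for (int64_t c = 0; c < C; ++c)
            for (int64_t h = 0; h < H; ++h)
                for (int64_t w = 0; w < W; ++w)
                    // Index math assumes voxels uses the same C-order
                    // layout as the tensor itself.
                    foo_a[0][s][c][h][w] = voxels[((s * C + c) * H + h) * W + w];

    return foo;
}

Though if the source buffer really is contiguous in that order, a single torch::from_blob(...).clone() would avoid the loop entirely.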