torch::max() without dynamic memory allocation

If I run the following code…

    #include <torch/torch.h>
    #include <iostream>

    int main() {
      torch::Tensor t = torch::rand({5});
      torch::Tensor m = torch::zeros(0);
      for (int i = 0; i < 5; ++i) {
        m = torch::max(t);                  // reassigns m to a freshly computed result
        std::cout << m.data_ptr() << std::endl;
      }
    }

…I get the following output:

    0x558373cc8e00
    0x558373cda3c0
    0x558373cda480
    0x558373cda4c0
    0x558373cda580

As you can see, the data pointer changes on every iteration, which indicates that a new tensor is being allocated each time. How can I rewrite the line m = torch::max(t); so that the data pointer stays the same across loop iterations, i.e. so that no dynamic memory allocation happens inside the loop?

I ultimately just want the max value as a float, so I am fine with a solution that just directly returns a float.
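
For completeness, the only allocation-free approach I can think of is to skip torch::max() entirely and scan the raw buffer myself (this assumes t is a contiguous, non-empty float32 tensor on the CPU), though I'd prefer a proper LibTorch way of doing this:

    // Manual fallback: compute the max directly from the raw data.
    // Assumes t is a contiguous, non-empty float32 CPU tensor.
    const float* data = t.data_ptr<float>();
    float max_val = data[0];
    for (int64_t i = 1; i < t.numel(); ++i) {
      if (data[i] > max_val) {
        max_val = data[i];
      }
    }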

For some other functions, the answer to this kind of question is “use the *_out() variant”. For example, torch::softmax() has a corresponding torch::softmax_out(). torch::max(), however, does not appear to follow the same idiom: at least, I couldn’t figure out how to make torch::max_out() work, and I also couldn’t find any documentation on its arguments or return value.
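
To show the idiom I mean, this is roughly how I would reuse a preallocated output with torch::softmax_out() (the argument order here is my understanding of the usual out-variant convention, with the output tensor first); I’m hoping for an equivalent for the max:

    // Preallocate the output once, then write into it on every iteration.
    torch::Tensor out = torch::empty_like(t);
    for (int i = 0; i < 5; ++i) {
      torch::softmax_out(out, t, /*dim=*/0);
      std::cout << out.data_ptr() << std::endl;  // same address every time
    }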