Is it safe to expose Tensor.data_ptr() to the outside world?

I’m working on a library that processes a Tensor whose values need to come from the external world. The tensor is fixed-size for the whole lifetime of the program. Something like this:

In my library:
- A singleton fixed-size tensor x, initialized when the program starts.
- Some functions that process x periodically.

External users:
- Fill in / update x every now and then, then call the processing functions in my library to get the processed results.

Now, due to API constraints, I can only expose a raw data pointer to the external users. In this case, is it safe to expose static_cast<float*>(x.data_ptr()) to the outside world and let the downstream client modify the tensor values through it? Assume the tensor was initialized with a fixed size (and dtype=float) and is not altered anywhere in my own library.
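For concreteness, here is a minimal sketch of the setup (the names buffer, get_external_buffer, and process are made up for illustration; this assumes the libtorch C++ frontend):

```cpp
#include <torch/torch.h>

// Singleton fixed-size buffer, alive for the whole lifetime of the program.
torch::Tensor& buffer() {
    static torch::Tensor x = torch::zeros({1024}, torch::kFloat32);
    return x;
}

// Exposed to external users: a raw pointer into the tensor's storage.
// data_ptr<float>() also checks that the dtype really is float, unlike
// static_cast<float*>(x.data_ptr()).
extern "C" float* get_external_buffer() {
    return buffer().data_ptr<float>();
}

// Internal processing, called after users have written through the pointer.
torch::Tensor process() {
    return buffer().sum();
}
```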

If this is bad practice, what should I do instead? Is there another API I could expose that would be more idiomatic? (I know that sharing data by exposing a mutable memory address is not good in general, but let’s say we really want to do it for performance reasons.)

As long as you are confident that the lifetime of the Tensor matches its usage, I don’t see an issue here, as something similar has to be done when dispatching to operators in external libraries (e.g. cuDNN).
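The main thing to guard against is anything that reallocates the tensor’s storage, since that would silently invalidate the pointer you handed out. A short sketch of the hazard (illustrative only):

```cpp
#include <torch/torch.h>

int main() {
    torch::Tensor x = torch::zeros({1024}, torch::kFloat32);
    float* p = x.data_ptr<float>();

    // Fine: writes through p and in-place ops on x use the same storage,
    // so p stays valid.
    p[0] = 1.0f;
    x.zero_();

    // Dangerous: rebinding x (or resizing it beyond its current storage)
    // allocates new storage, and p now dangles.
    x = torch::zeros({2048}, torch::kFloat32);
    // Any use of p from here on is undefined behavior.
}
```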

However, if the mutation is relevant for things like calling .backward() later, I would check that the versioning semantics are correct for your application: Autograd mechanics — PyTorch 1.12 documentation.
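In particular, writes through a raw pointer bypass the dispatcher, so they do not bump the tensor’s version counter. If the buffer ever participates in autograd, backward can then silently use the mutated values instead of raising the usual in-place modification error. A small sketch of that failure mode (again assuming the C++ frontend):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
    auto x = torch::ones({3}, torch::requires_grad());
    auto y = (x * x).sum();  // backward needs the saved value of x

    // Out-of-band write: the version counter is not bumped, so autograd
    // does not raise the usual "modified by an inplace operation" error...
    x.data_ptr<float>()[0] = 100.0f;

    // ...and the gradient is silently computed from the mutated x:
    // roughly [200, 2, 2] instead of [2, 2, 2].
    y.backward();
    std::cout << x.grad() << std::endl;
}
```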


Thank you for the explanation! That clears up my concerns.