Avoid the non-writeable warning when copying a NumPy array into a PyTorch tensor

Hey,

I’m trying to optimize my data loading pipeline, and I’ve run into this warning:

UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at …\torch\csrc\utils\tensor_numpy.cpp:180.)

My specific use case is that I load a block of bytes from disk and want to get it into a PyTorch tensor. Here is a simplified example:

import numpy as np
import torch

# just write some dummy data: 16 bytes = 4 float32 values
with open("data.txt", "wb") as f:
    f.write(b"test" * 4)

# I want to get the file data into this tensor in only a single copy
result = torch.empty(1024)

with open("data.txt", "rb") as f:
    buffer: bytes = f.read()
    # bytes objects are immutable, so this array is non-writeable
    array: np.ndarray = np.frombuffer(buffer, dtype=np.float32)
    result[0:4] = torch.from_numpy(array)  # <- warning fires here

The warning triggers on the last line of the example. Of course it has a point: writing through a tensor that wraps a non-writeable array is dangerous in general. But in this specific case the wrapped tensor is only used to copy data into a real tensor and is then immediately discarded.
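
The warning itself suggests copying the array first, and that does silence it, but it defeats the single-copy goal: the data then gets duplicated twice, once by .copy() and once by the assignment into result:

# no warning, but the data is now copied twice
array = np.frombuffer(buffer, dtype=np.float32).copy()
result[0:4] = torch.from_numpy(array)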

Just removing torch.from_numpy doesn't work either. With the wrapper gone, the slice assignment becomes a direct NumPy-to-tensor copy:
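
result[0:4] = array

which fails with: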

TypeError: can't assign a numpy.ndarray to a torch.FloatTensor

Is there a way to copy data directly from NumPy into a PyTorch tensor without triggering the original warning? I know I could disable the warning, but that feels unsafe: I might make this mistake for real somewhere else in the codebase.
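
For clarity, this is the kind of blanket suppression I mean, which would also hide the warning in places where it flags a real bug:

import warnings

# silence this specific UserWarning everywhere in the program
warnings.filterwarnings(
    "ignore",
    message="The given NumPy array is not writeable",
)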

Thanks!