How to initialize tensors such that memory is allocated?

Hello everyone,
I am currently implementing a replay buffer and I want to preallocate its tensors. One reason is that I want my code to crash early if there is not enough memory for the complete replay buffer. Everything runs strictly on the CPU.
However, how can I allocate these tensors so that their full memory size is actually used? I tried torch.zeros(*my_shape, dtype=torch.float) and torch.rand(*my_shape, dtype=torch.float), and the latter seems to use a lot more memory. Is torch doing some smart memory-saving tricks for the former?
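For reference, here is a minimal sketch of how I compared the two. It assumes psutil is installed for reading the process RSS, and my_shape is just a placeholder for the real buffer shape:

```python
import psutil
import torch

def rss_mb() -> float:
    """Resident set size of this process in MiB."""
    return psutil.Process().memory_info().rss / 2**20

my_shape = (1_000_000, 64)  # placeholder shape, ~244 MiB of float32

before = rss_mb()
zeros = torch.zeros(*my_shape, dtype=torch.float)
print(f"RSS growth after torch.zeros: {rss_mb() - before:.1f} MiB")

before = rss_mb()
rand = torch.rand(*my_shape, dtype=torch.float)
print(f"RSS growth after torch.rand:  {rss_mb() - before:.1f} MiB")
```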

Best regards

It seems like torch.rand needs additional memory while generating the random numbers, but afterwards ends up using about the same amount of memory as torch.zeros.
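One way to force the memory to be committed up front is to write to the whole tensor once. This is only a sketch, under the assumption that the difference comes from the allocator (e.g. calloc on Linux) returning zero pages that the OS only backs with physical memory on first write:

```python
import torch

my_shape = (1_000_000, 64)  # placeholder shape

# torch.zeros may be backed by zero pages that are only physically
# committed on first write (assumption about calloc/overcommit behavior),
# so touch every element once to force allocation up front.
buffer = torch.zeros(*my_shape, dtype=torch.float)
buffer.fill_(1.0)  # touches every page; fails now if memory is short
buffer.zero_()     # restore the all-zeros initial state
```

Once the pages have been written to, they stay resident, so the process should fail here rather than later when the replay buffer fills up.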