How does torch.rand() sample from uniform distribution?

I am writing an ML framework in Rust and I would like it to produce the same random numbers as PyTorch.

Specifically, I would like to produce the same values as torch.rand(), provided that both the PyTorch generator and my RNG are seeded with the same value. I have verified that my RNG produces the same raw values as PyTorch’s internal CPU RNG (CPUGeneratorImpl).

From what I can tell, torch.rand() uses the generator to sample from a uniform distribution in the range [0.0, 1.0). When I tried finding the implementation of that function call, though, I got lost in dispatchers (I have not been able to build PyTorch from source, so I wasn’t able to trace with the debugger).

I have gotten my RNG, NumPy’s RNG, Python’s built-in RNG, and a Mersenne Twister RNG in Rust to all produce the same values when seeded appropriately and sampling uniformly from [0.0, 1.0), so it seems like there’s something I’m missing when it comes to PyTorch.
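For reference, the conversion those other generators agree on is the standard 53-bit one from the MT19937 reference code (genrand_res53), which CPython’s random.random() uses: take two consecutive 32-bit outputs and combine their top 26 and 27 bits. A Rust sketch (the function name and the idea of passing the raw outputs in as parameters are mine):

```rust
/// 53-bit double conversion from the MT19937 reference implementation
/// (genrand_res53), given two consecutive raw 32-bit engine outputs.
fn uniform_f64(a: u32, b: u32) -> f64 {
    let hi = (a >> 5) as f64; // top 26 bits of the first output
    let lo = (b >> 6) as f64; // top 27 bits of the second output
    // 67108864 = 2^26, 9007199254740992 = 2^53; the result is in [0, 1)
    (hi * 67108864.0 + lo) * (1.0 / 9007199254740992.0)
}
```

This consumes two engine outputs per double, which is one reason a float path that consumes a single 32-bit output (as PyTorch’s does below) diverges from it immediately.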

After poking around in the commit history, I found this code in an old version (a3442f62bc2d01b64e554e925e9e0efa89c5e4c3) of PyTorch:

// floats have 23 mantissa bits; with the implicit leading bit,
// that is 24 bits of precision, hence the 24-bit mask
static uint32_t FLOAT_MASK = (1 << 24) - 1;
static float FLOAT_DIVISOR = 1.0f / (1 << 24);

/* generates a random number on the [0,1) float interval */
static float uniform_float(THGenerator *_generator)
{
  uint32_t x = (uint32_t)THRandom_random(_generator);
  return (x & FLOAT_MASK) * FLOAT_DIVISOR;
}

I don’t know whether this exact code still exists in the current version of PyTorch, but it does let me produce the same values as torch.rand().
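In case it helps anyone trying the same thing, the transformation above is easy to port. This is a sketch in Rust under the assumption that `raw` is one 32-bit output of an engine already verified to match CPUGeneratorImpl (the function and parameter names are mine):

```rust
/// Mask-and-scale conversion from PyTorch's legacy THRandom code:
/// keep 24 random bits (the full precision of an f32) and scale by 2^-24.
const FLOAT_MASK: u32 = (1 << 24) - 1;
const FLOAT_DIVISOR: f32 = 1.0 / (1u32 << 24) as f32;

/// `raw` stands in for one 32-bit output of the seeded engine
/// (what THRandom_random returns in the C code above).
fn uniform_float(raw: u32) -> f32 {
    (raw & FLOAT_MASK) as f32 * FLOAT_DIVISOR
}
```

Masking to 24 bits means every masked value is exactly representable in an f32, and multiplying by the exact power of two 2^-24 introduces no rounding, so the result lands on an evenly spaced grid in [0, 1).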