Since we get the same bernoulli_() result after uniform_() in both cases, the RNG state appears to advance the same way regardless of whether we sample into float32 or float16. However, as the output below shows, the sampled values themselves differ. What I want is for the values drawn by uniform_() or normal_() into a float16 tensor to match the values drawn into a float32 tensor, without sampling in float32 and then type-casting.
Any ideas?
import torch
torch.manual_seed(42)
print(torch.empty(10, dtype=torch.float32).uniform_())
print(torch.empty(10, dtype=torch.bool).bernoulli_())
torch.manual_seed(42)
print(torch.empty(10, dtype=torch.float16).uniform_())
print(torch.empty(10, dtype=torch.bool).bernoulli_())
tensor([0.8823, 0.9150, 0.3829, 0.9593, 0.3904, 0.6009, 0.2566, 0.7936, 0.9408,
0.1332])
tensor([ True, False, False, False, False, True, False, False, False, False])
tensor([0.5498, 0.7124, 0.4199, 0.6318, 0.5518, 0.5347, 0.8418, 0.5098, 0.7998,
0.0591], dtype=torch.float16)
tensor([ True, False, False, False, False, True, False, False, False, False])
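For comparison, a minimal sketch of the usual workaround (which the question explicitly wants to avoid, since it goes through a cast): drawing the samples in float32 and then converting the tensor to float16 reproduces the float32 values up to float16 rounding, whereas sampling directly into a float16 tensor does not.

```python
import torch

torch.manual_seed(42)
f32 = torch.empty(10, dtype=torch.float32).uniform_()

torch.manual_seed(42)
# Sample in float32 first, then cast down: these values match f32
# (up to float16 rounding), unlike uniform_() called on a float16 tensor.
f16_via_cast = torch.empty(10, dtype=torch.float32).uniform_().half()

print(f32)
print(f16_via_cast)
print(torch.equal(f32.half(), f16_via_cast))  # same draws, same rounding
```

Whether the values can be made to match without this cast depends on PyTorch's internal per-dtype sampling kernels, which consume the generator's random bits differently for float16 and float32.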