torch.bfloat16: how does it work in a bf16 model?

I want to experiment with fp32, int8, and bf16 models. How can I get a bf16 model?

One way I found is to convert the fp32 model to bf16 with torch.bfloat16:

I’m wondering whether the fp32 values are carried over to bf16 as-is, and what specific conversion rule is applied. Also, can I describe this model as bf16 in a paper (without fine-tuning)?

```python
import torch
from torchvision import models

model_squeezenet = models.squeezenet1_1(pretrained=True)
model_squeezenet.eval()

# Convert to BF16 and move to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_squeezenet = model_squeezenet.to(device).to(torch.bfloat16)
```
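As a sanity check after such a cast, you can verify that every floating-point parameter and buffer ended up in bfloat16. This sketch uses a small stand-in model so it runs without downloading weights; the same check applies to SqueezeNet:

```python
import torch
import torch.nn as nn

# Small stand-in model (assumption: behaves the same as SqueezeNet for casting)
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model = model.to(torch.bfloat16)

# .to(dtype) converts only floating-point tensors; integer buffers such as
# BatchNorm's num_batches_tracked keep their original dtype.
for name, p in model.named_parameters():
    assert p.dtype == torch.bfloat16, name
for name, b in model.named_buffers():
    if b.is_floating_point():
        assert b.dtype == torch.bfloat16, name
print("all floating-point tensors are bfloat16")
```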

> I’m wondering whether the fp32 values are carried over to bf16 as-is, and what specific conversion rule is applied.

When you convert a model from float32 to bfloat16 with model.to(torch.bfloat16), every floating-point parameter and buffer is cast to bfloat16, with round-to-nearest-even. If you are interested in the details, the code is here: pytorch/c10/util/BFloat16-inl.h at d3fc13a9dd186ceb8d1b56b0968a41686ea645cd · pytorch/pytorch · GitHub.
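To illustrate that rounding: bf16 keeps fp32's 8 exponent bits but shortens the mantissa from 23 bits to 7, so the cast keeps the top 16 bits of the fp32 representation and rounds to nearest even on the discarded bits. A minimal sketch:

```python
import torch

pi32 = torch.tensor(3.14159265, dtype=torch.float32)
pi16 = pi32.to(torch.bfloat16)

# fp32 pi is 0x40490FDB; the discarded low 16 bits (0x0FDB) are below
# the halfway point, so the value rounds down to 0x4049 == 3.140625.
print(pi16.item())  # 3.140625

# Values already representable with a 7-bit mantissa pass through exactly.
print(torch.tensor(1.5, dtype=torch.float32).to(torch.bfloat16).item())  # 1.5
```

So the fp32 values are generally not kept bit-exact; each one is rounded to the nearest representable bf16 value.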