Datatype of Bias (FP32?) in FP16 models

In INT8 models, the bias term is INT32. In a similar vein, in FP16 models, does the bias term need higher precision (FP32)?

Right now the bias is FP16 as well, but it’s likely we will move it to higher precision too.
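The usual reasoning is that the bias should match the *accumulator* precision, not the weight/activation precision. A minimal NumPy sketch of that idea (illustrative only, not tied to any particular runtime or kernel):

```python
import numpy as np

rng = np.random.default_rng(0)

# INT8 case: int8 x int8 products are accumulated in int32,
# so an int32 bias can be added to the accumulator exactly.
x_q = rng.integers(-128, 128, size=64, dtype=np.int8)
w_q = rng.integers(-128, 128, size=64, dtype=np.int8)
acc_i32 = np.dot(x_q.astype(np.int32), w_q.astype(np.int32))
out_i32 = acc_i32 + np.int32(5000)    # exact integer addition

# FP16 case: many kernels accumulate fp16 products in fp32,
# so an fp32 bias is added before the single final round to fp16.
x_h = rng.standard_normal(64).astype(np.float16)
w_h = rng.standard_normal(64).astype(np.float16)
acc_f32 = np.dot(x_h.astype(np.float32), w_h.astype(np.float32))
out = np.float16(acc_f32 + np.float32(0.1))

# Storing the bias itself in fp16 already introduces extra rounding:
print(float(np.float16(0.1)))  # not exactly representable in fp16
print(float(np.float32(0.1)))  # much closer to 0.1
```

An FP32 bias stored alongside FP16 weights costs almost nothing (biases are tiny relative to weight tensors) while avoiding the extra rounding step.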