What is the current method for serializing FP16 to a Caffe2 Tensor?

A question about loading float16 tensors in Caffe2: the current code is ambiguous about whether the supported method is to serialize through int32_data or through byte_data.

In https://github.com/pytorch/pytorch/blob/a228a95b941218018bdb5fcf785a64522352f266/caffe2/core/blob_serialization.cc#L19 the flag caffe2_serialize_fp16_as_bytes defaults to false, and following this through the codebase suggests that the default behavior is to serialize float16 values as unsigned shorts in the tensor’s int32_data.

However, line 518 of the same file suggests that packing into the int32_data field is backward-compatibility behavior: https://github.com/pytorch/pytorch/blob/a228a95b941218018bdb5fcf785a64522352f266/caffe2/core/blob_serialization.cc#L518
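For concreteness, here is my understanding of what the two schemes amount to, sketched with numpy rather than the actual Caffe2 serializer (the field names int32_data and byte_data come from the TensorProto message; everything else is illustrative):

```python
import numpy as np

# Example fp16 values to serialize
vals = np.array([1.5, -2.0, 0.25], dtype=np.float16)

# Scheme 1 (int32_data): reinterpret each fp16's bits as a uint16
# and store each as one entry of the repeated int32 field.
int32_data = vals.view(np.uint16).astype(np.int32).tolist()

# Scheme 2 (byte_data): dump the raw little-endian bytes,
# two bytes per fp16 element.
byte_data = vals.tobytes()

# Both round-trip back to the original fp16 values.
back_from_ints = np.array(int32_data, dtype=np.uint16).view(np.float16)
back_from_bytes = np.frombuffer(byte_data, dtype=np.float16)
assert np.array_equal(back_from_ints, vals)
assert np.array_equal(back_from_bytes, vals)
```

If this sketch is right, the byte_data route stores 2 bytes per element while the int32_data route spends a varint per element, which I assume is the motivation for the flag.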

So I’m a bit confused: which is the current default behavior, packing float16s into byte_data or into int32_data? And is there a way to choose between them at build time?

Nate Segerlind