BFloat16 on NVIDIA V100 GPU

is_bf16_supported() checks whether the CUDA toolkit used to build PyTorch supports BF16 and whether the device's compute capability is Ampere or newer. However, even if these checks fail, it falls back to attempting to allocate a BF16 tensor and reports support based on whether that allocation succeeds.
IMO, creating a tensor alone is not sufficient to claim BF16 is supported, since math operations can still fail (and do in the case of Volta devices).
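To illustrate the distinction, here is a minimal sketch of what a stricter check could look like. This is not PyTorch's actual implementation; the function name and parameters are hypothetical, and it assumes the usual thresholds (CUDA 11+ toolkit, Ampere compute capability 8.x or newer) rather than falling back to a tensor-allocation probe:

```python
# Hypothetical stricter BF16 check (illustrative, not PyTorch's API).
# Assumption: full BF16 math support (not just storage) requires an
# Ampere-or-newer device (compute capability major >= 8) and a CUDA 11+
# toolkit.

def bf16_fully_supported(cc_major: int, cuda_major: int) -> bool:
    """Return True only when BF16 math ops should work, not merely
    BF16 tensor creation.

    cc_major:   device compute capability major version
                (7 for Volta/V100, 8 for Ampere/A100)
    cuda_major: major version of the CUDA toolkit PyTorch was built with
    """
    return cuda_major >= 11 and cc_major >= 8

# Volta (V100, sm_70): BF16 tensor creation succeeds, but math ops can
# fail, so a strict check should report "not supported".
print(bf16_fully_supported(cc_major=7, cuda_major=11))  # V100 -> False
print(bf16_fully_supported(cc_major=8, cuda_major=11))  # A100 -> True
```

In real code the capability could be obtained with `torch.cuda.get_device_capability()`, which returns a `(major, minor)` tuple for the current device.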