When to convert a Python float to a PyTorch tensor

Is there a benefit/downside to converting a Python float to a PyTorch tensor? Does it make a difference for speed and/or device placement?

a = torch.Tensor([1.])

# option A 
b = a + 2. 

# option B
c = a + torch.Tensor([2.])

Try profiling it. Whether it helps really depends on whether you're going to do more PyTorch operations on that value. A Python float is wrapped into a scalar automatically when it appears in a tensor expression, so option A avoids constructing an explicit tensor on every call. Option B builds a new CPU tensor each time, which also matters for device placement: adding a 1-element CPU tensor like torch.Tensor([2.]) to a CUDA tensor raises a device-mismatch error, whereas a plain Python float works with tensors on any device.
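A minimal sketch of such a profile, using `timeit` on CPU tensors (the iteration count and the use of the modern `torch.tensor` constructor in the timing lambdas are my choices, not from the thread):

```python
import timeit

import torch

a = torch.tensor([1.])

# Option A: add a Python float; PyTorch wraps the scalar internally.
t_float = timeit.timeit(lambda: a + 2., number=10_000)

# Option B: build an explicit tensor, paying a tensor construction per call.
t_tensor = timeit.timeit(lambda: a + torch.tensor([2.]), number=10_000)

print(f"float:  {t_float:.4f}s")
print(f"tensor: {t_tensor:.4f}s")

# Both options produce the same result.
assert torch.equal(a + 2., a + torch.tensor([2.]))
```

On my understanding, option B is typically a bit slower per call because of the extra allocation, but the difference only matters inside tight loops; if the value is reused across many operations, creating the tensor once up front amortizes that cost.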