Intuitively, DoubleTensor offers more accuracy than FloatTensor, but FloatTensor requires less computation. Are there any principles or best practices for choosing the right dtype for deep learning?
If you are on a GPU and FloatTensor works, use that. If it doesn’t (e.g. large covariance matrices arising in Gaussian Processes can be numerically tricky), try double, potentially only for the sensitive operations. Using double on a GPU means a large performance hit.
If you are on an (x86) CPU, the speed difference is reportedly much less pronounced, to the point of being negligible.
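A minimal sketch of the "selectively double" idea, assuming standard PyTorch: keep the model in float32 and upcast only a numerically sensitive step, then cast back.

```python
import torch

# Default single precision: fast on GPU, usually accurate enough.
x = torch.randn(4, 4)  # dtype=torch.float32 by default

# Selectively upcast a numerically sensitive computation, e.g. inverting
# a covariance-like matrix, then cast the result back to float32.
cov = x @ x.T + 1e-6 * torch.eye(4)          # float32
cov_inv = torch.linalg.inv(cov.double())      # computed in float64
cov_inv = cov_inv.float()                     # back to float32

print(cov.dtype)      # torch.float32
print(cov_inv.dtype)  # torch.float32
```

This way only the ill-conditioned operation pays the cost of double precision, while the rest of the pipeline keeps float32 speed.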