PyTorch DoubleTensor operations are slow, while FloatTensor autograd precision is low

Well, if you need 1e-8 precision you don't really have a choice; you need to work with doubles, I'm afraid.
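
As a rough illustration of the precision gap (a minimal sketch with a hypothetical function `f`, not taken from the original thread), the same autograd computation run in float32 and float64 shows gradient errors around 1e-7 versus roughly 1e-16; switching a tensor to double is a one-line change, or you can call `torch.set_default_dtype(torch.float64)` once at startup:

```python
import torch

def f(x):
    # hypothetical example function with a known analytic gradient (cos)
    return torch.sin(x).sum()

# float32: gradient error is typically on the order of 1e-7
x32 = torch.linspace(0.0, 1.0, 100, dtype=torch.float32, requires_grad=True)
f(x32).backward()
err32 = (x32.grad - torch.cos(x32.detach())).abs().max()

# float64: identical computation, error drops to roughly machine epsilon (~1e-16)
x64 = torch.linspace(0.0, 1.0, 100, dtype=torch.float64, requires_grad=True)
f(x64).backward()
err64 = (x64.grad - torch.cos(x64.detach())).abs().max()

print(f"max grad error float32: {err32:.2e}")
print(f"max grad error float64: {err64:.2e}")
```

The trade-off is exactly the one in the question title: float64 kernels are noticeably slower (especially on GPUs, where FP64 throughput is often a fraction of FP32), so doubles are worth paying for only when you genuinely need that last factor of ~1e-8 in accuracy.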