Nevergrad float32 training

I'm trying to use Nevergrad as a derivative-free method to train a neural network. When I run the optimization, it seems to automatically cast the model to float64, but I want it to stay in float32 (or float16). How would I do that?

Here’s the test snippet:
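(A minimal sketch of the kind of setup involved, not the exact code: the small MLP, random data, and `ng.optimizers.NGOpt` below are placeholder assumptions.)

```python
import nevergrad as ng
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))
x = torch.randn(32, 4, dtype=torch.float32)
y = torch.randn(32, 1, dtype=torch.float32)
loss_fn = nn.MSELoss()

n_params = sum(p.numel() for p in model.parameters())

def set_weights(flat: np.ndarray) -> None:
    # Nevergrad hands back float64 NumPy arrays; cast explicitly to
    # float32 so the model's dtype is left untouched.
    t = torch.from_numpy(flat).to(dtype=torch.float32)
    offset = 0
    with torch.no_grad():
        for p in model.parameters():
            p.copy_(t[offset:offset + p.numel()].view_as(p))
            offset += p.numel()

def evaluate(flat: np.ndarray) -> float:
    set_weights(flat)
    with torch.no_grad():
        return float(loss_fn(model(x), y))

opt = ng.optimizers.NGOpt(parametrization=ng.p.Array(shape=(n_params,)), budget=200)
recommendation = opt.minimize(evaluate)
print("final loss:", evaluate(recommendation.value))
```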

Well, I'm a dumb dumb. Apparently it's just because of the `.to(dtype=torch.float64)` I copied from other places. After changing them all to `.to(dtype=torch.float32)`, and making sure every `torch.from_numpy` call also gets one (NumPy arrays are float64 by default, so `torch.from_numpy` returns float64 tensors), it works.
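For anyone hitting the same thing, a quick illustration of the dtype behavior (plain torch/numpy, nothing Nevergrad-specific):

```python
import numpy as np
import torch

a = np.zeros(3)              # NumPy defaults to float64
t = torch.from_numpy(a)      # the tensor inherits float64
print(t.dtype)               # torch.float64

t32 = torch.from_numpy(a).to(dtype=torch.float32)
print(t32.dtype)             # torch.float32
```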