Initializing a tensor causes NaN

hi there,
I just initialized a new FloatTensor, but I receive NaN values.

This doesn't always happen; it looks like a random event. Has anyone run into the same thing? It's confused me for a long time.

You didn’t tell PyTorch what to put in the tensor, so it just allocated the memory without altering its contents. Whatever bytes happen to be in that memory are interpreted as floats, which is why you sometimes see NaN and sometimes don’t.
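As a minimal CPU-only sketch of the difference (using `torch.empty`, the modern equivalent of the bare `FloatTensor(n)` constructor):

```python
import torch

# torch.empty, like the legacy FloatTensor(n) constructor, allocates
# memory without initializing it; the contents are whatever bytes were
# already there, which can occasionally decode to NaN or inf.
x = torch.empty(5)   # arbitrary, unpredictable contents
print(x)

# Explicitly initializing the tensor avoids the problem entirely.
y = torch.zeros(5)   # guaranteed all zeros
print(y)
```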

To initialise a new FloatTensor full of zeros do this…

torch.cuda.FloatTensor(10).zero_()
To initialise a new FloatTensor full of random values uniformly drawn from the interval [1.0, 2.5) do this…

torch.cuda.FloatTensor(10).uniform_(1.0, 2.5)

Other options here
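For reference, a few of the other in-place initializers that work the same way (shown on a CPU FloatTensor; the `torch.cuda.FloatTensor` variant behaves identically):

```python
import torch

t = torch.FloatTensor(10)      # CPU variant of the constructor above

t.zero_()                      # fill with zeros
t.fill_(3.14)                  # fill with a constant value
t.normal_(mean=0.0, std=1.0)   # fill with Gaussian samples
t.uniform_(1.0, 2.5)           # fill with uniform samples in [1.0, 2.5)
```

Every method ending in an underscore modifies the tensor in place, so you can chain one directly onto the constructor as in the answer above.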