I tend to use .type() a lot when I write my own Dataset class, and I could use some tips.

For example, inputs should be of type FloatTensor, but for some reason when they’re loaded as numpy arrays I get DoubleTensors. Apart from this, I use .view a lot; for example, to pass labels to the loss I convert a (4, 1) tensor to shape (4,), i.e. use .view(4). Would these cause performance issues? I feel like it’s bad practice to use such calls over and over again. Given how PyTorch is designed, I think I should be able to avoid this, since everything else seems so elegant, so I’m probably missing subtle reasons. If anyone could help me, please reply!
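
Roughly, the pattern looks like this (a minimal sketch; `MyDataset` and the array shapes are made up for illustration):

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, features, labels):
        # plain numpy arrays; numpy defaults to float64,
        # so the tensors below come out as DoubleTensors
        self.features = features          # shape (N, num_features)
        self.labels = labels              # shape (N, 1)

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        # the .type() calls I keep repeating everywhere:
        x = torch.from_numpy(self.features[idx]).type(torch.FloatTensor)
        y = torch.from_numpy(self.labels[idx]).type(torch.FloatTensor)
        return x, y

# in the training loop, with batch_size=4 the labels arrive as (4, 1),
# hence labels.view(4) before handing them to the loss
```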

You could load your numpy arrays as dtype=np.float32, if that’s possible. This would make them torch.FloatTensors automatically when you convert them to tensors.
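
For instance, assuming you convert with `torch.from_numpy`, the numpy dtype carries straight through:

```python
import numpy as np
import torch

arr = np.random.rand(4, 1)              # numpy defaults to float64
print(torch.from_numpy(arr).dtype)      # torch.float64 -> a DoubleTensor

arr32 = arr.astype(np.float32)          # convert once, up front
print(torch.from_numpy(arr32).dtype)    # torch.float32 -> a FloatTensor
```

Converting once when you load the data is also cheaper than calling `.type()` on every sample in `__getitem__`, since that conversion copies the data each time.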

Also, .view is a cheap operation: it just returns a new “view” of the tensor without copying any data, so you should be fine using it.
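
You can see the sharing directly: the view and the original point at the same storage, so writing through one is visible in the other.

```python
import torch

labels = torch.zeros(4, 1)
flat = labels.view(4)                        # reshape (4, 1) -> (4,)

# no copy: both tensors share the same underlying storage
print(labels.data_ptr() == flat.data_ptr())  # True

flat[0] = 1.0
print(labels[0, 0])                          # tensor(1.) -- visible in both
```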