Beginner question about astype("float32") and float()

In the customized dataset file, in a multi-label context,
1/ Is there a reason for the use of `float()` in addition to `.astype("float32")` in this code?
2/ And why cast the labels to float instead of leaving them as a NumPy array of integers (one-hot encoded)?

labels = torch.from_numpy(

Thanks in advance

I don’t think there is a need to do so, but you can check the value of `labels.dtype` with and without `float()`.
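A quick sketch of that check, using a made-up one-hot label array for illustration: after `.astype("float32")`, `torch.from_numpy` already yields a `float32` tensor, so the extra `.float()` does not change the dtype.

```python
import numpy as np
import torch

# Hypothetical one-hot encoded labels, just for illustration
labels_np = np.array([[1, 0, 1], [0, 1, 0]])

without_float = torch.from_numpy(labels_np.astype("float32"))
with_float = torch.from_numpy(labels_np.astype("float32")).float()

print(without_float.dtype)  # torch.float32
print(with_float.dtype)     # torch.float32 -- the extra .float() is a no-op here
```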

Why convert to float? I don’t think there is any important reason; maybe the method they implement under the hood requires these values to be float (in a multi-label setting, `nn.BCEWithLogitsLoss` expects float targets, for example). Type promotion was updated in PyTorch 1.3, so I think this step can also be left out.
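To illustrate why float labels can matter in a multi-label context: `nn.BCEWithLogitsLoss`, the usual loss for multi-label classification, accepts float targets but rejects integer ones. A minimal sketch with made-up logits and labels:

```python
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
logits = torch.randn(2, 3)  # hypothetical model outputs for 2 samples, 3 labels

int_targets = torch.tensor([[1, 0, 1], [0, 1, 0]])  # int64 one-hot labels
float_targets = int_targets.float()                 # cast to float32

loss = criterion(logits, float_targets)  # works with float targets
print(loss.item())

try:
    criterion(logits, int_targets)  # integer targets raise an error
except RuntimeError as e:
    print("integer targets rejected:", e)
```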
