Issue with conversion from Numpy

I created a very simple autoencoder and have been trying to feed in data that I have converted from NumPy. I found that when I convert a NumPy float array using torch.from_numpy, it gives me a torch DoubleTensor.

When I feed this into my model it causes problems with the linear layer, giving the message:

torch.addmm received an invalid combination of arguments…

When I convert the torch DoubleTensor to type float, this error goes away.

Is this a bug, or is it a requirement that only float, and not double, can be fed into the linear layer?

Sorry if this is an obvious question, but I couldn’t see any notes in the documentation covering this, and I’m sure others will run into the same issue at some point.

Hi, I need more information, especially about the errors, but I think it is because nn.Linear's weight is float. Try converting your input to a FloatTensor before the forward pass.
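To illustrate the suggestion above, here is a minimal sketch (the array and layer sizes are made up for the example): NumPy defaults to float64, so torch.from_numpy yields a DoubleTensor, while nn.Linear's weights are float32; casting the input with .float() resolves the mismatch.

```python
import numpy as np
import torch
import torch.nn as nn

x = torch.from_numpy(np.random.rand(4, 8))  # dtype is torch.float64
layer = nn.Linear(8, 2)                     # weights are torch.float32

# Cast the input to float32 to match the layer's weight dtype.
out = layer(x.float())
print(out.dtype)  # torch.float32
```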


Thanks, I did get it to work by creating a new array:


which I think does the same thing as you are suggesting. My concern is that, whilst I can get it to work, others are likely to hit the same problem, since most NumPy float arrays seem to be 64-bit and hence convert to Double in PyTorch. I would therefore think it worthwhile flagging this as a potential issue in the documentation. It took me quite a while to track down the problem, and I would like to save others from wasting their time.

Many thanks


You could also do this: torch.from_numpy(my_array).type(torch.FloatTensor)
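For what it's worth, a quick check (with a made-up array) that this conversion produces a float32 tensor; .float() is an equivalent shorthand:

```python
import numpy as np
import torch

arr = np.zeros((3, 3))  # float64 by default
t1 = torch.from_numpy(arr).type(torch.FloatTensor)
t2 = torch.from_numpy(arr).float()  # equivalent shorthand
print(t1.dtype, t2.dtype)  # torch.float32 torch.float32
```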

Many thanks Vishwak, I wasn’t aware that you could control the type in this way when converting from NumPy; that looks like a good solution.



I think the error message is not good: the user does not call torch.addmm directly but nn.Linear, so nn.Linear should throw the error. Otherwise, PyTorch needs automatic casting.
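For reference, the mismatch still surfaces as a low-level RuntimeError in current PyTorch versions (the exact wording varies by version, and may no longer mention addmm); a minimal reproduction with made-up sizes:

```python
import numpy as np
import torch
import torch.nn as nn

layer = nn.Linear(8, 2)                     # float32 weights
x = torch.from_numpy(np.random.rand(1, 8))  # float64 input

try:
    layer(x)  # dtype mismatch between weights and input
except RuntimeError as e:
    print(type(e).__name__)  # RuntimeError
```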
