The neural network is defined as:

import torch
import torch.nn as nn
from torch.autograd import Variable

class networka(nn.Module):
    def __init__(self):
        super(networka, self).__init__()
        self.network = nn.Sequential(
            nn.Linear(19, 11),
            nn.ReLU(True),
            nn.Linear(11, 3),
            nn.Softmax(dim=1)
        )

    def forward(self, x):
        x = self.network(x)
        return x

A = networka()
This code works:

z = Variable(torch.randn(19, 19))
A(z)
But this doesn't:

import numpy as np

X_data = np.random.rand(19, 19)
X = Variable(torch.from_numpy(X_data))
A(X)
Could anyone tell me why?
You should get an error message:
RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #2 'mat1' in call to _th_addmm
which points to a type mismatch in an operation.
NumPy uses float64 by default, while PyTorch uses float32.
You could either transform the data to float32 via X = X.float(), which would be the usual use case, or alternatively transform the model parameters to float64 via model = model.double().
Also, Variables are deprecated since PyTorch 0.4.0, so don't use them.
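To make the dtype mismatch concrete, here is a minimal sketch (using a plain nn.Linear as a stand-in for the model above) showing both fixes:

```python
import numpy as np
import torch
import torch.nn as nn

model = nn.Linear(19, 3)  # stand-in for the model above

# torch.from_numpy keeps NumPy's default dtype, so X is float64
X = torch.from_numpy(np.random.rand(19, 19))

# Option 1 (usual): cast the input tensor to float32
out1 = model(X.float())        # out1.dtype is torch.float32

# Option 2: cast the model parameters to float64
model = model.double()
out2 = model(X)                # out2.dtype is torch.float64
```

Note that calling model(X) directly without either cast raises the RuntimeError above, since the float32 weights cannot be multiplied with a float64 input.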
Thank you very much for your help!