[solved] Received invalid combination of arguments, fix?

So I’m building a sentiment analyzer, but I’m having problems training it.
Here’s my sample neural network:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from tqdm import tqdm

train_vecs_w2v = np.concatenate([word_vector(z, size) for z in tqdm(map(lambda x: x.words, X_train))])

class net(nn.Module):
	def __init__(self):
		super(net, self).__init__()
		self.l1 = nn.Linear(200, 32)
		self.relu = nn.ReLU()
		self.l2 = nn.Linear(32, 1)

	def forward(self, x):
		x = self.relu(self.l1(x))
		x = self.l2(x)
		x = F.sigmoid(x)
		return x

net = net()

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(net.parameters(), lr=learning_rate)
inputs = Variable(torch.from_numpy(train_vecs_w2v))
targets = Variable(torch.from_numpy(Y_train))

for epoch in range(num_epochs):
	
	optimizer.zero_grad()
	outputs = net(inputs)
	loss = criterion(outputs , targets)
	loss.backward()
	optimizer.step()

	if (epoch + 1) % 5 == 0:
		print('Epoch [%d/%d], Loss: %.4f' % (epoch + 1, num_epochs, loss.data[0]))

inputs has shape 959646 x 200, and targets has shape 959646 x 1.

I’m getting this error:

TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.DoubleTensor, torch.FloatTensor), but expected one of:
 * (torch.DoubleTensor mat1, torch.DoubleTensor mat2)
 * (torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
 * (float beta, torch.DoubleTensor mat1, torch.DoubleTensor mat2)
 * (float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2)
 * (float beta, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
 * (float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
 * (float beta, float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2)
 * (float beta, float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)

The inputs variable is already wrapped in Variable(), so I can’t figure out what’s wrong.
Thanks.

Fixed: I had to call .float() on the tensor.
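
For anyone hitting the same thing, here is a minimal sketch of the fix, using the variable names from the snippet above (casting on the numpy side with astype works just as well):

inputs = Variable(torch.from_numpy(train_vecs_w2v).float())
# equivalent: cast in numpy before converting
inputs = Variable(torch.from_numpy(train_vecs_w2v.astype(np.float32)))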

I still can’t understand why this happens; my numpy array was already float?

NumPy’s default dtype is float64, a.k.a. double, while PyTorch’s layers default to float32, a.k.a. float. torch.from_numpy keeps the numpy dtype, so your inputs came through as a DoubleTensor and were multiplied against the FloatTensor weights of nn.Linear, which is exactly the mismatch in the error message.
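
A quick self-contained demo of that behaviour:

import numpy as np
import torch

a = np.ones((2, 3))      # numpy allocates float64 by default
t = torch.from_numpy(a)  # from_numpy keeps the dtype
print(t.type())          # torch.DoubleTensor
print(t.float().type())  # torch.FloatTensor, which matches nn.Linear's weights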

Best regards

Thomas
