[solved] Loss.backward() RuntimeError: ConcatBackward returned a gradient different than None but the corresponding input was not a Variable

I am not familiar with PyTorch, so it is likely that I've made some mistake in my code. I'm sure that I've never modified Variable.data directly, though…

Here is my error:

Traceback (most recent call last):
  File "main.py", line 65, in <module>
    optimizer.step(closure)
  File "/usr/lib/python3.6/site-packages/torch/optim/rmsprop.py", line 46, in step
    loss = closure()
  File "main.py", line 62, in closure
    loss.backward()
  File "/usr/lib/python3.6/site-packages/torch/autograd/variable.py", line 156, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/usr/lib/python3.6/site-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
RuntimeError: function ConcatBackward returned a gradient different than None at position 3, but the corresponding forward input was not a Variable

and here is my model:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

from fuckio import TeamIO
from dataset import LoadDataSet

#batch_size = 32

teams = TeamIO.LoadDat('data/GoodTeamData.csv')
teams = [np.squeeze(np.asarray(np.mat(team))) for team in teams]
teams = [torch.FloatTensor(team) for team in teams]

train = LoadDataSet('data/GoodTrainData.csv')
test = LoadDataSet('data/GoodTestData.csv')


class mmodel(torch.nn.Module):
    def __init__(self):
        super(mmodel, self).__init__()
        self.linear1 = nn.Linear(21, 4)
        self.finalConv1 = nn.Linear(24, 4)
        self.finalConv2 = nn.Linear(4, 1)

    def TeamFeature(self, nteam):
        global teams
        team = teams[nteam]
        # embed each player's 21 raw features into 4 dims, stack along dim 1 -> shape (4, n_players)
        res = [torch.unsqueeze(self.linear1(Variable(p, requires_grad=True)), 1) for p in team]
        res = torch.cat(res, 1)
        area_ave = torch.sum(res, 1) / res.data.shape[1]  # mean over players, per dim
        area_var = torch.var(res, 1)                      # variance over players, per dim
        area_n = res.data.shape[0]  # I've tried to remove this variable, but it didn't work at all...
        o = torch.cat((area_ave, area_var, Variable(torch.FloatTensor([area_n]))), 0)
        return o  # a 9-element feature vector for one team (4 means + 4 variances + area_n)

    def BattleFeature(self, x):
        ta, tb = int(x[0]), int(x[1])
        o = torch.cat((self.TeamFeature(ta), torch.FloatTensor(x[2:5]), self.TeamFeature(tb), torch.FloatTensor(x[5:8])), 0)
        return o # a long vector, size 24x1

    def forward(self, x):
        f = self.BattleFeature(x) #Feature vector about this battle.
        f = F.tanh(self.finalConv1(f))
        f = F.sigmoid(self.finalConv2(f))
        print(f)
        return f

# currently trying simple regression
m = mmodel()

#train
optimizer = torch.optim.RMSprop(m.parameters(), lr=0.02, alpha=0.98, eps=1e-08, weight_decay=0, momentum=0, centered=False)

for input, target in train: # input is a Python list; target is a Python float (probability that the guest wins).
    def closure():
        optimizer.zero_grad()
        output = m(input)
        loss_fn = nn.MSELoss(size_average=False)
        loss = loss_fn(output, Variable(torch.FloatTensor([target])))
        loss.backward() #######################################Error here#################################
        print('curr_loss =', loss.data[0])
        return loss
    optimizer.step(closure)

# test
print('Output:Target')
for input, target in test:
    print('{}:{}'.format(m(input).data[0], target))

To reproduce this error, simply get https://recolic.net/tmp/seed_repack.tar.gz and run main.py.

Thanks a lot for your help!

I couldn't run it. Probably because in L40 of main.py you concatenated a Variable with a plain Tensor?
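
For reference, a minimal sketch of that failure mode (assuming a pre-0.4 PyTorch, where Variable and Tensor are separate types): mixing the two in torch.cat lets the forward pass run, as in the traceback above, but backward() has no Variable to hand ConcatBackward's gradient to.

import torch
from torch.autograd import Variable

v = Variable(torch.ones(3), requires_grad=True)
t = torch.zeros(3)  # plain Tensor, invisible to autograd

bad = torch.cat((v, t), 0)             # forward runs...
# bad.sum().backward()                 # ...but backward raises the ConcatBackward error

good = torch.cat((v, Variable(t)), 0)  # wrap every input in a Variable
good.sum().backward()                  # gradient now flows into v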

I tried to edit L40 and it works!

o = torch.cat((self.TeamFeature(ta), Variable(torch.FloatTensor(x[2:5])), self.TeamFeature(tb), Variable(torch.FloatTensor(x[5:8]))), 0)
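
As a side note (assuming a modern PyTorch, 0.4 or later, where Variable was merged into Tensor), the extra wrapping is no longer needed and the original line works unchanged:

o = torch.cat((self.TeamFeature(ta), torch.FloatTensor(x[2:5]), self.TeamFeature(tb), torch.FloatTensor(x[5:8])), 0)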

Thanks a lot!
