Generate label score results for NLLLoss2d

Hello everyone!

I’m trying to build a neural network that generates label scores for classification, using the NLLLoss2d function.

I have 1000 samples, and each sample is a vector of 100 entries, so the input is a 1000x100 matrix. For each sample, I am trying to generate two 3x5 matrices, one of scores for each of the two labels. The output should therefore be a 1000x2x3x5 tensor, as described in the docs for the NLLLoss2d function.
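For reference, here is a minimal sketch of the shapes NLLLoss2d expects, using random stand-in data (the tensor names are just for illustration):

import torch
from torch.autograd import Variable

# NLLLoss2d expects log-probabilities of shape (N, C, H, W)
# and integer class targets of shape (N, H, W); here N=1000, C=2, H=3, W=5.
log_probs = Variable(torch.randn(1000, 2, 3, 5))                # stand-in for the network output
targets = Variable(torch.LongTensor(1000, 3, 5).random_(0, 2))  # labels in {0, 1}

loss = torch.nn.NLLLoss2d()(log_probs, targets)
print(loss.data[0])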

In the network, I used a nested list of nn.Linear() modules:

import torch
import torch.nn.functional as nnFunc

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 3x5 grid of Linear layers, each mapping 100 features to 2 class scores
        self.linear = [[torch.nn.Linear(100, 2) for i in range(5)] for j in range(3)]

    def forward(self, x):
        y = [[[0, 0] for i in range(5)] for j in range(3)]
        for i in range(3):
            for j in range(5):
                y[i][j] = nnFunc.log_softmax(nnFunc.relu(self.linear[i][j](x)))
        return y

However, the output of the network is a nested 3x5x1000x2 structure: the first two dimensions are Python lists, and each innermost element is a 1000x2 Variable.

I am trying to permute the tensor I get from the network, but I’m not sure whether it can still be used later by backpropagation.

I would appreciate any help!


Yes, you can call .permute() on the output to get it into your required shape, and backpropagation happens correctly.
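A quick way to convince yourself, with a toy tensor (the sizes are arbitrary):

import torch
from torch.autograd import Variable

x = Variable(torch.randn(3, 5, 4, 2), requires_grad=True)
y = x.permute(2, 3, 0, 1)     # shape becomes (4, 2, 3, 5)
y.sum().backward()            # gradients flow back through the permute
print(x.grad.size())          # torch.Size([3, 5, 4, 2])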

Thanks for your reply. I tried using permute in the model, but it raised the error

there are no graph nodes that require computing gradients.

The network I was using is

import numpy as np
import torch
import torch.nn.functional as nnFunc
from torch.autograd import Variable

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = [[torch.nn.Linear(100, 2) for i in range(5)] for j in range(3)]

    def forward(self, x):
        N = len(x)
        y = torch.Tensor([[np.zeros((N, 2)) for i in range(5)] for j in range(3)])
        for i in range(3):
            for j in range(5):
                y[i][j] = nnFunc.log_softmax(nnFunc.relu(self.linear[i][j](x))).data

        return Variable(y.permute(2, 3, 0, 1))

I have to put the Variable() wrapper there, otherwise it shows the error

'float' object has no attribute '__getitem__'

replace:

self.linear = [[torch.nn.Linear(100, 2) for i in range(5)] for j in range(3)]

with

self.linear = nn.ModuleList([nn.ModuleList([nn.Linear(100, 2) for i in range(5)]) for j in range(3)])

This makes your model's parameters visible to the optimizer, so to speak.
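To illustrate the difference, a small sketch (the class names PlainListNet and ModuleListNet are made up for this example):

import torch.nn as nn

class PlainListNet(nn.Module):
    def __init__(self):
        super(PlainListNet, self).__init__()
        # plain Python lists: the layers are NOT registered as submodules
        self.linear = [[nn.Linear(100, 2) for i in range(5)] for j in range(3)]

class ModuleListNet(nn.Module):
    def __init__(self):
        super(ModuleListNet, self).__init__()
        # nn.ModuleList registers each layer, so its parameters are found
        self.linear = nn.ModuleList([nn.ModuleList([nn.Linear(100, 2) for i in range(5)]) for j in range(3)])

print(len(list(PlainListNet().parameters())))    # 0 -- an optimizer would see nothing
print(len(list(ModuleListNet().parameters())))   # 30 (15 weights + 15 biases)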

Thanks for your reply! One more thing I had to do to get the code running was to add

requires_grad=True

in the Variable().

Now the code runs without errors.

However, I found that the parameters do not change after I run

optimizer.zero_grad()
loss.backward()
optimizer.step()

This means the gradients the code computes are zero. Do you think this has to do with how I build the return value y, which somehow makes the optimizer treat it as a constant?
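That suspicion can be reproduced in isolation: taking .data and re-wrapping it in a Variable creates a new leaf that is disconnected from the original graph (a toy sketch, not my exact model):

import torch
from torch.autograd import Variable

w = Variable(torch.ones(2, 2), requires_grad=True)
detached = (w * 3).data                        # .data strips the autograd history
out = Variable(detached, requires_grad=True)   # a NEW leaf, disconnected from w

out.sum().backward()
print(w.grad)                                  # None -- no gradient ever reaches w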

Inspired by this thread, I found the solution, which is to use the torch.stack() function. To build the output y, I did

y = torch.stack([torch.stack([nnFunc.log_softmax(nnFunc.relu(layer(x)))
                              for layer in row], 0)
                 for row in self.linear], 1)

and then use the .permute() method:

return y.permute(2, 3, 1, 0)
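Putting the pieces together, here is a sketch of the full working model under the same assumptions (100-dim inputs, nnFunc as the alias for torch.nn.functional):

import torch
import torch.nn as nn
import torch.nn.functional as nnFunc
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # registered 3x5 grid of Linear layers
        self.linear = nn.ModuleList(
            [nn.ModuleList([nn.Linear(100, 2) for i in range(5)]) for j in range(3)])

    def forward(self, x):
        # inner stack: (5, N, 2); outer stack along dim 1: (5, 3, N, 2)
        y = torch.stack([torch.stack([nnFunc.log_softmax(nnFunc.relu(layer(x)))
                                      for layer in row], 0)
                         for row in self.linear], 1)
        return y.permute(2, 3, 1, 0)   # -> (N, 2, 3, 5), as NLLLoss2d expects

net = Net()
x = Variable(torch.randn(1000, 100))
target = Variable(torch.LongTensor(1000, 3, 5).random_(0, 2))
loss = nn.NLLLoss2d()(net(x), target)
loss.backward()   # gradients now reach every Linear layer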