IndexError: Dimension out of range

Hello,

I get this error message when running the function below. I admit the code is a little fuzzy to me, especially since my colleague did not document his variables, and on his computer the code works as is. So I don't see an obvious problem in the code itself; maybe it is a version problem? (I'm really new to this, so I may be wrong.)

import torch

def disc_loss(fts, y, conf):
    N, C = fts.size()
    y = y.contiguous()

    # number of instances; .data[0] for old PyTorch versions, .item() for newer ones
    try:
        num = y.max().data[0] + 1
    except Exception:
        num = y.max().item() + 1

    rs_var, rs_dist, rs_center = [], [], []
    cc = []
    # variance term: pull each instance's features towards its center
    for k in range(num):
        msk = y.eq(k).view(-1, 1).expand_as(fts)
        val = fts.masked_select(msk).view(-1, C)
        cur_center = val.mean(0)
        rs_center.append(cur_center)

        val = torch.norm(val - cur_center.expand_as(val), 2, 1)
        cc.append(val)
        val = val - conf['c0']
        val = torch.clamp(val, min=1.e-7)
        val = val * val
        rs_var.append(val.mean())

    # distance term: push the instance centers apart
    # (note: i starts at 1, so pairs involving instance 0 are never visited)
    dd = []
    for i in range(1, num):
        for j in range(num):
            if i >= j:
                continue
            dd.append((rs_center[i] - rs_center[j]).norm(p=2))
            val = 2 * conf['c1'] - (rs_center[i] - rs_center[j]).norm(p=2)
            val = torch.clamp(val, min=1.e-7)
            val = val * val
            rs_dist.append(val)

    rs_center_ = torch.stack(rs_center).view(-1, C)   # shape [num, C]
    rs_center = torch.norm(rs_center_, 2, 1)          # shape [num] -- 1-D from here on
    rs_center = torch.clamp(rs_center, min=1.e-7)

    rs_add = torch.norm(rs_center - 6 * conf['c0'], 2, 1)   # <- line 167 in the traceback below
    rs_add = torch.clamp(rs_add, min=1.e-7)

    a, b, c, d = conf['abcd']   # c is unused in the final sum
    loss = a * torch.stack(rs_var).mean() + b * torch.stack(rs_dist).mean() + d * rs_add.mean()
    return loss

Here is the traceback:
File "/home/VCNN/scripts/vcnnmodel.py", line 167, in disc_loss
    rs_add = torch.norm(rs_center-6*conf['c0'],2,1,keepdim=True)   
  File "/home/anaconda3/lib/python3.7/site-packages/torch/functional.py", line 769, in norm
    return torch._C._VariableFunctions.norm(input, p, dim, keepdim=keepdim)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

FORMAT OF THE VARIABLES

fts : tensor([[ 1.8531, -2.9964,  0.9598,  ..., -0.6220,  0.0220, -0.1722],
        [ 1.0035, -0.9662,  0.3225,  ..., -0.6707,  0.5081,  0.3552],
        [ 0.1899, -1.8759,  0.8710,  ..., -0.2789,  1.3850, -0.4163],
        ...,
        [-0.2454,  0.1436,  0.5875,  ...,  1.8487,  2.6752,  2.0626],
        [ 0.2815,  0.2250, -1.5668,  ...,  1.5377,  3.6202,  0.1742],
        [ 0.1593,  0.8056,  0.2085,  ...,  1.5562,  2.6729,  0.8995]],device='cuda:0', grad_fn=<IndexSelectBackward>)
torch.Size([455, 32])


y : tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
        3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1,
        1, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
        3, 3, 1, 3, 1, 1, 1, 1, 3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 3, 4, 3,
        4, 3, 3, 3, 1, 3, 4, 1, 1, 4, 4, 1, 1, 1, 1, 5, 5, 5, 5, 5, 5, 5, 5, 5,
        5, 5, 5, 0, 0, 0, 5, 4, 3, 4, 3, 3, 3, 4, 4, 4, 4, 4, 1, 1, 3, 3, 3, 4,
        4, 4, 1, 4, 4, 4, 4, 1, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
        5, 5, 5, 5, 5, 5, 0, 5, 5, 5, 5, 5, 0, 0, 5, 5, 5, 5, 5, 0, 0, 0, 0, 0,
        0, 0, 5, 5, 5, 5, 1, 0, 0, 3, 1, 1, 0, 3, 3, 3, 0, 4, 1, 1, 0, 3, 3, 3,
        4, 4, 4, 4, 4, 1, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
        5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 0, 0, 5, 5, 5, 5, 5, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 5, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,
        0, 0, 0, 0, 1, 0, 0, 1, 1, 5, 5, 5, 5, 5, 5, 5, 5, 2, 5, 5, 5, 5, 2, 2,
        2, 2, 5, 5, 5, 5, 0, 2, 2, 5, 5, 5, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0, 0, 5,
        5, 0, 0, 0, 0, 0, 0, 0, 2, 5, 2, 2, 5, 5, 2, 2, 2, 2, 5, 2, 2, 5, 5, 5,
        5, 5, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
       device='cuda:0')
torch.Size([455])


conf : {'c0': 1, 'c1': 3, 'abcd': [1, 1, 0.01, 0.01]}


Given this error, I would think something in your code wants an input of size [N, M] and not just [N].
Your y tensor has shape [455]; you could try making it [455, 1].
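For example, a minimal sketch (assuming y is the label tensor you posted):

    y = y.view(-1, 1)   # or y.unsqueeze(1): [455] -> [455, 1]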

Even better would be to check whether the input to this function has a 1st dimension, and if not, trace back to where it loses it.

How can I do that? The y tensor holds the segmentation labels (instance segmentation).

As for the input to this function, it is 1-dimensional, as follows: tensor([ 8.4146, 6.0662, 6.8091, 8.1898, 10.7489, 10.3809], device='cuda:0', grad_fn=<SubBackward0>)

It is a complicated function. I would start debugging from there, since that is the line getting you into trouble. As I already mentioned, the simplest first step is to print the shape of the tensor that is the input at that line.

The shape of the tensor rs_center - 6*conf['c0'] is

tensor([ 8.4146, 6.0662, 6.8091, 8.1898, 10.7489, 10.3809], device='cuda:0', grad_fn=<SubBackward0>) torch.Size([6])

The torch.norm function is trying to operate on dimension 1 of your input (the third argument = 1), but your input tensor only has a 0th dimension (6 elements along dim 0). That is why the error appears. As I mentioned, it is a complicated function and unfortunately I don't have time to inspect it. I suggest tracing the operations done on the rs_center - 6*conf['c0'] tensor back to find where it loses the dimension it is supposed to have.
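A minimal illustration of the failure mode (with made-up tensors):

    import torch

    t = torch.randn(6)        # 1-D: only dim 0 exists
    torch.norm(t, 2, 0)       # OK: reduces over dim 0
    torch.norm(t, 2, 1)       # IndexError: expected to be in range of [-1, 0], but got 1

    t2 = torch.randn(6, 32)   # 2-D
    torch.norm(t2, 2, 1)      # OK: per-row norms, shape [6]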

Hmm, in fact, there it is:

rs_center_ = torch.stack(rs_center).view(-1,C)

rs_center_ gives me torch.Size([6, 32])

rs_center = torch.norm(rs_center_,2,1)

rs_center gives me torch.Size([6])

I don't understand the function beyond that it is some custom-made loss. But as far as I can tell, the author of the function assumed the input would have a 1st dimension. It is possible that rs_center_ and rs_center were mixed up while the function was being copied.
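If that is the case, the author may have meant to take the norm over the 2-D rs_center_ rather than the 1-D rs_center. A guess at the intended code, not a verified fix:

    rs_center_ = torch.stack(rs_center).view(-1, C)          # [num, C]
    rs_center = torch.norm(rs_center_, 2, 1)                 # [num]
    rs_center = torch.clamp(rs_center, min=1.e-7)

    # use the 2-D rs_center_ here, so that dim=1 exists:
    rs_add = torch.norm(rs_center_ - 6 * conf['c0'], 2, 1)   # [num]
    rs_add = torch.clamp(rs_add, min=1.e-7)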

OK, then I'll try to find another loss function (with the same variables (fts, y, conf) if possible).
What do you mean by misplaced? Swapped?

Thanks a lot btw, I’ll give up haha

OK, I may have found something.

I get this message right before the error in the command output:

/home/VCNN/scripts/vcnnmodel.py:101: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
return self.softmax(y_flat),y_conv
The warning points to this class:

class softmax_out(nn.Module):
    def __init__(self, in_channels, out_channels, criterion):
        super(softmax_out, self).__init__()
        self._K = out_channels
        self.conv_1 = nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1)
        self.conv_2 = nn.Conv3d(out_channels, out_channels, kernel_size=1, padding=0)
        if criterion == 'nll':
            self.softmax = F.log_softmax   # called without dim= below, hence the UserWarning
        else:
            assert criterion == 'dice', "Expect `dice` (dice loss) or `nll` (negative log likelihood loss)."
            self.softmax = F.softmax

    def forward(self, x):
        y_conv = self.conv_2(self.conv_1(x))
        y_perm = y_conv.permute(0, 2, 3, 4, 1).contiguous()
        y_flat = y_perm.view(-1, self._K)        # flatten to (N, K): one row per voxel
        return self.softmax(y_flat), y_conv      # <- line 101 from the warning
class VCNN(nn.Module):
    def __init__(self, K, criterion):
        super(VCNN, self).__init__()
        self.conv_1 = conv3d_x3(1, 16)
        self.pool_1 = conv3d_as_pool(16, 32)
        self.conv_2 = conv3d_x3(32, 32)
        self.pool_2 = conv3d_as_pool(32, 64)
        self.conv_3 = conv3d_x3(64, 64)
        self.pool_3 = conv3d_as_pool(64, 128)
        self.conv_4 = conv3d_x3(128, 128)
        self.pool_4 = conv3d_as_pool(128, 256)

        self.bottom = conv3d_x3(256, 256)

        self.deconv_4 = deconv3d_x3(256, 256)
        self.deconv_3 = deconv3d_x3(256, 128)
        self.deconv_2 = deconv3d_x3(128, 64)
        self.deconv_1 = deconv3d_x3(64, 32)

        self.out = softmax_out(32, K, criterion)
        self.out2 = conv12(32+K,32)          
        self.K = K

Since the model is called right before the loss function, maybe there is a link with that.
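In any case, the warning itself is straightforward to address: y_flat has shape (N, K) with the classes on dim 1, so passing dim explicitly should silence it (a sketch, assuming that layout):

    from functools import partial
    import torch.nn.functional as F

    if criterion == 'nll':
        self.softmax = partial(F.log_softmax, dim=1)
    else:
        self.softmax = partial(F.softmax, dim=1)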