Output shape issue

I’m trying to make a simple model that, given a number, outputs that number plus 2.
Ex1: input 5, output 7
Ex2: input 1, output 3, and so on…

  1. The model’s output features are clearly set to 1, yet I’m still getting an output with shape 1x2.
  2. I could not use batch normalization here. Can anyone help me understand the issue?
class NN(nn.Module):
    def __init__(self):
        super(NN, self).__init__()
        
        self.ff = nn.Linear(in_features=1*BATCH_SIZE, out_features=15)
        self.batch_norm = nn.BatchNorm1d(15)
        self.ff1 = nn.Linear(in_features=15, out_features=1)
        
    def forward(self, x):
        print(f"x : {x}")
        print(f"x shape : {x.shape}")
#         x = x.view(-1, 1)
        ff = self.ff(x)
        
        print(f"ff value : {ff}")
        print(f"ff : {ff.shape}")
        
#         ff = ff.view(1, -1)
        
#         ff = ff.squeeze()
        
        print(f"ff view shape : {ff.shape}")
        ff = self.batch_norm(ff)
        
        out = self.ff1(ff)
        
#         out = ff
        
        print(f"returning output as : {out}, {out.shape}")
        out = out.squeeze()
        
        print(f"returning output as : {out}, {out.shape}")
        return out
        

output:

returning output as : tensor([[-0.4479],
        [ 0.4698]], device='cuda:0', grad_fn=<AddmmBackward0>), torch.Size([2, 1])
returning output as : tensor([-0.4479,  0.4698], device='cuda:0', grad_fn=<SqueezeBackward0>), torch.Size([2])
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Linear-1                   [-1, 15]             975
       BatchNorm1d-2                   [-1, 15]              30
            Linear-3                    [-1, 1]              16
================================================================
Total params: 1,021
Trainable params: 1,021
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.00
Estimated Total Size (MB): 0.00
----------------------------------------------------------------

output while training:

ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 15])

Hello,
When you define your first layer, in_features should be the number of input features of a single example (in this case in_features=1, if I understand correctly); PyTorch will take care of the batch dimension.
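A minimal sketch of what that change could look like (a rough example, assuming each input is a single scalar, so a batch has shape [batch_size, 1]; BATCH_SIZE no longer appears in the layer definition):

import torch
import torch.nn as nn

class NN(nn.Module):
    def __init__(self):
        super().__init__()
        # in_features=1: one feature per example; the batch dimension is handled automatically
        self.ff = nn.Linear(in_features=1, out_features=15)
        self.batch_norm = nn.BatchNorm1d(15)
        self.ff1 = nn.Linear(in_features=15, out_features=1)

    def forward(self, x):
        # x is expected to have shape [batch_size, 1]
        ff = self.ff(x)            # -> [batch_size, 15]
        ff = self.batch_norm(ff)   # needs batch_size > 1 in training mode
        out = self.ff1(ff)         # -> [batch_size, 1]
        return out

model = NN()
x = torch.tensor([[5.0], [1.0]])   # batch of 2 scalar inputs
print(model(x).shape)              # torch.Size([2, 1])

Note that the BatchNorm1d error you quoted ("Expected more than 1 value per channel when training") comes from a batch of size 1 reaching the norm layer in training mode, so make sure each training batch has more than one example (or call model.eval() when scoring single samples).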