Some general questions about a PyTorch model

  1. Why is this * used with the input of the forward function?
def forward(self, *input):
  1. If the shape of the output of a model is 32*4, I am not getting why this unsqueeze(1) is used here:
 out = model(x.unsqueeze(1))
  1. This question is really silly, but please forgive me.
    If x is 8*64 and it is passed to a fully connected layer as:
 self.fc = nn.Linear(in_features=8, out_features=4) 
 x = self.fc(x)

Then how many weight matrices will there be: 256*512 and then 4*256? Here 512 is from 8*64, am I right?

Regards

I got the answer for the second: it will add a dimension at position 1, i.e. 32*1*4.
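For reference, a quick check of that behavior (assuming x has the shape 32*4 as in the question):

import torch

x = torch.randn(32, 4)        # e.g. a batch of 32 outputs with 4 values each
print(x.unsqueeze(1).shape)   # torch.Size([32, 1, 4]) - new dim inserted at index 1
print(x.unsqueeze(0).shape)   # torch.Size([1, 32, 4]) - for comparison, unsqueeze(0)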

  1. The * in front of the input allows for a variable number of positional arguments, as seen here:
def myfun(*inputs):
    print(inputs)

# single input
myfun(torch.tensor(1))
> (tensor(1),)

# list input
myfun([torch.tensor(1), torch.tensor(1)])
> ([tensor(1), tensor(1)],)

# multiple inputs
myfun(torch.tensor(1), torch.tensor(1))
> (tensor(1), tensor(1))
  1. The linear layer will have a weight parameter in the shape [4, 8], and an input in the shape [8, 64] will raise a shape mismatch error, since 8 input features are expected in the last dimension.
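A minimal sketch of the shapes involved (assuming the [8, 64] input from the question; the permute call below is only one possible way to move the feature dimension of size 8 to the last position):

import torch
import torch.nn as nn

fc = nn.Linear(in_features=8, out_features=4)
print(fc.weight.shape)      # torch.Size([4, 8]) - a single weight matrix, plus a bias of shape [4]

x = torch.randn(8, 64)
# fc(x) would raise a shape mismatch error, since the last dim must be 8
out = fc(x.permute(1, 0))   # shape [64, 8]: 64 samples with 8 features each
print(out.shape)            # torch.Size([64, 4])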

@ptrblck Thank you, sir. Please comment on whether I am correct with the second answer. Also, if I am printing x.shape in this forward function,

def forward(self, *input):
        #print(input[0].shape)
        xa = self.conv1a(input[0])
        xa = self.bn1a(xa)
        xa = F.relu(xa)
        xb = self.conv1b(input[0])
        xb = self.bn1b(xb)
        xb = F.relu(xb)
        x = torch.cat((xa, xb), 1)
        x = self.conv2(x)
        x = self.bn2(x)
        x = F.relu(x)
        x = self.maxp(x)
        ...
        x = self.conv5(x)
        x = self.bn5(x)
        x = F.relu(x)
        print(x.shape)

I am getting something like this

torch.Size([7, 80, 6, 15])
torch.Size([5, 80, 6, 15])
torch.Size([2, 80, 6, 15])
torch.Size([6, 80, 6, 15])
torch.Size([5, 80, 6, 15])
torch.Size([12, 80, 6, 15])
torch.Size([8, 80, 6, 15])

Sir, I know the shape is printed as many times as the total samples. @ptrblck, here 80 is the number of output channels and 6*15 is the shape of each channel's output. But I am not getting why this dim 0 is changing every time.
Regards

Assuming dim0 represents the batch dimension, the activation shapes seem wrong, since the number of samples would change at these points.
Could you post an executable code snippet, which would reproduce this behavior?
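As a quick sanity check while preparing the snippet: Conv2d, BatchNorm2d, and ReLU never change dim0, so if the final activation's dim0 varies, the batch size most likely already varies at the input. A rough, self-contained sketch (the layer sizes below are only placeholders, not your actual model):

import torch
import torch.nn as nn

# Placeholder stack of conv/bn/relu layers - dim0 passes through unchanged
model = nn.Sequential(nn.Conv2d(1, 80, 3, padding=1), nn.BatchNorm2d(80), nn.ReLU())

x = torch.randn(7, 1, 6, 15)    # batch of 7 samples
out = model(x)
print(x.shape, out.shape)       # torch.Size([7, 1, 6, 15]) torch.Size([7, 80, 6, 15])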

@ptrblck Sure, sir. I will in some time. What is an activation shape?

By activation I meant the outputs of any layer, i.e. the tensors you pass from one layer to the next in the forward method:

def forward(self, input):
    activation = self.layer(input)
    activation = self.other_layer(activation)
    return activation

Oh okay, sir!
But I think the shapes are okay; otherwise, the model would have thrown an error, right?