I’m trying to modify a pretrained Inception v3 in PyTorch to have multiple outputs (4 outputs, to be precise).

I get this error: **Expected 4-dimensional input for 4-dimensional weight [192, 768, 1, 1], but got 2-dimensional input of size [50, 1000] instead**

My input shape is: torch.Size([50, 3, 299, 299])

This is the code for my model:

```
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class CNN1(nn.Module):
    def __init__(self, pretrained):
        super(CNN1, self).__init__()
        if pretrained is True:
            self.model = models.inception_v3(pretrained=True)
        modules = list(self.model.children())[:-1]  # delete the last fc layer
        self.features = nn.Sequential(*modules)
        self.fc0 = nn.Linear(2048, 10)  # digit 0
        self.fc1 = nn.Linear(2048, 10)  # digit 1
        self.fc2 = nn.Linear(2048, 10)  # digit 2
        self.fc3 = nn.Linear(2048, 10)  # digit 3

    def forward(self, x):
        bs, _, _, _ = x.shape
        x = self.features(x)
        x = F.adaptive_avg_pool2d(x, 1).reshape(bs, -1)
        label0 = self.fc0(x)
        label1 = self.fc1(x)
        label2 = self.fc2(x)
        label3 = self.fc3(x)
        return {'label0': label0, 'label1': label1, 'label2': label2, 'label3': label3}
```

and this is a piece of the training loop:

```
for batch_idx, sample_batched in enumerate(train_dataloader):
    # import the data and move it to the GPU
    image, label0, label1, label2, label3 = (
        sample_batched['image'].to(device),
        sample_batched['label0'].to(device),
        sample_batched['label1'].to(device),
        sample_batched['label2'].to(device),
        sample_batched['label3'].to(device),
    )
    # zero the parameter gradients
    optimizer.zero_grad()
    output = model(image.float())
```

Does anyone have a suggestion?