Hello,

I have the following 3D CNN that I use to predict a value (a property of interest) from 3D images.

I want to compute the loss as the mean absolute error (L1 loss) for each batch.

My current settings are:

batch size = 1

image size = (64, 64, 64)

the input tensor from train_loader to the network has shape [1, 1, 64, 64, 64]

[1 for a batch size of 1, 1 for a greyscale input (1 channel), 64 z dim, 64 x dim, 64 y dim]

```python
class CNN3D(nn.Module):
    def __init__(self):
        super(CNN3D, self).__init__()
        self.conv1 = nn.Conv3d(1, 32, kernel_size=5, stride=1, padding=0, dilation=1)
        self.bn1 = nn.BatchNorm3d(32)
        self.pool1 = nn.AvgPool3d(2, stride=2)
        self.conv2 = nn.Conv3d(32, 64, kernel_size=5, stride=1, padding=1, dilation=1)
        self.bn2 = nn.BatchNorm3d(64)
        self.pool2 = nn.AvgPool3d(kernel_size=2, stride=2, padding=0)
        self.fc1 = nn.Linear(7*7*64, 500)
        self.fc2 = nn.Linear(500, 50)
        self.fc3 = nn.Linear(50, 1)

    def forward(self, x):
        x = self.pool1(F.relu(self.bn1(self.conv1(x))))
        x = self.pool2(F.relu(self.bn2(self.conv2(x))))
        x = x.view(-1, 7*7*64)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
```

The output of the network (predictions) has shape torch.Size([56, 1]).

The actual labels have shape torch.Size([1]).

Shouldn’t the size of the network output be torch.Size([1])? I don’t get why it outputs torch.Size([56, 1]).
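As a sanity check on where the 56 could come from, the spatial sizes can be traced layer by layer with the standard Conv3d/AvgPool3d output-size formula (a plain-Python sketch, no PyTorch needed; the layer parameters are taken from the class above):

```python
from math import floor

def out_size(n, k, s=1, p=0, d=1):
    """One spatial dimension after a Conv3d/AvgPool3d layer."""
    return floor((n + 2 * p - d * (k - 1) - 1) / s + 1)

n = 64                        # input cube edge
n = out_size(n, k=5)          # conv1: 64 -> 60
n = out_size(n, k=2, s=2)     # pool1: 60 -> 30
n = out_size(n, k=5, p=1)     # conv2: 30 -> 28
n = out_size(n, k=2, s=2)     # pool2: 28 -> 14

elements = 64 * n ** 3        # 64 channels * 14^3 = 175616 elements per sample
print(n, elements, elements // (7 * 7 * 64))  # 14 175616 56
```

So the feature map going into `view` holds 175616 elements, and `x.view(-1, 7*7*64)` silently reshapes them into (175616 / 3136) = 56 rows of 3136, which would explain the [56, 1] output.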

My question is: how do I fix the network output so it has shape (N, 1), where N is the batch size? The network predictions and the actual labels should have the same size!
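For context, the usual idiom that keeps the batch dimension intact is to flatten with `x.view(x.size(0), -1)` and size the first linear layer to the true per-sample feature count. A sketch (assuming the layer definitions above, which imply a flattened size of 14*14*14*64 for a 64³ input — this is an illustrative rewrite, not necessarily the intended architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN3DFixed(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv3d(1, 32, kernel_size=5)
        self.bn1 = nn.BatchNorm3d(32)
        self.pool1 = nn.AvgPool3d(2, stride=2)
        self.conv2 = nn.Conv3d(32, 64, kernel_size=5, padding=1)
        self.bn2 = nn.BatchNorm3d(64)
        self.pool2 = nn.AvgPool3d(2, stride=2)
        # 14*14*14*64 is the flattened size implied by the layers above
        self.fc1 = nn.Linear(14 * 14 * 14 * 64, 500)
        self.fc2 = nn.Linear(500, 50)
        self.fc3 = nn.Linear(50, 1)

    def forward(self, x):
        x = self.pool1(F.relu(self.bn1(self.conv1(x))))
        x = self.pool2(F.relu(self.bn2(self.conv2(x))))
        x = x.view(x.size(0), -1)  # keep the batch dimension; infer the rest
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

model = CNN3DFixed().eval()
with torch.no_grad():
    out = model(torch.randn(1, 1, 64, 64, 64))
print(out.shape)  # torch.Size([1, 1])
```

With `x.view(x.size(0), -1)`, a mismatch between the inferred feature count and `fc1`'s input size raises a clear error instead of silently changing the batch dimension.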

Can someone help?