This is a multi-class supervised classification problem.

I’m using the BCELoss() loss function with a Sigmoid on the last layer.
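For context, here is a minimal sketch of how that pairing works (standard torch API, not my training code): BCELoss expects probabilities in (0, 1), which is why the model ends with a Sigmoid instead of emitting raw logits.

```
import torch
import torch.nn as nn

criterion = nn.BCELoss()

logits = torch.tensor([[2.0, -1.0, 0.5, -3.0]])   # raw scores from a linear layer
probs = torch.sigmoid(logits)                     # squashed strictly into (0, 1)
targets = torch.tensor([[1.0, 0.0, 1.0, 0.0]])    # multi-hot targets

loss = criterion(probs, targets)
print(probs)   # every entry strictly between 0 and 1
print(loss)
```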

**Question:**

Why do I get “one-hot” vectors at inference time (when I load the weights), instead of probabilities between 0 and 1 for every entry in every column vector?

**Model:**

```
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    # Input x has shape (batch, 1, 257, 257)
    def __init__(self):
        super(SimpleCNN, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2),
            nn.ReLU(),                                # (16, 128, 128)
            nn.Conv2d(16, 32, kernel_size=2, stride=2),
            nn.ReLU(),                                # (32, 64, 64)
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.ReLU(),                                # (32, 32, 32)
            nn.Conv2d(32, 64, kernel_size=2, stride=2),
            nn.ReLU(),                                # (64, 16, 16)
            nn.Conv2d(64, 128, kernel_size=2, stride=2),
            nn.ReLU(),                                # (128, 8, 8)
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.ReLU(),                                # (128, 4, 4)
        )
        self.classifier = nn.Sequential(
            nn.Linear(128 * 4 * 4, 120),
            nn.ReLU(),
            nn.Linear(120, 64),
            nn.Sigmoid(),                             # probabilities in (0, 1)
        )

    def forward(self, x):
        out = self.layer1(x)
        out = out.view(-1, 128 * 4 * 4)               # flatten to (batch, 2048)
        out = self.classifier(out)                    # (batch, 64)
        # QUANTA and args.num_params are defined elsewhere; 64 = QUANTA * num_params
        return out.view(-1, QUANTA, args.num_params)  # (batch, 16, 4)
```
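The shape comments on the conv stack can be verified with the usual no-padding output-size formula, `(in - kernel) // stride + 1` (a quick sketch, independent of the model code):

```
def conv_out(size, kernel, stride):
    # Spatial output size of Conv2d / MaxPool2d with no padding
    return (size - kernel) // stride + 1

size = 257
# (kernel, stride) of each conv/pool layer, in order
for k, s in [(3, 2), (2, 2), (2, 2), (2, 2), (2, 2), (2, 2)]:
    size = conv_out(size, k, s)
print(size)  # 4, matching the (128, 4, 4) comment and the 128*4*4 flatten
```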

**How I load weights:**

```
model = SimpleCNN()
model.load_state_dict(torch.load(PATH))
model.eval()
```
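As a sanity check on the loading step (a minimal sketch with a stand-in model, not SimpleCNN itself): the state_dict round-trip should reproduce the weights exactly, and eval() only switches the behavior of layers like Dropout and BatchNorm; it does not round or binarize outputs.

```
import io
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2), nn.Sigmoid())

# Save and reload the state_dict (an in-memory buffer stands in for PATH)
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
buf.seek(0)

restored = nn.Sequential(nn.Linear(4, 2), nn.Sigmoid())
restored.load_state_dict(torch.load(buf))
restored.eval()

x = torch.randn(1, 4)
print(torch.allclose(model(x), restored(x)))  # True: identical outputs
```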

**Predicted classes:**

```
[[[1. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 1. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 1. 0. 1.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 1. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]
  [0. 0. 0. 0.]]]
```
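For comparison, this is the kind of output I expected: values strictly between 0 and 1. A minimal sketch (only plain torch, not my model) showing that a saturated Sigmoid can merely *print* like 0 or 1 at low display precision while the tensor still holds probabilities:

```
import torch

torch.set_printoptions(precision=8)
probs = torch.sigmoid(torch.tensor([12.0, -15.0]))
print(probs)  # extremely close to, but not exactly, 1 and 0
```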