Generally, the output of a convolution is multi-dimensional, but how can sigmoid (or any other activation function) output a single value?

For example, for a given last convolution output of 1x1x2048, the output of sigmoid should also be 1x1x2048. How does the output become a single value (the class number or score)?

Sorry if this is a stupid question, but I am just a little confused. Thanks!

Hi,

Not sure I understand your question here. The `sigmoid` function is an element-wise function, so it will not change the shape of the tensor; it just replaces each entry with 1/(1+exp(-entry)).
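To illustrate the element-wise behavior, here is a minimal sketch (the 1x1x2048 shape is taken from the question above):

```python
import torch

# sigmoid is applied element-wise, so the output shape equals the input shape
x = torch.randn(1, 1, 2048)
y = torch.sigmoid(x)

print(y.shape)  # torch.Size([1, 1, 2048]) -- same shape as the input
```

Every entry of `y` lies in (0, 1), but there are still 2048 of them; collapsing them to a single value needs an extra layer, as discussed below.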

So if the sigmoid output of the given convolution is 1x1x2048, how do I get the final category value (for a classification problem)?

You should use an fc (fully connected) layer after the convolution. The input size of the fc layer is 1*1*2048 and the output size is your number of classes. Then put the output into the sigmoid function to get the final result.

Usually, there is a fully connected layer after the last conv layer which maps the output to the number of categories. Since you are talking about the sigmoid function, I assume there are only 2 classes and only 1 output value is needed. In this case, the code should be something like:

```
import torch
import torch.nn as nn

conv_out = torch.ones((1, 1, 2048))
# map dim 2048 to 1 using a linear transformation
fc = nn.Linear(2048, 1)
fc_out = fc(conv_out)
# apply the sigmoid function to fc_out to get the probability
y_prob = torch.sigmoid(fc_out)
print(y_prob)
```

Thank you. Yes, there are only 2 classes; in fact, I just need the probability.

You mean I should map dim 2048 to 1 first?

Is it OK if I use an additional conv layer instead of a linear transformation?

For example:

```
nn.Conv2d(opt.ndf * 8, 1, 4, 1, 0, bias=False),  # opt.ndf * 8 = 2048
nn.Sigmoid()
```

Is that OK?

Sorry for not clarifying.

There are 2 classes. How do I get the final probability using sigmoid for a given 1x1x2048 conv output?

Hope I have made it clear.

Yes, you should map dim from 2048 to 1. I would recommend using a linear transformation as the last layer but you can try using another conv layer to see which approach gives you better performance.
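The two heads can be compared side by side. A sketch, assuming a hypothetical (N, 2048, 4, 4) feature map so that the DCGAN-style 4x4 conv from the snippet above collapses the spatial dimensions to 1x1:

```python
import torch
import torch.nn as nn

# hypothetical feature map from the last conv stage: batch of 8, 2048 channels, 4x4 spatial
feat = torch.randn(8, 2048, 4, 4)

# option 1: conv head -- a 4x4 conv with no padding collapses 4x4 -> 1x1
conv_head = nn.Sequential(
    nn.Conv2d(2048, 1, kernel_size=4, stride=1, padding=0, bias=False),
    nn.Sigmoid(),
)
p_conv = conv_head(feat).view(-1)  # one probability per sample, shape (8,)

# option 2: linear head -- global-average-pool to (N, 2048), then a linear layer
pooled = feat.mean(dim=(2, 3))                   # (8, 2048)
fc_head = nn.Linear(2048, 1)
p_fc = torch.sigmoid(fc_head(pooled)).view(-1)   # one probability per sample, shape (8,)

print(p_conv.shape, p_fc.shape)
```

Both heads end with one probability per sample; which one performs better is an empirical question, as noted above.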

Ok,

Then yes as the others stated, a linear layer to a single output node is what you want.
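Putting the pieces of this thread together, here is a minimal training sketch (all shapes and names hypothetical): flattened 2048-dim conv features go through a single linear output node, and `BCEWithLogitsLoss` is used, which applies sigmoid internally and is numerically more stable than a separate `nn.Sigmoid` + `BCELoss`:

```python
import torch
import torch.nn as nn

# single output node on top of the 2048-dim conv features
model = nn.Linear(2048, 1)
criterion = nn.BCEWithLogitsLoss()  # applies sigmoid internally
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

features = torch.randn(4, 2048)                   # batch of 4 flattened conv outputs
labels = torch.tensor([[0.], [1.], [1.], [0.]])   # binary targets

optimizer.zero_grad()
logits = model(features)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# at inference time, apply sigmoid explicitly to get probabilities in (0, 1)
probs = torch.sigmoid(logits)
```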