Which activation function should I use for the last (output) layer when using ResNet50?

Hi, I hope you are doing great.
I have been creating my own CNN model with PyTorch, but I am not sure which activation function I should use for the last (output) layer in ResNet50, or what the difference between the options is. Do you have any idea about that?

What is your task? Generally, for image classification it's softmax.


Keep in mind that the commonly used loss functions for classification tasks expect either logits, i.e. no final activation function, if nn.CrossEntropyLoss is used, or log-probabilities, i.e. F.log_softmax as the final activation, if nn.NLLLoss is used.
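A minimal sketch of those two equivalent setups, assuming a hypothetical 4-class ResNet50 classifier (the class count and the dummy tensors are only for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

num_classes = 4  # hypothetical number of classes
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, num_classes)

x = torch.randn(2, 3, 224, 224)                 # dummy input batch
target = torch.randint(0, num_classes, (2,))    # dummy class indices

logits = model(x)

# Option 1: raw logits, no final activation
loss1 = nn.CrossEntropyLoss()(logits, target)

# Option 2: log-probabilities via F.log_softmax
log_probs = F.log_softmax(logits, dim=1)
loss2 = nn.NLLLoss()(log_probs, target)

# loss1 and loss2 are equivalent; softmax probabilities are only
# needed at inference time, e.g. probs = F.softmax(logits, dim=1)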


I have glass images. I want to predict which glass images are good quality and which are not (1 or 0).

Your problem statement sounds like a binary classification task. Usually sigmoid is preferred.
During inference, any value greater than or equal to 0.5 is considered 1, and any value below 0.5 is considered 0.

For the loss you can use binary cross entropy:
loss = F.binary_cross_entropy(torch.sigmoid(input), target)
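A minimal sketch of how this could look for the good/bad glass task, assuming a ResNet50 whose final fc layer is replaced by a single output unit; the dummy tensors are illustrative, and F.binary_cross_entropy_with_logits is shown as a common, numerically more stable alternative to applying sigmoid manually:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

# Single output unit for the good/bad decision (hypothetical setup)
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

x = torch.randn(2, 3, 224, 224)         # dummy batch of glass images
target = torch.tensor([[1.0], [0.0]])   # 1 = good quality, 0 = not good

logits = model(x)

# Loss as suggested above: sigmoid followed by binary cross entropy
loss = F.binary_cross_entropy(torch.sigmoid(logits), target)

# ...or the fused, numerically more stable variant on raw logits
loss = F.binary_cross_entropy_with_logits(logits, target)

# Inference: threshold the sigmoid output at 0.5
with torch.no_grad():
    preds = (torch.sigmoid(model(x)) >= 0.5).long()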

1 Like