I have a few questions I would like to ask. Please feel free to answer only some of them; I would appreciate any feedback you can give.
1. Is there a good baseline code written in PyTorch for emotion recognition from still images, say for the FER2013 dataset or another dataset with a discrete set of emotion classes?
2. Should I detect emotion based on Facial Action Units or based on Facial Landmarks? Which one is the better indicator?
3. If I find a good baseline code that detects emotion from still images, how can I use transfer learning to detect emotions on a dataset consisting of video segments of 10-30 seconds? Basically, I now want to detect emotion in a different modality: video.
4. Are you aware of a good video dataset labeled with facial expressions/emotions that is suitable for deep learning? For example, one where each 10-30 second video segment has an emotion label from 6-8 emotion classes.
5. Is there a good baseline code you are aware of that can detect emotion from video segments? (Assuming there is a dataset as in 4.)
Additionally, please feel free to link me to papers for 3 and 5.
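For question 3, one common transfer-learning pattern is to reuse the image model as a frozen frame-level feature extractor and add a small temporal head on top. Below is a minimal, hedged sketch of that idea in PyTorch; `FrameBackbone` is a hypothetical stand-in for whatever pretrained image emotion model you end up with, and all names and dimensions are illustrative assumptions, not from any specific baseline.

```python
# Sketch: adapt a still-image emotion classifier to 10-30 s video clips by
# sampling frames, extracting per-frame features with the (frozen) image
# model, and average-pooling over time before a new classification head.
import torch
import torch.nn as nn


class FrameBackbone(nn.Module):
    """Hypothetical stand-in for an image emotion model pretrained on
    e.g. FER2013 (grayscale 48x48 inputs)."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):  # x: (N, 1, H, W)
        return self.fc(self.conv(x).flatten(1))  # (N, feat_dim)


class VideoEmotionModel(nn.Module):
    """Frozen frame backbone + temporal average pooling + new head."""

    def __init__(self, backbone: nn.Module, feat_dim: int = 64,
                 num_classes: int = 7):
        super().__init__()
        self.backbone = backbone
        # Transfer learning: freeze pretrained weights, train only the head.
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, clips):  # clips: (B, T, 1, H, W) sampled frames
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))  # (B*T, feat_dim)
        feats = feats.view(b, t, -1).mean(dim=1)    # temporal average pool
        return self.head(feats)                     # (B, num_classes)


model = VideoEmotionModel(FrameBackbone())
logits = model(torch.randn(2, 16, 1, 48, 48))  # 2 clips, 16 frames each
print(logits.shape)  # torch.Size([2, 7])
```

Replacing the temporal mean with an LSTM/GRU or temporal attention over the frame features is a common variant when the ordering of frames matters.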