How to create/train a binary classification model for checking candidate phrases

Let’s say I have sentences like “He called me a silly sausage when I made a stupid joke”, and I want to identify/extract all swear words and the like (here: “silly sausage”).

I could, of course, set this up as a sequence labeling task. However, can this also be done as a binary classification task where the model gets the sentence as well as a candidate phrase, and the model predicts whether this candidate phrase is a swear word or not?

I know how to do this with traditional models (e.g., Naive Bayes, SVM, Logistic Regression):

  • Extract word/phrase features for the candidate given the sentence
  • Train a model on these extracted features, and run the same feature extraction at prediction time.
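To make the two steps concrete, here is a minimal sketch of that traditional pipeline, assuming scikit-learn. The feature set, the tiny training data, and the labels are all made up for illustration:

```python
# Toy feature-based classifier: candidate phrase + sentence -> swear word or not.
# Features and data are purely illustrative.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def extract_features(sentence, candidate):
    # Simple lexical features for the candidate given its sentence.
    cand_tokens = candidate.lower().split()
    return {
        "cand": candidate.lower(),          # one-hot of the phrase itself
        "cand_len": len(cand_tokens),
        "first_tok": cand_tokens[0],
        "last_tok": cand_tokens[-1],
        "in_sentence": candidate.lower() in sentence.lower(),
    }

train = [
    ("He called me a silly sausage when I made a stupid joke", "silly sausage", 1),
    ("He called me a silly sausage when I made a stupid joke", "a stupid joke", 0),
    ("You absolute muppet, that was brilliant", "absolute muppet", 1),
    ("You absolute muppet, that was brilliant", "was brilliant", 0),
]

vec = DictVectorizer()
X = vec.fit_transform(extract_features(s, c) for s, c, _ in train)
y = [label for _, _, label in train]

clf = LogisticRegression().fit(X, y)
```

At prediction time, a new (sentence, candidate) pair goes through the same `extract_features` before being fed to `clf.predict`.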

But how can I do this in an end-to-end fashion using a CNN or RNN model, for example?

Hello,
You can extract features from the sentence with a CNN (an embedding layer followed by Conv1d), and then use attention to decide whether the given phrase is a swear word or not. (You need to use the phrase as your query and the encoded sentence as the keys.)
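A minimal sketch of that idea, assuming PyTorch: embed the sentence, run `Conv1d` over it, then attend over the convolved sentence using the pooled candidate-phrase embedding as the query. All names and sizes are illustrative:

```python
# CNN encoder + attention: phrase is the query, convolved sentence is keys/values.
import torch
import torch.nn as nn

class PhraseClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.query_proj = nn.Linear(emb_dim, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, sentence_ids, phrase_ids):
        # sentence_ids: (B, sent_len), phrase_ids: (B, phrase_len)
        sent = self.emb(sentence_ids)                            # (B, L, E)
        keys = self.conv(sent.transpose(1, 2)).transpose(1, 2)   # (B, L, H)
        # Mean-pool the phrase embeddings into a single query vector.
        query = self.query_proj(self.emb(phrase_ids).mean(dim=1))  # (B, H)
        scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)    # (B, L)
        weights = torch.softmax(scores, dim=1)
        context = (weights.unsqueeze(2) * keys).sum(dim=1)         # (B, H)
        return self.out(context).squeeze(1)  # one logit: swear word or not

model = PhraseClassifier(vocab_size=100)
logits = model(torch.randint(0, 100, (2, 10)), torch.randint(0, 100, (2, 2)))
```

Training would then be ordinary binary classification with `BCEWithLogitsLoss` on the returned logits.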


Ah, right…I think I get the idea. Thanks!

Say I encode the sentence and the candidate using a CNN or RNN, would I need separate models, or could I use the same, say, RNN to encode both the sentence and the candidate?

You can use something like BERT to extract features for both of them, or you can use different networks for feature extraction.
My suggestion: try a pre-trained network like BERT for feature extraction, and train the attention on top of that.
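For the shared-encoder variant, here is a sketch assuming PyTorch, with a single GRU standing in for the feature extractor (a pre-trained model like BERT could be swapped in as a possibly frozen encoder). Names and sizes are illustrative:

```python
# One shared encoder produces summaries of both the sentence and the candidate;
# the two summaries are concatenated and classified.
import torch
import torch.nn as nn

class SharedEncoderClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)  # shared weights
        self.out = nn.Linear(2 * hidden, 1)

    def encode(self, ids):
        # Same GRU encodes both inputs; final hidden state is the summary.
        _, h = self.encoder(self.emb(ids))
        return h.squeeze(0)  # (B, H)

    def forward(self, sentence_ids, phrase_ids):
        sent_vec = self.encode(sentence_ids)
        phr_vec = self.encode(phrase_ids)
        return self.out(torch.cat([sent_vec, phr_vec], dim=1)).squeeze(1)

model = SharedEncoderClassifier(vocab_size=100)
logits = model(torch.randint(0, 100, (2, 10)), torch.randint(0, 100, (2, 2)))
```

Whether sharing helps depends on how similar the two inputs are; since the candidate is a span of the sentence, a shared encoder is a reasonable default.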