Training an encoder-decoder (sentence generator) using a pre-trained sentence classifier

I have a dataset of sentences that belong to 2 categories, and I have trained a classifier on this data to classify a given sentence into one of the 2 categories. Next, I want to train a generator (encoder-decoder) model that converts a given sentence from class-1 to class-2, using the pre-trained classifier. So it's basically style transfer in NLP. Here is a skeleton of my models:

Encoder:
Embedding layer --> LSTM [outputs = o, h]

Decoder:
Embedding layer --> LSTM --> Linear --> ReLU --> log_softmax [output = log-probability for each word in the vocab]

Classifier:
Encoder --> Linear layer1 --> Linear2 --> sigmoid [output = class probability]

Generator:
Encoder --> Decoder --> topK(1) [outputs = a token for each word of the generated sentence, as floats]
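
For reference, here is a minimal PyTorch sketch of that skeleton as I picture it; the layer sizes, SOS index, maximum length, and greedy decoding loop are placeholders I've filled in, not my actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, x):                       # x: (batch, seq_len) LongTensor
        o, (h, c) = self.lstm(self.embedding(x))
        return o, h                             # o: per-step outputs, h: last hidden state

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.linear = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, hidden):               # x: (batch, 1) LongTensor, one decode step
        o, hidden = self.lstm(self.embedding(x), hidden)
        logp = F.log_softmax(F.relu(self.linear(o)), dim=-1)
        return logp, hidden                     # logp: (batch, 1, vocab_size)

class Classifier(nn.Module):
    def __init__(self, encoder, hidden_dim=256):
        super().__init__()
        self.encoder = encoder
        self.linear1 = nn.Linear(hidden_dim, 64)
        self.linear2 = nn.Linear(64, 1)

    def forward(self, x):                       # x: (batch, seq_len) LongTensor
        _, h = self.encoder(x)                  # h: (1, batch, hidden_dim)
        return torch.sigmoid(self.linear2(self.linear1(h.squeeze(0))))

class Generator(nn.Module):
    """Encoder -> Decoder -> topK(1), greedy decoding one step at a time."""
    def __init__(self, encoder, decoder, sos_idx=1, max_len=20):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder
        self.sos_idx, self.max_len = sos_idx, max_len

    def forward(self, x):
        _, h = self.encoder(x)
        hidden = (h, torch.zeros_like(h))       # init decoder state from encoder hidden
        tok = torch.full((x.size(0), 1), self.sos_idx, dtype=torch.long, device=x.device)
        outputs = []
        for _ in range(self.max_len):
            logp, hidden = self.decoder(tok, hidden)
            tok = logp.topk(1, dim=-1).indices.squeeze(-1)   # (batch, 1); non-differentiable
            outputs.append(tok)
        return torch.cat(outputs, dim=1).float()             # float "tokens", no grad history
```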

What I am trying to do is train the Generator using error signals from the pre-trained Classifier. Whether this setup would work at all is a separate question (and I would definitely love to hear feedback from more experienced members here). The major concern is this: the Generator returns an array of sentence tokens (word indices) as floats, which should then be passed to the (frozen) Classifier, whose first layer is an Embedding layer that only accepts the Long datatype. Converting the float tensor to a long tensor destroys the gradient history, and from what I understand from similar embedding-related questions, it's not possible to retain gradients through that type conversion.

So what are my options here? Any workarounds? Astute readers will have noticed that the "topK/argmax" operation also breaks the gradient history; for that, I am planning to use a Linear layer to approximate the argmax during training. Is there a similar solution for the embedding problem? I am quite sure people have tried similar things, but I can't find any resources on seq2seq + classifier.
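
To make the issue concrete, this minimal snippet (arbitrary sizes, not my real code) shows the cast killing the gradient flow:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(1000, 128)                 # first layer of the frozen Classifier

# float "token indices" as produced by the Generator (still attached to the graph)
gen_tokens = torch.rand(4, 10, requires_grad=True) * 999

ids = gen_tokens.long()                       # Embedding requires a LongTensor...
print(ids.requires_grad)                      # False -- the cast is non-differentiable,
                                              # so no gradient can reach the Generator
class_input = emb(ids)                        # works, but the Generator gets no error signal
```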

Note: I am not posting my actual code, to keep the post clean and easier to follow (the snippets above are just minimal sketches). If required, I can provide the relevant sections.