# Non-text sequence classification with nn.Transformer

This probably seems trivial given the otherwise clear tutorial, but I cannot figure out how to adapt the sequence-to-sequence modeling code (from Sequence-to-Sequence Modeling with nn.Transformer and TorchText — PyTorch Tutorials 1.7.1 documentation) to

1. perform sequence classification instead, and
2. do so for multi-feature sequences rather than text.

For instance, where should one make adjustments so the model outputs class predictions? And how does one determine the "vocabulary" size for non-text sequences? Each sequence is 5 features × 200 timepoints and has a single label. Any pointers would be much appreciated.
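To make the question concrete, here is a minimal sketch of the kind of adaptation I have in mind. This is my guess, not the tutorial's code: it assumes the `nn.Embedding` lookup can be replaced by a linear projection of the 5 raw features (so no vocabulary is needed), and that mean-pooling the encoder output over time into a small classification head replaces the vocabulary-sized output layer. All layer sizes below are placeholders.

```python
import torch
import torch.nn as nn


class SequenceClassifier(nn.Module):
    """Sketch: Transformer encoder for multi-feature sequence classification.

    Assumptions (not from the tutorial):
    - nn.Linear replaces nn.Embedding, so continuous features need no vocabulary
    - mean pooling over the time axis feeds a class-logit head
    """

    def __init__(self, n_features=5, d_model=64, n_heads=4,
                 n_layers=2, n_classes=3):
        super().__init__()
        # Replaces the token embedding: project 5 features to d_model.
        self.input_proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           batch_first=True)  # PyTorch >= 1.9
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Replaces the vocabulary-sized output layer: emit class logits.
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, 200, 5) -> encoded: (batch, 200, d_model)
        h = self.encoder(self.input_proj(x))
        # Pool over timepoints, then classify: (batch, n_classes)
        return self.head(h.mean(dim=1))


model = SequenceClassifier()
logits = model(torch.randn(8, 200, 5))  # 8 sequences of 200 x 5
print(logits.shape)  # torch.Size([8, 3])
```

Is replacing the embedding with a projection like this the right idea, or is there a more standard approach (e.g. a learned CLS token instead of mean pooling)?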
