Pipeline Parallelism training

I tried to train a transformer using pipeline parallelism on a single GPU. It worked before the update to PyTorch 2.0.
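For reference, a minimal single-GPU sketch of the `torch.distributed.pipeline.sync.Pipe` API (the layer sizes, the two-stage split, and the `chunks` value below are placeholders, not the actual model; note this API was deprecated in later releases):

```python
import os
import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe

# Pipe requires the RPC framework to be initialized,
# even in a single-process, single-GPU run.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker", rank=0, world_size=1)

# Toy two-stage model standing in for the transformer; on one GPU
# both stages live on cuda:0, so Pipe folds them into one partition.
device = torch.device("cuda:0")
stage1 = nn.Sequential(nn.Linear(64, 128), nn.ReLU()).to(device)
stage2 = nn.Linear(128, 10).to(device)
model = Pipe(nn.Sequential(stage1, stage2), chunks=4)

x = torch.randn(32, 64, device=device)
out = model(x).local_value()  # forward() returns an RRef; unwrap it
print(out.shape)              # torch.Size([32, 10])

rpc.shutdown()
```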
I'm also unsure why the install loops back when I downgrade: installing a previous version of torchdata reinstalls PyTorch 2.0.
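(That loop is likely pip resolving torchdata's own `torch` dependency, which pulls the latest torch back in. Pinning all three packages in one command usually avoids it; the version numbers below are an assumption, so check the compatibility matrix for the release pairing you actually want.)

```
pip install torch==1.13.1 torchtext==0.14.1 torchdata==0.5.1
```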

I cannot reproduce the issue using torch==2.0.0 and torchtext==0.15.1. I do see a problem in your notebook, though: you are using `!import torch`, which raises an error:

```
/bin/bash: import: command not found
```
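In a notebook, the `!` prefix hands the line to the shell, which is why bash complains that `import` is not a command. The import has to be a plain Python statement:

```python
# Plain Python cell: no "!" prefix (that is reserved for shell commands).
import torch

print(torch.__version__)  # e.g. 2.0.0
```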