No padding when using torchtext.data.Field with a TabularDataset

I have been trying to follow the PyTorch sequence-to-sequence transformer tutorial (https://pytorch.org/tutorials/beginner/transformer_tutorial.html), but I wanted to use my own data, so I tried to load my CSVs with TabularDataset while reusing the same Field that is defined in the tutorial.

While running the batchify() step, I encountered this error:

ValueError: expected sequence of length 13 at dim 1 (got 6)

Looking deeper, I found that the sequences were not being padded, nor were the special tokens being added (no eos, pad, or init tokens). I am a little confused, since I also specified fix_length when defining the Field, yet I keep getting the same error. It would be great if someone could point out where exactly I am going wrong, or whether I should take a different approach. The code is below:

# Imports (legacy torchtext API, as used in the tutorial)
import torch
import torchtext
from torchtext.data.utils import get_tokenizer

# Defining the Field
TEXT = torchtext.data.Field(tokenize=get_tokenizer("basic_english"),
                            init_token='<sos>',
                            eos_token='<eos>',
                            pad_token='<pad>',
                            lower=True,
                            fix_length=20)
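
To check my understanding of fix_length: as far as I can tell, Field.pad should turn a batch of token lists into sequences of exactly fix_length tokens, with the init/eos markers included. A quick sanity check on toy tokens (not my real data):

# What I expect the Field to do: pad every sequence to exactly fix_length
# tokens, wrapping each one in <sos>/<eos> and filling the rest with <pad>
toy_batch = [['hello', 'world'], ['a', 'slightly', 'longer', 'sentence']]
padded = TEXT.pad(toy_batch)
print([len(seq) for seq in padded])
# -> [20, 20], e.g. ['<sos>', 'hello', 'world', '<eos>', '<pad>', ...]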

# create tuples representing the columns
fields = [
  (None, None), # ignore first column
  (None, None), # ignore id column
  ('Exp_text', TEXT),
]
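
(The fields list above corresponds to a three-column CSV shaped roughly like this; the values are purely illustrative, not my real rows:)

# illustrative CSV layout -- first column unused, then id, then the text
#   ,id,Exp_text
#   0,a1,some example sentence
#   1,a2,another longer example sentence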

# load the dataset from csvs
train_ds, val_ds = torchtext.data.TabularDataset.splits(
   path = './',
   train = 'lm_data_train.csv',
   validation = 'lm_data_val.csv',
   format = 'csv',
   fields = fields,
   skip_header = True
)
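
Inspecting a loaded example confirms what I described above: the text is tokenized and lowercased, but no special tokens are added at this stage:

# Quick check of what TabularDataset actually stores per row
print(vars(train_ds.examples[0]))
# e.g. {'Exp_text': ['some', 'tokenized', 'words']} -- no <sos>/<eos>/<pad>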

# Build the vocabulary from the training data (numericalize needs it
# to map tokens to ids -- same step as in the tutorial)
TEXT.build_vocab(train_ds)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create batches
def batchify(data, bsz):
    # this line raises the ValueError quoted above
    data = TEXT.numericalize([ex.Exp_text for ex in data.examples])
    # Divide the dataset into bsz parts.
    nbatch = data.size(0) // bsz
    # Trim off any extra elements that wouldn't cleanly fit (remainders).
    data = data.narrow(0, 0, nbatch * bsz)
    # Evenly divide the data across the bsz batches.
    data = data.view(bsz, -1).t().contiguous()
    return data.to(device)

batch_size = 20
eval_batch_size = 10
train_data = batchify(train_ds, batch_size)
val_data = batchify(val_ds, eval_batch_size)
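
For comparison, the tutorial's batchify is fed the WikiText2 splits, where each split holds the entire corpus as a single example, so numericalize only ever sees a batch of one (very long) sequence:

# the corresponding line from the linked tutorial
data = TEXT.numericalize([data.examples[0].text])

My version passes many variable-length examples at once instead, which is where I suspect things start to diverge from the tutorial.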