Wav2Letter pretrained weights

Hello! I just read the Wav2Letter paper (https://arxiv.org/abs/1609.03193) and would like to try reproducing it in PyTorch.

I noticed that there’s a model available in torchaudio: https://pytorch.org/audio/models.html#wav2letter

But it doesn't seem to come with pretrained weights. When I run it as-is, the output looks like nonsense (a stripped-down version of what I'm doing is below): https://colab.research.google.com/drive/1KpceNl5eT08kIpiX-TLjD5PkKZR7pK7u?usp=sharing
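In case it helps, here's a minimal stand-in for what I'm trying in the notebook (using random input instead of real audio, and a guessed `num_classes`, since the point is just the model call):

```python
import torch
import torchaudio

# Instantiate torchaudio's Wav2Letter. The weights are randomly initialized,
# and num_classes=29 is just my guess for a character-level vocabulary.
model = torchaudio.models.Wav2Letter(num_classes=29, input_type="waveform", num_features=1)
model.eval()

# Dummy input of shape (batch, num_features, time): one second of fake 16 kHz audio.
waveform = torch.randn(1, 1, 16000)

with torch.no_grad():
    log_probs = model(waveform)  # (batch, num_classes, downsampled time)

# Greedy decode: most likely class per frame. With untrained weights this is gibberish.
predictions = log_probs.argmax(dim=1)
print(predictions)
```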

Is there a way to get the weights the paper used, or some other set of pretrained weights? (Apologies if this is a silly question; I'm new to PyTorch and ML frameworks, so I may be missing a standard way to load pretrained weights.)
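To clarify what I'm asking: I assume that if a checkpoint file existed somewhere, loading it would look like the usual state-dict pattern below (the filename here is made up), but I haven't been able to find such a checkpoint for this model:

```python
import torch
import torchaudio

model = torchaudio.models.Wav2Letter(num_classes=29, input_type="waveform", num_features=1)

# Hypothetical checkpoint path; this file doesn't exist as far as I can tell.
state_dict = torch.load("wav2letter_pretrained.pt", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```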

Thanks!