Hello, I completed the Seq2Seq tutorial on the PyTorch website and was attempting a similar implementation for English-to-Hindi translation. I also started using Google Colab to see if I could train my models faster on their hardware. When I execute my code, this is the error I get:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-106-b033fca21b98> in <module>()
2
3 print(f'Executing training on: {device}')
----> 4 encoder = EncoderRNN(english_lang.n_words, HIDDEN_DIM).to(device)
5 decoder = DecoderRNN(HIDDEN_DIM, hindi_lang.n_words).to(device)
6
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in to(self, *args, **kwargs)
379 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
380
--> 381 return self._apply(convert)
382
383 def register_backward_hook(self, hook):
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _apply(self, fn)
185 def _apply(self, fn):
186 for module in self.children():
--> 187 module._apply(fn)
188
189 for param in self._parameters.values():
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _apply(self, fn)
191 # Tensors stored in modules are graph leaves, and we don't
192 # want to create copy nodes, so we have to unpack the data.
--> 193 param.data = fn(param.data)
194 if param._grad is not None:
195 param._grad.data = fn(param._grad.data)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in convert(t)
377
378 def convert(t):
--> 379 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
380
381 return self._apply(convert)
RuntimeError: CUDA error: device-side assert triggered
Here is the link to the notebook- https://colab.research.google.com/drive/1iVg9-XxFlA-3aXBEReeSbZD9gF_7jsys
What could be the issue? The script runs without any issues on my laptop.
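In case it's relevant: I've read that a common cause of `CUDA error: device-side assert triggered` is an out-of-range index into an `nn.Embedding` layer (e.g. a token index that is negative or `>= n_words`), which the GPU reports only as this generic assert. Here is a sketch of a sanity check I could run over my indexed sentences before training (`find_out_of_range` is a hypothetical helper, not from my notebook):

```python
def find_out_of_range(indexed_sentences, n_words):
    """Return (sentence_idx, token_pos, value) for every token index
    that falls outside the valid embedding range [0, n_words)."""
    bad = []
    for i, sentence in enumerate(indexed_sentences):
        for j, idx in enumerate(sentence):
            if not 0 <= idx < n_words:
                bad.append((i, j, idx))
    return bad

# Toy example with a vocabulary of size 5: index 7 is invalid.
print(find_out_of_range([[0, 1, 4], [2, 7, 3]], n_words=5))  # → [(1, 1, 7)]
```

If this reports any entries for my Hindi data, it would explain why the same script works on my laptop's CPU (where PyTorch raises a clear `IndexError`) but dies with an opaque assert on Colab's GPU.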