OpenNMT beam.py beam search class for seq2seq language models

I am trying to reuse code from the slightly outdated OpenNMT library for beam search in a custom language model (OpenNMT-py/onmt/translate/beam.py at 49185121e46c4f65d68101590bad231a8dd73e4f · OpenNMT/OpenNMT-py · GitHub). What the code in the Beam class does (lines 125-140) is the following: for each beam there is a probability distribution over n words, so with beam size 4 we have 4 distributions. These are flattened into a tensor of size [4n], and topk(4) then selects the 4 most probable items and their ids. To reconstruct which beam each of the most probable items comes from, its id is divided by the vocabulary size n.

When I run the code, this division produces float values, which can't be used as indices, so I get an exception. I googled a bit, and it seems OpenNMT was implemented when PyTorch still had different division semantics, namely flooring, so that dividing an id by n yielded a rounded-down integer. That behaviour would make sense here: given n = 100 and topk ids [50, 90, 130, 230], dividing by 100 and flooring gives [0, 0, 1, 2], which are the indices of the beams the best probabilities come from. Since I use the latest torch version, and according to this issue (Integer division behavior is different from Python and NumPy · Issue #5411 · pytorch/pytorch · GitHub) the division behaviour was changed to conform with Python and NumPy, I assume I only have to add flooring to the division to fix the error, right?
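To illustrate what I mean, here is a minimal sketch of the flatten-and-recover step (not the actual beam.py code; the tensor values are just the worked example above), with the flooring made explicit via `torch.div(..., rounding_mode='floor')`:

```python
import torch

# Sizes from the worked example: vocab size n = 100, beam size 4.
beam_size, vocab_size = 4, 100

# Pretend the flattened [beam_size * vocab_size] score tensor produced
# these top-k ids via flat_scores.topk(beam_size):
best_ids = torch.tensor([50, 90, 130, 230])

# Old PyTorch floored integer division, so `best_ids / vocab_size` gave the
# beam each candidate came from. Modern `/` is true division and returns
# floats, which then fail as indices. Requesting flooring explicitly
# restores the old behaviour and keeps integer dtype:
beam_origin = torch.div(best_ids, vocab_size, rounding_mode='floor')
word_ids = best_ids - beam_origin * vocab_size  # same as best_ids % vocab_size

print(beam_origin.tolist())  # [0, 0, 1, 2]  -> beam indices
print(word_ids.tolist())     # [50, 90, 30, 30]  -> word ids within each beam
```

Using `best_ids // vocab_size` should give the same result on recent PyTorch versions; I went with the explicit `rounding_mode='floor'` form since it also silences the deprecation warnings around integer `/`.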