How can I implement queue-like operations using supported MIL ops?

I am trying to implement something like this:

import torch
import torch.nn as nn

input_list = [torch.rand(1, 8), torch.rand(1, 8), torch.rand(1, 8), torch.rand(1, 8), torch.rand(1, 8)]
# KL divergence between the last tensor and each tensor in the list
# (the list items are already tensors, so no torch.tensor() wrapper is needed)
diff_tensor = [nn.KLDivLoss()(input_list[-1].log(), input_list[i]).item() for i in range(5)]

But obviously, the above implementation won’t convert to Core ML. So I tried to modify it to this:

input_list = [torch.rand(1, 8), torch.rand(1, 8), torch.rand(1, 8), torch.rand(1, 8), torch.rand(1, 8)]
input_tensor = torch.cat(input_list)
cal_matrix = torch.tensor([[-1, 0, 0, 0, 1],
                           [0, -1, 0, 0, 1],
                           [0, 0, -1, 0, 1],
                           [0, 0, 0, -1, 1],
                           [0, 0, 0, 0, 0]])
diff_tensor = torch.matmul(cal_matrix.float(), input_tensor)

The new version computes, via subtraction, the difference between the last element and each of the other elements (row i of the product is input_tensor[4] - input_tensor[i]). However, the original algorithm requires a KLDiv operation, not a simple subtraction.

Could someone provide some guidance on how I can modify the new version so that it produces exactly the same results as the old version while remaining convertible to Core ML?
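One possible direction (a sketch, not a verified Core ML conversion): with the default reduction='mean', nn.KLDivLoss()(q.log(), p) computes mean(p * (p.log() - q.log())) over all elements, so the whole loop can be rewritten with only log, subtract, multiply, and mean, which are basic elementwise/reduction ops. Broadcasting the log of the last row against the stacked inputs reproduces the original list of five values (assuming all inputs are strictly positive, so log never hits zero):

```python
import torch
import torch.nn as nn

input_list = [torch.rand(1, 8) for _ in range(5)]

# Stack the queue into a single (5, 8) tensor, as in the matmul version
x = torch.cat(input_list)

# log of the reference (last) row, broadcast against every row
log_last = x[-1].log()

# pointwise KL terms: target * (log(target) - log(reference)),
# then mean over the feature dim to match reduction='mean' on a (1, 8) input
kl = (x * (x.log() - log_last)).mean(dim=1)

# Reference: the original loop-based computation
ref = [nn.KLDivLoss()(input_list[-1].log(), input_list[i]).item() for i in range(5)]
```

Here `kl[i]` should match `ref[i]`; the last entry is the KL divergence of the last element with itself, which is zero, mirroring the all-zero last row of cal_matrix. Whether each op traces through to a supported MIL op would still need to be checked against the coremltools op coverage for your target version.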