Sorry, I’m currently busy with research, so I can’t jump into the open-source side of this (code review, pull requests, contributing, etc.) right now.
But I believe something like this will work:
If the sum is very close to 1 (instead of exactly equal to 1), then assume it’s the “proportion case” and apply a workaround.
For example, calculate dataset_length * each_proportion for every entry and check whether the results sum up to dataset_length.
If they don’t, then as long as the sum of all the proportions is very close to 1, there should still be a feasible workaround, e.g. distributing the leftover items among the splits.
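Here’s a minimal sketch of the kind of workaround I mean (fractions_to_lengths is a hypothetical helper name I made up, not PyTorch’s actual code):

import math

def fractions_to_lengths(total, fractions):
    # Treat `fractions` as the "proportion case" only if it sums
    # (approximately) to 1.
    if not math.isclose(sum(fractions), 1.0):
        raise ValueError("fractions must sum to (approximately) 1")
    # Floor each share first...
    lengths = [int(total * frac) for frac in fractions]
    # ...then hand out the leftover items round-robin so the
    # integer lengths sum exactly to `total`.
    for i in range(total - sum(lengths)):
        lengths[i % len(lengths)] += 1
    return lengths

print(fractions_to_lengths(11, [0.5, 0.5]))  # [6, 5]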
Something like this is probably even already implemented. For example:
from torch.utils.data import random_split

# Passing fractions instead of absolute lengths; 0.5 + 0.5 sums to exactly 1.0 here.
trainsets = random_split(range(11), [0.5, 0.5])
print(len(trainsets[0]))  # 6
print(len(trainsets[1]))  # 5
Gives 6 and 5.
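Presumably that comes from floor(11 * 0.5) = 5 for each split, with the one leftover item handed to one of the splits so the lengths still add up to 11.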
I’m not sure what the complete mechanism actually looks like, but I think the same mechanism can be applied as long as sum([0.05, … , 0.05]) is very close to 1, or to whatever lengths list sums very close to 1. (A few additional checks and a workaround are probably needed when the sum is not exactly 1, though.)
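To illustrate the floating-point wrinkle: accumulating twenty 0.05s one at a time doesn’t necessarily land exactly on 1.0, but it does pass math.isclose:

import math

total = 0.0
for _ in range(20):
    total += 0.05  # accumulate twenty 0.05s step by step

print(total)                     # 1.0000000000000002 on a typical IEEE-754 build
print(total == 1.0)              # False
print(math.isclose(total, 1.0))  # True, i.e. "very close to 1"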
And that GitHub issue hasn’t gotten a bug label or any reply yet. I guess the devs just won’t notice it that soon.