Are there Pytorch operations that could reverse the effect of repeat_interleave
in the following statement:
y = x.repeat_interleave(x)
For example, if y = tensor([1, 2, 2, 2, 2, 3, 3, 3, 1]), could we recover x = tensor([1, 2, 2, 3, 1])?
I think unique_consecutive can be used, as seen here:

import torch

x = torch.tensor([1, 2, 2, 3, 1])
y = x.repeat_interleave(x)
# group consecutive runs: u holds the run values, c the run lengths
u, c = y.unique_consecutive(return_counts=True)
# each value v was repeated v times, so run length // value = count in x
repeat = c // u
out = u.repeat_interleave(repeat)
print(out)
# tensor([1, 2, 2, 3, 1])
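To see why this works: since each element v of x is repeated v times, every consecutive run of v in y has a length that is a multiple of v, and dividing the run length by v recovers how many consecutive copies of v were in x. A quick sanity check with a different input (illustrative values only):

```python
import torch

# x with a repeated consecutive value, to exercise the length-by-value division
x = torch.tensor([3, 3, 1, 4])
y = x.repeat_interleave(x)
# y = tensor([3, 3, 3, 3, 3, 3, 1, 4, 4, 4, 4])

u, c = y.unique_consecutive(return_counts=True)
# u = tensor([3, 1, 4]), c = tensor([6, 1, 4])
out = u.repeat_interleave(c // u)  # integer division avoids a float cast
print(out)
# tensor([3, 3, 1, 4])
```

Note the inverse is only well-defined when x contains no zeros, since an element repeated zero times leaves no trace in y.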
unique_consecutive works nicely here. Thank you!