Efficiency of repeat_interleave

Hi guys.

I’ve seen that the new function repeat_interleave was released recently: https://pytorch.org/docs/stable/torch.html?highlight=repeat_interleave#torch.repeat_interleave

However, it is unclear to me how this function works internally:

  • does it behave more like torch.expand (which returns a view without copying) or torch.repeat (which allocates a full copy)?
  • is it preferable to build the repeated tensor on the CPU and then copy it to the GPU, or to call repeat_interleave directly on the GPU?
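For anyone comparing the three, here is a small sketch of the behavioral differences (my understanding: repeat_interleave and repeat both allocate a new tensor, while expand only returns a view over existing storage):

```python
import torch

x = torch.tensor([1, 2, 3])

# repeat_interleave repeats each element consecutively (new tensor, data is copied)
print(torch.repeat_interleave(x, 2))  # tensor([1, 1, 2, 2, 3, 3])

# torch.Tensor.repeat tiles the whole tensor (also a copy)
print(x.repeat(2))                    # tensor([1, 2, 3, 1, 2, 3])

# torch.Tensor.expand only broadcasts singleton dims and returns a view:
# no data is copied, v shares storage with y
y = x.unsqueeze(1)   # shape (3, 1)
v = y.expand(3, 2)   # shape (3, 2), zero-copy view
print(v.reshape(-1)) # tensor([1, 1, 2, 2, 3, 3]) -- same values as repeat_interleave
```

So in terms of memory behavior, repeat_interleave is closer to torch.repeat than to torch.expand; when the repeated pattern can be expressed by broadcasting a singleton dimension, expand avoids the copy entirely.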

Thank you for your help!