How to use index_copy_ while ignoring -1?

I have a code snippet that uses index_copy_:

import torch
torch.set_default_device('cpu')
data = torch.zeros((5, 5))
part = torch.tensor([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]], dtype=torch.float32)
index = torch.tensor([-1, 3])
data.index_copy_(0, index, part)  # raises an error: -1 is not a valid index for index_copy_
print(data)

I want the -1 to be ignored rather than throwing an error. Is that possible?

I can build mask = index != -1 and call data.index_copy_(0, index[mask], part[mask]), but part[mask] has a variable length, which does not work with CUDA graphs.
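
For reference, a minimal sketch of that masking workaround; the shapes of index[mask] and part[mask] depend on how many entries equal -1, which is exactly what breaks CUDA graph capture:

import torch

data = torch.zeros((5, 5))
part = torch.tensor([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]], dtype=torch.float32)
index = torch.tensor([-1, 3])

mask = index != -1                            # keep only valid indices
data.index_copy_(0, index[mask], part[mask])  # shapes here are data-dependent
print(data)
# row 3 is filled with [1., 2., 3., 4., 5.]; the -1 entry is skipped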

You might need to expand the tensor by one extra row, copy all "unused" values into this additional index, and then slice the output before returning it.

Can you give a concrete example?

import torch

data = torch.zeros((5 + 1, 5))   # add an extra "ignore" row at the end
ignore_index = data.size(0) - 1  # index of the ignore row (5)

part = torch.tensor([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]], dtype=torch.float32)
index = torch.tensor([ignore_index, 3])
data.index_copy_(0, index, part)
print(data)
# tensor([[0., 0., 0., 0., 0.],
#         [0., 0., 0., 0., 0.],
#         [0., 0., 0., 0., 0.],
#         [1., 2., 3., 4., 5.],
#         [0., 0., 0., 0., 0.],
#         [1., 2., 3., 4., 5.]])

data = data[:-1] # remove last row
print(data)
# tensor([[0., 0., 0., 0., 0.],
#         [0., 0., 0., 0., 0.],
#         [0., 0., 0., 0., 0.],
#         [1., 2., 3., 4., 5.],
#         [0., 0., 0., 0., 0.]])
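
Not part of the reply above, but to connect this back to an index that still contains -1: a sketch that remaps the -1 entries onto the ignore row with torch.where, so every shape stays static and the whole thing should remain compatible with CUDA graph capture.

import torch

data = torch.zeros((5 + 1, 5))            # extra "ignore" row at the end
ignore_index = data.size(0) - 1

part = torch.tensor([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]], dtype=torch.float32)
index = torch.tensor([-1, 3])             # the original index containing -1

# sketch: remap every -1 to the ignore row; torch.where keeps the shape static,
# and duplicate -1 entries all land in the ignore row, which is discarded below
safe_index = torch.where(index == -1, torch.full_like(index, ignore_index), index)

data.index_copy_(0, safe_index, part)
data = data[:-1]                          # drop the ignore row
print(data)
# row 3 holds [1., 2., 3., 4., 5.]; the row targeted by -1 was discarded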

Got it, thanks! That’s one possible solution, although I have to reserve one extra slot for the padding row.