Hello,

The `masked_select` function allows me to extract elements from a tensor. Given a tensor `A` and a boolean mask `mask`, `masked_select` will give me the indexed elements as a 1D vector (call it `b`), such that

b = torch.masked_select(A, mask)

My question is: is there an efficient way to achieve the inverse operation? That is, given both `b` and `mask`, how can I recover `A`, or rather a version of `A` where the indexed elements are in the right positions and all non-indexed elements are set to some constant, e.g. zero?

My use case is that `A` is about 15k x 15k and `b` holds about 200k elements, so I need something fast that also works with autograd. When I implement this with a simple for loop, backpropagating through the graph takes ages.
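To make the desired behaviour concrete, here is a minimal sketch with a small example tensor (the names and sizes are just for illustration; my real `A` is ~15k x 15k). The last two lines show one way to express the inverse via boolean-index assignment:

```python
import torch

# Small illustrative example; the real A is ~15k x 15k.
A = torch.arange(12.0).reshape(3, 4)
mask = A > 5                      # boolean mask, same shape as A

b = torch.masked_select(A, mask)  # 1D vector of the selected elements

# Desired inverse: put b back at the masked positions, zeros elsewhere.
A_rec = torch.zeros_like(A)
A_rec[mask] = b                   # boolean-index assignment

# Round trip: masked_select(A_rec, mask) recovers b,
# and A_rec is zero everywhere off the mask.
```

Is something like this (or `Tensor.masked_scatter`) the right tool here, and does it stay efficient under autograd at my scale?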