Update a Tensor by its elements' indices in the forward function

I tried to define a custom layer that transforms scalar inputs into a 2D matrix, where the transformation rules are based on the elements' indices.

Here is the code:

class CustomLayer(nn.Module):
    def __init__(self, height=64, width=64):
        super().__init__()
        self.height = height
        self.width = width

    def forward(self, thresholds):
        # @thresholds: torch.uint8 tensor of shape <batch_size, 2>,
        #              the thresholds of the x and y directions.
        batch_size = thresholds.shape[0]
        canvas = torch.zeros((batch_size, self.height, self.width), dtype=torch.uint8)

        for i in range(self.height):
            for j in range(self.width):
                a = torch.relu(i - thresholds[:, 0])
                b = torch.relu(j - thresholds[:, 1])
                canvas[:, i, j] = a + b

        return canvas

I'm wondering if there's any way to get the indices of an element in the canvas tensor, so that I could do something like:

# indice() does not really exist.
lambda e: torch.relu(e.indice()[0] - thresholds[0]) + \
          torch.relu(e.indice()[1] - thresholds[1])

Is there any way to do so?
(By the way, can the for loop in the forward function be handled correctly during backpropagation?)



The for-loop can be handled, but if the input is an integral type, then you won't be able to get gradients for it, since gradients are only defined for continuous (floating-point) inputs.
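As a side note, the loops can also be avoided entirely: building the row and column index grids once with `torch.arange` and letting broadcasting do the rest computes the same canvas in one shot. The sketch below is my own variant (function name `make_canvas` and the default 64x64 size are assumptions, not from the original post); it uses a floating-point dtype so that gradients can flow back to `thresholds`.

```python
import torch

def make_canvas(thresholds, height=64, width=64):
    # thresholds: float tensor of shape (batch_size, 2)
    # rows holds the i indices, cols the j indices; broadcasting
    # expands them against the batch dimension automatically.
    rows = torch.arange(height, dtype=thresholds.dtype).view(1, height, 1)
    cols = torch.arange(width, dtype=thresholds.dtype).view(1, 1, width)
    a = torch.relu(rows - thresholds[:, 0].view(-1, 1, 1))  # x-direction term
    b = torch.relu(cols - thresholds[:, 1].view(-1, 1, 1))  # y-direction term
    return a + b  # shape (batch_size, height, width)

thresholds = torch.tensor([[3.0, 5.0]], requires_grad=True)
canvas = make_canvas(thresholds)
canvas.sum().backward()  # gradients reach `thresholds` since the dtype is float
```

Because every element of the canvas is a differentiable function of `thresholds`, autograd handles this version without any special treatment, and it is far faster than the Python double loop.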