Hello. I want to implement a custom loss function, but I need to use the output of a network as an “index” into a different tensor to compute the loss. For that, I use output.data.numpy() to get the index value as an integer (since my original code is complicated, I wrote a simplified version below).
I get an error “in-place operations can be only used on variables that don’t share storage with any other variables, but detected that there are 2 objects sharing it”
Is there any other way to use output of network as index of another tensor?
input = Variable(TORCH_TENSOR)
output = net(input)                                # forward pass through the network
index = int(output[i][j].data.numpy())             # i, j: some int values
loss = abs(ground_truth[p] - final_output[index])  # p: some int value
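For context, here is a minimal self-contained sketch of what happens when the index is pulled out as a plain Python int: the autograd graph is cut at that point, so no gradient ever reaches the network output that produced the index. The tensors and values here are made up for illustration, and it uses the modern tensor API rather than `Variable`:

```python
import torch

# Stand-ins for the real network output and tensors (hypothetical values).
output = torch.tensor([[0.7, 2.4]], requires_grad=True)   # "network output"
final_output = torch.arange(5.0).requires_grad_()
ground_truth = torch.tensor([3.0])

# Converting to a plain int detaches the index from the graph.
index = int(output[0][1].round().item())
loss = abs(ground_truth[0] - final_output[index])
loss.backward()

print(output.grad)  # None: no gradient flows back through the integer index
```

This is why the question of a differentiable coordinate-to-voxel conversion comes up below.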
I am currently doing something tricky. It is a regression problem on the xyz coordinates of an object. After inferring the xyz coordinates, I want to convert them into voxel form; for that, I want to use the regressed values as indices into a matrix. To sum up, I want to know a differentiable way to convert coordinates into voxels.
It depends on what you do afterwards; maybe you can do it in a smooth way, but as described this does not seem to be differentiable.
For example, if the output of your neural net is (0.5, 1.2, 1.6) and you convert this to (0, 1, 2), then this function is clearly not differentiable.
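To make that concrete, a quick check in PyTorch (modern tensor API; the idea is the same with `Variable`) shows that rounding has zero gradient almost everywhere, so nothing useful flows back to the network that produced the coordinates:

```python
import torch

# The reply's example coordinates: rounding (0.5, 1.2, 1.6) to integers.
x = torch.tensor([0.5, 1.2, 1.6], requires_grad=True)
y = x.round().sum()
y.backward()

# round() is piecewise constant, so its gradient is zero everywhere it exists.
print(x.grad)  # tensor([0., 0., 0.])
```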
Everything depends on what you do with the matrix object afterwards.
If, for example, you want to check that M(i, j, k) has the correct value, and you know that your matrix is smooth / almost continuous (the values are very similar within a local area), you can approximate a gradient by writing a new autograd function that takes your three network outputs (x, y, z) and returns the matrix value (the forward pass).
You then have to write the backward function as well, which would basically return the local change in the value of M along each direction (x, y, z).
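A minimal sketch of that idea, assuming M is a dense 3-D tensor of smoothly varying values; the class name, the central-difference scheme, and the usage values are my own illustration, not anything from the original code:

```python
import torch

class VoxelLookup(torch.autograd.Function):
    """Forward: look up M at the rounded (x, y, z).
    Backward: approximate the local change of M with central differences."""

    @staticmethod
    def forward(ctx, coords, M):
        # Round to integer indices, keeping one cell of margin so the
        # finite differences in backward stay inside the matrix.
        idx = coords.round().long()
        idx = torch.stack([idx[d].clamp(1, M.size(d) - 2) for d in range(3)])
        ctx.save_for_backward(idx, M)
        return M[idx[0], idx[1], idx[2]]

    @staticmethod
    def backward(ctx, grad_output):
        idx, M = ctx.saved_tensors
        i, j, k = idx[0], idx[1], idx[2]
        # Local change of M along each direction, as described above.
        gx = (M[i + 1, j, k] - M[i - 1, j, k]) / 2
        gy = (M[i, j + 1, k] - M[i, j - 1, k]) / 2
        gz = (M[i, j, k + 1] - M[i, j, k - 1]) / 2
        grad_coords = grad_output * torch.stack([gx, gy, gz])
        return grad_coords, None  # no gradient w.r.t. M itself

# Hypothetical usage: coords would come from the network, M is the voxel matrix.
coords = torch.tensor([2.3, 3.7, 1.1], requires_grad=True)
M = torch.randn(8, 8, 8)
out = VoxelLookup.apply(coords, M)
out.backward()
print(coords.grad)  # finite-difference estimates of dM/dx, dM/dy, dM/dz
```

This only makes sense under the smoothness assumption stated above; if M changes abruptly between neighbouring cells, the finite-difference gradient will be meaningless.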