Hello, I am creating a custom loss function; one of its components modifies the values of the data array.
So, for example, a function f mutates A into B, where A and B have the same shape and data type:
f(A) = B
To keep backpropagation working, do I need to supply a new array as the output? For example:
B = torch.zeros_like(A)
f(A, B)
return B
so that I don't mutate A - or is mutation allowed?
I assume “mutation” refers to inplace operations, which directly manipulate the data of the tensor, e.g. as seen here:
a = torch.zeros(1)
print(a)
# tensor([0.])
b = a
print(b)
# tensor([0.])
b.add_(1.)
print(a)
# tensor([1.])
print(b)
# tensor([1.])
If so, then it would depend on the used operations and whether these values are needed for the gradient calculation. PyTorch will raise a RuntimeError if disallowed inplace operations are detected, and the backward pass will fail.
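Not from the thread, but a small sketch of the failure mode described above, assuming a standard op such as torch.sigmoid (which saves its output for the backward pass): modifying that saved output inplace makes the subsequent backward call raise a RuntimeError.

```python
import torch

def inplace_breaks_backward():
    a = torch.rand(3, requires_grad=True)
    b = torch.sigmoid(a)  # sigmoid saves its output for the backward pass
    b.add_(1.)            # inplace op on a tensor needed for the gradient
    try:
        b.sum().backward()
        return False      # backward succeeded, no error raised
    except RuntimeError:
        # "one of the variables needed for gradient computation
        #  has been modified by an inplace operation"
        return True

print(inplace_breaks_backward())
```

So experimenting is safe: if an inplace op invalidates the gradient computation, autograd complains loudly rather than silently returning wrong gradients.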
Ok, thanks @ptrblck! If it will just raise an error, I can simply experiment with this - fantastic.
One more thing: since this mutation function acts element-wise on a 3D array, is there a supported way to apply a function to each element of the tensor, with access to its Cartesian coordinates in the array?
Pseudo code below:
arr = torch.rand(100, 100, 100)

def add_neighbour(arr, point):
    return arr[point.x+1, point.y, point.z] + arr[point.x, point.y+1, point.z] + arr[point.x, point.y, point.z+1]

apply_on_coord(add_neighbour, arr)
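As far as I know there is no built-in `apply_on_coord` in PyTorch (and calling a Python function per element would be very slow anyway); the idiomatic approach is to express the coordinate arithmetic with shifted slices. A minimal sketch of the neighbour sum above, ignoring boundary handling:

```python
import torch

arr = torch.rand(100, 100, 100)

# out[x, y, z] = arr[x+1, y, z] + arr[x, y+1, z] + arr[x, y, z+1]
# valid for x, y, z in 0..98; the last index along each dim is dropped
out = arr[1:, :-1, :-1] + arr[:-1, 1:, :-1] + arr[:-1, :-1, 1:]
print(out.shape)  # torch.Size([99, 99, 99])
```

This stays fully vectorized and differentiable, and no tensor is mutated inplace, so it composes cleanly with autograd.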
Your indexing approach of:
x[index] = x[index] + b
would also manipulate x inplace and will also raise an error if it's disallowed.
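One common workaround (my own suggestion, not from the thread): if the indexed update must stay autograd-friendly, write into a clone so the original tensor is never mutated. `clone` is differentiable, so gradients still flow back to `x`.

```python
import torch

x = torch.rand(5, requires_grad=True)
b = torch.tensor(10.)
index = torch.tensor([1, 3])

y = x.clone()            # differentiable copy; x itself stays untouched
y[index] = y[index] + b  # inplace on the clone, not on the leaf x
y.sum().backward()
print(x.grad)            # each element of x contributes once to the sum
```

The inplace assignment here is allowed because it happens on the cloned tensor, whose pre-assignment values are not needed for this backward pass.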