Where to look in the codebase for methods that change the values of a tensor?

Hi everyone, this is my first post. I would like to contribute to an issue in PyTorch's GitHub repository. I am having a hard time finding where in the codebase the functions that change values inside a tensor live. For example, if you have:

x = torch.tensor([[1,2],[3,4]])
and then you do
x[0,1] = 1
what is happening under the hood, i.e. which functions are being called to carry out that assignment?
I want to add some error checking in this scenario: throw an error when trying to write to read-only memory, e.g. if the tensor was created from a NumPy array loaded with mmap_mode='r'. I have an idea of how I want to do the error check, but I am having a hard time finding where to add my code.
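For context, the read-only situation can be reproduced purely at the Python level with NumPy (the file name and temporary directory below are just for illustration):

```python
import os
import tempfile

import numpy as np

# Save an array and reopen it as a read-only memory map (mmap_mode='r').
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "data.npy")
np.save(path, np.array([[1, 2], [3, 4]]))
arr = np.load(path, mmap_mode="r")

# The read-only mapping is visible in the array's flags --
# this is the Python-level counterpart of PyArray_IsWriteable.
print(arr.flags.writeable)  # False

# NumPy itself rejects in-place writes to such an array.
try:
    arr[0, 1] = 1
except ValueError as exc:
    print(exc)  # assignment destination is read-only
```

A tensor wrapping that same memory does not get this protection automatically, which is exactly the gap the error check would close.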

I hope posting this in the c++ category was okay, because I am not sure where the correct category would be.
Cheers

OK, I found where x[0,1] = 1 is handled in the codebase.
The function being called is THPVariable_setitem in torch/csrc/autograd/python_variable_indexing.cpp.
Posting this here in case it helps anyone else in the future.
Now I have an idea of where to look for the other functions that set values in memory.
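On a reasonably recent PyTorch you can also watch this from Python: a TorchDispatchMode logs every ATen op that THPVariable_setitem eventually dispatches (the exact op names printed can differ between versions, so this is a sketch rather than a guarantee):

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class LogATenOps(TorchDispatchMode):
    """Print every ATen op dispatched while the mode is active."""
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print(func)
        return func(*args, **(kwargs or {}))

x = torch.tensor([[1, 2], [3, 4]])
with LogATenOps():
    x[0, 1] = 1  # the Python setitem lowers to one or more ATen ops
```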

That is the Python level, but the right level to solve this at is likely ATen / the PyTorch dispatcher.
The functions that set values are the ones whose names end in _ or _out. You can also look at native_functions.yaml and the alias information there.
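To make the naming convention concrete, here is the same operation in its three flavors:

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])

# Out-of-place op: returns a new tensor, x is untouched.
y = x.add(10)

# In-place op: trailing underscore, mutates x's storage directly.
x.add_(10)

# _out variant: writes the result into a preallocated tensor.
out = torch.empty_like(x)
torch.add(x, 1, out=out)
```

The in-place and _out variants are the ones that write into existing memory, so they are where a read-only check would have to live.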

Best regards

Thomas


That is really helpful, thanks Thomas.

BTW, personally, I could imagine that an in-place hook that can throw errors might be neat, but I have no idea whether it would cost too much overhead to be considered. That way one could either modify updates on the fly (e.g. for computed parameters) or prohibit them by raising an exception.

Thanks for the suggestion, @tom. What did you have in mind regarding hooks? Maybe something like a signal handler that catches a signal from the OS? I am a second-year computer science student, so I am still learning and not totally experienced with hooks. A couple of other students and I are working on this as a team. Also, do you have any suggestions for how we could determine whether a tensor is in read-only memory? If you are dealing with a PyObject you can cast it to PyArrayObject and check with PyArray_IsWriteable, but I don't think that will work with a Tensor object.
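One Python-level option (just a sketch; the helper name tensor_from_numpy_checked is made up for illustration) is to check the NumPy writeable flag before wrapping, since torch.from_numpy shares memory with the source array:

```python
import numpy as np
import torch

def tensor_from_numpy_checked(arr: np.ndarray) -> torch.Tensor:
    # Python-level counterpart of PyArray_IsWriteable: the writeable flag.
    if not arr.flags.writeable:
        raise RuntimeError("refusing to wrap a read-only NumPy array in a tensor")
    return torch.from_numpy(arr)

a = np.array([[1, 2], [3, 4]])
t = tensor_from_numpy_checked(a)  # fine: the array is writable

a.flags.writeable = False  # simulate e.g. an array loaded with mmap_mode='r'
try:
    tensor_from_numpy_checked(a)
except RuntimeError as exc:
    print(exc)
```

This only guards the Python entry point, though; a check at the ATen level, as Thomas suggested, would cover every write path.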