Hi Vaishnavi!
Soulitzer’s suggestion of using `allow_mutation_on_saved_tensors()` may be appropriate for your use case. But if you don’t need to modify tensors inplace (maybe you’re doing it inadvertently somewhere), it may make sense to fix the underlying issue.
To track things down, start with the information in your error message.
You have a tensor of shape `[15, 1]` that is being modified inplace. Where in your forward pass do you have such a tensor (or tensors)? Its `._version` property is changing from `0` to `2`. Try printing out `t._version` at various places in your code to see where `t._version` increases from `0` to `1` (one inplace modification) and then from `1` to `2` (a second inplace modification).
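Here is a minimal, self-contained sketch (the tensor and the inplace operations are made up for illustration) showing how `._version` ticks up:

```python
import torch

t = torch.zeros(15, 1)
print(t._version)   # 0 -- no inplace modifications yet

t.mul_(2.0)         # first inplace modification
print(t._version)   # 1

t.add_(1.0)         # second inplace modification
print(t._version)   # 2
```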
Does your suspect tensor have `._version = 2` just before your call to `post_result[0].backward(retain_graph=True, create_graph=True)`?
(Note, if you find and fix any inplace-modification errors for a given tensor, errors for additional tensors may show up, because autograd aborts the backward pass by raising the `RuntimeError` you saw, so it only flags the first error it encounters.)
You should use anomaly detection; it provides additional information that can help you track down your error.
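For example (the forward computation here is made up for illustration), wrap your forward and backward passes in anomaly-detection mode; the `RuntimeError` will then be accompanied by a traceback pointing at the forward-pass operation whose saved tensor was modified:

```python
import torch

x = torch.randn(15, 1, requires_grad=True)

with torch.autograd.detect_anomaly():
    y = torch.exp(x)    # exp() saves its output, y, for the backward pass
    y.mul_(2.0)         # inplace modification corrupts the saved tensor
    y.sum().backward()  # raises RuntimeError, plus a traceback locating exp()
```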
Some comments that may or may not be relevant to your error:
You do define `model` in your `__init__()` function, but, in the code as posted, it is a local variable to `__init__()`. You do not define `self.model`.
Your `__call__()` function uses `self.model`, but it is not defined in your code, as posted.
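That is, you likely want something along the lines of the following sketch (the class and the specific model here are my guesses, since the full code wasn’t posted):

```python
import torch

class MyModule:
    def __init__(self):
        # assign the model to self.model (not to a local variable, model,
        # which would disappear when __init__() returns)
        self.model = torch.nn.Linear(15, 1)   # stand-in for your actual model

    def __call__(self, x):
        return self.model(x)   # self.model is now defined
```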
Assigning into a tensor by indexing the tensor is an inplace modification and is sometimes the cause of inplace-modification errors. But it doesn’t appear that you are using the result of `post_process()` in your forward pass.
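For completeness, a small made-up example showing that indexed assignment counts as an inplace modification:

```python
import torch

t = torch.zeros(15, 1)
print(t._version)   # 0
t[3, 0] = 99.0      # assigning into t by indexing modifies t inplace
print(t._version)   # 1 -- the version counter ticks up
```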
The following post discusses how to debug inplace-modification errors in some detail:
Good luck!
K. Frank