First, try using the inplace-modification-error debugging suggestions in the
following post:
Note that optimizer.step() counts as an inplace modification of the parameters
being optimized. Parts of your post have the flavor of optimizer.step() being
the cause of the inplace modification here.
This error message is telling you that the tensor that was modified inplace has
shape [200, 2], and it appears to be the weight of a Linear. Does GaussianMLP.block
contain a Linear (200, 2), perhaps the last Linear in the Sequential?
(Note that autograd raises the error on the first inplace modification it detects
during the backward pass, so, if this is what’s going on, earlier Linears in
block have probably also been modified inplace.)
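If it helps to pin down which Linear the error is pointing at, remember that
nn.Linear(in_features, out_features) stores its weight with shape
[out_features, in_features], so you can compare the constructor arguments
against the reported shape:

```python
import torch.nn as nn

# nn.Linear stores weight as [out_features, in_features]
print(nn.Linear(2, 200).weight.shape)   # torch.Size([200, 2])
print(nn.Linear(200, 2).weight.shape)   # torch.Size([2, 200])
```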
Anyway, given the error message you posted, start by locating any tensors of
shape [200, 2] you have (call them generically t) and print out t._version
at various places in your code, using a divide-and-conquer strategy (binary
search) to locate where t._version changes from 1400 to 1401. That is where
the relevant inplace modification occurs.
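Here is a minimal, self-contained sketch of that _version bookkeeping (the toy
model and optimizer are stand-ins, not your actual GaussianMLP and training
loop):

```python
import torch

model = torch.nn.Linear(2, 200)          # stand-in; its weight has shape [200, 2]
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

t = model.weight                         # the tensor to watch
print('initial:       ', t._version)     # whatever its current value is
loss = model(torch.randn(8, 2)).sum()
print('after forward: ', t._version)     # unchanged -- forward only reads t
loss.backward()
print('after backward:', t._version)     # unchanged -- backward writes t.grad, not t
optimizer.step()
print('after step:    ', t._version)     # incremented -- step() updated t inplace
```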
If optimizer.step() is the cause, then ask yourself whether you are ever calling
.backward() on a quantity that depends on some “old” data, that is, a quantity
that depends (in the sense of a still-live autograd computation graph) on data
that was computed using some model Parameter (perhaps that Linear) prior to the
most recent optimizer.step() call.
(As an aside, .data has been deprecated for public-facing use. Using it can
confuse autograd and cause errors. I didn’t notice any .data calls in your
code that looked like they would be causing trouble, but you should rewrite
your code to eliminate them anyway, if only because the semantics of .data
could silently change in future versions.)
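In case it’s useful, here is a sketch of the usual replacements: torch.no_grad()
for inplace writes and .detach() for graph-free reads.

```python
import torch

p = torch.nn.Parameter(torch.zeros(3))

# instead of writing through .data:
#     p.data += 1.0
with torch.no_grad():        # hide the inplace update from autograd
    p += 1.0

# instead of reading through .data:
#     x = p.data
x = p.detach()               # shares p's memory but is cut off from the graph
```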