One of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [32, 2]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead

Hi Varun!

This error message is telling you (from the tensor shape) that
block3[3].weight is being modified inplace.

optimizer3.step() modifies, in place, the Parameters it is optimizing.

What’s going on is that optimizer3.step() modifies block3’s Parameters
before loss.backward() runs. But loss depends on l2, and hence on block3,
so when you call loss.backward(), autograd needs the original (pre-step)
values of block3’s Parameters to backpropagate through block3, hence the
error.

(You have other similar errors, but when this first error is detected, the
call to .backward() exits.)

I don’t understand the rationale behind what you are doing, but a fix might
be as simple as first calling all of your .zero_grad()s and .backward()s
and then calling all of your optimizer.step()s.
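Concretely, that reordering could look like this sketch (block2, block3,
optimizer2, and optimizer3 are made-up stand-ins for your objects):

```python
import torch

# Hypothetical stand-ins for the blocks being trained.
block2 = torch.nn.Linear(2, 2)
block3 = torch.nn.Linear(2, 2)
optimizer2 = torch.optim.SGD(block2.parameters(), lr=0.1)
optimizer3 = torch.optim.SGD(block3.parameters(), lr=0.1)

x = torch.randn(4, 2)
loss = block3(block2(x)).sum()

# First all of the zero_grad()s and the backward() ...
optimizer2.zero_grad()
optimizer3.zero_grad()
loss.backward()  # backpropagate while the Parameters are still unmodified

# ... and only then the step()s, whose in-place updates are now harmless.
optimizer3.step()
optimizer2.step()
```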

Please also take a look at this post that explains how to debug such
inplace-modification errors:

Best.

K. Frank