.eval() does not freeze your layers. It puts your layers in evaluation mode (as opposed to training mode), which you should only use when testing/validating your model. To freeze layers 4, 5, and 6 while training, set requires_grad to False on their parameters.
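A minimal sketch of the difference, using a hypothetical nn.Sequential model (the layer indices and sizes are illustrative, not from the original question):

```python
import torch.nn as nn

# Hypothetical model; suppose we want to freeze layer 4.
model = nn.Sequential(
    nn.Linear(10, 10),  # layer 0 - stays trainable
    nn.ReLU(),          # layer 1
    nn.Linear(10, 10),  # layer 2 - stays trainable
    nn.ReLU(),          # layer 3
    nn.Linear(10, 2),   # layer 4 - to be frozen
)

# Freezing: gradients will no longer be computed for these parameters.
for p in model[4].parameters():
    p.requires_grad_(False)

# .eval(), by contrast, only switches layer behavior (e.g. dropout,
# batch-norm statistics); it does not affect requires_grad at all.
model.eval()
```

After this, an optimizer step still updates layers 0 and 2, but layer 4's parameters never receive gradients.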
To freeze parts of your model, simply apply .requires_grad_(False) to the parameters that you don’t want updated. And as described above, since computations that use these parameters as inputs are not recorded in the forward pass, their .grad fields won’t be updated in the backward pass, because they are not part of the backward graph in the first place, as desired.
Because this is such a common pattern, requires_grad can also be set at the module level with nn.Module.requires_grad_(). When applied to a module, .requires_grad_() takes effect on all of the module’s parameters (which have requires_grad=True by default).