Set and unset trainability

I don’t know if I’m doing everything the way it’s supposed to be done, but I’m seeing a very odd issue in the following code example:

from torchvision.models import efficientnet_v2_m, EfficientNet_V2_M_Weights

weights = EfficientNet_V2_M_Weights.DEFAULT
backbone = efficientnet_v2_m(weights=weights)

# Toggle the flag on every parameter of the last feature block
for param in backbone.features[-1].parameters():
    print(param)                  # repr shows requires_grad=True (the default)
    param.requires_grad = False
    print(param)                  # the flag is no longer shown in the repr
    param.requires_grad = True
    print(param)                  # expected requires_grad=True to show up again

Output for one tensor:

...
Parameter containing:
tensor([-1.6943, -1.6580, -1.4941,  ..., -1.5908, -1.5684, -1.7477],
       requires_grad=True)
Parameter containing:
tensor([-1.6943, -1.6580, -1.4941,  ..., -1.5908, -1.5684, -1.7477])
Parameter containing:
tensor([-1.6943, -1.6580, -1.4941,  ..., -1.5908, -1.5684, -1.7477])

It seems that setting requires_grad = False removes the flag entirely: I’m not able to make the layer trainable again by setting param.requires_grad = True.

However, I can set the flag again by calling param.requires_grad_() directly.

Is this the way it is supposed to work, or what’s going on here?
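
For reference, here is the minimal check I’d use to tell whether the flag itself is wrong or only the printed repr is misleading (the parameter below is just a placeholder, not the actual backbone weights):

import torch
from torch import nn

param = nn.Parameter(torch.randn(4))  # stands in for any backbone parameter
print(param.requires_grad)            # True, the default for nn.Parameter

param.requires_grad = False
print(param.requires_grad)            # False; the repr also stops showing the flag

param.requires_grad = True
print(param.requires_grad)            # True again, whatever the repr prints

param.requires_grad_(False)           # in-place setter, equivalent to the assignment
print(param.requires_grad)            # False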

I’m not able to reproduce the issue in a current nightly release; this is what I see:

...
Parameter containing:
tensor([3.4256, 3.3162, 3.1029,  ..., 3.3091, 2.9864, 3.3672],
       requires_grad=True)
Parameter containing:
tensor([3.4256, 3.3162, 3.1029,  ..., 3.3091, 2.9864, 3.3672])
Parameter containing:
tensor([3.4256, 3.3162, 3.1029,  ..., 3.3091, 2.9864, 3.3672],
       requires_grad=True)
Parameter containing:
tensor([-1.6943, -1.6580, -1.4941,  ..., -1.5908, -1.5684, -1.7477],
       requires_grad=True)
Parameter containing:
tensor([-1.6943, -1.6580, -1.4941,  ..., -1.5908, -1.5684, -1.7477])
Parameter containing:
tensor([-1.6943, -1.6580, -1.4941,  ..., -1.5908, -1.5684, -1.7477],
       requires_grad=True)
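
As a side note, if the goal is to freeze the backbone and train only the last block, calling requires_grad_ on whole submodules is usually cleaner than toggling individual parameters. A minimal sketch using the same model as above:

from torchvision.models import efficientnet_v2_m, EfficientNet_V2_M_Weights

backbone = efficientnet_v2_m(weights=EfficientNet_V2_M_Weights.DEFAULT)

backbone.requires_grad_(False)              # freeze all parameters
backbone.features[-1].requires_grad_(True)  # unfreeze the last feature block only

# Verify: only the last block's parameters should remain trainable
trainable = [name for name, p in backbone.named_parameters() if p.requires_grad]
print(len(trainable), trainable[:3])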

Thanks for looking into it! Unfortunately, I can’t reproduce it today either. Somehow the issue resolved itself…