Why does modifying the model parameters give me a different result?

I’m trying to write some code that needs to modify the model parameters.
I understand that model.state_dict() returns a dictionary, and that if I modify that dictionary, the corresponding model parameter also changes. But after I rewrote the code in a more modular way, that link disappeared.
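
For example, here is roughly what I mean by "modify that dictionary" (a minimal sketch with a toy nn.Linear, not my real code): changing one of the returned tensors in place also changes the module.

import torch
import torch.nn as nn

lin = nn.Linear(2, 3)     # toy module just for illustration
sd = lin.state_dict()     # OrderedDict mapping parameter names to tensors
sd['weight'].zero_()      # modify the state_dict entry in place
print(lin.weight)         # the module's weight is now all zeros as well
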
To show my problem more clearly, I wrote the following, fuller example:

import torch
from torchvision.models import resnet18

class myclass(object):
    def __init__(self, model):
        self.model = model
        self.parameter = self.model.state_dict()
        
        
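# Case 1: work with the state_dict obtained directly from the model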
model = resnet18()
parameter = model.state_dict()
print(parameter['fc.weight']) # all of them are non-zero
parameter['fc.weight'] = torch.zeros_like(parameter['fc.weight'])
parameter_new = model.state_dict()
print(parameter['fc.weight']) # all of them are 0

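# Case 2: the same steps, but going through the state_dict stored on my class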
model2 = resnet18()
my_object = myclass(model2)
print(my_object.parameter['fc.weight']) # all of them are non-zero
my_object.parameter['fc.weight'] = torch.zeros_like(my_object.parameter['fc.weight'])
my_object_parameter_new = my_object.model.state_dict()
print(my_object_parameter_new['fc.weight']) # all of them are non-zero

The problem is: why is my_object_parameter_new['fc.weight'] still not zero after I manually set my_object.parameter['fc.weight'] to zeros?

Thanks! :grinning: