I'm sorry to come back to this question.
Here is what I'm trying to do, in a very simple example.
I have this simple model:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 3, 1, 1)
        self.fc1 = nn.Linear(6*4*4, 2)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = x.view(x.size(0), -1)  # flatten before the linear layer
        x = self.fc1(x)
        return x

model = Net()
print(model)
If I print it, I get:
Net(
  (conv1): Conv2d (1, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (fc1): Linear(in_features=96, out_features=2)
)
I want to make it become:
Net(
  (conv1_v2): Conv2d (1, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (fc1): Linear(in_features=96, out_features=2)
)
and I want to do this before I start training, so that later, when I save the model, it will no longer contain conv1 but conv1_v2 instead.
After saving and reloading the model, conv1_v2 is going to be used in another network. Is there a way to do this?
So far I have been using the "zombie" way to do it, but I noticed that when I do that, conv1_v2 will not show up in model.named_parameters().
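For reference, the closest I've gotten is renaming the entry in the module's internal _modules dict right after construction. I'm not sure this is an officially supported approach, so treat it as a sketch rather than a definitive solution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 3, 1, 1)
        self.fc1 = nn.Linear(6*4*4, 2)

    def forward(self, x):
        # forward uses the new name; this is valid once the rename below has run
        x = F.relu(self.conv1_v2(x))
        x = x.view(x.size(0), -1)
        return self.fc1(x)

model = Net()
# move the submodule to a new key; this also renames its parameters
model._modules['conv1_v2'] = model._modules.pop('conv1')
print(model)  # now shows (conv1_v2) instead of (conv1)
for name, _ in model.named_parameters():
    print(name)  # conv1_v2.weight, conv1_v2.bias, fc1.weight, fc1.bias
```

With this, state_dict() keys also come out as conv1_v2.*, so the saved checkpoint should no longer contain conv1 at all.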


P.S. By conv1_v2 not showing up in model.named_parameters(), I mean:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 3, 1, 1)
        self.fc1 = nn.Linear(6*4*4, 2)
        self.conv1_v2 = self.conv1  # alias the same module under a second name

    def forward(self, x):
        x = F.relu(self.conv1_v2(x))
        x = x.view(x.size(0), -1)  # flatten before the linear layer
        x = self.fc1(x)
        return x

model = Net()
print(model)

params = []
for key, value in dict(model.named_parameters()).items():
    if value.requires_grad:
        print(key)
Net(
  (conv1): Conv2d (1, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (fc1): Linear(in_features=96, out_features=2)
  (conv1_v2): Conv2d (1, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
conv1.weight
conv1.bias
fc1.weight
fc1.bias
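If my understanding is right (this is my guess, not something I found in the docs), this happens because self.conv1_v2 = self.conv1 registers the same Conv2d object under two names, and named_parameters() deduplicates shared parameters, keeping only the first registered name. A minimal check:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 3, 1, 1)
        self.conv1_v2 = self.conv1  # alias, not a copy

model = Net()
print(model.conv1 is model.conv1_v2)  # True: one module, two names
print([name for name, _ in model.named_parameters()])
# only conv1.weight / conv1.bias; the duplicate conv1_v2.* entries are skipped
```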