The source code attached below is a greatly simplified version of my original model.
There are two requirements. First, I need a learnable parameter to fine-tune my weight values. Second, I have to use the torch.where() function. In this code I used the very simple condition x > 0, but my actual condition is more complex than that.
before ---------------------
w tensor([[[[0.3798]]]])
myParam tensor([0.9831])
after ---------------------
w tensor([[[[0.3724]]]])
myParam tensor([0.9831])
This is the main issue: the weight values were updated by loss.backward() and optimizer.step(), but myParam wasn't, even after running my code more than 5 times.
Is there any way to make my parameter learnable?
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.w = nn.Parameter(torch.randn(1, 1, 1, 1))
        self.myParam = nn.Parameter(torch.rand(1))

    def forward(self, x):
        self.w.data = torch.where(self.w > 0, (self.w * self.myParam), self.w).data
        # If I use this line instead:
        # self.w = torch.where(self.w > 0, (self.w * self.myParam), self.w)
        # TypeError: cannot assign 'torch.FloatTensor' as parameter 'w' (torch.nn.Parameter or None expected)
        return F.conv2d(x, self.w)

net = Net()

print("before ---------------------")
for name, param in net.named_parameters():
    print(name, " ", param.data)

input = torch.randn(1, 1, 2, 2)
target = torch.ones(1, 1, 2, 2)

loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.Adam(net.parameters())

output = net(input)
loss = loss_fn(output, target)

optimizer.zero_grad()
loss.backward()
optimizer.step()

print("after ---------------------")
for name, param in net.named_parameters():
    print(name, " ", param.data)
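One thing I have considered (I'm not sure it is the right approach, and the names w_eff / Net2 are just placeholders I made up) is to compute the conditional weight as a local tensor inside forward instead of writing it back into self.w.data, roughly like this sketch:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net2(nn.Module):
    def __init__(self):
        super(Net2, self).__init__()
        self.w = nn.Parameter(torch.randn(1, 1, 1, 1))
        self.myParam = nn.Parameter(torch.rand(1))

    def forward(self, x):
        # Local tensor, no assignment back to self.w.data,
        # so the multiplication by self.myParam stays in the autograd graph.
        w_eff = torch.where(self.w > 0, self.w * self.myParam, self.w)
        return F.conv2d(x, w_eff)

With this version I would expect myParam to receive a gradient from loss.backward() as well, but I'm not sure whether it matches the behavior my original code was supposed to have.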