Welcome to the PyTorch Discussion Forum. I answered your question on Stack Overflow as well.
As per the discussion here, update your code to wrap the weight in torch.nn.Parameter(), which registers the tensor as a module parameter so the optimizer can recognize and update it.
import torch
from torch import nn
conv = nn.Conv1d(1,1,kernel_size=2)
K = torch.tensor([[[0.5, 0.5]]])  # shape (out_channels, in_channels, kernel_size) to match the Conv1d layer
conv.weight = nn.Parameter(K)     # wrap in nn.Parameter so the layer registers it as a learnable weight
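If you want to confirm that the new kernel is really registered (this check is my addition, not part of the original code), you can list the layer's parameters and hand them to an optimizer:

# sanity check: the custom kernel (and the bias) should appear here
print(list(conv.parameters()))
# the optimizer now tracks the custom weight during training
optimizer = torch.optim.SGD(conv.parameters(), lr=0.01)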
For bigger and more complex models, you can follow this toy example, which uses PyTorch's state_dict() method.
import torch
import torch.nn as nn
import torchvision
net = torchvision.models.resnet18(pretrained=True)
pretrained_dict = net.state_dict()
conv_weights = pretrained_dict['conv1.weight']  # shape: (64, 3, 7, 7)
new = torch.ones_like(conv_weights)             # tensor of all ones with matching shape and dtype
pretrained_dict['conv1.weight'] = new
net.load_state_dict(pretrained_dict)
param = list(net.parameters())
print(param[0])
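As a quick sanity check (my addition, assuming the snippet above ran as-is), you can verify that conv1 now really holds all ones:

# should print True: conv1.weight was overwritten with ones
print(torch.allclose(net.conv1.weight, torch.ones_like(net.conv1.weight)))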